Search Results

Search found 25488 results on 1020 pages for 'prndl development studios'.


  • Problem posting multipart form data using Apache with mod_proxy to a mongrel instance

    - by Ryan E
    I am attempting to simulate my site's production environment as closely as I can on my local machine. This is a Rails site that uses Apache with mod_proxy to forward requests to a Mongrel cluster. On my Mac OS X Leopard machine, I have the default install of Apache running and have configured a vhost to use mod_proxy to forward requests to a locally running Mongrel instance on port 3000:

        <Proxy balancer://mongrel_cluster-development>
          BalancerMember http://127.0.0.1:3000
        </Proxy>

    For the most part, this is working fine. I can browse my development site using the ServerName of the vhost I configured and can confirm that requests are being properly forwarded to the Mongrel instance. However, there is a page on the site with a multipart form that is used to upload an image to the server. When I post this form, there is a delay of about 5 minutes, and the browser ultimately returns "Bad Request: Your browser sent a request that this server could not understand." In the error log for my vhost:

        [Tue Sep 22 09:47:57 2009] [error] (70007)The timeout specified has expired: proxy: prefetch request body failed to 127.0.0.1:3000 (127.0.0.1) from ::1 ()

    This same form works fine if I browse directly to the Mongrel instance (http://127.0.0.1:3000). Does anybody have any idea what the problem might be and how to fix it? If there is any important information that I neglected to include, post a comment and I can add to this question. Note: upon further investigation, this appears to be a problem specific to Safari; the form works fine in Firefox.
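
    The "::1" in the error log shows the request arriving over the IPv6 loopback while Apache prefetches the request body before proxying, and Mongrel is known not to accept chunked request bodies. A commonly suggested workaround, offered here as a sketch rather than a confirmed fix, is to raise the proxy timeout and set the documented proxy-sendcl environment variable so mod_proxy_http forwards a Content-Length instead of a chunked body:

        # inside the vhost that proxies to the Mongrel balancer (sketch)
        ProxyTimeout 300
        SetEnv proxy-sendcl 1

    Since the balancer member is already pinned to 127.0.0.1, browsing via 127.0.0.1 rather than a name that resolves to ::1 is also a quick way to test whether the IPv6 path is the trigger.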

    Read the article

  • Windows Clients: Windows or Linux Domain Controller?

    - by Ramon Marco Navarro
    I'm planning to set up a domain controller for our small computer laboratory, and I'm a little confused as to which operating system to use for it.

    What's in the lab: 25 units running a mix of Windows 7 and Windows XP. The domain controller will only have 2 GB of RAM running a C2D E7200. (Is this enough?)

    What we want:
    - The domain controller will also run a git server.
    - The domain controller will also be used as a general development machine (mostly Java and PHP).
    - A way to centralize the updates for the Windows clients, so that they won't each have to download the same patches from the remote site; the machines would query the local domain controller and get the updates from there.

    Our head recommended that I virtualize a Windows Server 2008 system under a Linux host and use the former as a domain controller and the latter for development, or the other way around. A comparison of the advantages and disadvantages of using a Linux distribution or Windows Server 2008 in this situation would also be appreciated. As you may have noticed by now, I'm kind of new to setting up a domain, so I hope you guys will be able to help me. Thank you.
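
    On the Linux side, Samba 3 can act as an NT4-style domain controller for both XP and Windows 7 clients (Windows 7 needs a couple of well-documented registry tweaks to join an NT4-style domain). A minimal smb.conf sketch, assuming the server is named LABDC; the share paths are placeholders:

        [global]
           workgroup = LABDOMAIN
           netbios name = LABDC
           security = user
           domain logons = yes
           domain master = yes
           os level = 65

        [netlogon]
           path = /srv/samba/netlogon
           read only = yes

    For the centralized-updates requirement, note that WSUS requires Windows Server; on a Linux host the usual fallback is a caching proxy (e.g. squid) in front of Windows Update rather than true update management, which may tip the decision by itself.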

    Read the article

  • Too much memory consumed during TFS automated build

    - by Bernard Chen
    We're running TFS 2010 Standard Edition, and we've set up an automated build to run whenever someone checks in code. We run all of the automated tests (built with MSTest) as part of the build. We've configured the build to run the tests as a 64-bit process, but the QTAgent.exe that runs the tests grows in memory while the tests are running. It currently reaches 8 GB for the ~650 tests we have, and the build slowed significantly when we went from 450 tests to 650 tests. When we run all of the tests in the local development environment, memory seems to be freed at least with each TestClass and never exceeds a certain level, and the total run time has not increased significantly there. Is there a way to configure the build service to free up memory with each Test or each TestClass? As things currently stand, the build process gets very slow when we start to run out of memory on the machine.

    Edit: I found the MSTest invocation in the build log, ran it manually, and saw the same runaway memory. I removed the /publish, /publishbuild, /teamproject, /platform, and /flavor parameters from the invocation of MSTest, in case the test runner was holding onto results until the end, but the behavior didn't change. I ran the same command line on a dev box, separate from the build server, and the memory was freed up frequently. It seems there must be something wrong or different about the build server that causes it to behave differently, but I'm stumped where to look. I've looked at qtagent.exe.config, mstest.exe.config, and the versions of both executables. What else might affect this?
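
    One way to bound the working set, sketched here on the assumption that the test DLLs can be split, is to run MSTest in batches so that each QTAgent process starts fresh instead of one invocation accumulating all ~650 tests' state:

        rem run each container in its own MSTest process (assembly names are examples)
        mstest /testcontainer:Tests.Core.dll /resultsfile:core.trx
        mstest /testcontainer:Tests.Web.dll /resultsfile:web.trx

    /testcontainer and /resultsfile are standard MSTest switches; whether the TFS 2010 build definition can be split this way is the part to verify, since the default build workflow expects a single test run.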

    Read the article

  • What does the asterisk mean after a filename when you do ls -l?

    - by James.Elsey
    I've done an ls -l inside a directory, and my files are displayed like this:

        james@nevada:~/development/tools/android-sdk-linux_86/tools$ ll
        total 9512
        drwxr-xr-x 3 james james    4096 2010-05-07 19:48 ./
        drwxr-xr-x 6 james james    4096 2010-08-21 20:43 ../
        -rwxr-xr-x 1 james james  341773 2010-05-07 19:47 adb*
        -rwxr-xr-x 1 james james    3636 2010-05-07 19:47 android*
        -rwxr-xr-x 1 james james    2382 2010-05-07 19:47 apkbuilder*
        -rwxr-xr-x 1 james james    3265 2010-05-07 19:47 ddms*
        -rwxr-xr-x 1 james james   89032 2010-05-07 19:47 dmtracedump*
        -rwxr-xr-x 1 james james    1940 2010-05-07 19:47 draw9patch*
        -rwxr-xr-x 1 james james 6886136 2010-05-07 19:47 emulator*
        -rwxr-xr-x 1 james james  478199 2010-05-07 19:47 etc1tool*
        -rwxr-xr-x 1 james james    1987 2010-05-07 19:47 hierarchyviewer*
        -rwxr-xr-x 1 james james   23044 2010-05-07 19:47 hprof-conv*
        -rwxr-xr-x 1 james james    1939 2010-05-07 19:47 layoutopt*
        drwxr-xr-x 4 james james    4096 2010-05-07 19:48 lib/
        -rwxr-xr-x 1 james james   16550 2010-05-07 19:47 mksdcard*
        -rw-r--r-- 1 james james  205851 2010-05-07 19:48 NOTICE.txt
        -rw-r--r-- 1 james james      33 2010-05-07 19:47 source.properties
        -rwxr-xr-x 1 james james 1447936 2010-05-07 19:47 sqlite3*
        -rwxr-xr-x 1 james james    3044 2010-05-07 19:47 traceview*
        -rwxr-xr-x 1 james james  187965 2010-05-07 19:47 zipalign*

    What does that asterisk mean? I'm also unable to run a particular file:

        james@nevada:~/development/tools/android-sdk-linux_86/tools$ ./emulator
        bash: ./emulator: No such file or directory

    EDIT: I'm trying to get Eclipse to use the emulator, but it keeps complaining that the file does not exist. Yet it is right here?
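
    Two separate things are going on. The asterisk is not part of the filename: ll is typically an alias for ls -l with the classify flag (-F), which appends * to executables, / to directories, and so on. The "No such file or directory" error, despite the file plainly existing, is the classic symptom of running a 32-bit binary on a 64-bit Linux install that lacks the 32-bit loader; the missing "file" is the ELF interpreter, not emulator itself. A quick check plus the usual Ubuntu-era fix (the package name is an assumption and varies by release):

        file emulator                    # "ELF 32-bit LSB executable" would confirm this
        sudo apt-get install ia32-libs   # pulls in the 32-bit runtime on older Ubuntu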

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development, and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model, so I am taking this opportunity to reconsider how we set up the system. Basically, the steps that need to happen are:

    - Some standard packages and libraries, such as compilers and databases, need to be downloaded and installed.
    - Some custom scientific models need to be downloaded and compiled from source, as they are not commonly provided as packages.
    - New users need to be created to manage the databases and run the models.
    - A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed.
    - Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts.

    I have been pondering tools such as Puppet, Capistrano, or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality, except that there are a couple of usage cases I am wondering about:

    - During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source.
    - We may have to deploy on machines that are isolated from the Internet, i.e. all configuration and setup files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models.

    I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus, as it would allow the development team to set up test installations outside of VirtualBox.
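
    Any of the three tools named above ultimately wraps steps like the following; a plain-shell sketch of the bootstrap (package names and paths are placeholders) is a useful baseline for comparing them, and it adapts to the air-gapped case because everything can be staged in one directory:

        #!/bin/sh
        # bootstrap.sh: the five steps as plain shell (all names are illustrative)
        set -e
        STAGE=${1:-/media/usbkey}                       # local mirror for offline installs
        yum install -y gcc gfortran postgresql-server   # 1. standard packages and libraries
        tar xzf "$STAGE/wavemodel.tar.gz" -C /opt       # 2. custom model, built from source
        (cd /opt/wavemodel && ./configure && make && make install)
        useradd -r forecast                             # 3. account that runs the models
        svn checkout "file://$STAGE/repo/scripts" /opt/forecast-scripts   # 4. glue scripts
        echo '0 */6 * * * forecast /opt/forecast-scripts/run_forecast.sh' \
            > /etc/cron.d/forecast                      # 5. scheduled forecast runs

    Puppet would express each of these lines as a declarative resource (package, exec, user, cron), which is the main thing gained over the raw script; Fabric and Capistrano are closer to pushing a script like this over SSH.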

    Read the article

  • SQL Server 2008 Express + Reporting Services on Windows 7

    - by TimothyP
    I'm trying to install SQL Server Express 2008 and Reporting Services on an x64 Windows 7 machine for development purposes. I installed SQL Server 2008 Express with the Microsoft Web Platform Installer. I had to manually enable the SQL Server Browser in the SQL Server Configuration Manager, and I tried to enable the SQL Server Agent, but that simply doesn't work; it keeps throwing an RPC error: "The remote procedure call failed. [0x800706be]". The start mode is set to Disabled and I cannot change it.

    Even though I selected SQL Server Express with Advanced Services in the Web Platform Installer, I could not find any reference to SQL Server Reporting Services, so I used the SQL Server Installation Center x64 application to "upgrade" to SQL Server Express 2008 with Advanced Services. This installed many things, but I still couldn't find any reference to Reporting Services other than an application called "Reporting Services Configuration Manager". This opens a dialog called "Reporting Services Configuration Connection", which asks for a server name (it shows the name of my machine) and has a Find button. When I click the Find button I get: "Unable to connect to the Reporting Server WMI provider. Details: Invalid Namespace".

    I found some references on the web to solve this problem, but they refer to a directory, "%ProgramFiles%\Microsoft SQL Server\MSRS10.SQL2008\Reporting Services\", which does not exist anywhere on my system. (The directories for SQL Server are there, but there is no Reporting Services directory anywhere.) What am I doing wrong here? Wasn't the Web Platform Installer supposed to handle all this? Thanks for any advice.

    PS: Most Google results refer to 2005 vs. 2008 problems, but I never had 2005 installed on this system; it's a newly installed development machine.
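
    The "Invalid Namespace" WMI error generally means the Reporting Services instance is simply not installed, whatever the installer summary implied; the missing MSRS10.* directory points the same way. It can be checked directly before reinstalling (a sketch; the service and namespace names assume a default SSRS instance):

        sc query ReportServer
        wmic /namespace:\\root\Microsoft\SqlServer path __NAMESPACE get Name

    If both come back empty, the usual remedy is to rerun SQL Server setup from the "Express with Advanced Services" media and explicitly tick the Reporting Services feature for the existing instance.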

    Read the article

  • Installing CDT on top of JDT: Conflicting Dependency

    - by someguy
    I am trying to install the CDT plugin on top of my existing version of Eclipse, which was set up for Java. The problem is that I got this error message when I tried doing so via "Install New Software...":

        Cannot complete the install because of a conflicting dependency.
        Software being installed: Eclipse C/C++ Development Tools 4.0.3.200802251018 (org.eclipse.cdt.feature.group 4.0.3.200802251018)
        Software currently installed: Eclipse IDE for Java Developers 1.3.1.20100916-1202 (epp.package.java 1.3.1.20100916-1202)
        Only one of the following can be installed at once:
          International Components for Unicode for Java (ICU4J) 4.2.1.v20100412 (com.ibm.icu 4.2.1.v20100412)
          com.ibm.icu 3.6.1.v20070906
        Cannot satisfy dependency:
          From: Eclipse IDE for Java Developers 1.3.1.20100916-1202 (epp.package.java 1.3.1.20100916-1202)
          To: org.eclipse.epp.package.java.feature.feature.group [1.3.1.20100916-1202]
        Cannot satisfy dependency:
          From: Eclipse C/C++ Development Tools 4.0.3.200802251018 (org.eclipse.cdt.feature.group 4.0.3.200802251018)
          To: com.ibm.icu [3.4.0,4.0.0)
        Cannot satisfy dependency:
          From: EPP Java Package 1.3.1.20100916-1202 (org.eclipse.epp.package.java.feature.feature.group 1.3.1.20100916-1202)
          To: org.eclipse.rcp.feature.group 3.6.0
        Cannot satisfy dependency:
          From: Eclipse RCP 3.6.0.v20100519-9OArFKvFtsd7WLUKh-DcYTS (org.eclipse.rcp.feature.group 3.6.0.v20100519-9OArFKvFtsd7WLUKh-DcYTS)
          To: com.ibm.icu [4.2.1.v20100412]
        Cannot satisfy dependency:
          From: Eclipse RCP 3.6.1.r361_v20100827-9OArFLdFjY-ThSQXmKvKz0_T (org.eclipse.rcp.feature.group 3.6.1.r361_v20100827-9OArFLdFjY-ThSQXmKvKz0_T)
          To: com.ibm.icu [4.2.1.v20100412]

    What can I do to solve this?
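
    The version numbers explain the conflict: CDT 4.0.3 targets Eclipse 3.3 (Europa) and pins com.ibm.icu to [3.4.0,4.0.0), while this install is Eclipse 3.6 (Helios) shipping ICU4J 4.2.1. The fix is to install the CDT release that matches the platform rather than forcing the old one; for Helios that means pointing "Install New Software..." at the matching CDT update site (URL as commonly documented for that release, worth verifying):

        http://download.eclipse.org/tools/cdt/releases/helios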

    Read the article

  • Windows Server 2008 R2 SP1: missing drivers won't install?

    - by bleepzter
    I am building my dev machine with Windows Server 2008 R2 SP1. I consider Windows Server the best development environment for C#, since I tend to focus on server-centric (integration) applications, and the desktop flavor of Windows (7) doesn't play nicely with things like Commerce Server or BizTalk, so I like to stick to the environment I develop for. Previously I developed inside VMs, but I've found that it is super inefficient and tends to take a toll on the laptops (I've gone through two of them in 6 months). The problem is that I have multiple devices that Windows will not recognize. My machine is a Dell Precision M4500: Intel Core i7-Q740, 1 TB HDD, 8 GB RAM, Dell-rebranded Broadcom DW1501 802.11n half-mini card, Dell-rebranded Broadcom DW375 Bluetooth module, Intel 82577LM gigabit network connection, and NVIDIA Quadro FX1800 graphics.

    The devices in question are the Dell-rebranded Broadcom network and Bluetooth adapters:

        Broadcom USH:
        USB\VID_0A5C&PID_5800&REV_0101&MI_00
        USB\VID_0A5C&PID_5800&MI_00

        DW375 Bluetooth Module:
        USB\VID_413C&PID_8187&REV_0517
        USB\VID_413C&PID_8187

    When I ran the Broadcom installers I got "Operating System not supported", which I think is a big oversight on Broadcom's part. Why check the OS version string? Moreover, if I try to manually force the driver in Windows, I get an error:

        Driver Management concluded the process to install driver FileRepository\btwampsecfl.inf_amd64_neutral_d8fc2b85d035ed47\btwampsecfl.inf for Device Instance ID USB\VID_0A5C&PID_5800&MI_00\7&66DE6C9&0&0000 with the following status: 0xe0000217.

    or:

        Driver Management concluded the process to install driver FileRepository\btwampfl.inf_amd64_neutral_d4c4acf036c61299\btwampfl.inf for Device Instance ID USB\VID_413C&PID_8187\90004EEEF5A6 with the following status: 0xe0000217.

    I googled the 0xe0000217 error code, and it means "Bad service install section in the driver inf file". Any ideas on how to fix this? I also tried the post by BetaIQ on the MSDN Forum; unfortunately, the links to the driver package included in the post were dead :(

    PS: On a side note, I also do mobile development for Android, iOS, Windows Phone, and BB, so having Bluetooth is quite useful with mobile devices.
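
    Error 0xe0000217 ("bad service install section") usually means the INF expects a service that the full installer, blocked by its OS check, would have registered. A workaround that sometimes gets rebranded Broadcom packages onto Server SKUs, offered as a sketch rather than a guaranteed fix, is to stage the extracted driver with pnputil and then point Device Manager's "Update Driver" at the staged package:

        rem extract the Broadcom installer's payload first (e.g. with 7-Zip), then:
        pnputil -a C:\drivers\btwampfl\btwampfl.inf

    Editing the INF's OS/version sections to bypass the check also technically works, but it leaves the package unsigned and is harder to maintain.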

    Read the article

  • How can I cache a Subversion password on a server, without storing it in unencrypted form?

    - by Zilk
    My Subversion server only provides access via HTTPS; support for svn+ssh has been dropped because we wanted to avoid creating system users on that machine just for SVN access. Now I'm trying to provide a way for users to cache their passwords for a while, without leaving them stored on the filesystem in unencrypted form. This is no problem for Gnome or KDE users, because they can use gnome-keyring and kwallet, respectively. IIRC, TortoiseSVN has a similar caching mechanism, too. But what about users on a non-GUI system? Some context: in this case, we have a development/testing server where one project has been checked out into the Apache htdocs directory. Development for this project is almost complete, and only minor text/layout changes are performed directly on this server. Nevertheless, the changes should be checked into the repository. There's no kwallet and no gnome-keyring on this system, and the ssh-agent can't help because the repository is accessed via https instead of svn+ssh. As far as I know, that leaves them the choice of entering the password every time they talk to the SVN server, or storing it in an insecure way. Is there any way to get something like what gnome-keyring and kwallet provide in a non-GUI environment?
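
    For the non-GUI case, the stock choices are indeed all-or-nothing, but Subversion's runtime config at least makes the trade-off explicit. A sketch of ~/.subversion/servers (options as documented for svn 1.6) that refuses plaintext caching, so users are prompted per session instead of silently writing passwords to disk:

        [global]
        store-passwords = yes
        store-plaintext-passwords = no

    Beyond that, the common workarounds are running a keyring daemon under a detached session for headless use, or switching the server to short-lived HTTPS client certificates; encrypted password caching without a desktop keyring only arrived later, via gpg-agent support in newer Subversion clients.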

    Read the article

  • Recommendation for Document Management Solution

    - by BillN
    We've just been informed by our software vendor that the custom document management system they'd written is no longer in development and will not be supported in the future, so we are looking at new document management systems. Requirements:

    - Multiple input vectors: we receive documents via e-mail, fax, scanning, and from the originating application.
    - Ability to redact or obscure data. Customers may fax an order with credit-card data; we want to attach the image of the order form to the order record, but the CC data needs to be protected. Same with tax IDs. Certain users should be able to see the redacted data, but access should be logged.
    - Version control on documents. We'd like Product Development and Marketing to be able to track various versions of documents like packaging designs, while ensuring that users get the latest approved version.
    - AD integration; my users don't need another password.
    - Ability to integrate with other apps. Our current system offers function keys in the order-entry system that spawn the viewer application and open the correct document.
    - Mass import facility; we have half a terabyte of existing documents in the old system that we would like to import.
    - Retention policy. I'd like the system to comply with the corporate retention policy, so that when a document of a certain type reaches a certain age, it gets deleted, or at least marked for manual deletion.

    We are a Windows Server and HP-UX shop. Does anybody have any experience with document management systems that they would like to share? Thanks.

    Read the article

  • Huh? JDK not found? (on Windows 7 64-bit)

    - by Android Eve
    I am setting up a development environment for the latest Android 2.3 on a fresh install of Windows 7 64-bit. I first installed the 64-bit JDK 6 (jdk-6u23-windows-x64.exe). Then I installed 64-bit Eclipse Classic 3.6 (eclipse-SDK-3.6.1-win32-x86_64.zip). Then I proceeded to install the Android SDK starter package, installer_r08-windows.exe. But upon start it says: "Java SE Development Kit (JDK) not found." Why? I just installed it. Is this a mismatch between 32-bit and 64-bit? How do I solve this?

    Update (1): I tried setting the %JAVA_HOME% environment variable, as well as setting the installed JREs in Eclipse, as suggested below. Neither solved the problem. It appears that I am not the only one experiencing this, as this thread suggests: http://stackoverflow.com/questions/1919340/android-sdk-setup-under-windows-7-pro-64-bit. I wonder whether there is a 64-bit version of the Android SDK.

    Update (2): I used the zip version instead (android-sdk_r08-windows.zip), ran android.bat, updated all SDK packages, and installed the ADT plugin (8.0.1), though not before having to check "Contact all update sites during install to find required software". We'll see how this goes...

    Update (3): It worked! (Going to accept @bubu's answer shortly.) But why doesn't the emulator include the HelloAndroid app when I run it (Ctrl+F11) from Eclipse?
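
    The cause widely reported at the time, stated here as the likely explanation rather than a confirmed one: installer_r08 is a 32-bit program that looks for the JDK in the 32-bit view of the registry, so a 64-bit-only JDK install is invisible to it. That also explains why the zip route in Update (2) works; it skips the check entirely:

        rem unpack android-sdk_r08-windows.zip (no installer, no JDK check), then:
        cd C:\android-sdk-windows\tools
        android.bat

    Installing the 32-bit JDK alongside the 64-bit one is the other common workaround for keeping the .exe installer happy.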

    Read the article

  • Test whether svn REPO changes are reflected in Working Copy

    - by user492160
    Requirement: changes will be made to the REPO directory, and these should get updated to the WC (working copy), as opposed to the normal direction of WC to REPO.

    Scenario: my svn repo is /var/www/svn/drupal and my checkout dir (working copy) is /var/www/html/drupalsite. So I've done the following:

    1. Edited the post-commit hook to contain: /usr/bin/svn update /var/www/html/drupalsite
    2. I won't make any changes to the svn WC; I'll make changes to the svn REPO, /var/www/svn/drupal.
    3. After changes are made to the repo, run "svn commit /var/www/html/drupalsite". This will trigger the post-commit hook, which in turn runs "svn update /var/www/html/drupalsite", and thus my WC will get updated with the changes from the REPO.

    Query: (a) Would the above steps 1-3 help achieve my requirement? (b) I'd need advice on how to test whether the above setup works; I'm at a loss about the success of steps 1-3, which is why query (a) exists, and this is the bigger concern for me.

    NB: I'm new to Subversion; whatever I've configured until now has been done by reading articles online. The reason for query (b) is that I'm not into development: it's a PHP Drupal website and I happen to be setting it up, so I'm not aware of how to make a "proper" change in the REPO so that it gets reflected in the WC. If it is reflected, my configs are right and the team can start development. I manually put a random file/folder into the REPO dir hoping to see a change in the WC and ran steps 1-3, but to no avail; I later learned that this was not the way to make a change to a REPO. Please advise. Thanks
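
    A note on the model, with a sketch: an svn repository is only ever changed by commits from some working copy (or by svnadmin/svnsync tooling); files dropped by hand into /var/www/svn/drupal corrupt it rather than change it. The flow that matches the requirement is commit from any working copy, then let the hook push the new revision into the deployed copy:

        #!/bin/sh
        # /var/www/svn/drupal/hooks/post-commit (sketch; must be executable, and the
        # user running apache/svnserve needs write access to the deployed working copy)
        REPOS="$1"
        REV="$2"
        /usr/bin/svn update /var/www/html/drupalsite --non-interactive >> /var/log/svn-deploy.log 2>&1

    To test it: check out a scratch working copy somewhere else, commit a trivial change to any file, and confirm the new revision appears in /var/www/html/drupalsite.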

    Read the article

  • OS X Client & Ubuntu Server - Best way for client to access files on server?

    - by Camsoft
    I've got a local development web server running Ubuntu, and an iMac running OS X 10.6 which I use as a client and as my development machine. I currently have the Samba server installed on the Ubuntu machine, with shares set up for all the website directories, and I use my Mac and Coda to edit the files via those shares. This generally works really well, but I noticed that my Mac was writing loads of resource-fork ._filename files everywhere. I found out the following about these files:

        These files are created on volumes that don't natively support full HFS file characteristics (e.g. ufs volumes, Windows fileshares, etc). When a Mac file is copied to such a volume, its data fork is stored under the file's regular name, and the additional HFS information (resource fork, type & creator codes, etc) is stored in a second file (in AppleDouble format), with a name that starts with "._". (These files are, of course, invisible as far as OS X is concerned, but not to other OSs; this can sometimes be annoying...)

    Does anyone know of a way of sharing files between a Mac client and a Linux server that is most compatible between the two operating systems? Ideally it needs to support the HFS characteristics so that the resource-fork files are not created, and it also needs to support the permissions between server and client.
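
    With Samba, the usual way to keep the AppleDouble litter out of the shares is to veto the dot-underscore and .DS_Store files per share; a sketch using smb.conf options documented in Samba 3:

        [websites]
           path = /var/www
           veto files = /._*/.DS_Store/
           delete veto files = yes

    The trade-off is that the resource forks are simply dropped, which is harmless for web source files. The alternative that actually preserves the HFS metadata is serving AFP with netatalk instead of SMB.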

    Read the article

  • Referencing SQL Server 2008 R2 SMO from Visual Studio 2010

    - by user69508
    Hello. We read a number of things about referencing SQL Server SMO from Visual Studio but still don't have the definite answers we need. So, here it goes... A number of years ago we created a C# application using Visual Studio 2005 and SQL Server 2005. In that application we added .NET references to a number of SQL Server SMO assemblies, and everything worked fine. Those references were:

        Microsoft.SqlServer.ConnectionInfo  (GAC, 9.0.242.0)
        Microsoft.SqlServer.Smo             (GAC, 9.0.242.0)
        Microsoft.SqlServer.SqlEnum         (GAC, 9.0.242.0)

    We have now migrated to Visual Studio 2010 and SQL Server 2008 R2. However, when we try to reference those same SMO assemblies for SQL Server 2008 R2, they don't appear in the .NET references tab. We want to reference the SQL Server 2008 R2 version of those same SMO assemblies for our upgraded C# application. On our development machines, we have SQL Server 2008 R2 Developer installed with all options, including the SDK, such that the assemblies are found in C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies. So, my first questions are:

    1. Are we supposed to use file references to the SMO assemblies instead of .NET references in Visual Studio 2010 with SQL Server 2008 R2?
    2. Or is there some problem with our development machines such that the SMO assemblies are not appearing in the .NET references tab?

    Next, our production machines will have SQL Server 2008 R2 Workgroup installed with the client tools option selected, thus providing those same SMO assemblies in C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies. So, the next questions are:

    3. When we release to production, are we supposed to redistribute the SMO assemblies with our application?
    4. Or will our application work on the production servers without redistributing the SMO assemblies (since the client tools/SMO assemblies have been installed)?
    5. What else?

    Thanks for the help!
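
    On the missing entries: Visual Studio's Add Reference ".NET" tab is populated from registered assembly folders rather than from the GAC itself, and the R2 SMO install reportedly does not always register one, which is one common explanation for the assemblies vanishing from the list; browsing to the SDK folder (a file reference) works regardless, and is the usual answer to question 1. A sketch of what that produces in the csproj (the HintPath assumes the SDK location from the question):

        <ItemGroup>
          <Reference Include="Microsoft.SqlServer.Smo">
            <HintPath>C:\Program Files (x86)\Microsoft SQL Server\100\SDK\Assemblies\Microsoft.SqlServer.Smo.dll</HintPath>
          </Reference>
        </ItemGroup>

    For production, Microsoft publishes a "Shared Management Objects" redistributable in the SQL Server 2008 R2 Feature Pack for machines without client tools; where the client tools are already installed, the GAC copies should resolve without shipping the DLLs.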

    Read the article

  • Apache httpd LDAP integration

    - by David W.
    I am configuring a CollabNet Subversion integration. I have the following collabnet_subversion.conf file:

        <Location /svn>
          DAV svn
          SVNParentPath /mnt/svn/new_repos
          SVNListParentPath on
          AuthName "VegiBanc Source Repository"
          AuthType basic
          AuthzLDAPAuthoritative off
          AuthBasicProvider ldap
          AuthLDAPURL "ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
          AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=vegibanc,DC=vegibanc,DC=com"
          AuthLDAPBindPassword "swordfish"
        </Location>

    This works great: any user in our Active Directory can access our Subversion repository. Now I want to limit this to only people in the Active Directory group Development:

        <Location /svn>
          DAV svn
          SVNParentPath /mnt/svn/new_repos
          SVNListParentPath on
          AuthName "VegiBanc Source Repository"
          AuthType basic
          AuthzLDAPAuthoritative off
          AuthBasicProvider ldap
          AuthLDAPURL "ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
          AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=VegiBanc,DC=vegibanc,DC=com"
          AuthLDAPBindPassword "swordfish"
          Require ldap-group CN=Development OU=Security Groups OU=VegiBanc, dc=vegibanc, dc=com
        </Location>

    I added Require ldap-group, but now no one can log in. I have LogLevel set to debug, but all I get is this in my error_log (single line broken up for easier reading):

        [Thu Oct 11 13:09:28 2012] [info] [client 10.55.9.45] [6752]
        vauth_ldap authenticate: user dweintraub authentication failed;
        URI /svn/ [ldap_search_ext_s() for user failed][Bad search filter]

    And I get this in my access_log:

        10.55.9.45 - - [11/Oct/2012:13:09:27 -0500] "GET /svn/ HTTP/1.1" 401 401
        10.55.9.45 - dweintraub [11/Oct/2012:13:09:28 -0500] "GET /svn/ HTTP/1.1" 500 535

    Yes, I am in that group. (Or, at least, how can I confirm that, just to make sure it's not the issue? I have the Sysinternals Suite's ADExplorer; it's where I'm getting all of my info.)
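
    The "[Bad search filter]" points at the Require line itself: an LDAP DN needs commas between its RDNs, and the space-separated version breaks the group lookup. A corrected directive (same DN, properly punctuated):

        Require ldap-group CN=Development,OU=Security Groups,OU=VegiBanc,DC=vegibanc,DC=com

    If logins still fail after that, the next usual suspect is nested group membership, which Apache 2.2's mod_authnz_ldap does not expand by default.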

    Read the article

  • Relax Linux - it's just me! (filesystem permissions)

    - by Xeoncross
    One of my favorite things about Linux is also the most annoying: file system permissions. On production machines and web servers I love how everything is so secure and locked down, but on development machines it really slows me down. I'll give one example out of the many that I discover weekly. Like most people, I dual-boot Ubuntu and Windows so I can continue using the Adobe CS4 suite. I often design web themes and other things while I'm still using Windows. Later I'll boot into Ubuntu to take the themes and write the backend PHP for them. After mounting the Windows C: drive partition, I can copy the template files over so I can begin editing them. However, thanks to Linux's desire to protect me, I find that after copying the files I end up with a totally locked set of files where even I don't have read-write permissions. So, after careful consideration of the tremendous risks that the HTML files pose to me, I chmod them so that I and Apache can begin using them. Now granted, the chmod process isn't that hard, but after you chmod enough files per day you get sick of doing it. I'm constantly creating, fetching, editing, and removing files from my user, git repos, PHP, or other random processes. This is a personal development machine, after all; everything changes on a day-by-day basis. So my question is: how can I get Linux to relax about what I'm doing with my HTML/JS/PHP/TXT/SQL/etc. files so that I can work faster without constantly stopping to chmod things? I pinky-promise I won't hack into my account with an HTML file. ;)
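
    The recurring chmod dance can usually be replaced with defaults that apply themselves; a sketch using a setgid directory plus default POSIX ACLs (the ~/www path is a placeholder, and it assumes the web server runs as www-data and the filesystem has ACLs enabled):

        sudo chgrp -R www-data ~/www
        sudo find ~/www -type d -exec chmod g+s {} \;          # new files inherit the group
        setfacl -R -m u:$USER:rwX -m g:www-data:rwX ~/www      # fix what's already there
        setfacl -R -d -m u:$USER:rwX -m g:www-data:rwX ~/www   # defaults for new files

    Files copied from the NTFS partition are a separate case: their permissions come from the mount options, so mounting with ntfs-3g options like uid=$USER,gid=www-data,umask=002 makes them arrive group-writable in the first place.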

    Read the article

  • What web-based tool allows a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below.) We have a number of groups of developers that change frequently, and a security policy that requires individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding each id_dsa.pub to the user's authorized_keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage user passwords was not designed for key-based auth, and hence is not easy to use for non-technical users. The developers log in from the WAN using ssh and can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud, and we have a single server available to host the master set of accounts.

    Obviously I could deploy LDAP or another centralized authentication system, but that seems a bit overblown when Webmin worked well for the simple case. It is easy to achieve synchronized users, groups, and passwords across a bunch of low-security development boxes using Webmin's clustered users and groups. However, looking at the currently installed Webmin, it is not as easy to manage authorized keys as it is to create user accounts and passwords. (It's possible, but not easy: some functionality is in the Usermin module, or would require some tedious steps.)

    Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to a user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.
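
    Whatever front-end is chosen, the server-side primitive it has to wrap is small; a sketch follows (the script name is illustrative, and the ssh-keygen validation is the important part when accepting pasted keys):

        #!/bin/sh
        # add-key.sh USER KEYFILE: append a public key after sanity-checking it
        user="$1"; keyfile="$2"
        ssh-keygen -l -f "$keyfile" || exit 1        # rejects malformed keys
        install -d -m 700 -o "$user" -g "$user" "/home/$user/.ssh"
        cat "$keyfile" >> "/home/$user/.ssh/authorized_keys"
        chown "$user:$user" "/home/$user/.ssh/authorized_keys"
        chmod 600 "/home/$user/.ssh/authorized_keys"

    A web tool then only needs a form that shells out to this (or its equivalent), which is also roughly what Usermin's SSH key module does under the hood.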

    Read the article

  • Automated software installation for MS Windows?

    - by Duncan Bayne
    I am currently setting up a Windows development environment (the whole Visual Studio 2010 stack plus plugins on top of Windows 7), which has got me wondering whether there's a Windows equivalent to what I do for dev-environment setup in Ubuntu. It takes literally hours to get a dev environment set up in Windows, involving a lot of manual intervention. On Ubuntu, I have two shell scripts: one I run as root, which configures the system using apt-get (amongst other things), and one I run as me, which configures my user account. Those scripts live in my private Subversion repository. Setting up a dev environment from scratch requires five commands:

        sudo apt-get install -y subversion
        svn co http://svn.XXXX.XXX/personal/
        cd personal
        sudo ./ubuntu_setup_root.sh
        ./ubuntu_setup_user.sh

    The only human intervention required is to pick a root password for MySQL. So it takes only a few minutes of human attention to go from a vanilla Ubuntu installation to a full development environment with the latest builds of everything, perfectly tailored down to shortcut keys and wallpaper. Is there an equivalent process for Windows? In an ideal world it'd be something trivially scriptable using C# Script or PowerShell, which could live in source control and make use of a repository of ISOs downloaded from MSDN...
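
    Most Windows installers do support unattended operation, so a batch equivalent is possible even without a package manager; a sketch, with the caveat that silent switches vary per installer and each vendor's documentation has to be checked:

        @echo off
        rem setup.cmd: unattended installs from a local share (paths are examples)
        msiexec /i \\fileserver\installers\tortoisesvn.msi /qn /norestart
        start /wait \\fileserver\installers\someapp\setup.exe /quiet

    msiexec's /qn and /norestart switches are standard for any MSI package; non-MSI bootstrappers (such as Visual Studio's own setup) each define their own flags, which is the main source of manual research in building such a script.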

    Read the article

  • MediaWiki migrated from Tiger to Snow Leopard throwing an exceptions

    - by Matt S
    I had an old laptop running Mac OS X 10.4 with MacPorts for web development: Apache 2, PHP 5.3.2, MySQL 5, etc. I got a new laptop running Mac OS X 10.6, installed MacPorts, and installed the same web development apps (Apache 2, PHP 5.3.2, MySQL 5, etc.), all the same versions as on my old laptop. A MediaWiki site (version 1.15) was copied over from my old system via the Migration Assistant. Having a fresh MySQL setup, I dumped my old database and imported it on the new system. When I try to browse to MediaWiki's "Special" pages, I get the following exception thrown:

        Invalid language code requested
        Backtrace:
        #0 /languages/Language.php(2539): Language::loadLocalisation(NULL)
        #1 /includes/MessageCache.php(846): Language::getFallbackFor(NULL)
        #2 /includes/MessageCache.php(821): MessageCache->processMessagesArray(Array, NULL)
        #3 /includes/GlobalFunctions.php(2901): MessageCache->loadMessagesFile('/Users/matt/Sit...', false)
        #4 /extensions/OpenID/OpenID.setup.php(181): wfLoadExtensionMessages('OpenID')
        #5 [internal function]: OpenIDLocalizedPageName(Array, 'en')
        #6 /includes/Hooks.php(117): call_user_func_array('OpenIDLocalized...', Array)
        #7 /languages/Language.php(1851): wfRunHooks('LanguageGetSpec...', Array)
        #8 /includes/SpecialPage.php(240): Language->getSpecialPageAliases()
        #9 /includes/SpecialPage.php(262): SpecialPage::initAliasList()
        #10 /includes/SpecialPage.php(406): SpecialPage::resolveAlias('UserLogin')
        #11 /includes/SpecialPage.php(507): SpecialPage::getPageByAlias('UserLogin')
        #12 /includes/Wiki.php(229): SpecialPage::executePath(Object(Title))
        #13 /includes/Wiki.php(59): MediaWiki->initializeSpecialCases(Object(Title), Object(OutputPage), Object(WebRequest))
        #14 /index.php(116): MediaWiki->initialize(Object(Title), NULL, Object(OutputPage), Object(User), Object(WebRequest))
        #15 {main}

    I tried to step through MediaWiki's code, but it's a mess; there are global variables everywhere. If I change the code slightly to get around the exception, the page comes up blank and there are no errors (implying there are multiple problems). Has anyone else got MediaWiki 1.15 working on OS X 10.6 with MacPorts? Is there anything in the migration from Tiger that could cause a problem? Any clues where to look for answers?
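
    The frames worth staring at are #4-#6: the OpenID extension's hook fires during language setup and requests messages before the language object exists (hence the NULL). A quick bisect, offered as a diagnostic sketch rather than a fix, is to comment the extension out of LocalSettings.php and retry the Special pages:

        # in LocalSettings.php, temporarily disable the extension (diagnostic only)
        # require_once("$IP/extensions/OpenID/OpenID.setup.php");

    If that clears the exception, updating the OpenID extension to the branch matching MediaWiki 1.15 (or loading it after the language settings) is the direction to dig; running php maintenance/update.php once is also worth doing after any migration that moves the database.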

    Read the article

  • Identifying Hard Drive as performance bottleneck for desktop machines

    - by Programming Hero
    I'm working in a development team where we all use laptops so we can work in multiple locations. These laptops are proving notoriously slow for development work, but at a glance they all seem to have the specification for a much faster experience: an Intel Core 2 Duo T7500 CPU and 2 GB of RAM. We all experience the biggest delays when the hard drives are being accessed, particularly when swap files are being thrashed. After a little profiling, a colleague discovered that our HDDs are seeing read/write speeds of about 10 MB/sec. This seems abnormally low, and we believe it is the cause of the problem. Sensibly (though somewhat annoyingly), our business won't blow money on faster drives just to see if it fixes the problem; we need to demonstrate that this is definitely the problem and that buying some solid-state drives will make it go away. I need some way of showing how 90% of the system resources aren't being used over the course of a day, and that whenever there is utilization, it's all HDD reads or writes. Are there any tools I could use to provide this information? Does it seem likely that the problem will be fixed by a faster drive? Should I be looking for alternatives?
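
    For the evidence-gathering itself, sampling disk counters over a working day is enough; which tool applies depends on the laptops' OS, which isn't stated, so a sketch for each case:

        # Linux: per-device utilization and throughput every 30 s (sysstat package)
        iostat -dxk 30

        rem Windows: log disk counters all day with the built-in typeperf
        typeperf "\PhysicalDisk(_Total)\% Disk Time" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 30 -o disk.csv

    Sustained ~10 MB/s with a long disk queue while the CPU sits idle is exactly the pattern that justifies the SSD purchase. The competing explanation to rule out first is that 2 GB of RAM is simply forcing constant swapping, since a RAM upgrade is cheaper than SSDs.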

    Read the article

  • MS licensing of multiple RDP sessions for non-MS products in Windows XP Pro

    - by vgv8
    Questions 1) and 2) were moved into a separate thread: "Which Windows remote connections bypass LSA? and what are the definitions of login vs. logon session?"

    3) Do I understand correctly that multiple remote RDP sessions are supported by Windows XP but require additional (or modified) licensing? Which one? Or is it always illegal to run multiple RDP sessions on Windows XP, even through non-MS commercial software?

    Update 1: I already understood my error; the main questions were about definitions (important for finding a common language with others) and the licensing questions were collateral, but those were already answered. I shall try to separate these questions, leaving the RDP licensing questions here and migrating the other questions into a separate thread.

    Update 2: In reply to "Trying to 'work around' licensing terms is pointless and wasteful of time": I never try "working around" anything, and I never ask anything like that; I am not a specialist in licensing. My clients/employers provide me with tools and licensing support; they have corporate lawyers and planning/accounting/purchase departments for these issues. The questions I ask are a matter of scalability and efficiency (saving my time and others') in my development work. For example, given that I need authentication against Windows AD, is it time-saving to use ADAM instead of deploying a full-fledged AD with DC, servers, and whatever else?

    In reply to "Nobody is forcing you to use Windows XP": I shall not rush into reinstalling the operating systems on all my development machines (at home and at client premises) just because a few guys have a lot of fun downvoting development-related questions on serverfault.com. If I did, I would make a joker of myself in the eyes of my colleagues.

    Update: I unmarked this question as answered, since the answer did not even address the question, at least not mine. Should I understand that Terminal Server PRO, which allows Windows XP and Windows Small Business Server 2003 to host multiple remote desktop sessions, is illegal?

    Related: my answer to the question "Has Windows XP support for multiple remote login sessions (RDP) at a time?"

    Read the article

  • How to point a subdomain to local server with dynamic IP

    - by jlego
    I see there are many questions related to this one; however, the answers given seem a little vague for a novice like me. I've got a dedicated LAMP stack running Fedora 16 locally on my home network. Everything works fine internally: I can access the Apache server from other machines on the network using the internal IP in a browser. I'm using the stack as a local file server as well as a development environment for websites. There are a couple of reasons why I would like the development sites hosted on this machine to be available publicly:

    1. I use a CMS that has paid add-ons which allow you to assign the paid license to a domain. I can't develop with paid add-ons on the closed dev server.
    2. I would occasionally like clients to be able to view a site in the late stages of development before it goes live.

    I have a domain (foo.com), and I want to point a subdomain (dev.foo.com) to the local server. I know this is best accomplished with a static IP; however, my IP from my ISP is dynamic, and I don't think there is any way to change that. From what I have read, services like ZoneEdit and DynDNS are supposed to be able to accomplish this, but I have tried both and found them very confusing. Also, the server is behind a router, and I have read that you need to set up DDNS in your router; many routers have presets for these services, and I've found that DynDNS is the only one my router seems to support.
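
    The standard recipe, sketched with placeholder credentials: run a dynamic-DNS updater (either the router's built-in DDNS client, which sees WAN IP changes first, or a daemon on the server) to keep a hostname current, point the subdomain at that hostname with a CNAME record at the registrar, and forward port 80 on the router to the server's internal IP. With the common ddclient package the config looks roughly like:

        # /etc/ddclient.conf (sketch; provider values are examples)
        protocol=dyndns2
        use=web, web=checkip.dyndns.com
        server=members.dyndns.org
        login=YOUR_USERNAME
        password='YOUR_PASSWORD'
        mydynamichost.dyndns.org

    Then, at the DNS provider for foo.com: dev.foo.com CNAME mydynamichost.dyndns.org. The router presets mentioned in the question do the same job as ddclient, just running on the router.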

    Read the article

  • Using proxy.pac to access Apache 2 with a hostname?

    - by leeand00
    Note that I do not have a DNS server on my network, which is why I am resorting to a proxy.pac file. I would like to be able to access my development Apache 2 server using a name instead of an IP, without setting up a full-blown DNS. I am aware of setting names in the C:\Windows\System32\drivers\etc\hosts file and in /etc/hosts; however, I cannot edit the hosts file on all of the devices that I am testing the site on. I've added a proxy.pac file to my Apache 2 server and pointed my browser's settings to it at http://192.168.2.221/proxyutils/proxy.pac, where 192.168.2.221 is the host's IP address. I set the above URL in Firefox in the following manner:

    1. From the menu bar, select "Edit - Preferences".
    2. In the resulting "Firefox Preferences" window, click the "Advanced" tab.
    3. Click the "Network" tab.
    4. Click the "Settings" button.
    5. Select the "Automatic proxy configuration URL:" radio button.
    6. Enter http://192.168.2.221/proxyutils/proxy.pac and press OK.

    The contents of the proxy.pac file on the Apache server:

        function FindProxyForURL(url, host) {
            if (dnsDomainIs(host, "thehostname"))
                return "PROXY 192.168.2.221:80";
            return "DIRECT";
        }

    In Firefox I then access the following URL:

        http://thehostname/wp-blog/

    Instead of the development version of the WordPress blog I am trying to access, I get a URL of http://thehostnamehttp/thehostname/wp-blog/ in my address bar and a 404 Not Found page in the browser window. Looking over proxy.pac, it seems like calling dnsDomainIs shouldn't work, considering I don't have a DNS set up on my network, but I've also tried just comparing the host argument with the string "hostname", and it yielded the same result, even after modifying the proxy.pac file and clicking the reload button near the proxy settings. This could also be a WordPress problem, since I've noticed that directories without WordPress seem to work perfectly normally. (See cross-post here.) Is there any way I can modify my configuration so that I can access the site using http://thehostname/wp-blog/ ?
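
    One detail worth checking, shown as a sketch: when a browser uses a proxy, it sends the full absolute URI ("GET http://thehostname/wp-blog/ ..."), and an Apache configured only as an origin server may mishandle that, which would explain the doubled hostname in the address bar. Two common remedies are naming the vhost so Apache recognizes requests addressed to it, and (since WordPress rewrites URLs) making sure the blog's siteurl matches the name being used:

        # vhost sketch on 192.168.2.221 (Apache 2.2 style)
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName thehostname
            DocumentRoot /var/www
        </VirtualHost>

    As for dnsDomainIs(): it does string matching on the hostname the browser was asked for, so it works without any DNS; the pac file is not the part that needs name resolution.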

    Read the article
