Search Results

Search found 27143 results on 1086 pages for 'include path'.


  • Proper procedure - sftp access to www folder - To be able to upload files

    - by Jay
    My www folder is owned by root:root. What should it be? My site works perfectly, but maybe I am doing something wrong. My nginx.conf says the user is 'nginx'. Should I be changing the www ownership and group to that, or something else? Mainly I want to be able to SFTP into the www folder using FileZilla, and preferably only allow access to the www folder. I want to be able to upload the website files, but I just don't know the proper procedure. I have tried changing owners and groups, but I get worried that some part of the stack will not like it. For example, do nginx and PHP play along? I thought about having an sftp group or even an sftp user, but I don't want to go down a path that should be avoided. What should I be doing with my setup?
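    For what it's worth, a minimal sketch of one common arrangement, assuming nginx really does run as the 'nginx' user (per the nginx.conf above) and using a hypothetical dedicated 'deploy' account for SFTP:

        # hypothetical deployment account whose home is the web root
        sudo useradd -M -d /var/www deploy
        # deploy owns the files; nginx's group can read and traverse them
        sudo chown -R deploy:nginx /var/www
        sudo chmod -R u=rwX,g=rX,o= /var/www

    Locking the account down to SFTP-only access is then usually handled separately in sshd_config (ForceCommand internal-sftp in a Match User block), which leaves nginx and PHP untouched.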


  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of Lab environment called Standard Environment. This allows you to set up a full Build-Deploy-Test (BDT) workflow that will build your application, deploy it to your target machine(s) and then run a set of tests on that server to verify the deployment. In TFS 2010 you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don’t even have to install a test agent on the machine; TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps, so I thought that it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic.

    High Level Steps

    1. Install and configure Visual Studio 2012 Test Controller on the target server
    2. Create Standard Environment
    3. Create Test Plan with Test Case
    4. Run Test Case
    5. Create Coded UI Test from Test Case
    6. Associate Coded UI Test with Test Case
    7. Create Build Definition using LabDefaultTemplate

    1. Install and Configure Visual Studio 2012 Test Controller on the Target Server

    First of all, note that you do not have to have the Test Controller running on the target server. It can be running on another server, as long as the Test Agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server, etc.), the test controller can be installed either on one of those machines or on a dedicated machine.

    To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool will launch automatically (if it doesn’t, you can start it from the Start menu). Here you will supply the credentials of the account running the test controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course). When you press Apply Settings, all the configuration will be done. You might get some warnings at the end that might or might not cause a problem later; be sure to read them carefully.

    For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio.

    2. Create Standard Environment

    Now you need to create a Lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine, we will create a Standard Environment. Open MTM and go to Lab Center, then click New to create a new environment. Enter a name for the environment; since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case). On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com) and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server. You also need to supply an account that is a local administrator on the target server; this is needed in order to automatically install a test agent on the machine later. On the next page you can add tags to the machine; this is not needed in this scenario, so go to the next page. Here you will specify which test controller to use and that you want to run UI tests on this environment. This will result in a Test Agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available in the drop-down list (TargetServer in this sample); if you can’t see it, you might have selected a different TFS project collection. Press Next twice and then Verify to verify all the settings, then press Finish. This will now create and prepare the environment, which means that it will remotely install a test agent on the machine. As part of this installation, the remote server will be restarted.

    3-5. Create Test Plan, Run Test Case, Create Coded UI Test

    I will not cover steps 3-5 here; there is plenty of information on how you create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application, and it contains, among other things, a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests.

    6. Associate Coded UI Test with Test Case

    OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that your Coded UI Test already is automated, but the meaning of the term here is that you link your Coded UI Test to an existing Test Case, thereby making the Test Case automated. And the test case should be part of the test suite that we will run during the BDT. Open the solution that contains the Coded UI Test method, then open the Test Case work item that you want to automate. Go to the Associated Automation tab and click on the “…” button, select the Coded UI Test that corresponds to the test case, press OK and then save the test case. For more information about associating an automated test with a test case, see How to: Associate an Automated Test with a Test Case.

    7. Create Build Definition using LabDefaultTemplate

    Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have fewer problems with permissions and firewalls than if you were to remote deploy your solution. So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application.

    Go to the Builds hub in Team Explorer and select New Build Definition. Give the build definition a meaningful name; here I called it MyApplication.Deploy. Set the trigger to Manual and define a workspace for the build definition. Note that a BDT build doesn’t really need a workspace, since all it does is launch another build definition and deploy the output of that build, but TFS doesn’t allow you to save a build definition without adding at least one mapping. On Build Defaults, select the build controller. Since this build actually won’t produce any output, you can select the “This build does not copy output files to a drop folder” option. On the Process tab, select the LabDefaultTemplate.11.xaml; this is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the “…” button on the Lab Process Settings property.

    First, select the environment that you created before. Then select which build you want to deploy and test. The “Select an existing build” option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it will basically just run through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition.

    On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution (a concrete sketch follows at the end of this post). If you for example have databases, you can use sqlpackage.exe to deploy the database; if you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment and that you can run both locally and on the target server. Then you would just execute the deployment batch file here in one single step.

    The workflow defines some variables that are useful when running the deployments. These variables are:

    $(BuildLocation) - the full path to where your build files are located
    $(InternalComputerName_<VM Name>) - the computer name for a virtual machine in an SCVMM environment
    $(ComputerName_<VM Name>) - the fully qualified domain name of the virtual machine

    As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission in this drop location. You can find more information on Building your Deployment Scripts.

    On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan; in there I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests.

    We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary will show the deployment steps and the test run results.
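    To make the deployment step on the Deploy tab concrete, the script entry for this example might look like the single line below. This is a sketch rather than the exact value from the build definition; /Y is the switch the Web Deploy-generated .deploy.cmd files use to perform the deployment rather than a trial run:

        $(BuildLocation)\MyApplication.deploy.cmd /Y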


  • SQL Azure Pricing

    - by kaleidoscope
    Microsoft’s pricing for SQL Server in the cloud, SQL Azure, has been announced:

    $9.99 per month for 0-1 GB
    $99.99 per month for up to 10 GB

    There’s currently a 10 GB maximum size cap for SQL Azure. For larger data storage needs, you’ll need to break the databases into smaller sizes.

    Scaling SQL Azure Applications

    If you think you’re going to need 100 GB in the near term, it probably makes sense to break your application up into multiple separate databases from the get-go (10 x $9.99 = $99.99 anyway) and just make really sure none of the individual databases exceeds 10 GB.

    Beep Beep, Back That Database Up

    The bandwidth costs for SQL Azure are $0.15 per GB of outbound bandwidth. Assuming that you don’t compress the data before you pull it out of the cloud, that means daily backups of a 1 GB database will add another $4.50 per month, and a 10 GB database will add another $45/month. Daily backups will cost about half of what your monthly service charges cost.

    It’s not completely clear from the press release, but if Microsoft follows Amazon’s pricing model, bandwidth between the Microsoft cloud services will not incur a cost. That would mean it might make sense to spin up a Windows Azure computing application for $0.12 per hour, use that application to compress your SQL Azure database, and then send the compressed data off to Azure storage for backup. That would eliminate the data in/out costs and minimize the Azure storage costs ($0.15/GB). Database administrators would back up their SQL Azure data to Azure Storage, keep a history of backups there, and restore them to SQL Azure faster when needed. Of course, there’s no native backup support in SQL Azure, and it’s not clear whether Windows Azure will include tools like SQL Server Integration Services.

    More details can be found at http://www.brentozar.com/archive/2009/07/sql-azure-pricing-10-for-1gb-100-for-10gb/

    Anish, S
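    Spelling out the backup arithmetic (assuming one full, uncompressed export per day and a 30-day month):

        1 GB/day  x 30 days x $0.15/GB = $4.50/month  (~45% of the $9.99 service charge)
        10 GB/day x 30 days x $0.15/GB = $45.00/month (~45% of the $99.99 service charge)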


  • A New Year’s Celebration in June

    - by Kristin Rose
    Happy Oracle New Year, everyone! Last week marked the official start of FY13, and we could not be more pleased with all that lies ahead this quarter and all that we accomplished in the last... especially our newly updated Oracle PartnerNetwork (OPN) Solutions Catalog. If you thought it was great before, just wait until you see it now. We are ringing in our New Year right by fully equipping partners with the necessary tools they need to have another successful year. The Solutions Catalog will help draw attention to your partner services and offerings, highlighting your expertise, and the site itself is centralized, customer-friendly, and easy to navigate. Some of the exciting advancements include:

    A streamlined search interface
    A robust lead capture tool that requests the contact information of potential customers
    A professional display of customer recommendations to showcase your skill set
    A partner dashboard with enhanced profile creation and an improved publication process

    Most exciting of all, updating your profile is easier than ever with the updated partner dashboard. Keeping your partner profile up to date will help ensure customers are looking at the correct information about your company and can easily stay on top of any new developments or Specializations you receive. So don’t cut yourself short; be sure to update your profile today if you haven’t already done so.

    For more information on the exciting upgrades available to you, visit the ‘Resources for Partners’ page or watch Takane Aizeki, Principal Portal Manager at Oracle, walk through the upgraded Solutions Catalog and the different ways to showcase your value as an Oracle solution provider.

    Cheers,
    Lydia Smyers
    Group Vice President
    WWA&C and Communications


  • Database Connectivity Test with UDL File

    - by Ben Griswold
    I bounced around between projects a lot last week. What each project had in common was the need to validate at least one SQL connection. Whether you have SQL tools like SSMS installed or not, this is a very easy task if you are aware of UDL (Universal Data Link) files.

    Create a new file and name it anything as long as it has the .udl extension. Open the file and choose a provider, then click Next >> or navigate to the Connection tab to provide connection information. Once you provide server and login credentials, the database list will populate. At this point you know the connection is valid, but go ahead and click the Test Connection button anyway. On the final tab, you can provide extra connection information like Application Name, which can come in handy. The All tab is beneficial if you want to build a valid connection string to include in your own applications. If you save the file and then open it in Notepad, you’ll find said connection string:

    Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=(local);Application Name=TestApp

    I hope this tip helps save you some time. How do you test if you don’t have SSMS installed?
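    If you’d rather script the check than click through the UDL dialog, a few lines of PowerShell against System.Data.SqlClient do the same validation (a minimal sketch; substitute your own connection string):

        $conn = New-Object System.Data.SqlClient.SqlConnection
        $conn.ConnectionString = "Integrated Security=SSPI;Initial Catalog=master;Data Source=(local)"
        $conn.Open()    # throws if the server or credentials are wrong
        $conn.State     # prints 'Open' on success
        $conn.Close()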


  • Mandatory Profiles on a Server 2003 TS Box

    - by Chloe
    I have a Windows Server 2003 box which will be acting as a terminal server. It will actually be running Citrix, but I don't believe that to be relevant here. There has been a request for every user to use a single mandatory profile. I've used mandatory profiles before, but there have generally been different profiles for different users, so I've always used the "Terminal Services Profile" tab to good effect. What I'd like this time is a single setting, such as a Group Policy or similar, that simply forces every non-domain-admin user logging on to the box into using the mandatory profile. We'll be using Folder Redirection to take care of everything else. I'm aware of the following GPO:

    Computer Policy\Computer Configuration\Administrative Templates\Windows Components\Terminal Services
    Set path for TS Roaming Profiles

    But, as that's a computer policy, will it not apply to all users, including administrators? If so, is it possible to exclude admins somehow?


  • I can't uninstall Git

    - by Tom
    I was researching Git, so I downloaded the Windows version to test it out on a repository on GitHub. After about 30 minutes I couldn't work out how to use it, so I decided I probably wouldn't need a distributed repository, as our projects aren't that big, and went back to what I know: SVN. I (thought I) uninstalled all the Git-related stuff I'd put on my PC, but have now got an irritating problem where if I open any folder I get an error message saying:

    Hello [ERROR] Could not find git path

    As you can imagine, this is a real pain. Does anyone have any ideas on how to fix it?


  • Announcement: DTrace for Oracle Linux General Availability

    - by Zeynep Koch
    Today we are announcing the general availability of DTrace for Oracle Linux. It is available to download from ULN for Oracle Linux Support customers. DTrace is a comprehensive dynamic tracing framework that was initially developed for the Oracle Solaris operating system and is now available to Oracle Linux customers. DTrace is designed to give operational insights that allow users to tune and troubleshoot the operating system. DTrace provides Oracle Linux developers with a tool to analyze performance and increase observability into the systems they own to see how they work. DTrace enables higher-quality application development, reduced downtime, lower cost, and greater utilization of existing resources.

    Key benefits and features of DTrace on Oracle Linux include:

    • Designed to work on finding performance bottlenecks
    • Dynamically enables the kernel with a number of probe points, improving ability to service software
    • Enables maximum resource utilization and application performance
    • Fast and easy to use, even on complex systems with multiple layers of software

    If you already have Oracle Linux support, you can download DTrace from the ULN channel. We have a dedicated Forum for DTrace on Oracle Linux to discuss your experience and questions.
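    As a taste of what dynamic tracing looks like in practice (an illustrative one-liner, not taken from the announcement), the following counts system calls per process until you press Ctrl-C:

        dtrace -n 'syscall:::entry { @[execname] = count(); }'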


  • Building gstreamer_ndk_bundle problems

    - by Cipi
    I'm trying to build gstreamer_ndk_bundle under Ubuntu 12.04 and I'm failing miserably! I have installed all "glib-dev" packages (packages that have both glib and dev in their name), and I have also tried to compile/install glib 2.33.1 (the latest) from source, but I always get this error:

    /home/marko/gstreamer_ndk_bundle/jni/../glib/gobject/gmarshal.c:149: undefined reference to `g_value_get_schar'
    collect2: ld returned 1 exit status
    make: *** [/home/marko/gstreamer_ndk_bundle/obj/local/armeabi/libgobject-2.0.so] Error 1

    This means that the glib source being used doesn't have the definition of g_value_get_schar, and since that function was introduced in glib somewhere after version 2.30.0, my guess is that I am not using the proper glib! I tried to force gstreamer_ndk_bundle to build with the sources from /home/marko/glib-2.33.1/, which I compiled/installed, by exporting these env vars:

    GLIB_GENMARSHAL=/home/marko/glib-2.33.1/gobject/glib-genmarshal
    GLIB_COMPILE_SCHEMAS=/home/marko/glib-2.33.1/gio/glib-compile-schemas

    I also changed gmarshal.h so it includes gmarshal.h from the installed glib folder:

    #ifndef _marko_glib_loaded
    #define _marko_glib_loaded
    #include "/home/marko/glib-2.33.1/gobject/gmarshal.h"
    #endif

    But I failed in both cases. How can I know which glib is used while compiling gstreamer, and install the proper one? How can I force gstreamer_ndk_bundle to use the glib sources from the folder I have un-tarred/configured/installed and not the system ones, or whatever ones it uses? I read somewhere that I need the gstreamer-devel package if I keep getting this error while compiling. Where can I find that package? Can't Google it out... Has anyone EVER built gstreamer_ndk_bundle and lived to tell the tale?


  • Troubleshooting VC++ DLL in VB.Net

    - by Jolyon
    I'm trying to make a solution in Visual Studio that consists of a VC++ DLL and a VB.Net application. To figure this out, I created a VC++ Class Library project with the following code (I removed all the junk the wizard creates):

    mathfuncs.cpp:

    #include "MathFuncs.h"

    namespace MathFuncs
    {
        double MyMathFuncs::Add(double a, double b)
        {
            return a + b;
        }
    }

    mathfuncs.h:

    using namespace System;

    namespace MathFuncs
    {
        public ref class MyMathFuncs
        {
        public:
            static double Add(double a, double b);
        };
    }

    This compiles quite happily. I can then add a VC++ console project to the solution, add a reference to the original project for this new project, and call it as follows:

    test.cpp:

    using namespace System;

    int main(array<System::String ^> ^args)
    {
        double a = 7.4;
        int b = 99;
        Console::WriteLine("a + b = {0}", MathFuncs::MyMathFuncs::Add(a, b));
        return 0;
    }

    This works just fine and will build to test.exe and mathsfuncs.dll. However, I want to use a VB.Net project to call the DLL. To do this, I add a VB.Net project to the solution, make it the startup project, and add a reference to the original project. Then, I attempt to use it as follows:

    MsgBox(MathFuncs.MyMathFuncs.Add(1, 2))

    However, when I run this code, it gives me an error: "Could not load file or assembly 'MathFuncsAssembly, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format." Do I need to expose the method somehow? I'm using Visual Studio 2008 Professional.
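    A hedged aside for readers hitting the same wall: "An attempt was made to load a program with an incorrect format" is the classic BadImageFormatException wording for a 32-bit/64-bit mismatch between the calling process and the native/mixed-mode DLL. One thing worth checking (an assumption about this particular setup, not a confirmed diagnosis) is that the VB.Net project targets the same platform the C++/CLI project was built for, e.g. in the VB project file:

        <PlatformTarget>x86</PlatformTarget>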


  • Windows7 NFS with linux server

    - by Vitaly
    Hi. I have an Ubuntu server and want to access its web folder (/var/www). What I've done: installed nfs-kernel-server, nfs-common and portmap (as in the FAQ), then set up /etc/exports:

    /var/www 192.168.1.0/255.255.255.0(rw,no_root_squash,async,subtree_check)

    Then:

    sudo exportfs -ra
    sudo /etc/init.d/nfs-kernel-server restart

    I checked that it all works on the same machine:

    sudo mount 192.168.1.101:/var/www /mnt/test

    Then I accessed /mnt/test and saw that all the data is present and all is OK. Next, I tried to connect this folder to Windows 7 using the NFS client. First, I checked that the Linux path was exported successfully:

    showmount -e 192.168.1.101
    /var/www 192.168.1.0/255.255.255.0

    All OK, so on to mounting:

    mount -o anon 192.168.1.101:/var/www z:

    The console said that all succeeded... but I can't access drive Z (the drive exists in the system and points to the right folder). When I try to access drive Z, Explorer just hangs and then says that the timeout expired. Help me please.


  • setting up eclim to support php

    - by tipu
    I have the PDT plugin installed with my eclim, using:

    DISPLAY=:1 ./eclipse/eclipse -nosplash -consolelog -debug \
      -application org.eclipse.equinox.p2.director \
      -repository http://download.eclipse.org/releases/helios \
      -installIU org.eclipse.php.feature.group

    I compiled the thing using the -D args for PHP:

    ant -Declipse.home=/home/tipu/downloads/eclipse -Dplugins=php

    But creating a project gives me:

    java.lang.IllegalArgumentException: Unable to find nature for alias 'php'. Supported aliases include: javascript=org.eclipse.wst.jsdt.core.jsNature, java=org.eclipse.jdt.core.javanature
    while executing command (port: 9091): -editor vim -command project_create -f "/home/tipu/phpproj2/" -n php

    Thoughts on how to fix this?


  • Oracle Linux Pavilion is Back for Oracle OpenWorld

    - by Oracle OpenWorld Blog Team
    By Zeynep Koch

    Back by popular demand, Oracle will again host the Oracle Linux Pavilion at Oracle OpenWorld from October 1-3. The pavilion will be located in the Exhibition Hall at Moscone South, Booth 1033, next to the Oracle DEMOgrounds and Oracle Linux demopods. At the pavilion a select group of ISVs, IHVs, and SIs will showcase their products that have been Oracle Linux- and/or Oracle VM-certified. These certified products enable customer applications to run faster, thereby saving money.

    Partners exhibiting their solutions in the Oracle Linux Pavilion include:

    BeyondTrust: context-aware security intelligence for dynamic IT infrastructures such as cloud, mobile, and virtual technologies
    Centrify: control, secure, and audit access to cross-platform systems, mobile devices, and applications
    Data Intensity: cloud services and application management
    Fujitsu: technology platforms, private cloud, services, ubiquitous and device solutions
    HP: converged cloud, converged infrastructure, application transformation, and information optimization
    LSI: intelligent solid-state storage solutions for breakthrough database acceleration
    Mellanox: InfiniBand and Ethernet end-to-end server and storage interconnect solutions and services for data centers
    Micro Focus: mainframe solutions, application modernization and development tools, software quality tools
    NetApp: storage and data management
    QLogic: high performance networking
    Teleran: BI and data warehouse management solutions for Oracle Exadata Database Machine and Oracle Database

    Be sure to pick up your free Oracle Linux and Oracle VM DVD Kit if you visit one of these partners. We look forward to seeing you at the pavilion.


  • Creating cookieless application on development machine with asp.net

    - by zaladane
    I am thinking about setting up a new domain to host static content on my website and have it cookieless, just like Stack Overflow with their static domain. Before going ahead and buying the domain and setting it up, I wanted to test it on my development machine first under localhost (I should mention that I am planning on having IIS running on my new domain for the static files). I therefore created a new application under IIS and disabled session state and forms authentication. When my main application needs resources like css, images and js, I use the path to the "static" application where they are hosted. The problem is that when I look at the request and the response for the requested files, they still have the session_id cookie defined as well as the asp.net authentication cookie. Is it at all possible to accomplish what I am trying to do on a development machine, or do I have to just go ahead and purchase the new domain, which hopefully will make things right? I tried to read about cookieless domains but can't figure out what I might be missing.
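    For reference, "disabled session state and forms authentication" in the static application would typically look something like this in its web.config (a minimal sketch, assuming a stock ASP.NET setup):

        <configuration>
          <system.web>
            <sessionState mode="Off" />
            <authentication mode="None" />
          </system.web>
        </configuration>

    Note that browsers scope cookies to hosts and domains, not to IIS applications, so anything the main app sets on localhost will also accompany requests to a second application on the same host; that is why a genuinely separate (sub)domain is the usual fix.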


  • How to change controller numbering/enumeration in Solaris 10?

    - by Jim
    After moving a Solaris 10 server to a new machine, the rpool disk is now c1t0d0. We have some third party applications hard coded for c0t0d0. How can I change the controller enumeration on this machine? There is no longer a c0. I've tried rebuilding the /etc/path_to_inst, but the instance numbers don't seem to match up with the controller numbers. Also, it's not clear if i86pc platforms use this file. I've tried devfsadm -C to clear the dangling links, but I'm not sure how to cause devfsadm to start numbering from 0 again (or force certain devices in the tree to a specific controller number). Next I am going to try to create the symlinks manually in /dev/dsk and rdsk to point to the correct /devices. I feel like I am going way off path here. Any suggestions? Thanks
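    For the manual symlink route mentioned at the end, the idea is to point a c0 name at the same physical device node the c1 name resolves to (a sketch only; the /devices path below is hypothetical and will differ on your machine):

        # see where the existing link points
        ls -l /dev/dsk/c1t0d0s0
        # lrwxrwxrwx ... c1t0d0s0 -> ../../devices/pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0:a
        # create the legacy-named alias for the third-party applications
        ln -s ../../devices/pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0:a /dev/dsk/c0t0d0s0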


  • OCS 2007 Access Edge Server Certificate issue

    - by BWCA
    We are currently building additional OCS 2007 R2 Access Edge Servers to handle additional capacity, and we ran into an SSL certificate issue when we were setting up the servers.

    Before running the steps to Deploy an Edge Server, we successfully imported our SSL certificate, which we use for external access, on all of the new servers. After successfully completing the first three Deploy Edge Server steps on one of the new servers, we started working on Step 4: Configure Certificates for the Edge Server. After selecting Assign an existing certificate from the common tasks list and clicking Next to select a certificate, there were no certificates listed.

    The first thing we did was to use the Certificates MMC snap-in to review the SSL certificate information. We noticed in the General tab that "Windows does not have enough information to verify this certificate" and in the Certification Path tab that the issuer of the certificate could not be found, for the very SSL certificate that we had imported successfully earlier.

    While troubleshooting, we learned that we could not access the URL for the certificate's CRL to download the CRL file, due to restrictive firewall rules between the new OCS 2007 R2 Access Edge Servers and the Internet. After modifying the firewall rules, we were able to download the CRL file, and when we reran Step 4 to assign an existing certificate, the certificate was listed.
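    A quick way to check this kind of revocation-chain reachability from the server itself (a hedged tip; certutil ships with Windows) is:

        certutil -verify -urlfetch exported-cert.cer

    which walks the certificate chain and reports whether each CRL and AIA URL can actually be fetched.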


  • Laptop wakes from sleep, once, due to audio controller (Windows 7)

    - by stijn
    The laptop is a recent Dell XPS 15z, and the problem is as follows (reproducible in about 90% of tries):

    put the laptop to sleep using either Start > Sleep or closing the lid
    the laptop goes to sleep after about 5 seconds, but instantly wakes again showing a black screen (touching the keyboard or moving the mouse shows the login screen one normally gets after a wake)
    log in again, put the laptop to sleep
    the laptop stays in sleep mode

    The output of powercfg -lastwake after the first instant wake shows the audio controller is responsible. Why would that be, why only on the first try, and how do I fix this?

    Wake History Count - 1
    Wake History [0]
      Wake Source Count - 1
      Wake Source [0]
        Type: Device
        Instance Path: PCI\VEN_8086&DEV_1C20&SUBSYS_04461028&REV_05\3&11583659&0&D8
        Friendly Name:
        Description: High Definition Audio Controller
        Manufacturer: Microsoft
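    If the audio controller simply has wake permission it shouldn't have, a common hedged workaround is to revoke it from an elevated prompt (the device name must match powercfg's own listing):

        powercfg -devicequery wake_armed
        powercfg -devicedisablewake "High Definition Audio Controller"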


  • With Monit, how do I restart a process when a directory timestamp check fails?

    - by Alterscape
    In my /etc/monit/monitrc I have the following lines:

    check process foo_server with pidfile /var/run/bwam_server.pid
        start program = "/Users/foo/foo_server.sh start"
        stop program = "/Users/foo/foo_server.sh stop"

    check directory foo_data path "/Users/foo/Library/Application Support/foo_server/data"
        if timestamp > 1 minute then alert
        #if timestamp > 1 minute then restart foo_server

    I know I shouldn't have some of this stuff in my home directory, but this aside: if I uncomment the last line, Monit tells me syntax error on foo_server -- but I am, as far as I understand, correctly defining the process -- how else do I reference it?
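    One pattern that sidesteps the grammar issue (a hedged sketch; check it against your Monit version): have the directory check shell out to Monit itself with an exec action, since restart acts on the service being checked and does not take another service's name as an argument:

        check directory foo_data path "/Users/foo/Library/Application Support/foo_server/data"
            if timestamp > 1 minute then exec "/usr/bin/monit restart foo_server"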


  • Software Architecture and Design vs Psychology of HCI class

    - by Joey Green
    I have two classes to choose from, and I want to get an opinion from more experienced game devs on which might be better for someone who wants to be an indie game dev. The first is a Software Architecture and Design course, and the second is a course titled Psychology of HCI. I've previously taken a Software Design course that focused only on design patterns, and I've also taken an Introduction to HCI course.

    Software Architecture and Design description: Topics include software architectures, methodologies, model representations, component-based design, patterns, frameworks, CASE-based designs, and case studies.

    Psychology of HCI description: Exploration of psychological factors that interact with computer interface usability. Interface design techniques and usability evaluation methods are emphasized.

    I know I would find both interesting, but my concern is really which one might be easier to pick up on my own. I know HCI is relevant to game dev, but am unsure whether the topics in the Software Architecture class would be more for big software projects that go beyond the scope of games. Also, I'm not able to take both because they overlap.


  • Can prefixing a dash reduce the search engine rating?

    - by LeoMaheo
    Hi anyone! If I prefix a dash to GUIDs in my URLs on my Web site, in this manner: example.com/some/folders/-35x2ne5r579n32/page-name Will my SEO rating be affected? Background: On my site, people can look up pages by GUID, and by path. For example, both example.com/forum/-3v32nirn32/eat-animals-without-friends and example.com/forum/eat-animals-without-friends could map to the same page. To indicate that 3v32nirn32 is a GUID and not a page name, I thought I could prefix a - and then my webapp would understand. But I wouldn't want my search engine rating to drop. And prefixing a dash in this manner seems weird, so perhaps Googlebot lowers my rating. Hence my question: Do you know if my search engine rating might drop? (Today or in the future?) (I could also e.g. prefix id-, so the URL becomes example.com/forum/id-3v32nirn32, but then people cannot create pages that start with the word "id".) (I think I don't want URLs like this one: example.com/id/some-guid.) Kind regards, Magnus


  • throw new exception- C#

    - by Jalpesh P. Vadgama
    This post is in response to a comment on my older post, "throw vs. throw(ex) best practice and difference - C#", suggesting that I should also cover throw new Exception.

    What's wrong with throw new Exception: throw new Exception is even worse. It creates a new exception and erases all of the earlier exception data, including the stack trace. Please go through the following code. It's the same as in the earlier post; the only difference is the throw new Exception.

    using System;

    namespace Oops
    {
        class Program
        {
            static void Main(string[] args)
            {
                try
                {
                    DevideByZero(10);
                }
                catch (Exception exception)
                {
                    throw new Exception(string.Format(
                        "Brand new Exception-Old Message:{0}", exception.Message));
                }
            }

            public static void DevideByZero(int i)
            {
                int j = 0;
                int k = i/j;
                Console.WriteLine(k);
            }
        }
    }

    Now once you run this example, you will see exactly the behavior described above: the original DivideByZeroException and its stack trace are gone, replaced by the brand new exception thrown in the catch block. Hope you like it. Stay tuned for more..
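    For contrast, a minimal sketch of the safer patterns (my illustration, using the same program as above): a bare throw re-throws the original exception with its stack trace intact, and if you need to add context, passing the original as an InnerException preserves its data too:

        catch (Exception exception)
        {
            // re-throw as-is: the original stack trace is preserved
            throw;

            // or, to add context without losing anything:
            // throw new Exception("Additional context", exception);
        }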


  • Ubuntu issues when moving hard disk to new system

    - by Tim
    I'm working on a legacy project with a small single-board computer running Ubuntu 10.04 on a compact flash card. I need to be able to save away a working image (via dd) and copy said image to other compact flash cards for use in other single-board computers (with identical hardware). I'm able to copy the image to other flash cards and boot up on other systems, no problem. But I'm seeing strange behavior. For instance, I can't use sudo on the new system (“sudo: must be setuid root”). I've gone down the path of trying to fix this, but have run into a slew of other issues. The general question is: what do I need to be aware of when moving a hard disk containing Ubuntu (in my case a compact flash card) to another computer? I was hoping it would be seamless to Ubuntu since it's moving to a system with identical hardware. Is there something that needs to be done to make it "portable"?
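    For the specific symptom quoted, "sudo: must be setuid root" means the binary lost its setuid bit somewhere along the way, which usually points at the files having been copied rather than the raw block device (dd preserves permissions bit-for-bit). A hedged recovery sketch, run from a root shell or recovery console on the affected system:

        chown root:root /usr/bin/sudo
        chmod 4755 /usr/bin/sudo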


  • Issue with multiplayer interpolation

    - by Ben Cracknell
    In a fast-paced multiplayer game I'm working on, there is an issue with the interpolation algorithm. You can see it clearly in the image below.

    Cyan: Local position when a packet is received
    Red: Position received from packet (goal)
    Blue: Line from local position to goal when packet is received
    Black: Local position every frame

    As you can see, the local position seems to oscillate around the goals instead of moving between them smoothly. Here is the code:

    // local transform position when the last packet arrived. Will lerp from here to the goal
    private Vector3 positionAtLastPacket;

    // location received from last packet
    private Vector3 goal;

    // time since the last packet arrived
    private float currentTime;

    // estimated time to reach goal (also the expected time of the next packet)
    private float timeToReachGoal;

    private void PacketReceived(Vector3 position, float timeBetweenPackets)
    {
        positionAtLastPacket = transform.position;
        goal = position;
        timeToReachGoal = timeBetweenPackets;
        currentTime = 0;

        Debug.DrawRay(transform.position, Vector3.up, Color.cyan, 5);  // current local position
        Debug.DrawLine(transform.position, goal, Color.blue, 5);       // path to goal
        Debug.DrawRay(goal, Vector3.up, Color.red, 5);                 // received goal position
    }

    private void FrameUpdate()
    {
        currentTime += Time.deltaTime;
        float delta = currentTime / timeToReachGoal;
        transform.position = FreeLerp(positionAtLastPacket, goal, currentTime / timeToReachGoal); // current local position
        Debug.DrawRay(transform.position, Vector3.up * 0.5f, Color.black, 5);
    }

    /// <summary>
    /// Lerp without being locked to 0-1
    /// </summary>
    Vector3 FreeLerp(Vector3 from, Vector3 to, float t)
    {
        return from + (to - from) * t;
    }

    Any idea about what's going on?
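    One detail worth poking at (my observation, not a confirmed diagnosis): because FreeLerp is deliberately not clamped, any frame where currentTime exceeds timeToReachGoal extrapolates past the goal, and the next packet then starts the lerp from that overshot position. A minimal sketch of the clamped variant, for comparison:

        private void FrameUpdate()
        {
            currentTime += Time.deltaTime;
            // clamp so a late packet leaves the object waiting at the goal
            // instead of sailing past it
            float t = Mathf.Clamp01(currentTime / timeToReachGoal);
            transform.position = FreeLerp(positionAtLastPacket, goal, t);
        }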


  • IIS 7 - Provisioning portal

    - by Doug
    I want to set up our production IIS environments with a provisioning portal, to ensure that deployment staff always set up sites in a uniform configuration and don't actually have remote access to the servers directly. What is the best 'simple' provisioning tool for such a purpose? Do people write their own using something like PowerShell remoting? I don't want to install a tool like HELM or similar, as it feels like it creates unnecessary bloat on top of a production environment. Features should include (see the sketch below):

    create a new website and app pool combo
    restart, start and stop application pools
    change bindings on websites
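    To make the scale of the problem concrete, here is a minimal sketch of the kind of PowerShell such a portal could run via remoting, using the WebAdministration module that ships with IIS 7.5 and later (all names and paths are hypothetical placeholders):

        Import-Module WebAdministration

        # create a website and app pool combo in the standard configuration
        New-WebAppPool -Name "CustomerSite"
        New-Website -Name "CustomerSite" -Port 80 -HostHeader "customer.example.com" -PhysicalPath "D:\Sites\CustomerSite" -ApplicationPool "CustomerSite"

        # app pool operations exposed as portal buttons
        Restart-WebAppPool -Name "CustomerSite"
        Stop-WebAppPool -Name "CustomerSite"
        Start-WebAppPool -Name "CustomerSite"

        # change a binding without direct server access
        New-WebBinding -Name "CustomerSite" -Protocol http -Port 8080 -HostHeader "staging.example.com"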


  • Apache, Django with mod_wsgi, and large request buffering

    - by Mukul
    In my setup of Apache 2.2 MPM worker and Django 1.3 with mod_wsgi 2.8, I need to support large POST request payloads. The problem is that when there are many such simultaneous requests, Apache uses up all the memory in the system and then crashes. It seems that Apache is buffering the requests completely in memory before executing the WSGI handler and passing it the request. Is there any way to control request buffering in Apache?

    The log shows the following error whenever the crash happens:

    [Wed Jun 29 18:35:27 2011] [error] cgid daemon process died, restarting

    Here's my virtual host's configuration:

    <VirtualHost *:8080>
        ServerName example.com
        ErrorLog /var/log/apache2/error.log
        WSGIScriptAlias / <path to django.wsgi>
        WSGIPassAuthorization on
        WSGIDaemonProcess example.com
        WSGIProcessGroup example.com
        XSendFileAllowAbove on
        XSendFile on
    </VirtualHost>
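    A related knob worth knowing about (a hedged aside: it rejects oversized bodies outright rather than changing how accepted ones are buffered) is Apache's core LimitRequestBody directive, which at least bounds how much any single request can consume:

        <VirtualHost *:8080>
            # refuse request bodies larger than 100 MB (value in bytes)
            LimitRequestBody 104857600
            ...
        </VirtualHost>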

