Search Results

Search found 26692 results on 1068 pages for 'virtual private cloud'.


  • Is Private Bytes >> Working Set?

    - by Jacob
    OK, this may sound weird, but here goes. There are two computers, A (Pentium D) and B (Quad Core), with almost the same amount of RAM, both running Windows XP. If I run the same code on both computers, the allocated private bytes on A never go down, eventually resulting in a crash. On B the private bytes appear to be deallocated constantly and everything looks fine. On both computers, the working set is allocated and deallocated similarly. Could this be an issue with manifests or system DLLs? I'm clueless. Note: I observed the memory usage with Process Explorer. Question: during execution (where we have several allocations and deallocations), is it normal for the private bytes (1.5 GB) to be much bigger than the working set (70 MB)?
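
    For anyone who wants to watch both counters from inside the process rather than in Process Explorer, here is a minimal Win32 sketch (not part of the original question; assumes linking against psapi.lib):

      #include <windows.h>
      #include <psapi.h>
      #include <stdio.h>

      int main(void)
      {
          PROCESS_MEMORY_COUNTERS_EX pmc;
          if (GetProcessMemoryInfo(GetCurrentProcess(),
                                   (PROCESS_MEMORY_COUNTERS *)&pmc, sizeof(pmc))) {
              /* PrivateUsage is "private bytes"; WorkingSetSize is the working set */
              printf("private bytes: %llu\n", (unsigned long long)pmc.PrivateUsage);
              printf("working set:   %llu\n", (unsigned long long)pmc.WorkingSetSize);
          }
          return 0;
      }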

    Read the article

  • Offer access to a private page without login

    - by dccarmo
    So I've been struggling with a nice and easy way to allow users to access a private page without asking them to fill out a login/password form. What I'm thinking about right now is, for each private page, generating a unique ID (using PHP's uniqid function) and then sending the URI to the user. He would access his private page as "www.mywebsite.com/private_page/13ffa2c4a". I think it's relatively safe and user-friendly, without asking for too much information. I thought maybe when the user accesses this page it could ask for his e-mail just to be sure, but best would be to ask for nothing at all. Is this really safe? I mean, not internet-banking safe, but enough for simple access? Do you think there's a better solution? Thanks. :)
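
    As a side note on the token itself, a minimal PHP sketch (hypothetical names; uniqid() is derived from the clock and therefore guessable, so a cryptographic RNG is a safer source for the page key):

      <?php
      // 16 random bytes -> 32 hex characters, unguessable in practice
      $token = bin2hex(openssl_random_pseudo_bytes(16));
      $url   = "http://www.mywebsite.com/private_page/" . $token;
      // store $token with the page record, then e-mail $url to the user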

    Read the article

  • Windows 2008, IIS7 and virtual directories

    - by Thomas
    I created a virtual directory called test (C:\test) under the Default Web Site and added two simple test files (one HTML and one ASPX). I thought I had to add IUSR and NetworkService (for application pools) to C:\test and grant those users appropriate rights in order for IIS7 to serve the content. It appears that is not the case at all, as I can view any files in the virtual directory (even if I convert it to an application) without changing or adding any security settings on the C:\test folder. I just installed IIS7 with ASP.NET on Windows 2008 without changing any settings besides adding the virtual directory. Am I missing something? Even my book on IIS7 states that the user accounts should be added and appropriate rights granted. I added the following to answer the comments: I am referencing the file using a public IP, http://xxx.xxx.xxx.xxx/test/one.html, and neither the IP nor localhost is in my trusted sites. I am not signed in on the server at all, as I am accessing the content from my home machine and the content is on my production server. The following users/groups have access to C:\test on the server (Creator Owner, System, Administrators, Users) and the app pool is running under the default NetworkService account. I basically installed Win2008, added the IIS role with ASP.NET, then opened IIS7, added a virtual directory and copied two files into the directory to test. It works, which is great, but I want to understand why it works. How is it that IIS7 can access files in the C:\test folder without any permissions set?

    Read the article

  • Virtual SMTP not sending mails

    - by DoStuffZ
    Hi, I have been googling for the better part of the last two hours without reaching any conclusion. My mails are not being sent from the production web server. If I stop/start the Virtual SMTP server I get this in the event log: "No usable TLS server certificate for SMTP virtual server instance '1' could be found. TLS will be disabled for this virtual-server." We recently updated the web application running on the server and I assume something went amiss during that. Googling the message straight up gave me a list that might just as well have been in Greek. I found a security certificate on the server; installing that gave no change. I basically played Russian roulette with the certificate file (.cer), though I was somewhat certain it would not have a negative effect (Russian roulette with a 6-chamber gun and 2 bullets). I found a .pfx in our local documentation folder, though I'm far from certain it would have a positive effect (6 chambers and 5 bullets). I found a site describing how Virtual SMTP - Properties - Access should have a button saying Certificates; instead I have a text saying "Did not find any TLS certificates" and a grayed-out tick box saying "Require TLS certificate". I found that the TLS in question is SSL version 3.1+ (3.1-3.3). So the question is: how do I get the SMTP server to send emails again, like before?
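
    As a first check (a diagnostic suggestion, not from the original post): the SMTP virtual server only picks up a certificate from the Local Machine Personal store whose common name matches the server's FQDN, and listing that store shows what is actually installed:

      rem list certificates in the Local Machine "Personal" store
      certutil -store My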

    Read the article

  • high virtual memory usage in openvz?

    - by freedrull
    We're having a lot of memory problems on a new OpenVZ box. It is supposed to have 1 GB of memory; I'm not sure how much of that is burstable or guaranteed memory. Programs in general seem to take up more virtual memory than they do on my box at home and on our other OpenVZ box. I wrote this simple C program:

      #include <stdio.h>
      #include <stdlib.h>

      int main() {
          char *thingy = malloc(500);
          getchar();
          return 0;
      }

    So it simply allocates 500 bytes, waits for a keypress, and returns. I ran the program on 3 computers. On my home machine and our other OpenVZ box it shows about 1 KB of virtual memory being used. On the new, problematic machine it's about 3 KB. I know this is just virtual memory and not resident memory, but why is this machine allocating so much virtual memory? Are there some settings I need to adjust in the OpenVZ memory configuration? I tried changing the stack size with ulimit -s 256 and restarting some daemons, but I still saw the same results. I'm doing all of my monitoring with htop; is that even a good program to use with an OpenVZ VPS? I've read I should be parsing the output of /proc/user_beancounters instead.
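
    For reference, a quick way to see whether the container is actually hitting its limits is the beancounters file itself (a rough sketch; the awk filter just flags rows whose last column, failcnt, is non-zero):

      cat /proc/user_beancounters
      awk '$NF > 0' /proc/user_beancounters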

    Read the article

  • Virtual Wifi Issue Windows 7

    - by Matt
    Lately I've been trying to use my laptop as a wireless router in my room. I have it connected to my school's network through Ethernet, and I want to set up wireless so that I can use WiFi on my Android phone and iPod Touch. In the past, I used Connectify, but I started having an issue where my phone would find the network, connect, attempt to get an IP, and then suddenly the network would disappear. Then it'd pop up again, and the same process would repeat over and over. I decided to totally uninstall Connectify, but after that, neither Virtual Router Manager nor the command prompt could create a viable network either. My phone and even my iPod now encounter the same problem; neither can successfully connect. So evidently there is something wrong with the laptop's virtual WiFi feature, and I have no idea what that could be. I've tried enabling certain services that virtual WiFi supposedly relies on, but some of them don't start, namely Remote Access Connection Manager. I have also read that these services enable on their own, and that it's fine if they are not normally enabled. Furthermore, I even uninstalled and reinstalled the drivers for my wireless card. Any ideas as to why my virtual WiFi won't function? Anything? I really would love to get this working...
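
    For reference, these are the bare-bones hosted-network commands that the third-party tools wrap (SSID and key below are examples; RasMan is the Remote Access Connection Manager service mentioned above):

      rem check that the driver supports the hosted network at all
      netsh wlan show drivers
      rem then recreate the hosted network by hand
      netsh wlan set hostednetwork mode=allow ssid=TestNet key=password123
      net start RasMan
      netsh wlan start hostednetwork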

    Read the article

  • Accessing Virtual Host from outside LAN

    - by Ray
    I'm setting up a web development platform that makes it as easy as possible to write and test all code on my local machine, and sync it with my web server. I set up several virtual hosts so that I can access my projects by typing in "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network. I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now when I try to access my projects by typing in my DynDNS URL, I get a 403 Forbidden error: "You don't have permission to access / on this server." To set up my virtual hosts, I edited two files: hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server name to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot "c:/wamp/www/ladybug"
          ServerName ladybug
          ErrorLog "logs/your_own-error.log"
          CustomLog "logs/your_own-access.log" common
      </VirtualHost>

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot "c:/wamp/www"
          ServerName localhost
          ErrorLog "logs/localhost-error.log"
          CustomLog "logs/localhost-access.log" common
      </VirtualHost>

    Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type in http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!
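
    One thing worth trying: requests arriving under the DynDNS name match no ServerName, so Apache hands them to the first vhost in the file. A sketch of an explicit mapping (url.dyndns.info stands in for the real DynDNS name; the Directory block uses Apache 2.2 syntax):

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot "c:/wamp/www"
          ServerName localhost
          ServerAlias url.dyndns.info
          <Directory "c:/wamp/www">
              Order Allow,Deny
              Allow from all
          </Directory>
      </VirtualHost>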

    Read the article

  • trying to figure out how to bridge two virtual networks together and in turn bridge that to the internet for a virtual inline IDS/IPS system

    - by Tony robinson
    I'm trying to figure out how to bridge two VMware (Server or Workstation) or VirtualBox networks together, with a Linux IDS/IPS system sitting transparently inline between the two virtual networks. How do I accomplish this? I understand how to bridge two virtual networks together, but how do I make the Linux virtual machine sit between them and force traffic to go across the transparent bridge? I would like to have something along the lines of:

      vmnet A (host-only network with various VMs)
        ---- inline Linux box (vmnet A boxes forced through here to reach the internet)
        ---- vmnet B (network with internet access, configured as either NAT or bridged)
        ---- internet

    I know that basically the Linux box needs two virtual NICs, one on vmnet A and one on vmnet B, but other than that, I don't know how to force all the traffic to go across the "transparent" bridging Linux box on its way to the internet. Do vmnet A and B have to be on the same IP network with the same default route? Does vmnet A have no default route while vmnet B has one? I've read in VMware forums that on a Linux host you need to change permissions on the vmnet files for promiscuous mode; is this true? And how do you configure this scenario on a Windows box?
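
    For the Linux-side bridge itself, a minimal sketch with bridge-utils (assumes eth0 sits on vmnet A and eth1 on vmnet B; both NICs in promiscuous mode):

      brctl addbr br0
      brctl addif br0 eth0
      brctl addif br0 eth1
      ifconfig eth0 0.0.0.0 promisc up
      ifconfig eth1 0.0.0.0 promisc up
      ifconfig br0 up   # a transparent bridge needs no IP address of its own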

    Read the article

  • pfSense Load Balancer and Virtual IP

    - by jshin47
    I have two identical web servers on 10.2.1.13 and 10.2.1.113. I would like to set up the pfSense load balancer to balance requests between them. I set up pools that included HTTP and HTTPS for both of these hosts, then set up virtual servers that responded on HTTP and HTTPS and referred traffic to their respective pools. However, I set the virtual server to listen on 10.2.1.213, a LAN IP rather than a WAN IP, because I want LAN traffic to be able to use the load balancer virtual server as well. So, I set up a virtual IP for 10.2.1.213 on the LAN interface, and a NAT port-forwarding rule for HTTP and HTTPS traffic on a WAN IP to forward to 10.2.1.213. It seems like this should work, but it fails. What happens is that when I try to access the page from the WAN, I am directed to the login page of my pfSense device rather than the page I am expecting. When I try to access 10.2.1.213 from the LAN, the request times out. What is going wrong here? I have tried it with and without NAT reflection to no avail. Please advise.

    Read the article

  • Server 2003 Remote Desktop loses its virtual printer image of the local printer

    - by Charles Hart
    Server 2003 Remote Desktop provides service to stores served by several ISPs. The server loses its virtual printer image of the local printer (as seen from the remote store site) and a copy of the original local printer appears on the local computer with a different driver without notice. Specifically: A remote desktop session is opened on a local computer that has a Brother HL2140 USB printer connected and the associated software installed with a correct driver shown under the “advanced” button. The server has the same Brother software and driver. An application that is running on the server attempts to print on the local printer connected to the local computer running Vista Pro or XP Pro. Either it works correctly (Good) or it does not print (Bad) or it prints on another Local Printer connected to another local computer logged into the server (Bad and Odd). When it doesn’t print (or prints somewhere else) we ask the customer to look for the (virtual) printer using the Remote desktop view of the server and the printer is gone. Then we ask the customer to look at the printers folder in the local computer. There are several possibilities: The printer is there, but the driver is mysteriously changed in the drop down to MDX something; we have the customer select the other (proper) Brother driver, and all is well again, as now after the change, the virtual printer in the server (which now matches the local printer) appears again, and so printing can resume. A “copy” of the printer mysteriously appears in the local printer’s folder and after we delete it the virtual printer in the server appears again and so printing can resume. Note that in both case 1 and 2, the server sometimes sends the print job elsewhere, to some other local computer. Meanwhile in the log file, endless errors are reported and the server eventually crashes, sometimes twice a day. I’m puzzled what changes the local printer driver and I’m puzzled what loads the copy 2 or copy 3 of the printer in the local printer folder. This entire description randomly occurs on any of 40+ local computers in eight different locations in different ISPs, all sharing one Domain.

    Read the article

  • How to extend a Linux PV partition online after virtual disk growth

    - by Yves Martin
    VMware allows you to extend the size of a virtual disk online, while the VM is running. The expected next steps for a Linux system are:

    1. Extend the partition: delete it and create a larger one with fdisk.
    2. Extend the PV size with pvresize.
    3. Use the free extents for lvresize operations.
    4. Run resize2fs to grow the file system.

    But I am stuck on the first step: fdisk and sfdisk still display the old size for the disk. My disk is a SCSI virtual disk connected through the virtual LSI Logic controller. How do I refresh the virtual disk size and partition table information available to the Linux kernel without a reboot? As far as I know, all these steps are possible on a running Windows system without a reboot and even without any user action, thanks to the VMware tools. On Linux, I expect to do all steps online too, and I already know steps 2, 3 and 4 work online. But the first one, changing the partition size declared in the partition table, still seems to require a reboot. Update: My system is Debian Lenny with kernel 2.6.26, and the disk I have extended is the main disk with a large PV containing the "root" LV for "/".
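
    For later readers, the usual online-growth sequence looks like this (device and VG/LV names are examples; on a kernel as old as 2.6.26 the partition-table re-read can still be refused for the in-use root disk, which is exactly the sticking point described above):

      echo 1 > /sys/class/scsi_device/0:0:0:0/device/rescan   # pick up the new disk size
      fdisk /dev/sda      # delete partition 2, recreate it larger at the same start sector
      partprobe /dev/sda  # ask the kernel to re-read the partition table
      pvresize /dev/sda2
      lvextend -l +100%FREE /dev/vg0/root
      resize2fs /dev/vg0/root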

    Read the article

  • How to resolve virtual disk degraded in Windows Server 2012

    - by harrydev
    I am using the new Storage Spaces feature in Windows Server 2012. I have the following disks:

      FriendlyName   CanPool  OperationalStatus  HealthStatus  Usage        Size
      ------------   -------  -----------------  ------------  -----        ----
      PhysicalDisk2  False    OK                 Healthy       Auto-Select  2.73 TB
      PhysicalDisk3  False    OK                 Healthy       Auto-Select  2.73 TB
      PhysicalDisk4  False    OK                 Healthy       Auto-Select  2.73 TB
      PhysicalDisk5  False    OK                 Healthy       Auto-Select  2.73 TB

    There is also a separate OS disk. The above disks are part of a single storage pool:

      FriendlyName  OperationalStatus  HealthStatus  IsPrimordial  IsReadOnly
      ------------  -----------------  ------------  ------------  ----------
      Pool          OK                 Healthy       False         False

    Within this storage pool some virtual disks are defined, see below:

      FriendlyName  ResiliencySettingName  OperationalStatus  HealthStatus  IsManualAttach  Size
      ------------  ---------------------  -----------------  ------------  --------------  ----
      Docs          Mirror                 OK                 Healthy       False           500 GB
      Data          Mirror                 Degraded           Warning       False           500 GB
      Work          Mirror                 Degraded           Warning       False           2 TB

    Now, the virtual disks are all plain two-way mirrors, but two of them are degraded. This is probably because one of the physical disks was offline for a short period of time. However, the virtual disks now cannot be repaired, even though all physical disks are healthy and there is plenty of available space in the storage pool. This I cannot understand, so I was hoping for some help on how to resolve it. Below I have listed the full output from the Get-VirtualDisk cmdlet for the "Work" disk:

      ObjectId                          : {XXXXXXXX}
      PassThroughClass                  :
      PassThroughIds                    :
      PassThroughNamespace              :
      PassThroughServer                 :
      UniqueId                          : XXXXXXXX
      Access                            : Read/Write
      AllocatedSize                     : 412316860416
      DetachedReason                    : None
      FootprintOnPool                   : 824633720832
      FriendlyName                      : Work
      HealthStatus                      : Warning
      Interleave                        : 262144
      IsDeduplicationEnabled            : False
      IsEnclosureAware                  : False
      IsManualAttach                    : False
      IsSnapshot                        : False
      LogicalSectorSize                 : 512
      Name                              :
      NameFormat                        :
      NumberOfAvailableCopies           : 0
      NumberOfColumns                   : 2
      NumberOfDataCopies                : 2
      OperationalStatus                 : Degraded
      OtherOperationalStatusDescription :
      OtherUsageDescription             : Disk for data being worked on (not backed up)
      ParityLayout                      :
      PhysicalDiskRedundancy            : 1
      PhysicalSectorSize                : 4096
      ProvisioningType                  : Thin
      RequestNoSinglePointOfFailure     : True
      ResiliencySettingName             : Mirror
      Size                              : 2199023255552
      UniqueIdFormat                    : Vendor Specific
      UniqueIdFormatDescription         :
      Usage                             : Other
      PSComputerName                    :
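
    A repair attempt worth trying from PowerShell, using the friendly names from the output above (Repair-VirtualDisk and Get-StorageJob are part of the Server 2012 Storage module):

      Repair-VirtualDisk -FriendlyName "Data"
      Repair-VirtualDisk -FriendlyName "Work"
      Get-StorageJob   # watch the repair progress
      Get-VirtualDisk | Select-Object FriendlyName, OperationalStatus, HealthStatus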

    Read the article

  • Automating first time login process in Windows Server 2008 R2 SP1 virtual machine

    - by George Durzi
    I have a set of Windows Server 2008 R2 SP1 Enterprise Edition virtual machines running in Hyper-V. The host server has 64 GB of RAM and two SSD drives (one drive for the host OS, and the second one for the VMs). The virtual machines are as follows:

    - Domain Controller: 4 GB RAM
    - Exchange Server: 4 GB RAM
    - Terminal Services: 50 GB RAM

    We use this setup for a travelling training class where users remote desktop to one of the VMs - let's call it the Terminal Services or "TS" VM - where tools such as Visual Studio are installed. The students go through some labs on the TS VM in Visual Studio. Overall, this setup works great. However, when users are collectively logging in for the first time, the VM really struggles to keep up while all the user profiles are created. It can take some users up to 10 minutes to log in. The number of students varies from 30 to 40. A workaround would be to manually remote desktop to the TS virtual machine using each of the accounts, to ensure that the local profiles are created in advance. I'm looking for a way to automate this first-time login process on the TS virtual machine. I am envisioning iterating through the accounts in a certain Active Directory OU, and then somehow initiating a remote desktop session to the TS VM to log each one in for the first time. Are there ways to do this? Thanks
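
    For what it's worth, a rough PowerShell sketch of that idea (the OU path, VM name, and password are placeholders, not from the question; assumes the ActiveDirectory module and that cached credentials plus mstsc are acceptable):

      Import-Module ActiveDirectory
      $users = Get-ADUser -Filter * -SearchBase "OU=Students,DC=contoso,DC=com"
      foreach ($u in $users) {
          # cache credentials for the RDP host, open a session,
          # give the profile time to build, then clean up
          cmdkey /generic:TERMSRV/TS-VM /user:$($u.SamAccountName) /pass:"P@ssw0rd"
          mstsc /v:TS-VM
          Start-Sleep -Seconds 120
          cmdkey /delete:TERMSRV/TS-VM
      }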

    Read the article

  • Virtual Machine Network Services causes networking problems in Vista Enterprise 64 bit install

    - by Bill
    I have a quad-core/8GB Vista Enterprise 64-bit (SP2) installation on which I installed Virtual PC 2007. I have a problem that is the opposite of everything I found searching around the Internet - everybody else has problems making network connections from their guest VM. When Virtual Machine Network Services is enabled in the protocol stack for my network card across a reboot, it causes access problems to the network. The time to log in using a domain-credentialed account is upwards of 3 minutes, and after reaching the desktop the Network and Sharing Center shows that my connection to the domain is unauthenticated. Disabling and re-enabling Virtual Machine Network Services (uncheck in network properties/apply/recheck/apply) fixes the problem. And as long as I have VMNS disabled when I shut down, the restart runs smoothly. I just have to remember to enable it after login and disable it before shutdown. I have un-installed and re-installed Virtual PC 2007 multiple times, with restarts in between. The install consists of SP1 plus a KB patch for the guest resolution fix. Any help would be greatly appreciated. Some additional information... At one point during my hairpulling and teethgnashing with this, I tried to ping my primary DC and observed some weird responses (our DC is 10.10.10.25; my dynamic IP was 10.10.10.203):

      Reply from 10.10.10.203: Destination host unreachable.
      Request timed out.
      Reply from 10.10.10.25: ...

    This is not consistently repeatable, but I thought it might strike a chord with someone.

    Read the article

  • High availability virtual machines

    - by Jeremy
    I've been reading a lot about high-availability virtualization, via either Hyper-V or VMware. In that context, high availability essentially means that the VM is hosted by a cluster of physical servers (nodes), so if one of the physical servers goes down, the VM can still be served by other physical servers. So far so good: the physical cluster and the VM itself are highly available. However, the service being provided - say SQL Server, MSDTC, or any other service - is actually provided by the VM image and the virtualized operating system. So I imagine that there is still a point of failure at the virtual layer that isn't accounted for. Something could happen within the virtual machine itself that the physical cluster cannot account for, correct? In that instance the physical failover cluster (Hyper-V) or VMware host cannot fail over, because the issue is not with one of the servers in the physical cluster - failing over a physical node would not do any good. Does this necessitate building a virtual failover cluster on top of the physical one, or is this not necessary? Alternatively, I suppose you could skip the physical clustering and just cluster at the virtual layer (child-based failover clustering), because that should still survive a physical failure. See the image below showing parent-based (left), child-based (right) and a combination (center). Is parent-based as far as you need to go, or is child-based more appropriate?

    Read the article

  • KVM virtual machine unable to access internet

    - by peachykeen
    I have KVM set up to run a virtual machine (Windows Home Server 2011 acting as a build agent) on a dedicated server (CentOS 6.3). Recently, I ran updates on the host, and the virtual machine is now unable to connect to the internet. The virtual network is running through NAT, the host has an interface (eth0:0) set up with a static IP (virt-manager shows the network and its IP correctly), and all connections to that IP should be sent to the guest. The host and guest can ping one another, but the guest cannot ping anything above the host, nor can I ping the guest from anywhere else (I can ping the host). Results from the guest to another server under my control and from an external system to the guest both return "Destination port unreachable". Running tcpdump on the host and destination shows the host replying to the ping, but the destination never sees it (it doesn't even look like the host is bothering to send it on at all, which leads me to suspect iptables). The ping output matches that, listing replies from 192.168.100.1. The guest can resolve DNS, however, which I find rather odd. The guest's network settings (connection TCP/IPv4 properties) are set up with a static local IP (192.168.100.128), mask of 255.255.255.0, and gateway and DNS at 192.168.100.1. When originally setting up the vm/net, I had set up some iptables rules to enable bridging, but after my hosting company complained about the bridge, I set up a new virtual net using NAT and believe I removed all the rules. The VM's network was working perfectly fine for the last few months, until yesterday. I haven't heard anything from the hosting company, didn't change anything on the guest, so as far as I know, nothing else has changed (unfortunately the list of packages updated has since fallen off scrollback and I didn't note it down).
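
    For comparison, this is roughly the NAT plumbing libvirt normally maintains for a 192.168.100.0/24 network (interface names virbr0/eth0 are assumptions; worth checking against the output of iptables -t nat -L -n -v):

      echo 1 > /proc/sys/net/ipv4/ip_forward
      iptables -t nat -A POSTROUTING -s 192.168.100.0/24 ! -d 192.168.100.0/24 -j MASQUERADE
      iptables -A FORWARD -i virbr0 -o eth0 -s 192.168.100.0/24 -j ACCEPT
      iptables -A FORWARD -i eth0 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT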

    Read the article

  • How to access Virtual machine using powershell script

    - by Sheetal
    I want to access a virtual machine using a PowerShell script. For that I used the script below:

      Enter-PSSession -computername sheetal-VDD -credential compose04.com\abc.xyz1

    where sheetal-VDD is the hostname of the virtual machine, compose04.com is its domain name, and abc.xyz1 is the username on the virtual machine. After entering the above command, it asks for a password. When the password is entered I get the error below:

      Enter-PSSession : Connecting to remote server failed with the following error message : WinRM cannot process the request. The following error occured while using Kerberos authentication: There are currently no logon servers available to service the logon request.
      Possible causes are:
        -The user name or password specified are invalid.
        -Kerberos is used when no authentication method and no user name are specified.
        -Kerberos accepts domain user names, but not local user names.
        -The Service Principal Name (SPN) for the remote computer name and port does not exist.
        -The client and remote computers are in different domains and there is no trust between the two domains.
      After checking for the above issues, try the following:
        -Check the Event Viewer for events related to authentication.
        -Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or use HTTPS transport. Note that computers in the TrustedHosts list might not be authenticated.
        -For more information about WinRM configuration, run the following command: winrm help config.
      For more information, see the about_Remote_Troubleshooting Help topic.
      At line:1 char:16
      + Enter-PSSession <<<< -computername sheetal-VDD -credential compose04.com\Sheetal.Varpe
          + CategoryInfo          : InvalidArgument: (sheetal-VDD:String) [Enter-PSSession], PSRemotingTransportException
          + FullyQualifiedErrorId : CreateRemoteRunspaceFailed

    Can someone help me out with this?
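
    One common workaround when Kerberos cannot reach a logon server is to fall back to negotiated authentication against a trusted host; a sketch (assumes you accept the TrustedHosts caveat from the error text above):

      Set-Item WSMan:\localhost\Client\TrustedHosts -Value "sheetal-VDD" -Force
      Enter-PSSession -ComputerName sheetal-VDD -Credential compose04.com\abc.xyz1 -Authentication Negotiate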

    Read the article

  • How do I tunnel an HTTPS proxy through a virtual machine (VMWare)

    - by Kyle
    I have a personal setup at home using VMWare Workstation. I also have a set of Virtual Private Machines that run Squid, and therefore provide me HTTPS proxy tunnels. Using Proxifier, I can tunnel all traffic for given applications through these tunnels. However, I also have a few virtual machines for dev/staging/experimentation/etc. I generally just use NAT to provide Internet access to the machines, and if I need to use these proxies, I can just setup Proxifier (or a Linux equivalent) to pipe the traffic through them. No problem. But... I got to thinking: Wouldn't it be great if I could assign these proxy tunnels to a virtual machine, so that when I start up the VM, it has instant-on access through the tunnel and not my local connection? (EDIT: Of course, it would USE my local connection, but it would tunnel traffic through the proxy.) To be more clear: I want a solution that binds the proxy to a VM, so that when I start the VM, I don't have to use a proxy client to connect to the tunnel - I am already piping all traffic from that VM through that proxy. I did a bit of searching, and the closest thing I could find was this: How to route public static IP to a virtual machine on a vmware ESXi host? Which wasn't all that applicable. The proxies are protected by user/pass but do not filter by IP. Again, they are HTTPS proxies setup through Squid. Any ideas on how to make this happen? Thanks a ton.

    Read the article

  • Host AngularJS (Html5Mode) in ASP.NET vNext

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/06/10/host-angularjs-html5mode-in-asp.net-vnext.aspx

    Microsoft announced ASP.NET vNext at BUILD and TechEd recently, and as a developer I found that we can add features such as MVC, Web API and SignalR into a single ASP.NET vNext application. It's also cross-platform, which means I can host ASP.NET on Windows, Linux and OS X.

    If you are following my blog you should know that I'm currently working on a project which uses ASP.NET Web API, SignalR and AngularJS. Currently the AngularJS part is hosted by Express in Node.js while Web API and SignalR are hosted in ASP.NET. I was looking for a solution to host all of them on one platform so that my SignalR can utilize WebSocket. At the moment AngularJS and SignalR are hosted on the same domain but different ports, so SignalR has to use Server-Sent Events; it can be upgraded to WebSocket if I host both of them on the same port.

    Host AngularJS in ASP.NET vNext: Static File Middleware

    ASP.NET vNext uses the middleware pattern to register the features it needs, which is very similar to Express in Node.js. Since AngularJS is a pure client-side framework, in theory all I need to do is use ASP.NET vNext as a static file server. This is very easy, as there's a built-in middleware shipped along with ASP.NET vNext. Assume I have "index.html" as below.

      <html data-ng-app="demo">
      <head>
          <script type="text/javascript" src="angular.js"></script>
          <script type="text/javascript" src="angular-ui-router.js"></script>
          <script type="text/javascript" src="app.js"></script>
      </head>
      <body>
          <h1>ASP.NET vNext with AngularJS</h1>
          <div>
              <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> |
              <a href="javascript:void(0)" data-ui-sref="view2">View 2</a>
          </div>
          <div data-ui-view></div>
      </body>
      </html>

    And the AngularJS JavaScript file as below. Notice that I have two views, each of which contains just one literal line indicating the view name.

      'use strict';

      var app = angular.module('demo', ['ui.router']);

      app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
          $stateProvider.state('view1', {
              url: '/view1',
              templateUrl: 'view1.html',
              controller: 'View1Ctrl' });

          $stateProvider.state('view2', {
              url: '/view2',
              templateUrl: 'view2.html',
              controller: 'View2Ctrl' });
      }]);

      app.controller('View1Ctrl', function ($scope) {
      });

      app.controller('View2Ctrl', function ($scope) {
      });

    All AngularJS files are located in the "app" folder, with my ASP.NET vNext files beside it. The "project.json" contains all dependencies I need to host the static file server.

      {
          "dependencies": {
              "Helios" : "0.1-alpha-*",
              "Microsoft.AspNet.FileSystems": "0.1-alpha-*",
              "Microsoft.AspNet.Http": "0.1-alpha-*",
              "Microsoft.AspNet.StaticFiles": "0.1-alpha-*",
              "Microsoft.AspNet.Hosting": "0.1-alpha-*",
              "Microsoft.AspNet.Server.WebListener": "0.1-alpha-*"
          },
          "commands": {
              "web": "Microsoft.AspNet.Hosting server=Microsoft.AspNet.Server.WebListener server.urls=http://localhost:22222"
          },
          "configurations" : {
              "net45" : {
              },
              "k10" : {
                  "System.Diagnostics.Contracts": "4.0.0.0",
                  "System.Security.Claims" : "0.1-alpha-*"
              }
          }
      }

    Below is "Startup.cs", the entry point of my ASP.NET vNext application. All I need to do is let my application use the file server middleware.
      using System;
      using Microsoft.AspNet.Builder;
      using Microsoft.AspNet.FileSystems;
      using Microsoft.AspNet.StaticFiles;

      namespace Shaun.AspNet.Plugins.AngularServer.Demo
      {
          public class Startup
          {
              public void Configure(IBuilder app)
              {
                  app.UseFileServer(new FileServerOptions() {
                      EnableDirectoryBrowsing = true,
                      FileSystem = new PhysicalFileSystem(System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "app"))
                  });
              }
          }
      }

    Next, I need to create a "NuGet.Config" file in the PARENT folder so that when I run the "kpm restore" command later it can find the ASP.NET vNext NuGet packages successfully.

      <?xml version="1.0" encoding="utf-8"?>
      <configuration>
          <packageSources>
              <add key="AspNetVNext" value="https://www.myget.org/F/aspnetvnext/api/v2" />
              <add key="NuGet.org" value="https://nuget.org/api/v2/" />
          </packageSources>
          <packageSourceCredentials>
              <AspNetVNext>
                  <add key="Username" value="aspnetreadonly" />
                  <add key="ClearTextPassword" value="4d8a2d9c-7b80-4162-9978-47e918c9658c" />
              </AspNetVNext>
          </packageSourceCredentials>
      </configuration>

    Now I run "kpm restore" to resolve all dependencies of my application. Finally, "k web" starts the application, which serves static files from the "app" subfolder on local port 22222.

    Support AngularJS Html5Mode

    AngularJS works well in the previous demo, but you will notice a "#" in the browser address. By default AngularJS adds "#" after its entry page to ensure all requests are handled by that entry page. For example, my entry page here is "index.html", so when I click "View 1" the address changes to "/#/view1", which still tells the web server I'm looking for "index.html". This works, but it makes the address look ugly. Hence AngularJS introduced a feature called Html5Mode, which gets rid of the annoying "#" in the address bar. Below is "app.js" with Html5Mode enabled - just one line of code.

      'use strict';

      var app = angular.module('demo', ['ui.router']);

      app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) {
          $stateProvider.state('view1', {
              url: '/view1',
              templateUrl: 'view1.html',
              controller: 'View1Ctrl' });

          $stateProvider.state('view2', {
              url: '/view2',
              templateUrl: 'view2.html',
              controller: 'View2Ctrl' });

          // enable html5mode
          $locationProvider.html5Mode(true);
      }]);

      app.controller('View1Ctrl', function ($scope) {
      });

      app.controller('View2Ctrl', function ($scope) {
      });

    Now go to the root path of the website and click "View 1": there's no "#" in the address. The problem is, if we hit F5 the browser turns blank. In this mode the browser asks the web server for a static file named "view1", and since no such file exists on the server, the underlying web server, built on ASP.NET vNext, responds 404. To fix this we need to create our own ASP.NET vNext middleware. It should first try to serve the static file request with the default StaticFileMiddleware; if the response status code is 404, it changes the request path to the entry page and tries again.
      public class AngularServerMiddleware
      {
          private readonly AngularServerOptions _options;
          private readonly RequestDelegate _next;
          private readonly StaticFileMiddleware _innerMiddleware;

          public AngularServerMiddleware(RequestDelegate next, AngularServerOptions options)
          {
              _next = next;
              _options = options;

              _innerMiddleware = new StaticFileMiddleware(next, options.FileServerOptions.StaticFileOptions);
          }

          public async Task Invoke(HttpContext context)
          {
              // try to resolve the request with the default static file middleware
              await _innerMiddleware.Invoke(context);
              Console.WriteLine(context.Request.Path + ": " + context.Response.StatusCode);
              // route to the entry page if the status code is 404
              // and we need to support angular html5mode
              if (context.Response.StatusCode == 404 && _options.Html5Mode)
              {
                  context.Request.Path = _options.EntryPath;
                  await _innerMiddleware.Invoke(context);
                  Console.WriteLine(">> " + context.Request.Path + ": " + context.Response.StatusCode);
              }
          }
      }

    We need an options class where the user can specify the host root path and the entry page path.

      public class AngularServerOptions
      {
          public FileServerOptions FileServerOptions { get; set; }

          public PathString EntryPath { get; set; }

          public bool Html5Mode
          {
              get
              {
                  return EntryPath.HasValue;
              }
          }

          public AngularServerOptions()
          {
              FileServerOptions = new FileServerOptions();
              EntryPath = PathString.Empty;
          }
      }

    We also need an extension method so that the user can enable this feature in "Startup.cs" easily.

      public static class AngularServerExtension
      {
          public static IBuilder UseAngularServer(this IBuilder builder, string rootPath, string entryPath)
          {
              var options = new AngularServerOptions()
              {
                  FileServerOptions = new FileServerOptions()
                  {
                      EnableDirectoryBrowsing = false,
                      FileSystem = new PhysicalFileSystem(System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, rootPath))
                  },
                  EntryPath = new PathString(entryPath)
              };

              builder.UseDefaultFiles(options.FileServerOptions.DefaultFilesOptions);

              return builder.Use(next => new AngularServerMiddleware(next, options).Invoke);
          }
      }

    Now, with these classes ready, we change our "Startup.cs" to use this middleware in place of the default one, telling the server to try loading "index.html" when it cannot find the requested resource. The code below is just for demo purposes: it tries "index.html" whenever the StaticFileMiddleware returns 404. In practice we should validate that the request is an AngularJS route request rather than a normal static file request.

      using System;
      using Microsoft.AspNet.Builder;
      using Microsoft.AspNet.FileSystems;
      using Microsoft.AspNet.StaticFiles;
      using Shaun.AspNet.Plugins.AngularServer;

      namespace Shaun.AspNet.Plugins.AngularServer.Demo
      {
          public class Startup
          {
              public void Configure(IBuilder app)
              {
                  app.UseAngularServer("app", "/index.html");
              }
          }
      }

    Now run "k web" again and refresh the browser: the page loads successfully. In the console window we can see the original request got a 404, then the middleware fell back to "index.html" and returned the correct result.

    Summary

    In this post I introduced how to use ASP.NET vNext to host an AngularJS application as a static file server. I also demonstrated how to extend ASP.NET vNext so that it supports AngularJS Html5Mode.
    You can download the source code here. Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • CEN/CENELEC Lacks Perspective

    - by trond-arne.undheim
    Over the last few months, two of the European Standardization Organizations (ESOs), CEN and CENELEC, have circulated an unfortunate position statement distorting the facts around fora and consortia. For the benefit of outsiders to this debate, let's just say that it concerns whether and how the EU should recognize standards and specifications from certain fora and consortia, based on a process evaluating the openness and transparency of such deliverables. The topic is complex, and somewhat confusing even to insiders, but nevertheless crucial to the European economy. As far as I can judge, their positions are not based on facts. This is unfortunate. For the benefit of clarity, here are some of the observations they make:

    a) "Most consortia are in essence driven by technology companies making hardware and software solutions, by definition very few of the largest ones are European-based".
    b) "Most consortia lack a European presence, relevant Committees, even those that are often cited as having stronger links with Europe, seem to lack an overall, inclusive set of participants".
    c) "Recognising specific consortia specifications will not resolve any concrete problems of interoperability for public authorities; interoperability depends on stringing together a range of specifications (from formal global bodies or consortia alike)".
    d) "Consortia already have the option to have their specifications adopted by the international formal standards bodies and many more exercise this than the two that seem to be campaigning for European recognition. Such specifications can then also be adopted as European standards."
    e) "Consortium specifications completely lack any process to take due and balanced account of requirements at national level - this is not important for technologies but can be a critical issue when discussing cross-border issues within the EU such as eGovernment, eHealth and so on".
    f) "The proposed recognition will not lead to standstill on national or European activities, nor to the adoption of the specifications as national standards in the CEN and CENELEC members (usually in their official national languages), nor to withdrawal of conflicting national standards. A big asset of the European standardization system is its coherence and lack of fragmentation."
    g) "We always miss concrete and specific examples of where consortia referencing are supposed to be helpful."

    First of all, note that ETSI, the third ESO, did not join the position. The reason is, of course, that ETSI, beyond being an ESO, also has a global perspective and, moreover, does consider reality. Secondly, having produced arguments a) to g), CEN/CENELEC has the audacity to call a meeting on Friday 25 February entitled "ICT standardization - improving collaboration in Europe". This sounds very nice, but they have not set the stage for constructive debate. Rather, they demonstrate a striking lack of vision and perspective. I will back this up with three facts, and leave it there.

    1. Since the 1980s, global industry fora and consortia such as IETF, W3C and OASIS have emerged as world-leading ICT standards development organizations with excellent procedures for openness and transparency in all phases of standards development, ex post and ex ante.
    - Practically no ICT system can be built without using fora and consortia standards (FCS).
    - Without using FCS, neither the Internet, upon which the EU economy depends, nor EU institutions would operate.
    - FCS are of high relevance for achieving and promoting interoperability and driving innovation.

    2. FCS are complementary to the formally recognized standards organizations, including the ESOs.
    - No work will be taken away from the ESOs should the EU recognize certain FCS.
    - Each FCS would be evaluated on its merit and on the openness of the process that produced it. ESOs would, with other stakeholders, have a say.
    - ESOs could potentially educate and assist European stakeholders to engage more actively and constructively with FCS.
    - ETSI, also an ESO, seems to clearly recognize these facts.

    3. Europe and its Member States have a strong voice in several of the most relevant global industry fora and consortia.
    - W3C: W3C was founded in 1994 by an Englishman, Sir Tim Berners-Lee, in collaboration with CERN, the European research lab. In April 1995, INRIA (Institut National de Recherche en Informatique et Automatique) in France became the first European W3C host, and in 2003 ERCIM (European Research Consortium in Informatics and Mathematics), also based in France, took over the role of European W3C host from INRIA. Today, W3C has 326 Members, 40% of which are European. Government participation is also strong, and it could be increased - a development that is very much desired by W3C. Current members of the W3C Advisory Board include Ora Lassila (Nokia) and Charles McCathie Nevile (Opera). Nokia is a Finnish company; Opera is a Norwegian company. SAP's Claus von Riegen is an alumnus of the same Advisory Board.
    - OASIS: its membership - 30% of which is European - represents the marketplace, reflecting a balance of providers, user companies, government agencies, and non-profit organizations. In particular, about 15% of OASIS members are governments or universities. Frederick Hirsch from Nokia, Claus von Riegen from SAP AG and Charles-H. Schulz from Ars Aperta are on the Board of Directors. Nokia is a Finnish company, SAP is a German company and Ars Aperta is a French company. The Chairman of the Board is Peter Brown, an independent consultant, an Austrian citizen AND an official of the European Parliament currently on long-term leave.
    - IETF: The oversight of its activities is by the Internet Architecture Board (IAB), since 2007 chaired by Olaf Kolkman, a Dutch national who lives in Uithoorn, NL. Kolkman is director of NLnet Labs, a foundation chartered to develop open source software and open source standards for the Internet. Other IAB members include Marcelo Bagnulo, whose affiliation is the University Carlos III of Madrid, Spain, as well as Hannes Tschofenig from Nokia Siemens Networks. Nokia is a Finnish company. Siemens is a German company. Nokia Siemens is a European joint venture.
    - Member States: At least 17 European Member States have developed Interoperability Frameworks that include FCS, according to the EU-funded National Interoperability Framework Observatory (see the list and NIFO web site on IDABC). This also means they actively procure solutions using FCS, reference FCS in their policies and even in laws. Member State reps are free to engage in FCS, and many do. It would be nice if the EU adjusted to this reality.
    - A huge number of European nationals work in the global IT industry, on European soil or elsewhere, whether in EU-registered companies or not.

    CEN/CENELEC lacks perspective and has engaged in an effort to twist facts that is quite striking for a publicly funded organization.
I wish them all possible success with Friday's meeting but I fear all of the most important stakeholders will not be at the table. Not because they do not wish to collaborate, but because they just have been insulted. If they do show up, it would be a gracious move, almost beyond comprehension. While I do not expect CEN/CENELEC to line up perfectly in favor of fora and consortia, I think it would be to their benefit to stick to more palatable observations. Actually, I would suggest an apology, straightening out the facts. This works among friends and it works in an organizational context. Then, we can all move on. Standardization is important. Too important to ignore. Too important to distort. The European economy depends on it. We need CEN/CENELEC. It is an important organization. But CEN/CENELEC needs fora and consortia, too.

    Read the article

  • Working with Lightweight User Interface Toolkit (LWUIT) 1.4

    - by janice.heiss(at)oracle.com
    Vikram Goyal's informative and practical article, "Working with Lightweight User Interface Toolkit (LWUIT) 1.4," shows developers how to best take advantage of LWUIT 1.4. LWUIT is a user interface library designed to bring uniformity and cross mobile interface functionality to applications developed using Java Platform, Micro Edition (Java ME). Version 1.4 offers support for XHTML, multi-line text fields, and customization to the virtual keyboard.Goyal notes in the article that, "Perhaps the most important feature of this release is the ability for LWUIT to support XHTML. Specifically, it now supports XHTML MP (Mobile Platform) 1.0, a version of XHTML designed for mobile phones. To be even more specific, it now supports CSS styling for the HTMLComponent within the LWUIT library through Wireless Application Protocol CSS (WCSS)." Read the entire article here. 

    Read the article

  • Improve Engineering & Construction Project Productivity

    - by [email protected]
    Driving successful project delivery and providing greater value and return for all stakeholders are key goals for firms in the engineering and construction industry. However, increasingly complex construction projects, compressed schedules, ineffective collaboration among project stakeholders, and limited interoperability can get in the way of these goals and lead to reduced productivity. What E&C firms need are solutions that will improve global team collaboration, optimize processes and better communicate electronic project data. Check out the AutoVue for Engineering and Construction Solution Brief and learn how AutoVue enterprise visualization solutions can:
    - improve global project collaboration and communication
    - improve data interoperability
    - support virtual design and construction projects
    - improve change management and maintain accountability

    Read the article

  • nginx reverse proxy cannot access apache virtual hosts

    - by Sc0rian
    I am setting up nginx as a reverse proxy. The server runs DirectAdmin and a LAMP stack. I have nginx running on port 81, and I can access all my sites (including the virtual IPs) on port 81. However, when I forward the traffic from port 80 to 81, the virtual IPs show a message saying "Apache is running normally". The server IPs are fine, and I can still access the virtual IPs on 81.

      [root@~]# netstat -an | grep LISTEN | egrep ":80|:81"
      tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN
      tcp 0 0 <virtual ip>:81 0.0.0.0:* LISTEN
      tcp 0 0 <serverip>:81 0.0.0.0:* LISTEN
      tcp 0 0 :::80 :::* LISTEN

      apache 24090 0.6 1.3 29252 13612 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24092 0.9 2.1 39584 22056 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24096 0.2 1.9 35892 20256 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24120 0.3 1.7 35752 17840 ? S 18:34 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24495 0.0 1.4 30892 14756 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24496 1.0 2.1 39892 22164 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24516 1.5 3.6 55496 38040 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24519 0.1 1.2 28996 13224 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24521 2.7 4.0 58244 41984 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24522 0.0 1.2 29124 12672 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24524 0.0 1.1 28740 12364 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24535 1.1 1.7 36008 17876 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24536 0.0 1.1 28592 12084 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24537 0.0 1.1 28592 12112 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24539 0.0 0.0 0 0 ? Z 18:35 0:00 [httpd] <defunct>
      apache 24540 0.0 1.1 28592 11540 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
      apache 24541 0.0 1.1 28592 11548 ? S 18:35 0:00 /usr/sbin/httpd -k start -DSSL
      root 24548 0.0 0.0 4132 752 pts/0 R+ 18:35 0:00 egrep apache|nginx
      root 28238 0.0 0.0 19576 284 ? Ss May29 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
      apache 28239 0.0 0.0 19888 804 ? S May29 0:00 nginx: worker process
      apache 28240 0.0 0.0 19888 548 ? S May29 0:00 nginx: worker process
      apache 28241 0.0 0.0 19736 484 ? S May29 0:00 nginx: cache manager process

    Here is my nginx.conf:

      user apache apache;
      worker_processes 2; # Set it according to what your CPU has. 4 cores = 4
      worker_rlimit_nofile 8192;
      pid /var/run/nginx.pid;

      events {
          worker_connections 1024;
      }

      http {
          include mime.types;
          default_type application/octet-stream;
          log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"';
          server_tokens off;
          access_log /var/log/nginx_access.log main;
          error_log /var/log/nginx_error.log debug;
          server_names_hash_bucket_size 64;
          sendfile on;
          tcp_nopush on;
          tcp_nodelay off;
          keepalive_timeout 30;
          gzip on;
          gzip_comp_level 9;
          gzip_proxied any;
          proxy_buffering on;
          proxy_cache_path /usr/local/nginx/proxy_temp levels=1:2 keys_zone=one:15m inactive=7d max_size=1000m;
          proxy_buffer_size 16k;
          proxy_buffers 100 8k;
          proxy_connect_timeout 60;
          proxy_send_timeout 60;
          proxy_read_timeout 60;

          server {
              listen <server ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here
              server_name <server host name> _; # "_" handles all hosts that are not described by server_name
              charset off;
              access_log /var/log/nginx_host_general.access.log main;

              location / {
                  proxy_set_header Host $host;
                  proxy_set_header X-Real-IP $remote_addr;
                  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                  proxy_pass http://<server ip>; # Real IP here
                  client_max_body_size 16m;
                  client_body_buffer_size 128k;
                  proxy_buffering on;
                  proxy_connect_timeout 90;
                  proxy_send_timeout 90;
                  proxy_read_timeout 120;
                  proxy_buffer_size 16k;
                  proxy_buffers 32 32k;
                  proxy_busy_buffers_size 64k;
                  proxy_temp_file_write_size 64k;
              }

              location /nginx_status {
                  stub_status on;
                  access_log off;
                  allow 127.0.0.1;
                  deny all;
              }
          }

          include /usr/local/nginx/vhosts/*.conf;
      }

    Here is my vhost conf:

      # cat /usr/local/nginx/vhosts/1.conf
      server {
          listen <virt ip>:81 default rcvbuf=8192 sndbuf=16384 backlog=32000; # Real IP here
          server_name <virt domain name>.com;
          charset off;
          access_log /var/log/nginx_host_general.access.log main;

          location / {
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_pass http://<virt ip>; # Real IP here
              client_max_body_size 16m;
              client_body_buffer_size 128k;
              proxy_buffering on;
              proxy_connect_timeout 90;
              proxy_send_timeout 90;
              proxy_read_timeout 120;
              proxy_buffer_size 16k;
              proxy_buffers 32 32k;
              proxy_busy_buffers_size 64k;
              proxy_temp_file_write_size 64k;
          }
      }

    Apache config:

      <VirtualHost xxxxxx:80>
          ServerName www.<domain>.com
          ServerAlias www.<domain>.com <domain>.com
          ServerAdmin webmaster@<domain>.com
          DocumentRoot /home/<domain>/domains/<domain>.com/public_html
          ScriptAlias /cgi-bin/ /home/<domain>/domains/<domain>.com/public_html/cgi-bin/
          UseCanonicalName OFF
          <IfModule !mod_ruid2.c>
              SuexecUserGroup <domain> <domain>
          </IfModule>
          <IfModule mod_ruid2.c>
              RMode config
              RUidGid <domain> <domain>
              RGroups apache access
          </IfModule>
          CustomLog /var/log/httpd/domains/<domain>.com.bytes bytes
          CustomLog /var/log/httpd/domains/<domain>.com.log combined
          ErrorLog /var/log/httpd/domains/<domain>.com.error.log
          <Directory /home/<domain>/domains/<domain>.com/public_html>
              Options +Includes -Indexes
              php_admin_flag engine ON
              php_admin_value sendmail_path '/usr/sbin/sendmail -t -i -f <domain>@<domain>.com'
          </Directory>
      </VirtualHost>

      <virtual ip address>:80 is a NameVirtualHost
          default server www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:16)
          port 80 namevhost www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:16)
          port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/xx/httpd.conf:107)
          port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/xx/httpd.conf:151)
          port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/xx/httpd.conf:195)
      <virtual ip address>:443 is a NameVirtualHost
          default server www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:61)
          port 443 namevhost www.xx.com (/usr/local/directadmin/data/users/xx/httpd.conf:61)
      <server ip>:80 is a NameVirtualHost
          default server localhost (/etc/httpd/conf/extra/httpd-vhosts.conf:29)
          port 80 namevhost localhost (/etc/httpd/conf/extra/httpd-vhosts.conf:29)
          port 80 namevhost www.xx.co.uk (/usr/local/directadmin/data/users/admin/httpd.conf:16)

    Read the article

  • Flaws in my PHP development setup - sharing sources causing lags

    - by Wiktor
    I have the following development setup for my PHP projects:

    - A workstation running Windows 7 with the PhpStorm IDE.
    - Git for version control.
    - CentOS on a virtual machine (VirtualBox) with Apache and MySQL (a copy of the production server).

    So far, I've been sharing the project's source folders between the host and guest systems, and it was working quite well, only really slowly. The reason is that Apache was reading files from a remote folder (mounted locally). After doing some research, I found that this setup can be improved by using disk mapping (Samba) instead of folder sharing, so I made that change. I configured PhpStorm to automatically deploy files to the mapped drive. Everything works like a charm now, except for one problem: when I change branches I need to synchronize the project's local folder with the one on the mapped drive, and that takes time, a lot of time (like branching in SVN). Is there another way to handle this than just working on the files directly on the mapped drive?
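
    One option along those lines is to let Git do the synchronization itself with a post-checkout hook (a sketch; assumes rsync is available in the shell Git uses and that the Samba share is mapped as drive Z:, which Git Bash sees as /z):

      #!/bin/sh
      # .git/hooks/post-checkout - mirror the working tree after a branch switch
      rsync -a --delete --exclude=.git ./ /z/project/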

    Read the article

  • How do I take an image/backup of Ubuntu partition and restore to VirtualBox VM

    - by whizkid
    I have Ubuntu 10.04 installed on an older hard disk. I recently bought a new disk and have already installed Windows 7 on it. I don't want to use the older disk anymore, and I would like to keep using Ubuntu in a virtual machine on the new disk (to avoid the possible mess-ups of dual boot; I found VirtualBox is the best free tool for this). I wish to keep exactly the same data, programs, configuration and settings I had been using in Ubuntu for so long, and avoid the tedious part of having to reconfigure so many things. How do I back up Ubuntu and restore it to another disk? I would prefer a free tool for the backup/restore.
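
    One way to do this with free tools, sketched under the assumption that the old Ubuntu disk shows up as /dev/sdb and that you run the imaging step from a live CD:

      dd if=/dev/sdb of=ubuntu.img bs=4M   # raw image of the whole old disk
      VBoxManage convertfromraw ubuntu.img ubuntu.vdi --format VDI
      # then attach ubuntu.vdi to a new VirtualBox VM as its primary disk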

    Read the article
