Search Results

Search found 16587 results on 664 pages for 'virtual hardware'.

Page 199/664

  • how to access samba shares via nat?

    - by HSM
    I have set up a virtual machine in VirtualBox and installed a Samba server. I changed the guest operating system's NIC from Bridged to NAT for a reason that I can't remember. I then added an additional NIC in "host only adapter" mode. The Windows host OS can now reach the Ubuntu 10.10 virtual server via the host-only NIC. However, I cannot access the Samba server running on the Ubuntu guest OS. I am not sure what to do now. How can I get the Windows host OS to access the guest OS's Samba server?
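
    A hedged sketch, not from the original post: with the second NIC in host-only mode, one common first check is whether smbd is actually listening on the host-only interface. Assuming the host-only NIC shows up as eth1 in the guest, something like this in the guest's /etc/samba/smb.conf should make the share reachable from Windows at the guest's host-only address (e.g. \\192.168.56.101\sharename, address assumed):

        [global]
           interfaces = lo eth1
           bind interfaces only = yes

        # then restart Samba on the guest (service name varies by release)
        sudo service smbd restart    # or: sudo /etc/init.d/samba restart

    With the NAT adapter alone the guest is not directly addressable from the host, so the host-only (or a bridged) interface is normally the simpler route for SMB.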

    Read the article

  • How can I determine which GPU card is running at PCI Express 2.0 x16 & which is using x8?

    - by M. Tibbits
    Is there a way to determine the speed of the PCI Express connection to a specific card? I have three cards plugged in: two Nvidia GTX 480's (one at x16 & and one at x8) one Nvidia GTX 460 running at x8 Is there some way, either by a function call in C or an option to lspci that I can determine the bus speed of the graphics cards? When I only use one of the cards for my CUDA program, I'd like to use the one which is running at x16. Thanks! Note: lspci -vvv dumps out For the two GTX 480s. I don't see any differences that pertain to bus speed. 03:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3) Subsystem: eVga.com. Corp. Device 1480 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 16 Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M] Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M] Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M] Region 5: I/O ports at df00 [disabled] [size=128] [virtual] Expansion ROM at b8000000 [disabled] [size=512K] Capabilities: <access denied> Kernel driver in use: nvidia Kernel modules: nvidia, nvidiafb, nouveau 03:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1) Subsystem: eVga.com. Corp. Device 1480 Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Interrupt: pin B routed to IRQ 5 Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K] Capabilities: <access denied> 04:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3) Subsystem: eVga.com. Corp. Device 1480 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 16 Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M] Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M] Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M] Region 5: I/O ports at cf00 [size=128] [virtual] Expansion ROM at c8000000 [disabled] [size=512K] Capabilities: <access denied> Kernel driver in use: nvidia Kernel modules: nvidia, nvidiafb, nouveau 04:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1) Subsystem: eVga.com. Corp. 
Device 1480 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin B routed to IRQ 5 Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K] Capabilities: <access denied> And the only differences I see relate specifically to the memory mapping: myComputer:~> diff card1 card2 3c3 < Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- --- > Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- 7,11c7,11 < Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M] < Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M] < Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M] < Region 5: I/O ports at df00 [disabled] [size=128] < [virtual] Expansion ROM at b8000000 [disabled] [size=512K] --- > Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M] > Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M] > Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M] > Region 5: I/O ports at cf00 [size=128] > [virtual] Expansion ROM at c8000000 [disabled] [size=512K] 18c18 < Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- --- > Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- 19a20 > Latency: 0, Cache Line Size: 64 bytes 21c22 < Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K] --- > Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]
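
    One approach, offered as a sketch rather than part of the original dump: lspci only shows the PCI Express capability block, which contains the negotiated link width, when run as root, which is why the output above ends in "Capabilities: <access denied>". Something like:

        sudo lspci -vv -s 03:00.0 | grep -iE 'lnkcap|lnksta'
        sudo lspci -vv -s 04:00.0 | grep -iE 'lnkcap|lnksta'

    The LnkSta line reports the currently negotiated link width (e.g. "Width x16" vs "Width x8"), while LnkCap reports what the slot and card are capable of.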

    Read the article

  • can't ping from virtualbox ubuntu server --nat connection

    - by George
    I set up an Ubuntu server in VirtualBox following these and these instructions. My connection worked, and so did SSH. Then I signed up at dyndns.com and configured the router, but in the port forward I changed the port from 2222 to 80 because it couldn't forward from 2222. My port is open and accepting connections, but I no longer have any connection to the server in VirtualBox. In the VirtualBox settings (Network > Port Forwarding) I use: Host IP: 127.0.0.1, Host port: 80, Guest port: 22, Guest IP: empty. I am not sure if I am using the right address in /etc/network/interfaces; I use 192.168.0.2, and I also use this address for the firewall rules in the router. I also put that address in the server's /etc/resolv.conf and in /etc/mysql/my.cnf. I can ping that address but nothing more (of course SSH doesn't work either). Thank you
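
    A sketch of what the forwarding rules usually look like, with the VM name "ubuntu-server" assumed (it is not given in the post); keeping SSH and HTTP as separate rules avoids sending web traffic to the SSH port:

        # host port 2222 -> guest 22 (SSH), host port 80 -> guest 80 (web)
        VBoxManage modifyvm "ubuntu-server" --natpf1 "ssh,tcp,,2222,,22"
        VBoxManage modifyvm "ubuntu-server" --natpf1 "http,tcp,,80,,80"

    Leaving the host IP field empty makes the rule listen on all host interfaces; binding it to 127.0.0.1, as in the settings above, only accepts connections from the host itself, which would also explain why nothing forwarded by the router reaches the VM.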

    Read the article

  • Why doesn't Microsoft support virtualizing a Server OS on Windows 7?

    - by Nathan DeWitt
    Microsoft doesn't support any server operating systems in Windows Virtual PC. Virtual Server 2005 doesn't run on Windows 7. Hyper-V is great, but I don't want to run Server 2008 as my main OS, and I love having Windows 7 run on the bare metal. I don't want to mess around with a dual boot. My only option for continuing to develop on Windows 7 with a virtual server environment on hand is VMware or VirtualBox. Other members of my team use Hyper-V, and VHDs are common. I'd prefer to be able to use their VHDs, so that leaves me with VirtualBox. Does anyone know if Microsoft is planning on bringing server virtualization back to the workstation?

    Read the article

  • Ubuntu 13.10 install VMware 9.0

    - by user212290
    After I installed VMware Workstation 9.0, when I want to open a VM, a dialog appears: "Before you can run VMware, several modules must be compiled and loaded into the running kernel", with CANCEL and INSTALL buttons. When I click the INSTALL button, nothing happens. When I run: sudo apt-get install linux-headers-3.11.0-12-generic sudo /usr/bin/vmware-modconfig --icon=vmware-workstation --appname=VMware I get: cc1: some warnings being treated as errors make[2]: *** [/tmp/modconfig-T9k19t/vmci-only/linux/driver.o] Error 1 make[2]: *** Waiting for unfinished jobs.... make[1]: *** [_module_/tmp/modconfig-T9k19t/vmci-only] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.11.0-12-generic' make: *** [vmci.ko] Error 2 make: Leaving directory `/tmp/modconfig-T9k19t/vmci-only' Failed to build vmci. Failed to execute the build command. Starting VMware services: Virtual machine monitor done Virtual machine communication interface failed VM communication interface socket family done Blocking file system done Virtual ethernet failed VMware Authentication Daemon done
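
    An often-suggested first step, shown here only as a sketch: make sure the build tools and the headers for the running kernel are present, then rerun the module build from a terminal so the full compiler output is visible. Whether the Workstation 9.0 modules actually build against a 3.11 kernel without patching is a separate question.

        sudo apt-get install build-essential linux-headers-$(uname -r)
        sudo vmware-modconfig --console --install-all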

    Read the article

  • Why is Google Analytics displaying wrong landing pages?

    - by Salman
    I see all of my pages as Landing Pages in Google Analytics, which cannot be true, as I did not post those pages anywhere and I don't see any traffic hitting those pages directly. Also, I am using virtual page views on a few buttons, and I see those virtual pages as Landing Pages too. For example, /click/request-a-quote shows 35000 views. 35000 is too big a number to be ignored. Even if I ignore Virtual Page Views, I see a lot of pages as Landing Pages that I am 100% sure visitors (at least not so many users) are NOT hitting directly. Any advice on how to debug it? PS: I'm using the following code: var _gaq = _gaq || []; _gaq.push(['_setAccount', '<']); _gaq.push(['_setDomainName', 'none']); _gaq.push(['setLocalGifPath', '/images/_utm.gif']); _gaq.push(['_setAllowLinker', true]); _gaq.push(['_trackPageview','account/phase1']);

    Read the article

  • Need multiple sound outputs to multiple speakers

    - by Usman Sajeel Haider
    How do I play 3 different music tracks at the same time on my computer, such that song1 is played on speaker1, song2 on speaker2, and so on? Is this possible programmatically? What additional hardware will I need? Do I need 3 separate sound cards? Given that the hardware is in place, how would I "route" the sound output for a particular song to a particular speaker? Alternatively, is there special hardware that can handle multiple inputs and outputs? I appreciate your expert opinions.
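
    A rough PulseAudio sketch, offered as an assumed setup rather than part of the original question: each physical output generally needs its own sink (separate sound cards or inexpensive USB audio devices), and a player can then be pointed at a specific sink. The sink names below are placeholders.

        pactl list short sinks                       # list the available output devices
        paplay --device=<sink-name-1> song1.wav &    # play track 1 on the first output
        paplay --device=<sink-name-2> song2.wav &    # play track 2 on the second output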

    Read the article

  • How do you repair an "input/output error" in an NTFS partition?

    - by Calixte
    I replaced a buggy Windows Vista installation with Ubuntu. All works fine except that the main hard drive where I had all my files is now inaccessible. Here is the error message I get: Error mounting: mount exited with exit code 13: ntfs_attr_pread_i: ntfs_pread failed: Input/output error Failed to read NTFS $Bitmap: Input/output error NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details Is it necessarily a hardware problem? If not, is there a way to repair the HD from Ubuntu?
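
    A hedged sketch of what can be tried from Ubuntu (device names are assumptions): check the drive's SMART health first, since input/output errors can indicate failing hardware, and then try ntfsfix, which repairs some common NTFS inconsistencies but is not a full replacement for running chkdsk /f from a Windows environment as the error message suggests.

        sudo apt-get install smartmontools
        sudo smartctl -a /dev/sdb        # drive health; look at the SMART error log
        sudo ntfsfix /dev/sdb1           # from the ntfs-3g/ntfsprogs tools; unmount the partition first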

    Read the article

  • Ubuntu 12.04 on MBPro, Early 2011, options

    - by sthysel
    I have a 2.2 GHz i7, 4 GB MacBook Pro 8.3, AMD Radeon HD 6750M 1024 MB, Early 2011. As far as I can see I have two options for running Ubuntu on this hardware: (1) VMware/VirtualBox with Ubuntu in a VM; I already ordered a 16 GB RAM upgrade for this. (2) Wipe OS X and go Ubuntu native, with 16 GB RAM, yay! I'm kinda leaning towards option 2 as I tend to spend 90% of my time in dev VMs at work anyway. All my other machines at home, and most at work, are Ubuntu/Linux as well. I have a Mac mini on standby for the odd iTunes backup/sync. If I don't have to keep OS X around, I would like to get rid of it altogether. Ubuntu support on Mac hardware seems to be a hit-and-miss affair as far as I can tell. Does anyone have good success running a recent version of Ubuntu on this hardware? Thanks

    Read the article

  • What can I do when the interviewer doesn't know the answer to his/her own question?

    - by bjskishore123
    Yesterday I had a terrible experience in an interview. The interviewer asked me about pure virtual functions. I said a pure virtual function may or may not have a definition in the base class, but derived classes should provide a definition unless they also want to be an abstract class. But the interviewer kept asking, "Can a pure virtual function have a definition !!! ???"... I said yes. Again he said "Pure?" I said yes, it is allowed; derived classes can explicitly call that function if they want that particular behavior. He sent me out. I am sure he doesn't know that a pure virtual function can have a definition. How do I deal with this kind of interviewer? After being asked a second time, should I lie and say it can't have a definition? :) Or should I stick to my words and lose the job opportunity?

    Read the article

  • How to achieve 'Activities' in Unity?

    - by Ralf Hersel
    I like the concept of an activities centric desktop and I wonder if this can be achieved in Unity. For me, an activity is a couple of applications that belong to the same subject, like 'photo manipulation', 'software development', 'office work', 'social activities', 'music and video'. I would like to utilize the virtual desktops to arrange applications that belong to the same activity group. Example: Desktop 1 contains all applications that belong to 'office work' Desktop 2 contains all applications that I need for 'software development' Desktop 3 contains all applications that I usually need for 'photo works' Therefore I would like to give names to the virtual desktops that reflect their purpose. And I would like Unity to auto-start the required applications when I start my computer or when I switch to one of the virtual desktops. Is this possible with Unity (or any other desktop)?
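
    As far as I know Unity has no native notion of named activities, but the auto-start/placement part can be approximated with a small login script using wmctrl; a sketch, with the application names and window titles as placeholders only:

        #!/bin/sh
        # launch the apps, then move their windows to the intended workspaces
        libreoffice --writer & gedit & gimp &
        sleep 10                                    # crude: wait for the windows to appear
        wmctrl -r "LibreOffice Writer" -t 0         # 'office work'   (workspaces are 0-based)
        wmctrl -r "gedit" -t 1                      # 'software development'
        wmctrl -r "GIMP" -t 2                       # 'photo works'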

    Read the article

  • Win7 installation on VirtualBox

    - by Will
    I have Ubuntu 12.04 installed on my machine. I also installed VirtualBox so that I can run Windows 7. I set up my virtual machine with 4 GB RAM and a 25 GB virtual HDD, and after I press Start, it doesn't boot from the Windows DVD in the optical drive; instead I get this error: Failed to open a session for the virtual machine win7. VT-x features locked or unavailable in MSR. (VERR_VMX_MSR_LOCKED_OR_DISABLED). Result Code: NS_ERROR_FAILURE (0x80004005) Component: Console Interface: IConsole {1968b7d3-e3bf-4ceb-99e0-cb7c913317bb} Do you have any idea what this is and how I could overcome it? Thank you!
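
    VERR_VMX_MSR_LOCKED_OR_DISABLED generally means the CPU's VT-x feature is disabled (or locked off) in the machine's firmware rather than anything wrong in the VM settings. A hedged sketch of what can be checked, with the VM name "win7" taken from the error message:

        egrep -c '(vmx|svm)' /proc/cpuinfo           # >0 means the CPU advertises VT-x/AMD-V
        VBoxManage modifyvm "win7" --hwvirtex off    # possible workaround for 32-bit guests only

    If the CPU supports it, the usual fix is enabling VT-x (Intel Virtualization Technology) in the BIOS/UEFI setup; the --hwvirtex off workaround does not apply to 64-bit guests.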

    Read the article

  • Windows Azure: Announcing release of Windows Azure SDK 2.2 (with lots of goodies)

    - by ScottGu
    Earlier today I blogged about a big update we made today to Windows Azure, and some of the great new features it provides. Today I’m also excited to also announce the release of the Windows Azure SDK 2.2. Today’s SDK release adds even more great features including: Visual Studio 2013 Support Integrated Windows Azure Sign-In support within Visual Studio Remote Debugging Cloud Services with Visual Studio Firewall Management support within Visual Studio for SQL Databases Visual Studio 2013 RTM VM Images for MSDN Subscribers Windows Azure Management Libraries for .NET Updated Windows Azure PowerShell Cmdlets and ScriptCenter The below post has more details on what’s available in today’s Windows Azure SDK 2.2 release.  Also head over to Channel 9 to see the new episode of the Visual Studio Toolbox show that will be available shortly, and which highlights these features in a video demonstration. Visual Studio 2013 Support Version 2.2 of the Window Azure SDK is the first official version of the SDK to support the final RTM release of Visual Studio 2013. If you installed the 2.1 SDK with the Preview of Visual Studio 2013 we recommend that you upgrade your projects to SDK 2.2.  SDK 2.2 also works side by side with the SDK 2.0 and SDK 2.1 releases on Visual Studio 2012: Integrated Windows Azure Sign In within Visual Studio Integrated Windows Azure Sign-In support within Visual Studio is one of the big improvements added with this Windows Azure SDK release.  Integrated sign-in support enables developers to develop/test/manage Windows Azure resources within Visual Studio without having to download or use management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer inside Visual Studio and choose the “Connect to Windows Azure” context menu option to connect to Windows Azure: Doing this will prompt you to enter the email address of the account you wish to sign-in with: You can use either a Microsoft Account (e.g. Windows Live ID) or an Organizational account (e.g. Active Directory) as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign-in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio Server Explorer (and you can start using them): With this new integrated sign in experience you are now able to publish web apps, deploy VMs and cloud services, use Windows Azure diagnostics, and fully interact with your Windows Azure services within Visual Studio without the need for a management certificate.  All of the authentication is handled using the Windows Azure Active Directory associated with your Windows Azure account (details on this can be found in my earlier blog post). Integrating authentication this way end-to-end across the Service Management APIs + Dev Tools + Management Portal + PowerShell automation scripts enables a much more secure and flexible security model within Windows Azure, and makes it much more convenient to securely manage multiple developers + administrators working on a project.  It also allows organizations and enterprises to use the same authentication model that they use for their developers on-premises in the cloud.  It also ensures that employees who leave an organization immediately lose access to their company’s cloud based resources once their Active Directory account is suspended. 
Filtering/Subscription Management Once you login within Visual Studio, you can filter which Windows Azure subscriptions/regions are visible within the Server Explorer by right-clicking the “Filter Services” context menu within the Server Explorer.  You can also use the “Manage Subscriptions” context menu to mange your Windows Azure Subscriptions: Bringing up the “Manage Subscriptions” dialog allows you to see which accounts you are currently using, as well as which subscriptions are within them: The “Certificates” tab allows you to continue to import and use management certificates to manage Windows Azure resources as well.  We have not removed any functionality with today’s update – all of the existing scenarios that previously supported management certificates within Visual Studio continue to work just fine.  The new integrated sign-in support provided with today’s release is purely additive. Note: the SQL Database node and the Mobile Service node in Server Explorer do not support integrated sign-in at this time. Therefore, you will only see databases and mobile services under those nodes if you have a management certificate to authorize access to them.  We will enable them with integrated sign-in in a future update. Remote Debugging Cloud Resources within Visual Studio Today’s Windows Azure SDK 2.2 release adds support for remote debugging many types of Windows Azure resources. With live, remote debugging support from within Visual Studio, you are now able to have more visibility than ever before into how your code is operating live in Windows Azure.  Let’s walkthrough how to enable remote debugging for a Cloud Service: Remote Debugging of Cloud Services To enable remote debugging for your cloud service, select Debug as the Build Configuration on the Common Settings tab of your Cloud Service’s publish dialog wizard: Then click the Advanced Settings tab and check the Enable Remote Debugging for all roles checkbox: Once your cloud service is published and running live in the cloud, simply set a breakpoint in your local source code: Then use Visual Studio’s Server Explorer to select the Cloud Service instance deployed in the cloud, and then use the Attach Debugger context menu on the role or to a specific VM instance of it: Once the debugger attaches to the Cloud Service, and a breakpoint is hit, you’ll be able to use the rich debugging capabilities of Visual Studio to debug the cloud instance remotely, in real-time, and see exactly how your app is running in the cloud. Today’s remote debugging support is super powerful, and makes it much easier to develop and test applications for the cloud.  Support for remote debugging Cloud Services is available as of today, and we’ll also enable support for remote debugging Web Sites shortly. Firewall Management Support with SQL Databases By default we enable a security firewall around SQL Databases hosted within Windows Azure.  This ensures that only your application (or IP addresses you approve) can connect to them and helps make your infrastructure secure by default.  This is great for protection at runtime, but can sometimes be a pain at development time (since by default you can’t connect/manage the database remotely within Visual Studio if the security firewall blocks your instance of VS from connecting to it). One of the cool features we’ve added with today’s release is support that makes it easy to enable and configure the security firewall directly within Visual Studio.  
Now with the SDK 2.2 release, when you try and connect to a SQL Database using the Visual Studio Server Explorer, and a firewall rule prevents access to the database from your machine, you will be prompted to add a firewall rule to enable access from your local IP address: You can simply click Add Firewall Rule and a new rule will be automatically added for you. In some cases, the logic to detect your local IP may not be sufficient (for example: you are behind a corporate firewall that uses a range of IP addresses) and you may need to set up a firewall rule for a range of IP addresses in order to gain access. The new Add Firewall Rule dialog also makes this easy to do.  Once connected you’ll be able to manage your SQL Database directly within the Visual Studio Server Explorer: This makes it much easier to work with databases in the cloud. Visual Studio 2013 RTM Virtual Machine Images Available for MSDN Subscribers Last week we released the General Availability Release of Visual Studio 2013 to the web.  This is an awesome release with a ton of new features. With today’s Windows Azure update we now have a set of pre-configured VM images of VS 2013 available within the Windows Azure Management Portal for use by MSDN customers.  This enables you to create a VM in the cloud with VS 2013 pre-installed on it in with only a few clicks: Windows Azure now provides the fastest and easiest way to get started doing development with Visual Studio 2013. Windows Azure Management Libraries for .NET (Preview) Having the ability to automate the creation, deployment, and tear down of resources is a key requirement for applications running in the cloud.  It also helps immensely when running dev/test scenarios and coded UI tests against pre-production environments. Today we are releasing a preview of a new set of Windows Azure Management Libraries for .NET.  These new libraries make it easy to automate tasks using any .NET language (e.g. C#, VB, F#, etc).  Previously this automation capability was only available through the Windows Azure PowerShell Cmdlets or to developers who were willing to write their own wrappers for the Windows Azure Service Management REST API. Modern .NET Developer Experience We’ve worked to design easy-to-understand .NET APIs that still map well to the underlying REST endpoints, making sure to use and expose the modern .NET functionality that developers expect today: Portable Class Library (PCL) support targeting applications built for any .NET Platform (no platform restriction) Shipped as a set of focused NuGet packages with minimal dependencies to simplify versioning Support async/await task based asynchrony (with easy sync overloads) Shared infrastructure for common error handling, tracing, configuration, HTTP pipeline manipulation, etc. 
Factored for easy testability and mocking Built on top of popular libraries like HttpClient and Json.NET Below is a list of a few of the management client classes that are shipping with today’s initial preview release: .NET Class Name Supports Operations for these Assets (and potentially more) ManagementClient Locations Credentials Subscriptions Certificates ComputeManagementClient Hosted Services Deployments Virtual Machines Virtual Machine Images & Disks StorageManagementClient Storage Accounts WebSiteManagementClient Web Sites Web Site Publish Profiles Usage Metrics Repositories VirtualNetworkManagementClient Networks Gateways Automating Creating a Virtual Machine using .NET Let’s walkthrough an example of how we can use the new Windows Azure Management Libraries for .NET to fully automate creating a Virtual Machine. I’m deliberately showing a scenario with a lot of custom options configured – including VHD image gallery enumeration, attaching data drives, network endpoints + firewall rules setup - to show off the full power and richness of what the new library provides. We’ll begin with some code that demonstrates how to enumerate through the built-in Windows images within the standard Windows Azure VM Gallery.  We’ll search for the first VM image that has the word “Windows” in it and use that as our base image to build the VM from.  We’ll then create a cloud service container in the West US region to host it within: We can then customize some options on it such as setting up a computer name, admin username/password, and hostname.  We’ll also open up a remote desktop (RDP) endpoint through its security firewall: We’ll then specify the VHD host and data drives that we want to mount on the Virtual Machine, and specify the size of the VM we want to run it in: Once everything has been set up the call to create the virtual machine is executed asynchronously In a few minutes we’ll then have a completely deployed VM running on Windows Azure with all of the settings (hard drives, VM size, machine name, username/password, network endpoints + firewall settings) fully configured and ready for us to use: Preview Availability via NuGet The Windows Azure Management Libraries for .NET are now available via NuGet. Because they are still in preview form, you’ll need to add the –IncludePrerelease switch when you go to retrieve the packages. The Package Manager Console screen shot below demonstrates how to get the entire set of libraries to manage your Windows Azure assets: You can also install them within your .NET projects by right clicking on the VS Solution Explorer and using the Manage NuGet Packages context menu command.  Make sure to select the “Include Prerelease” drop-down for them to show up, and then you can install the specific management libraries you need for your particular scenarios: Open Source License The new Windows Azure Management Libraries for .NET make it super easy to automate management operations within Windows Azure – whether they are for Virtual Machines, Cloud Services, Storage Accounts, Web Sites, and more.  Like the rest of the Windows Azure SDK, we are releasing the source code under an open source (Apache 2) license and it is hosted at https://github.com/WindowsAzure/azure-sdk-for-net/tree/master/libraries if you wish to contribute. PowerShell Enhancements and our New Script Center Today, we are also shipping Windows Azure PowerShell 0.7.0 (which is a separate download). You can find the full change log here. 
Here are some of the improvements provided with it: Windows Azure Active Directory authentication support Script Center providing many sample scripts to automate common tasks on Windows Azure New cmdlets for Media Services and SQL Database Script Center Windows Azure enables you to script and automate a lot of tasks using PowerShell.  People often ask for more pre-built samples of common scenarios so that they can use them to learn and tweak/customize. With this in mind, we are excited to introduce a new Script Center that we are launching for Windows Azure. You can learn about how to scripting with Windows Azure with a get started article. You can then find many sample scripts across different solutions, including infrastructure, data management, web, and more: All of the sample scripts are hosted on TechNet with links from the Windows Azure Script Center. Each script is complete with good code comments, detailed descriptions, and examples of usage. Summary Visual Studio 2013 and the Windows Azure SDK 2.2 make it easier than ever to get started developing rich cloud applications. Along with the Windows Azure Developer Center’s growing set of .NET developer resources to guide your development efforts, today’s Windows Azure SDK 2.2 release should make your development experience more enjoyable and efficient. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Cisco SR520w FE - WAN Port Stops Working

    - by Mike Hanley
    I have setup a Cisco SR520W and everything appears to be working. After about 1-2 days, it looks like the WAN port stops forwarding traffic to the Internet gateway IP of the device. If I unplug and then plug in the network cable connecting the WAN port of the SR520W to my Comcast Cable Modem, traffic startings flowing again. Also, if I restart the SR520W, the traffic will flow again. Any ideas? Here is the running config: Current configuration : 10559 bytes ! version 12.4 no service pad no service timestamps debug uptime service timestamps log datetime msec no service password-encryption ! hostname hostname.mydomain.com ! boot-start-marker boot-end-marker ! logging message-counter syslog no logging rate-limit enable secret 5 <removed> ! aaa new-model ! ! aaa authentication login default local aaa authorization exec default local ! ! aaa session-id common clock timezone PST -8 clock summer-time PDT recurring ! crypto pki trustpoint TP-self-signed-334750407 enrollment selfsigned subject-name cn=IOS-Self-Signed-Certificate-334750407 revocation-check none rsakeypair TP-self-signed-334750407 ! ! crypto pki certificate chain TP-self-signed-334750407 certificate self-signed 01 <removed> quit dot11 syslog ! dot11 ssid <removed> vlan 75 authentication open authentication key-management wpa guest-mode wpa-psk ascii 0 <removed> ! ip source-route ! ! ip dhcp excluded-address 172.16.0.1 172.16.0.10 ! ip dhcp pool inside import all network 172.16.0.0 255.240.0.0 default-router 172.16.0.1 dns-server 10.0.0.15 10.0.0.12 domain-name mydomain.com ! ! ip cef ip domain name mydomain.com ip name-server 68.87.76.178 ip name-server 66.240.48.9 ip port-map user-ezvpn-remote port udp 10000 ip ips notify SDEE ip ips name sdm_ips_rule ! ip ips signature-category category all retired true category ios_ips basic retired false ! ip inspect log drop-pkt no ipv6 cef ! multilink bundle-name authenticated parameter-map type inspect z1-z2-pmap audit-trail on password encryption aes ! ! username admin privilege 15 secret 5 <removed> ! crypto key pubkey-chain rsa named-key realm-cisco.pub key-string <removed> quit ! ! ! ! ! ! crypto ipsec client ezvpn EZVPN_REMOTE_CONNECTION_1 connect auto group EZVPN_GROUP_1 key <removed> mode client peer 64.1.208.90 virtual-interface 1 username admin password <removed> xauth userid mode local ! ! archive log config logging enable logging size 600 hidekeys ! ! ! 
class-map type inspect match-any SDM_AH match access-group name SDM_AH class-map type inspect match-any SDM-Voice-permit match protocol sip class-map type inspect match-any SDM_ESP match access-group name SDM_ESP class-map type inspect match-any SDM_EASY_VPN_REMOTE_TRAFFIC match protocol isakmp match protocol ipsec-msft match class-map SDM_AH match class-map SDM_ESP match protocol user-ezvpn-remote class-map type inspect match-all SDM_EASY_VPN_REMOTE_PT match class-map SDM_EASY_VPN_REMOTE_TRAFFIC match access-group 101 class-map type inspect match-any Easy_VPN_Remote_VT match access-group 102 class-map type inspect match-any sdm-cls-icmp-access match protocol icmp match protocol tcp match protocol udp class-map type inspect match-any sdm-cls-insp-traffic match protocol cuseeme match protocol dns match protocol ftp match protocol h323 match protocol https match protocol icmp match protocol imap match protocol pop3 match protocol netshow match protocol shell match protocol realmedia match protocol rtsp match protocol smtp extended match protocol sql-net match protocol streamworks match protocol tftp match protocol vdolive match protocol tcp match protocol udp class-map type inspect match-any L4-inspect-class match protocol icmp class-map type inspect match-all sdm-invalid-src match access-group 100 class-map type inspect match-all dhcp_out_self match access-group name dhcp-resp-permit class-map type inspect match-all dhcp_self_out match access-group name dhcp-req-permit class-map type inspect match-all sdm-protocol-http match protocol http ! ! policy-map type inspect sdm-permit-icmpreply class type inspect dhcp_self_out pass class type inspect sdm-cls-icmp-access inspect class class-default pass policy-map type inspect sdm-permit_VT class type inspect Easy_VPN_Remote_VT pass class class-default drop policy-map type inspect sdm-inspect class type inspect SDM-Voice-permit pass class type inspect sdm-cls-insp-traffic inspect class type inspect sdm-invalid-src drop log class type inspect sdm-protocol-http inspect z1-z2-pmap class class-default pass policy-map type inspect sdm-inspect-voip-in class type inspect SDM-Voice-permit pass class class-default drop policy-map type inspect sdm-permit class type inspect SDM_EASY_VPN_REMOTE_PT pass class type inspect dhcp_out_self pass class class-default drop ! zone security ezvpn-zone zone security out-zone zone security in-zone zone-pair security sdm-zp-in-ezvpn1 source in-zone destination ezvpn-zone service-policy type inspect sdm-permit_VT zone-pair security sdm-zp-out-ezpn1 source out-zone destination ezvpn-zone service-policy type inspect sdm-permit_VT zone-pair security sdm-zp-ezvpn-out1 source ezvpn-zone destination out-zone service-policy type inspect sdm-permit_VT zone-pair security sdm-zp-self-out source self destination out-zone service-policy type inspect sdm-permit-icmpreply zone-pair security sdm-zp-out-in source out-zone destination in-zone service-policy type inspect sdm-inspect-voip-in zone-pair security sdm-zp-ezvpn-in1 source ezvpn-zone destination in-zone service-policy type inspect sdm-permit_VT zone-pair security sdm-zp-out-self source out-zone destination self service-policy type inspect sdm-permit zone-pair security sdm-zp-in-out source in-zone destination out-zone service-policy type inspect sdm-inspect ! bridge irb ! ! interface FastEthernet0 switchport access vlan 75 ! interface FastEthernet1 switchport access vlan 75 ! interface FastEthernet2 switchport access vlan 75 ! interface FastEthernet3 switchport access vlan 75 ! 
interface FastEthernet4 description $FW_OUTSIDE$ ip address 75.149.48.76 255.255.255.240 ip nat outside ip ips sdm_ips_rule out ip virtual-reassembly zone-member security out-zone duplex auto speed auto crypto ipsec client ezvpn EZVPN_REMOTE_CONNECTION_1 ! interface Virtual-Template1 type tunnel no ip address ip virtual-reassembly zone-member security ezvpn-zone tunnel mode ipsec ipv4 ! interface Dot11Radio0 no ip address ! encryption vlan 75 mode ciphers aes-ccm ! ssid <removed> ! speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 12.0 18.0 24.0 36.0 48.0 54.0 station-role root ! interface Dot11Radio0.75 encapsulation dot1Q 75 native ip virtual-reassembly bridge-group 75 bridge-group 75 subscriber-loop-control bridge-group 75 spanning-disabled bridge-group 75 block-unknown-source no bridge-group 75 source-learning no bridge-group 75 unicast-flooding ! interface Vlan1 no ip address ip virtual-reassembly bridge-group 1 ! interface Vlan75 no ip address ip virtual-reassembly bridge-group 75 bridge-group 75 spanning-disabled ! interface BVI1 no ip address ip nat inside ip virtual-reassembly ! interface BVI75 description $FW_INSIDE$ ip address 172.16.0.1 255.240.0.0 ip nat inside ip ips sdm_ips_rule in ip virtual-reassembly zone-member security in-zone crypto ipsec client ezvpn EZVPN_REMOTE_CONNECTION_1 inside ! ip forward-protocol nd ip route 0.0.0.0 0.0.0.0 75.149.48.78 2 ! ip http server ip http authentication local ip http secure-server ip http timeout-policy idle 60 life 86400 requests 10000 ip nat inside source list 1 interface FastEthernet4 overload ! ip access-list extended SDM_AH remark SDM_ACL Category=1 permit ahp any any ip access-list extended SDM_ESP remark SDM_ACL Category=1 permit esp any any ip access-list extended dhcp-req-permit remark SDM_ACL Category=1 permit udp any eq bootpc any eq bootps ip access-list extended dhcp-resp-permit remark SDM_ACL Category=1 permit udp any eq bootps any eq bootpc ! access-list 1 remark SDM_ACL Category=2 access-list 1 permit 172.16.0.0 0.15.255.255 access-list 100 remark SDM_ACL Category=128 access-list 100 permit ip host 255.255.255.255 any access-list 100 permit ip 127.0.0.0 0.255.255.255 any access-list 100 permit ip 75.149.48.64 0.0.0.15 any access-list 101 remark SDM_ACL Category=128 access-list 101 permit ip host 64.1.208.90 any access-list 102 remark SDM_ACL Category=1 access-list 102 permit ip any any ! ! ! ! snmp-server community <removed> RO ! control-plane ! bridge 1 protocol ieee bridge 1 route ip bridge 75 route ip banner login ^CSR520 Base Config - MFG 1.0 ^C ! line con 0 no modem enable line aux 0 line vty 0 4 transport input telnet ssh ! scheduler max-task-time 5000 end I also ran some diagnostics when the WAN port stopped working: 1. 
show interface fa4 FastEthernet4 is up, line protocol is up Hardware is PQUICC_FEC, address is 0026.99c5.b434 (bia 0026.99c5.b434) Description: $FW_OUTSIDE$ Internet address is 75.149.48.76/28 MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec, reliability 255/255, txload 1/255, rxload 1/255 Encapsulation ARPA, loopback not set Keepalive set (10 sec) Full-duplex, 100Mb/s, 100BaseTX/FX ARP type: ARPA, ARP Timeout 04:00:00 Last input 01:08:15, output 00:00:00, output hang never Last clearing of "show interface" counters never Input queue: 0/75/23/0 (size/max/drops/flushes); Total output drops: 0 Queueing strategy: fifo Output queue: 0/40 (size/max) 5 minute input rate 0 bits/sec, 0 packets/sec 5 minute output rate 1000 bits/sec, 0 packets/sec 336446 packets input, 455403158 bytes Received 23 broadcasts, 0 runts, 0 giants, 37 throttles 41 input errors, 0 CRC, 0 frame, 0 overrun, 41 ignored 0 watchdog 0 input packets with dribble condition detected 172529 packets output, 23580132 bytes, 0 underruns 0 output errors, 0 collisions, 2 interface resets 0 unknown protocol drops 0 babbles, 0 late collision, 0 deferred 0 lost carrier, 0 no carrier 0 output buffer failures, 0 output buffers swapped out 2. show ip route Gateway of last resort is 75.149.48.78 to network 0.0.0.0 C 192.168.75.0/24 is directly connected, BVI75 64.0.0.0/32 is subnetted, 1 subnets S 64.1.208.90 [1/0] via 75.149.48.78 S 192.168.10.0/24 is directly connected, BVI75 75.0.0.0/28 is subnetted, 1 subnets C 75.149.48.64 is directly connected, FastEthernet4 S* 0.0.0.0/0 [2/0] via 75.149.48.78 3. show ip arp Protocol Address Age (min) Hardware Addr Type Interface Internet 75.149.48.65 69 001e.2a39.7b08 ARPA FastEthernet4 Internet 75.149.48.76 - 0026.99c5.b434 ARPA FastEthernet4 Internet 75.149.48.78 93 0022.2d6c.ae36 ARPA FastEthernet4 Internet 192.168.75.1 - 0027.0d58.f5f0 ARPA BVI75 Internet 192.168.75.12 50 7c6d.62c7.8c0a ARPA BVI75 Internet 192.168.75.13 0 001b.6301.1227 ARPA BVI75 4. sh ip cef Prefix Next Hop Interface 0.0.0.0/0 75.149.48.78 FastEthernet4 0.0.0.0/8 drop 0.0.0.0/32 receive 64.1.208.90/32 75.149.48.78 FastEthernet4 75.149.48.64/28 attached FastEthernet4 75.149.48.64/32 receive FastEthernet4 75.149.48.65/32 attached FastEthernet4 75.149.48.76/32 receive FastEthernet4 75.149.48.78/32 attached FastEthernet4 75.149.48.79/32 receive FastEthernet4 127.0.0.0/8 drop 192.168.10.0/24 attached BVI75 192.168.75.0/24 attached BVI75 192.168.75.0/32 receive BVI75 192.168.75.1/32 receive BVI75 192.168.75.12/32 attached BVI75 192.168.75.13/32 attached BVI75 192.168.75.255/32 receive BVI75 224.0.0.0/4 drop 224.0.0.0/24 receive 240.0.0.0/4 drop 255.255.255.255/32 receive Thanks in advance, -Mike

    Read the article

  • Need help in setting lighttpd on Ubuntu 9.10

    - by hap497
    Hi, I am trying to run lighttpd on Ubuntu 9.10. I get the conf file from the doc directory of lighttpd source. $ sudo ./lighttpd -f lighttpd.conf $ ps -ef | grep lighttpd root 2094 1 0 19:40 ? 00:00:00 ./lighttpd -f lighttpd.conf This is my lighttpd.conf: $ more lighttpd.conf # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", # "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) #server.port = 81 ## bind to localhost (default: all interfaces) #server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.s ocket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) 
#### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "ac cess plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 When I go to browser and hit 'http://127.0.0.1', I get link not found. Any idea?
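
    A minimal check, assuming the stock paths in the config above: the sample lighttpd.conf serves files from /srv/www/htdocs/, so if nothing has been placed there yet the browser will only see an error page. Creating the directory with an index file is a reasonable first test:

        sudo mkdir -p /srv/www/htdocs
        echo 'it works' | sudo tee /srv/www/htdocs/index.html
        curl -i http://127.0.0.1/        # expect 200; otherwise check /var/log/lighttpd/error.log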

    Read the article

  • How to configure fastcgi to work with ligttpd in ubuntu

    - by michael
    I am able to run lighttpd on ubuntu 9.10. But when i tried to setup fastcgi with lighttpd by putting this in the ligttpd.conf file: #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => "9098", "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", "docroot" => "/" # remote server may use # it's own docroot )) ) This is what I get in the error.log in ligttpd: 2010-03-07 21:00:11: (log.c.166) server started 2010-03-07 21:00:11: (mod_fastcgi.c.1104) the fastcgi-backend /usr/local/bin/cgi-fcgi failed to start: 2010-03-07 21:00:11: (mod_fastcgi.c.1108) child exited with status 1 /usr/local/bin/cgi-fcgi 2010-03-07 21:00:11: (mod_fastcgi.c.1111) If you're trying to run your app as a FastCGI backend, make sure you're using the FastCGI-enabled version. If this is PHP on Gentoo, add 'fastcgi' to the USE flags. 2010-03-07 21:00:11: (mod_fastcgi.c.1399) [ERROR]: spawning fcgi failed. 2010-03-07 21:00:11: (server.c.931) Configuration of plugins failed. Going down. I do have cgi-fcgi in /usr/local/bin: $ which cgi-fcgi /usr/local/bin/cgi-fcgi '/usr/local/bin/cgi-fcgi' is the executable after I download and compile fast-cgi. Here is my lighttpd conf file: $ more lighttpd.conf # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. 
server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) server.port = 9090 ## bind to localhost (default: all interfaces) server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => 1026, "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", #"docroot" => "/" # remote server may use # it's own docroot )) ) ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.s ocket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => 
"download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "ac cess plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 Thank you for your help.

    Read the article

  • ServerRoot in my lighttpd.conf

    - by michael
    Hi, I have use the following example lighttpd.conf to launch my lighttpd. Can you please tell me where is my 'ServerRoot'? # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) server.port = 9090 ## bind to localhost (default: all interfaces) server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module fastcgi.server = ( "/fastcgi_scripts/" => (( "host" => "127.0.0.1", "port" => 1026, "check-local" => "disable", "bin-path" => "/usr/local/bin/cgi-fcgi", #"docroot" => "/" # remote server may use # it's own docroot )) ) ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.socket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => 
"download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 Thank you.

    Read the article

  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and Eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic’s demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model. Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end-customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs. Challenges A word from Omnilogic “Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition.” – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine Provide technical support and demonstration facilities for ISVs migrating their customers’ solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and datacenter space; cut maintenance costs; and improve return on investment Demonstrate power of Oracle Exadata’s high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes Showcase Oracle Exadata’s unchallenged online transaction processing (OLTP) capabilities that cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating risk of unplanned outages Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners Solutions Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, support seminars and to showcase live demonstrations for Oracle Partners Proved how Oracle Exadata’s pre-engineered systems, that come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve the full performance potential immediately after go live Increased processing 
    speeds 10-fold and with zero data loss for a telecommunications provider’s client-facing customer relationship management solution Achieved performance improvements of 6 to 100 times for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day’s business can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience Demonstrated that Oracle Exadata’s built-in analytics, data mining, and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations Showed how Oracle Exadata’s columnar compression and intelligent storage architecture allow 10 times more data to be stored than on competitor platforms Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads onto fewer servers, which delivers greater power efficiency and lower operating costs than competing systems from IBM and other manufacturers Proved to ISVs that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs

    Read the article

  • The new Auto Scaling Service in Windows Azure

    - by shiju
    One of the key features of the cloud is on-demand scalability, which lets cloud application developers scale the number of compute resources up or down as needed. Auto Scaling provides the capability to dynamically scale your compute resources based on user-defined policies, key performance indicators (KPIs), health status checks, and schedules, without any manual intervention. Auto Scaling is an important feature to consider when designing and architecting cloud-based solutions: it unleashes the real power of the cloud by providing truly on-demand scalability, and it also guards the organizational budget for cloud-based application deployments. In the past, you had to leverage the Microsoft Enterprise Library Autoscaling Application Block (WASABi) or a service like MetricsHub to implement automatic scaling for your cloud apps hosted on Windows Azure. WASABi required you to host the autoscaling block in a Windows Azure Worker Role to apply autoscaling behaviour to your Windows Azure apps. The newly announced Auto Scaling service in Windows Azure lets you add automatic scaling to your Windows Azure compute services such as Cloud Services, Web Sites, and Virtual Machines. Unlike WASABi hosted in a Worker Role, you don't need to host any monitoring service to use the new Auto Scaling service, and the service is available to individual Windows Azure compute services as part of their scaling settings.
    Configure Auto Scaling for a Windows Azure Cloud Service: Currently the Auto Scaling service supports Cloud Services, Web Sites, and Virtual Machines. In this demo, I will be using a Cloud Services app with a Web Role and a Worker Role. To enable Auto Scaling, select your Windows Azure app in the Windows Azure management portal and choose the “SCALE” tab. The Scale tab shows all of the information related to Auto Scaling. The image below shows that the AutoScale service is currently disabled. To enable Auto Scaling, you need to choose either CPU or QUEUE (the QUEUE option is not available for Web Sites). The next image demonstrates how to configure Auto Scaling for a Web Role based on CPU utilization. We have configured the web role app to run with 1 to 5 virtual machine instances based on a CPU utilization range of 50 to 80%: if the aggregate utilization rises above 80%, instances are scaled up, and when utilization drops below 50%, instances are scaled down. The image after that demonstrates how to configure Auto Scaling for a Worker Role app based on the messages added to a Windows Azure storage queue. We configured the worker role app to run with 1 to 3 virtual machine instances based on the queue length, with a target of 2,000 messages per machine. The final image shows the Auto Scaling summary for the Cloud Service after the service has been configured.
    Summary: Auto Scaling is an extremely important behaviour of cloud applications, providing on-demand scalability without any manual intervention. Windows Azure now has strong support for enabling Auto Scaling for the apps deployed on its platform. The new Auto Scaling service in Windows Azure lets you add automatic scaling to your Windows Azure compute services such as Cloud Services, Web Sites, and Virtual Machines. With the new Auto Scaling service, you don't have to host any monitoring service as you did with the WASABi block; it is an excellent alternative to manually hosting the WASABi block in a Worker Role app.
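    To make the scaling rules above concrete, here is a minimal, purely illustrative C# sketch of the two policies just described (a CPU band of 50 to 80% with 1 to 5 instances for the web role, and a target of roughly 2,000 queue messages per instance with 1 to 3 instances for the worker role). This is not the Windows Azure Auto Scaling API, and every name in it is invented for the illustration; in practice you only set these bounds in the portal as shown in the screenshots, and the service evaluates rules like these for you.

    // Illustrative sketch only: this is NOT the Windows Azure Auto Scaling API.
    // It mirrors the CPU-band and queue-length rules described in the post.
    using System;

    public static class AutoScalePolicySketch
    {
        // CPU rule: scale out above the upper bound, scale in below the lower bound,
        // and always stay inside the configured instance range (e.g. 1-5 instances, 50-80%).
        public static int EvaluateCpuRule(int currentInstances, double averageCpuPercent,
                                          int minInstances, int maxInstances,
                                          double lowerCpuPercent, double upperCpuPercent)
        {
            int target = currentInstances;
            if (averageCpuPercent > upperCpuPercent)
                target = currentInstances + 1;          // aggregate CPU too high: add an instance
            else if (averageCpuPercent < lowerCpuPercent)
                target = currentInstances - 1;          // aggregate CPU low: remove an instance
            return Math.Max(minInstances, Math.Min(maxInstances, target));
        }

        // Queue rule: roughly one instance per N queued messages (e.g. 1-3 instances, N = 2000).
        public static int EvaluateQueueRule(int queueLength, int messagesPerInstance,
                                            int minInstances, int maxInstances)
        {
            int target = (int)Math.Ceiling(queueLength / (double)messagesPerInstance);
            return Math.Max(minInstances, Math.Min(maxInstances, target));
        }

        public static void Main()
        {
            // Web role: 1-5 instances, 50-80% CPU band, currently 2 instances at 85% -> 3.
            Console.WriteLine(EvaluateCpuRule(2, 85.0, 1, 5, 50.0, 80.0));
            // Worker role: 1-3 instances, 2,000 messages per instance, 4,500 queued -> 3.
            Console.WriteLine(EvaluateQueueRule(4500, 2000, 1, 3));
        }
    }

    The point of the sketch is simply that the service scales on a utilization band for CPU and on a per-instance message target for queues; there is no schedule or monitoring host you have to write yourself.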

    Read the article

  • Microsoft hosting free Hyper-V training for VMware Pros

    - by Ryan Roussel
    Microsoft will be hosting free training for virtualization professionals focused on Hyper-V, System Center, and virtualization architecture.  Details are below:   Just one week after Microsoft Management Summit 2011 (MMS), Microsoft Learning will be hosting an exclusive three-day Jump Start class specially tailored for VMware and Microsoft virtualization technology pros.  Registration for “Microsoft Virtualization for VMware Professionals” is open now and will be delivered as an online class on March 29-31, 2010 from 10:00am-4:00pm PDT.    The course is COMPLETELY FREE and OPEN TO ANYONE!  Please share with your customers, blog, Tweet, etc. – help us get the word out to strengthen support for Microsoft’s virtualization offerings. What’s the high-level overview? This cutting edge course will feature expert instruction and real-world demonstrations of Hyper-V and brand new releases from System Center Virtual Machine Manager 2012 Beta (many of which will be announced just one week earlier at MMS).  Register Now!   Day 1 will focus on “Platform” (Hyper-V, virtualization architecture, high availability & clustering) 10:00am – 10:30pm PDT:  Virtualization 360 Overview 10:30am – 12:00pm:  Microsoft Hyper-V Deployment Options & Architecture 1:00pm – 2:00pm:  Differentiating Microsoft and VMware (terminology, etc.) 2:00pm – 4:00pm:  High Availability & Clustering Day 2 will focus on “Management” (System Center Suite, SCVMM 2012 Beta, Opalis, Private Cloud solutions) 10:00am – 11:00pm PDT:  System Center Suite Overview w/ focus on DPM 11:00am – 12:00pm:  Virtual Machine Manager 2012 | Part 1 1:00pm –   1:30pm:  Virtual Machine Manager 2012 | Part 2 1:30pm – 2:30pm:  Automation with System Center Opalis & PowerShell 2:30pm – 4:00pm:  Private Cloud Solutions, Architecture & VMM SSP 2.0 Day 3 will focus on “VDI” (VDI Infrastructure/architecture, v-Alliance, application delivery via VDI) 10:00am – 11:00pm PDT:  Virtual Desktop Infrastructure (VDI) Architecture | Part 1 11:00am – 12:00pm:  Virtual Desktop Infrastructure (VDI) Architecture | Part 2 1:00pm – 2:30pm:  v-Alliance Solution Overview 2:30pm – 4:00pm:  Application Delivery for VDI     Every section will be team-taught by two of the most respected authorities on virtualization technologies: Microsoft Technical Evangelist Symon Perriman and leading Hyper-V, VMware, and XEN infrastructure consultant, Corey Hynes Who is the target audience for this training? Suggested prerequisite skills include real-world experience with Windows Server 2008 R2, virtualization and datacenter management. The course is tailored to these types of roles: · IT Professional · IT Decision Maker · Network Administrators & Architects · Storage/Infrastructure Administrators & Architects How do I to register and learn more about this great training opportunity? · Register: Visit the Registration Page and sign up for all three sessions · Blog: Learn more from the Microsoft Learning Blog · Twitter: Here are a few posts you can retweet: o Mar. 29-31 "Microsoft #Virtualization for VMware Pros" @SymonPerriman Corey Hynes http://bit.ly/JS-Hyper-V @MSLearning #Hyper-V o @SysCtrOpalis Mar. 29-31 "Microsoft #Virtualization for VMware Pros" @SymonPerriman Corey Hynes http://bit.ly/JS-Hyper-V #Hyper-V o Learn all the cool new features in Hyper-V & System Center 2012! SCVMM, Self-Service Portal 2.0, http://bit.ly/JS-Hyper-V #Hyper-V #Opalis What is a “Jump Start” course? A “Jump Start” course is “team-taught” by two expert instructors in an engaging radio talk show style format. 
The idea is to deliver readiness training on strategic and emerging technologies that drive awareness at scale before Microsoft Learning develops mainstream Microsoft Official Courses (MOC) that map to certifications.  All sessions are professionally recorded and distributed through MS Showcase, Channel 9, Zune Marketplace and iTunes for broader reach.

    Read the article

  • Exalytics and Oracle Business Intelligence Enterprise Edition (OBIEE) Partner Workshop

    - by mseika
    Workshop Description Oracle Fusion Middleware 11g is the #1 application infrastructure foundation. It enables enterprises to create and run agile and intelligent business applications and maximize IT efficiency by exploiting modern hardware and software architectures. Oracle Exalytics Business Intelligence Machine is the world’s first engineered system specifically designed to deliver high-performance analysis, modeling, and planning. Built using industry-standard hardware, market-leading business intelligence software, and in-memory database technology, Oracle Exalytics is an optimized system that delivers unmatched speed, visualizations, and scalability for Business Intelligence and Enterprise Performance Management applications. This FREE hands-on partner workshop highlights both the hardware and software components that are engineered to work together to deliver Oracle Exalytics - an optimized version of the industry-leading Oracle TimesTen In-Memory Database with analytic extensions, a highly scalable Oracle server designed specifically for in-memory business intelligence, and Oracle’s proven Business Intelligence Foundation with enhanced visualization capabilities and performance optimizations. This workshop will provide hands-on experience with Oracle's latest engineered system. Topics covered will include the TimesTen In-Memory Database and the new Summary Advisor for Exalytics, the technical details (including mobile features) of the latest release of visualization enhancements for OBIEE, and technical updates on Essbase. After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Exalytics solution. You will also be able to extend your current analytical and enterprise performance management application implementations with numerous Oracle technologies specifically enhanced to take advantage of the compute capacity and in-memory capabilities of Oracle Exalytics. If you are a BI or Data Warehouse architect, developer, or consultant, you don’t want to miss this 3-day workshop. Register Now! Presentations: Exalytics Architectural Overview; Upgrade and Lifecycle Management; TimesTen for Exalytics; Summary Advisor Utility; Essbase and EPM System on Exalytics; Dashboard and Analysis Interactions; OBIEE 11.1.1.6 Features and Advanced Topics. Lab Outline: The labs showcase Oracle Exalytics core components and functionality and provide expertise in the new features of Oracle Business Intelligence 11.1.1.6 and updates from prior releases. The hands-on activities are based on an Oracle VirtualBox image with software and training samples pre-installed.
    The labs cover: Lab Environment Setup; Creating and Working with Oracle TimesTen In-Memory Database; Running the Summary Advisor Utility; Working with Exalytics Visualization Features – Dashboard and Analysis Interactions. Audience: Oracle Partners; BI and EPM application developers and implementers; system integrators and solution consultants; data warehouse developers; enterprise architects. Prerequisites: Experience and understanding of OBIEE 11g is required. Previous attendance of the Oracle Business Intelligence Foundation Suite Workshop or BIEE 11g Introduction Workshop is highly recommended. A good understanding of data warehousing and data modeling for reporting and analysis purposes, and strong experience with database technologies, are preferred. Equipment Requirements: This workshop requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements. Hardware: minimum 8GB RAM; 60 GB free space (includes staging); USB 2.0 port (at least one available). It is strongly recommended that you bring a mouse. You will be working in a development environment and using the mouse heavily. Software: one of the following operating systems: 64-bit Windows host/laptop OS, or 64-bit host/laptop OS with a Windows VM (XP, Server, or Win 7, BIC2g, etc.); Internet Explorer 7.x/8.x or Firefox 3.5.x; WinRAR or 7zip utility to unzip workshop files (downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/); Oracle VirtualBox 4.0.2 or higher (downloadable from http://www.virtualbox.org/wiki/Downloads). CPU virtualization mode needs to be enabled; we will provide guidance on the day of the workshop. Attendees will be given a VirtualBox image containing a pre-installed Oracle Exalytics environment. Schedule: This workshop is 3 days - times vary by country! 9:00am: Sign-in and technical setup; 9:30am: Workshop starts; 5:00pm: Workshop ends. Oracle Exalytics and Business Intelligence (OBIEE) Workshop, December 11-13, 2012: Oracle BVP, Birmingham, UK. Register Here. Questions? Send email to: [email protected]. Oracle Platform Technologies Enablement Services

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization.  The potential consolidation density is a function of the extent to which the infrastructure is shared.  The three models provide increasing degrees of sharing: Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside. Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits. Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications.  (Note that a single deployment can combine Database and Schema consolidations.) Customer value: lower costs, better system utilization In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And, the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage. Customer value: higher productivity The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime. Capabilities / Characteristics In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied. Activities / Tasks Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. 
Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Appropriate design / technologies to handle dynamic string formatting?

    - by Mark W
    Recently I was tasked with implementing a way of adding support for versioning of hardware packet specifications to one of our libraries. First, a bit of information about the project. We have a hardware library which has classes for each of the various commands we support sending to our hardware. These hardware modules are essentially just lights with a few buttons and a 2- or 4-digit display. The packets typically follow the format {SOH}AADD{ETX}, where AA is our sentinel action code and DD is the device ID. These packet specs obviously differ from one command to the next, and the different firmware versions we have support different specifications. For example, on version 1 an action code of 14 may have a spec of {SOH}AADDTEXT{ETX}, which would be AA = 14 literal, DD = device ID, TEXT = literal text to display on the device. Then we come out with a revision which adds an extended byte (or bytes) onto the end of the packet, like this: {SOH}AADDTEXTE{ETX}. Assume the TEXT field is fixed width for this example. We have now added a new field onto the end which could be used to, say, specify the color or flash rate of the text/buttons. Currently this Java library only supports one version of the commands, the latest. In our hardware library we would have a class for this command, say a DisplayTextArgs.java. That class would have fields for the device ID, the text, and the extended byte. The command class would expose a method which generates the string ("{SOH}AADDTEXTE{ETX}") using the values from the class. In practice we would create the Args class as needed, populate the fields, call the method to get our packet string, then ship that down across the CAN. Some of our other commands' specifications can vary for the same command, on the same version, depending on some runtime state. For example, another command for version 1 may be {SOH}AA{ETX}, where this action code clears all of the modules behind a specific controller device of their text. We may overload this packet to have option fields with multiple meanings, like {SOH}AAOC{ETX} where OC is literal text, which tells the controller to only clear text on a specific module type and to leave the others alone; or the spec could also have an option format of {SOH}AADD{ETX} to clear the text off a specific device. Currently, in the method which generates the packet string, we would evaluate fields on the args class to determine which spec we will be using when formatting the packet. For this example, it would be along the lines of: if m_DeviceID != null then use {SOH}AADD{ETX}, else if m_ClearOCs == true then use {SOH}AAOC{ETX}, else use {SOH}AA{ETX}. I had considered using XML, or a database, to store String.format format strings, which were linked to firmware version numbers in some table. We would load them up at startup and pass in the version number of the hardware's firmware we are currently using (I can query the devices for their firmware version, but the version is not included in all packets as part of the spec). This breaks down pretty quickly because of the dynamic nature of how we select which version of the command to use. I then considered using a rule engine to possibly build out expressions which could be interpreted at runtime, to evaluate the args class's state and from that select the appropriate format string to use, but my brief look at rule engines for Java scared me away with their complexity. While it seems like it might be a viable solution, it seems overly complex. So this is why I am here.
    I wouldn't say design is my strongest skill, and I'm having trouble figuring out the best way to approach this problem. I probably won't be able to radically change the Args classes, but if the trade-off were good enough, I may be able to convince my boss that the change is appropriate. What I would like from the community is some feedback on best practices, design methodologies, APIs, or other resources which I could use to accomplish the following: logic to determine which set of commands to use for a given firmware version; of those commands, which version of each command to use (based on the Args class's state); keeping the rules logic decoupled from the application, so as to avoid needing releases for every firmware version; and being simple enough that I don't need weeks of study and trial and error to implement effectively.
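    As a concrete illustration of the "format strings keyed to firmware version, selected by a simple runtime check" idea, without reaching for a full rule engine, here is a minimal sketch. It is written in C# purely to keep the examples on this page in one language; each piece (delegates for the predicates, a dictionary keyed by firmware version) has a direct Java equivalent. All of the names are invented for the illustration, and none of this is the library described in the question.

    // Hypothetical sketch: pair each packet layout with a predicate over the args
    // object, and keep per-version definitions in a registry keyed by firmware
    // version. The first matching spec wins; the last entry acts as the default.
    using System;
    using System.Collections.Generic;

    public class ClearTextArgs                    // stand-in for an existing Args class
    {
        public string DeviceId;                   // null when not targeting one device
        public bool ClearOCs;                     // true when clearing by module type
    }

    public class PacketSpec<TArgs>
    {
        public readonly Func<TArgs, bool> Applies;    // runtime-state rule
        public readonly Func<TArgs, string> Format;   // builds the packet string

        public PacketSpec(Func<TArgs, bool> applies, Func<TArgs, string> format)
        {
            Applies = applies;
            Format = format;
        }
    }

    public class VersionedCommand<TArgs>
    {
        private readonly List<PacketSpec<TArgs>> _specs = new List<PacketSpec<TArgs>>();

        public VersionedCommand<TArgs> AddSpec(Func<TArgs, bool> applies, Func<TArgs, string> format)
        {
            _specs.Add(new PacketSpec<TArgs>(applies, format));
            return this;
        }

        public string ToPacket(TArgs args)
        {
            foreach (PacketSpec<TArgs> spec in _specs)
                if (spec.Applies(args)) return spec.Format(args);
            throw new InvalidOperationException("No packet spec matched the supplied args");
        }
    }

    public static class PacketSpecDemo
    {
        public static void Main()
        {
            // Version 1 of the "clear text" command from the question; "AA" stands in
            // for the literal action code, exactly as in the pseudocode above.
            var clearTextV1 = new VersionedCommand<ClearTextArgs>()
                .AddSpec(a => a.DeviceId != null, a => "{SOH}AA" + a.DeviceId + "{ETX}")
                .AddSpec(a => a.ClearOCs,         a => "{SOH}AAOC{ETX}")
                .AddSpec(a => true,               a => "{SOH}AA{ETX}");

            // A registry keyed by firmware version selects the right definition at runtime.
            var clearTextByVersion = new Dictionary<int, VersionedCommand<ClearTextArgs>>
            {
                { 1, clearTextV1 }
                // { 2, clearTextV2 }, ... built or loaded per firmware revision
            };

            var args = new ClearTextArgs { DeviceId = "07" };
            Console.WriteLine(clearTextByVersion[1].ToPacket(args));   // {SOH}AA07{ETX}
        }
    }

    In a shape like this, the Args classes stay as they are; only the string-building logic moves into a small ordered list of (predicate, format) pairs per firmware version, so adding a firmware revision means registering a new definition rather than editing conditionals scattered through the command classes.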

    Read the article

  • How to get a SafeControl entry into manifest.xml with a WSPBuilder project

    - by andrew
    Upon taking the default SharePoint master page for MySite, making some changes, and making a WSP out of it with WSPBuilder, I am now seeing these errors in my logs: http://spoint/MySite/%5Fcatalogs/masterpage/MySite.master - An unexpected error has been encountered in this Web Part. Error: The control with virtual path '_controltemplates/Welcome.ascx' is not in the safe controls list for web at URL 'http://spoint/MySite'., Source: [UnsafeControlException: The control with virtual path '_controltemplates/Welcome.ascx' is not in the safe controls list for web at URL 'http://spoint/MySite' (stack trace omitted) http://spoint/MySite/%5Fcatalogs/masterpage/MySite.master - An unexpected error has been encountered in this Web Part. Error: The control with virtual path '_controltemplates/DesignModeConsole.ascx' is not in the safe controls list for web at URL 'http://spoint/MySite'., Source: [UnsafeControlException: The control with virtual path '_controltemplates/DesignModeConsole.ascx' is not in the safe controls list for web at URL 'http://spoint/MySite' (stack trace omitted) So, this master page does in fact use these OOTB controls, and I guess I need to get them added to the safe controls list. And I guess I want to do this via the manifest.xml. But I do not see how to make WSPBuilder do this.

    Read the article

  • How to fix an NHibernate lazy loading error "no session or session was closed"?

    - by MCardinale
    I'm developing a website with ASP.NET MVC, NHibernate, and Fluent NHibernate, and I am getting the error "no session or session was closed" when I try to access a child object. These are my domain classes: public class ImageGallery { public virtual int Id { get; set; } public virtual string Title { get; set; } public virtual IList<Image> Images { get; set; } } public class Image { public virtual int Id { get; set; } public virtual ImageGallery ImageGallery { get; set; } public virtual string File { get; set; } } These are my maps: public class ImageGalleryMap:ClassMap<ImageGallery> { public ImageGalleryMap() { Id(x => x.Id); Map(x => x.Title); HasMany(x => x.Images); } } public class ImageMap:ClassMap<Image> { public ImageMap() { Id(x => x.Id); References(x => x.ImageGallery); Map(x => x.File); } } And this is my session factory helper class: public class NHibernateSessionFactory { private static ISessionFactory _sessionFactory; private static ISessionFactory SessionFactory { get { if(_sessionFactory == null) { _sessionFactory = Fluently.Configure() .Database(MySQLConfiguration.Standard.ConnectionString(MyConnString)) .Mappings(m => m.FluentMappings.AddFromAssemblyOf<ImageGalleryMap>()) .ExposeConfiguration(c => c.Properties.Add("hbm2ddl.keywords", "none")) .BuildSessionFactory(); } return _sessionFactory; } } public static ISession OpenSession() { return SessionFactory.OpenSession(); } } Everything works fine when I get an ImageGallery from the database using this code: IImageGalleryRepository igr = new ImageGalleryRepository(); ImageGallery ig = igr.GetById(1); But when I try to access the Image child objects with this code: string imageFile = ig.Images[1].File; I get this error: Initializing[Entities.ImageGallery#1]-failed to lazily initialize a collection of role: Entities.ImageGallery.Images, no session or session was closed Does anyone know how I can fix this? Thank you very much!
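    The usual fixes revolve around loading the collection while the session that produced it is still open, or mapping it to load eagerly. Below is a hedged sketch along those lines; it reuses the NHibernateSessionFactory and entity classes from the question, but the repository shape shown here is an assumption for the example, not your actual code.

    // Sketch of two common ways to avoid "no session or session was closed".
    // Assumes the entity and mapping classes from the question; the repository
    // shape is an assumption made for the example.
    using NHibernate;

    public class ImageGalleryRepository
    {
        // Option 1: force the lazy collection to load before the session is
        // disposed, so the gallery and its images stay usable afterwards.
        public ImageGallery GetByIdWithImages(int id)
        {
            using (ISession session = NHibernateSessionFactory.OpenSession())
            {
                ImageGallery gallery = session.Get<ImageGallery>(id);
                NHibernateUtil.Initialize(gallery.Images);   // load the images now
                return gallery;
            }
        }
    }

    // Option 2: map the collection as non-lazy in the Fluent mapping instead,
    // if you always want the images loaded together with the gallery:
    //
    //     HasMany(x => x.Images).Not.LazyLoad();

    In an ASP.NET MVC application the cleaner long-term fix is usually a session-per-request pattern, where a single ISession is opened at the start of the request and disposed at the end, so lazy loading works anywhere inside the request without eager-fetching everything.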

    Read the article

< Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >