Search Results

Search found 15383 results on 616 pages for 'home automation'.

Page 382/616 | < Previous Page | 378 379 380 381 382 383 384 385 386 387 388 389  | Next Page >

  • FileZilla Server Configuration Problems

    - by LiamB
    I've set up FileZilla Server on a Windows 2008 machine, created a user and password, and added a shared folder which I set as the Home Directory. I then connect to the server from the client computer:

        Status: Connecting to {IP}
        Status: Connection established, waiting for welcome message...
        Response: 220-Welcome To {NAME} FTP
        Response: 220 {DOMAIN}
        Command: USER {USER}
        Response: 331 Password required for {USER}
        Command: PASS *********
        Response: 230 Logged on
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is current directory.
        Command: TYPE I
        Response: 200 Type set to I
        Command: PASV
        Response: 227 Entering Passive Mode ({}DATA)
        Command: MLSD

    The connection works fine; however, no remote directory listing appears (it only shows "/"), and uploading any file fails. Any suggestions on how to debug this further?
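
    A quick way to see where the transfer actually breaks is to drive the same session from a command-line client with verbose output. A minimal sketch, assuming a Linux (or Cygwin) client, with the host and credentials as placeholders:

        # list the directory and attempt an upload with full protocol tracing;
        # the output shows the server's reply to PASV, MLSD and STOR, which usually
        # points at either a blocked passive port range or a home-directory permission
        echo "hello" > testfile.txt
        curl -v --ftp-pasv "ftp://USER:PASSWORD@SERVER_IP/"
        curl -v --ftp-pasv -T testfile.txt "ftp://USER:PASSWORD@SERVER_IP/"

    If the STOR command itself is refused, re-check the account's write permission on the shared folder in the FileZilla Server interface; if the data connection never opens at all, the passive port range is likely being blocked by the Windows firewall.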

    Read the article

  • Fedora vs Ubuntu to host Subversion and Bugzilla over Apache

    - by Tone
    I'm not interested in a flame war of Ubuntu vs Fedora vs whatever. What I am interested in is whether or not I should move my current Ubuntu server to Fedora. I have been able to get Subversion set up and hosted via Apache over HTTPS and it works quite well (I'm a .NET guy, so this was all new to me). I'm having trouble, though, with installing Bugzilla; I have run into some issues getting all the Perl scripts to run successfully. So my questions are: 1) Will Bugzilla install more easily on Fedora? Can I just install a package instead of having to download the tar.gz file, untar it, run Perl scripts, etc.? 2) Is Fedora considered to be a better production server system? I have no desire for a GUI; I just need it to host Subversion and Bugzilla over Apache2 and act as a file and print server for my home network.
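
    For what it's worth, both distributions carry packaged versions of Bugzilla, so the tarball-plus-Perl-scripts route can usually be avoided entirely. A hedged sketch (package names and availability vary by release):

        # Fedora: Bugzilla is available from the standard repositories
        sudo yum install bugzilla

        # Ubuntu/Debian: the package was named bugzilla3 at the time;
        # debconf walks through the Apache and database configuration
        sudo apt-get install bugzilla3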

    Read the article

  • Git version control with multiple users

    - by ignatius
    Hello, I am a little bit lost with this issue, so let me explain my problem: I want to set up a Git repository to which three or four users will contribute, so they need to be able to download the code, upload their changes to the server, and update their branches with the latest modifications. I set up a Linux machine, installed Git, set up the repository, and then added the users in order to enable access through SSH. Now my question is: what's next? The Git documentation is a little bit confusing. For example, when I try to clone the repository from a dummy user account I get:

        xxx@xxx-desktop:~/Documentos/git/test$ git clone -v ssh://[email protected]/pub.git
        Initialized empty Git repository in /home/xxx/Documentos/git/test/pub/.git/
        [email protected]'s password:
        fatal: '/pub.git' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    Is that a permissions problem? Do I need any special configuration? I want to avoid using git-daemon or gitosis. Sorry, maybe my question sounds silly, but while Git is powerful, I admit it is not so user friendly. Thanks
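
    One arrangement that matches the error above (the path /pub.git is resolved from the filesystem root, not from the user's home) is a bare, group-shared repository cloned by its absolute path. A minimal sketch, where the group name, user name, server name, and /srv/git are placeholders:

        # on the server: create a bare repository that a group can push to
        sudo groupadd gitusers
        sudo mkdir -p /srv/git
        sudo git init --bare --shared=group /srv/git/pub.git
        sudo chgrp -R gitusers /srv/git/pub.git
        sudo usermod -aG gitusers alice        # repeat for each contributor

        # on each client: clone using the repository's absolute path on the server
        git clone ssh://alice@server.example.com/srv/git/pub.git

    The path in an ssh:// URL is absolute unless written as /~/pub.git (relative to the login user's home directory), which is why ssh://user@host/pub.git ends up looking for /pub.git at the root of the server.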

    Read the article

  • Why can't I change the domain when connecting to network shares in Windows 8?

    - by Nathan Osman
    I have a computer running Windows 8.1 Pro. I also have Ubuntu 12.04.3 Server running within Hyper-V on the same machine. The Ubuntu server has Samba installed. Both the host and guest OS are connected to the workgroup WORKGROUP. When I open up "Network" in Windows 8.1 and attempt to connect to the server (both by name and IP address), I receive a credentials prompt, but entering the correct username and password for an account on the server fails. /etc/samba/smb.conf has the following share definitions set:

        [homes]
            comment = Home Directories
            browseable = no
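
    When Windows insists on its own machine name as the domain, spelling the account out in the credentials prompt usually gets around it, and it is worth confirming the Samba account independently of Windows first. A hedged sketch, with youruser standing in for the real account:

        # make sure the Unix account also has a Samba password
        sudo smbpasswd -a youruser

        # verify the account and the [homes] share from the Linux side
        smbclient -L localhost -U youruser
        smbclient //localhost/youruser -U youruser

        # reload Samba after any smb.conf change (Ubuntu 12.04)
        sudo service smbd restart

        # in the Windows credentials prompt, override the assumed domain by typing
        # the user name as SERVERNAME\youruser (or WORKGROUP\youruser)

    If smbclient works but Windows still fails, the problem is on the Windows side (cached credentials or the domain prefix) rather than in smb.conf.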

    Read the article

  • Recovering drivers from previous installation?

    - by Walkerneo
    Yesterday I bought a new computer and I've been setting it up since this morning. The hard drive came with Windows 7 Home Premium (x64) installed, but I decided to install Windows 7 Ultimate (x64) instead; this is what I'm currently using. Unfortunately, some drivers that were installed there are ones I need in this installation. I only noticed because the computer doesn't have the Ethernet drivers, so I'm only able to connect to the internet via wireless. There are other drivers missing as well, but I'm not yet seeing the effects. I still have the Windows.old folder with the previous installation, which has the drivers. Is there any way to copy the necessary ones over? I tried the options for updating driver software through Device Manager with the path set to System32\drivers of the old installation, but it didn't find anything.

    Read the article

  • Amazon EC2 folder missing

    - by CQM
    The instructions I'm following say: "To set permissions on the settings file, on your Amazon EC2 instance, at a command prompt, use the following command to set permissions: sudo chmod 666 /var/www/html/sites/default/settings.php". Except I don't have a www folder in my instance:

        [ec2-user@ip-10-242-118-215 ~]$ cd /
        [ec2-user@ip-10-242-118-215 /]$ ls
        bin cgroup etc lib local media opt root selinux sys usr boot dev home lib64 lost+found mnt proc sbin srv tmp var
        [ec2-user@ip-10-242-118-215 /]$ cd var
        [ec2-user@ip-10-242-118-215 var]$ ls
        account db games local log nis preserve run tmp cache empty lib lock mail opt racoon spool yp

    Please advise: did I forget to install something that the Amazon instructions assumed I knew about? I'm running the 64-bit Amazon Linux AMI (March 2012), and it feels like the web server is missing.
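
    The stock Amazon Linux AMI ships without a web server, so /var/www only appears once Apache (httpd) is installed. A minimal sketch of getting to the point where the quoted instructions apply (the extra packages are only a guess, based on settings.php suggesting a Drupal guide):

        # install and start Apache on Amazon Linux; this creates /var/www/html
        sudo yum install -y httpd
        sudo service httpd start
        sudo chkconfig httpd on          # start on boot

        # likely also needed if the guide is a Drupal walkthrough
        sudo yum install -y php php-mysql mysql-server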

    Read the article

  • Why would ssh fail to expand %h variable in .ssh/config?

    - by Zoran
    Why would ssh fail to expand %h from .ssh/config? This used to work, and it still works everywhere except on a RHEL box; I'm trying to find out where the difference comes from. Is there a setting somewhere that tells ssh not to expand %h? I have something like this in my .ssh/config:

        Host *.foo
            HostName %h.mydomain.com

    On the RHEL box where this doesn't work, I get this:

        $ ssh -vvvv bar.foo
        OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
        debug1: Reading configuration data /home/zsimic/.ssh/config
        debug1: Applying options for *.foo
        debug1: Applying options for *.foo
        debug1: Applying options for *
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        ssh: Could not resolve hostname %h.mydomain.com: Name or service not known
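
    It isn't clear that the OpenSSH 5.3 build shown in that trace expands %h inside HostName at all, so a version-proof workaround is to spell the entries out instead of relying on the token. A minimal sketch, with bar and baz standing in for the real host names:

        # append explicit entries that do not depend on %h expansion
        printf '%s\n' \
          'Host bar.foo' \
          '    HostName bar.mydomain.com' \
          'Host baz.foo' \
          '    HostName baz.mydomain.com' >> ~/.ssh/config

        # confirm which config files and final HostName the client actually uses
        ssh -vvv bar.foo 2>&1 | grep -i -e 'Reading configuration' -e 'Connecting to'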

    Read the article

  • Is it Secure to Grant the Apache User Ownership of Directories & Files for WordPress

    - by Oudin
    I'm currently setting up WordPress on an Ubuntu 12 server. Everything runs fine, but there is an issue when it comes to automatically updating and uploading media via WP, as the Apache "www-data" user does not have permission to write to the directories; "user1" has full permission. All my directories have permissions of 0755 and files 0644, and my directory setup is as follows: /home/user1/public_html, with all WP files and directories in "public_html". To work around the auto-updating and media uploads I've granted the Apache user ownership of the following directories:

        sudo chown www-data:www-data wp-content -R
        sudo chown www-data:www-data wp-includes -R
        sudo chown www-data:www-data wp-admin -R

    I would like to know how secure this is, and if it is not secure, what would be the best solution that still allows me to keep all files and directories owned by user1 while letting WP automatically update and upload media?
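
    A middle ground that keeps user1 as the owner is to give www-data group write access only where WordPress genuinely needs to write at runtime, and leave the code read-only to the web server. A hedged sketch:

        cd /home/user1/public_html

        # user1 stays the owner; www-data becomes the group
        sudo chown -R user1:www-data .

        # code: writable by user1 only, readable by the web server
        sudo find . -type d -exec chmod 755 {} \;
        sudo find . -type f -exec chmod 644 {} \;

        # group write only where WordPress writes at runtime
        sudo chmod -R g+w wp-content/uploads
        sudo chmod -R g+w wp-content/upgrade

    For fully automatic core and plugin updates WordPress also needs write access to the files being replaced, or an alternative update method (such as ssh2 or ftp credentials via FS_METHOD in wp-config.php), so treat this as a starting point rather than a complete recipe.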

    Read the article

  • Why doesn’t Windows Explorer show my removable / USB drive even though the command prompt does?

    - by Gishu
    I'm running WinXP SP2. Around 50% of the time when I plug in my USB drive, Windows Explorer refuses to show the drive. If I click on the Safely Remove Hardware icon in the tray, I can see a menu item for the drive, say drive G: (the light on the USB drive is also on). If I type G: into the address bar of Explorer, it says 'Cannot find...'. If I type G: into a command prompt window, it works and I can do a dir to see the list of directories on the drive. To fix this, I have to remove and reinsert the drive, but doing that every day is annoying. Also, this happens only on this machine; I use this drive on my home machine and it works flawlessly each time. Can anyone suggest things that I could try? Thanks for reading.

    Read the article

  • LAMP stack security question - uploading files to server

    - by morpheous
    I am running Ubuntu 9.10 desktop on my home machine, and I need to upload files from it to my web server on a periodic basis. My server is running Ubuntu Server LTS. I want my server to be secure and to run only the LAMP stack and possibly an email server; I do not (ideally) want FTP or anything else that could let (more) knowledgeable attackers break into my server. Can anyone recommend how I can send files from my local machine to the server? This may seem an easy/trivial question, but I am relatively new to Linux. I got my previous Windows server machine seriously hacked in the past, hence the move to Linux, and that's why I am so security conscious.
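
    Since an SSH server is almost certainly already running for administration, the usual answer is to reuse it for file transfer instead of adding an FTP daemon. A minimal sketch, with the host name and paths as placeholders:

        # one-off copy over SSH
        scp -r ./site/ user@yourserver:/var/www/

        # periodic sync that only transfers what changed (suitable for a cron job)
        rsync -avz --delete -e ssh ./site/ user@yourserver:/var/www/site/

        # optional hardening: switch the server to key-only logins
        ssh-keygen -t rsa
        ssh-copy-id user@yourserver
        # then set "PasswordAuthentication no" in /etc/ssh/sshd_config on the server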

    Read the article

  • What happens to modern-ui apps when they aren't in the foreground?

    - by Mark Allen
    If I start a modern-ui app and then switch to a different app or a normal program running on the desktop, what happens to the first app? I've heard something about the first app being suspended, but realized that I don't really know that for certain. I mean, could you write a SETI@Home (BOINC) app if you wanted to, or will apps that aren't in the foreground always be suspended? Can you change that? I could see changing that based on available resources, running from AC vs. battery, etc. This morning I heard about an iPad being recovered thanks to the "Find my iPad" app, and was wondering whether you could write such a thing as a modern-ui app and have it work without being the running foreground app. (I'm aware that you'd just write a Windows service or similar, that's not what I'm asking about.)

    Read the article

  • [MINI HOW-TO] Disable Third Party Extensions in Internet Explorer

    - by Mysticgeek
    Are you looking for a way to make browsing to sites you’re not sure of in Internet Explorer a bit more secure? Here we take a quick look at how to disable third-party extensions in IE. Open up Internet Explorer, click on Tools, then select Internet Options. Under Internet Properties, click on the Advanced tab, scroll down under Settings, uncheck Enable third-party browser extensions, and click OK. Now restart IE and the extensions will be disabled. You can then re-enable them when you know a site can be trusted and you want to be able to use its services. This will help you avoid malware when you visit a site that wants to install an extension and you’re not sure about it.

    Read the article

  • Fix Icon Display Problems by Rebuilding the Windows 7 Thumbnail Cache

    - by The Geek
    Have you ever been browsing through photos or videos on your PC, and noticed that the thumbnails weren’t showing up properly? Sometimes they get corrupted, and you can quickly rebuild them to fix the problem. Just for some background, let’s walk through what we’re talking about. Normally, when you’re browsing around your files, you’ll see thumbnails for pictures and videos that you are viewing. These thumbnails are all generated and stored in a cache to make browsing files faster. But sometimes the cache gets corrupted, and we need to rebuild it. Here’s an example of what happens when it goes out of whack… Rebuilding the Windows 7 Thumbnail Cache: All you have to do is open up Disk Cleanup through the Start Menu search box (just type in disk cleanup to find it). Just make sure that Thumbnails is checked, and then click OK to run through the cleaning process.

    Read the article

  • How to Upload Really Large Files to SkyDrive, Dropbox, or Email

    - by Matthew Guay
    Do you need to upload a very large file to store online or email to a friend? Unfortunately, whether you’re emailing a file or using online storage sites like SkyDrive, there’s a limit on the size of files you can use. Here’s how to get around the limits. SkyDrive only lets you add files up to 50 MB, and while the Dropbox desktop client lets you add really large files, the web interface has a 300 MB limit, so if you were on another PC and wanted to add giant files to your Dropbox, you’d need to split them. The same technique also works for any file sharing service, even if you were sending files through email. There are two ways you can get around the limits: first, by just compressing the files if you’re close to the limit; the second and more interesting way is to split up the files into smaller chunks. Keep reading for how to do both.
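
    The article walks through doing this with a Windows archiving tool, but the splitting idea itself is generic; a hedged sketch of the same technique with standard command-line tools (the file name is a placeholder):

        # split a big file into 50 MB chunks: bigfile.iso.part_aa, _ab, ...
        split -b 50M bigfile.iso bigfile.iso.part_

        # send the chunks individually, then reassemble them on the other side
        cat bigfile.iso.part_* > bigfile.iso

        # optional: compare checksums to confirm the reassembled file is intact
        md5sum bigfile.iso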

    Read the article

  • Use Your Chart-Drawing Skills to Win a Free Chrome Cr-48 Notebook

    - by ETC
    Today Google announced that they are partnering with a number of Chrome web application developers to distribute a number of their Chrome OS Notebooks to lucky fans. That’s when we noticed something interesting that can greatly increase your odds of getting one. Unlike Box, MOG, and Zoho, who are doing random giveaways, the LucidChart giveaway is based on a contest of skill – they are picking the best drawings using their flowchart tool and giving away Chrome Notebooks to the winners. So all you have to do is create one of the most interesting drawings / charts, and you will get your hands on one. We’ve also confirmed this with the fine people at LucidChart, who told us “any user who spends a bit of time and effort to do something creative has a good shot at winning one.” How great is the Chrome Cr-48 Notebook? What’s it all about? We wouldn’t know, since Google hasn’t given us here at How-To Geek an opportunity to use one, despite our attempts. It’s sad, since we’re huge fans of the Chrome browser, that we can’t share our Chrome notebook experiences with hundreds of thousands of daily subscribers and millions of monthly visitors. Hint. Hint. Win a Chrome Cr-48 notebook from LucidChart [LucidChart]

    Read the article

  • 8 Reasons Why Even Microsoft Agrees the Windows Desktop is a Nightmare

    - by Chris Hoffman
    Let’s be honest: The Windows desktop is a mess. Sure, it’s extremely powerful and has a huge software library, but it’s not a good experience for average people. It’s not even a good experience for geeks, although we tolerate it. Even Microsoft agrees about this. Microsoft’s Surface tablets with Windows RT don’t support any third-party desktop apps. They consider this a feature — users can’t install malware and other desktop junk, so the system will always be speedy and secure. Malware is Still Common Malware may not affect geeks, but it certainly continues to affect average people. Securing Windows, keeping it secure, and avoiding unsafe programs is a complex process. There are over 50 different file extensions that can contain harmful code to keep track of. It’s easy to have theoretical discussions about how malware could infect Mac computers, Android devices, and other systems. But Mac malware is extremely rare, and has generally been caused by problems with the terrible Java plug-in. Macs are configured to only run executables from identified developers by default, whereas Windows will run everything. Android malware is talked about a lot, but it is rare in the real world and is generally confined to users who disable security protections and install pirated apps. Google has also taken action, rolling out built-in antivirus-like app checking to all Android devices, even old ones running Android 2.3, via Play Services. Whatever the reason, Windows malware is still common while malware for other systems isn’t. We all know it — anyone who does tech support for average users has dealt with infected Windows computers. Even users who can avoid malware are stuck dealing with complex and nagging antivirus programs, especially since it’s now so difficult to trust Microsoft’s antivirus products. Manufacturer-Installed Bloatware is Terrible Sit down with a new Mac, Chromebook, iPad, Android tablet, Linux laptop, or even a Surface running Windows RT and you can enjoy using your new device. The system is a clean slate for you to start exploring and installing your new software. Sit down with a new Windows PC and the system is a mess. Rather than be delighted, you’re stuck reinstalling Windows and then installing the necessary drivers, or you’re forced to start uninstalling useless bloatware programs one by one, trying to figure out which ones are actually useful. After uninstalling the useless programs, you may end up with a system tray full of icons for ten different hardware utilities anyway. The first experience of using a new Windows PC is frustration, not delight. Yes, bloatware is still a problem on Windows 8 PCs. Manufacturers can customize the Refresh image, preventing bloatware from easily being removed. Finding a Desktop Program is Dangerous Want to install a Windows desktop program? Well, you’ll have to head to your web browser and start searching. It’s up to you, the user, to know which programs are safe and which are dangerous. Even if you find a website for a reputable program, the advertisements on that page will often try to trick you into downloading fake installers full of adware. While it’s great to have the ability to leave the app store and get software that the platform’s owner hasn’t approved — as on Android — this is no excuse for not providing a good, secure software installation experience for typical users installing typical programs.
Even Reputable Desktop Programs Try to Install Junk Even if you do find an entirely reputable program, you’ll have to keep your eyes open while installing it. It will likely try to install adware, add browser toolbars, change your default search engine, or change your web browser’s home page. Even Microsoft’s own programs do this — when you install Skype for Windows desktop, it will attempt to modify your browser settings to use Bing, even if you’ve specifically chosen another search engine and home page. With Microsoft setting such an example, it’s no surprise so many other software developers have followed suit. Geeks know how to avoid this stuff, but there’s a reason program installers continue to do this. It works and tricks many users, who end up with junk installed and settings changed. The Update Process is Confusing On iOS, Android, and Windows RT, software updates come from a single place — the app store. On Linux, software updates come from the package manager. On Mac OS X, typical users’ software updates likely come from the Mac App Store. On the Windows desktop, software updates come from… well, every program has to create its own update mechanism. Users have to keep track of all these updaters and make sure their software is up-to-date. Most programs now have their act together and automatically update by default, but users who have old versions of Flash and Adobe Reader installed are vulnerable until they realize their software isn’t automatically updating. Even if every program updates properly, the sheer mess of updaters is clunky, slow, and confusing in comparison to a centralized update process. Browser Plugins Open Security Holes It’s no surprise that other modern platforms like iOS, Android, Chrome OS, Windows RT, and Windows Phone don’t allow traditional browser plugins, or only allow Flash and build it into the system. Browser plugins provide a wealth of different ways for malicious web pages to exploit the browser and open the system to attack. Browser plugins are one of the most popular attack vectors because of how many users have out-of-date plugins and how many plugins, especially Java, seem to be designed without taking security seriously. Oracle’s Java plugin even tries to install the terrible Ask toolbar when installing security updates. That’s right — the security update process is also used to cram additional adware into users’ machines so unscrupulous companies like Oracle can make a quick buck. It’s no wonder that most Windows PCs have an out-of-date, vulnerable version of Java installed. Battery Life is Terrible Windows PCs have bad battery life compared to Macs, iOS devices, and Android tablets, all of which Windows now competes with. Even Microsoft’s own Surface Pro 2 has bad battery life. Apple’s 11-inch MacBook Air, which has very similar hardware to the Surface Pro 2, offers double its battery life when web browsing. Microsoft has been fond of blaming third-party hardware manufacturers for their poorly optimized drivers in the past, but there’s no longer any room to hide. The problem is clearly Windows. Why is this? No one really knows for sure. Perhaps Microsoft has kept on piling Windows component on top of Windows component and many older Windows components were never properly optimized. Windows Users Become Stuck on Old Windows Versions Apple’s new OS X 10.9 Mavericks upgrade is completely free to all Mac users and supports Macs going back to 2007. Apple has also announced their intention that all new releases of Mac OS X will be free.
In 2007, Microsoft had just shipped Windows Vista. Macs from the Windows Vista era are being upgraded to the latest version of the Mac operating system for free, while Windows PCs from the same era are probably still using Windows Vista. There’s no easy upgrade path for these people. They’re stuck using Windows Vista and maybe even the outdated Internet Explorer 9 if they haven’t installed a third-party web browser. Microsoft’s upgrade path is for these people to pay $120 for a full copy of Windows 8.1 and go through a complicated process that’s actually a clean install. Even users of Windows 8 devices will probably have to pay money to upgrade to Windows 9, while updates for other operating systems are completely free. If you’re a PC geek, a PC gamer, or someone who just requires specialized software that only runs on Windows, you probably use the Windows desktop and don’t want to switch. That’s fine, but it doesn’t mean the Windows desktop is actually a good experience. Much of the burden falls on average users, who have to struggle with malware, bloatware, adware bundled in installers, complex software installation processes, and out-of-date software. In return, all they get is the ability to use a web browser and some basic Office apps that they could use on almost any other platform without all the hassle. Microsoft would agree with this, touting Windows RT and their new “Windows 8-style” app platform as the solution. Why else would Microsoft, a “devices and services” company, position the Surface — a device without traditional Windows desktop programs — as their mass-market device recommended for average people? This isn’t necessarily an endorsement of Windows RT. If you’re tech support for your family members and it comes time for them to upgrade, you may want to get them off the Windows desktop and tell them to get a Mac or something else that’s simple. Better yet, if they get a Mac, you can tell them to visit the Apple Store for help instead of calling you. That’s another thing Windows PCs don’t offer — good manufacturer support. Image Credit: Blanca Stella Mejia on Flickr, Collin Anderson on Flickr, Luca Conti on Flickr

    Read the article

  • Connect to SVN repository with Netbeans using SVN+SSH

    - by shuby_rocks
    Hello all, I am trying to connect to an SVN server in order to import my project into it using the svn+ssh authentication method. I am using the NetBeans IDE (6.8) with the Subversion plugin installed, on Windows XP SP2. I have plink installed, with its path set in the Windows PATH environment variable. When I use a repository URL similar to the following (XXXX and YYYY replaced with sensible things), svn+ssh://XXXX@YYYY/home/dce/svn/trunk, along with this external tunnel command, plink -l <myUserName> -i C:\\privateKey.ppk, I keep getting this error:

        org.tigris.subversion.javahl.ClientException: Network connection closed unexpectedly

    I searched the Internet and tried many things, but nothing worked out. Please help if anybody has an idea of what may be going wrong. Thanks a lot in advance.
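
    That JavaHL exception only says the tunnel died, so it helps to test each layer outside the IDE first. A hedged sketch of the two checks, written as shell commands with the same placeholders as the question (on Windows, substitute plink in step 1 and set in place of export):

        # 1. prove the SSH leg works on its own; this should give you a shell on the server
        ssh -l myUserName YYYY

        # 2. prove the command-line Subversion client can drive the same tunnel
        export SVN_SSH='ssh -l myUserName'
        svn list svn+ssh://XXXX@YYYY/home/dce/svn/trunk

    If both checks pass, the failure is in how the IDE invokes the tunnel; a classic culprit is plink waiting interactively to accept an unknown host key, which surfaces as "network connection closed unexpectedly". Running plink against the server once by hand and accepting the key usually clears that up.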

    Read the article

  • CI Deployment Of Azure Web Roles Using TeamCity

    - by srkirkland
    After recently migrating an important new website to use Windows Azure “Web Roles” I wanted an easier way to deploy new versions to the Azure Staging environment as well as a reliable process to rollback deployments to a certain “known good” source control commit checkpoint.  By configuring our JetBrains’ TeamCity CI server to utilize Windows Azure PowerShell cmdlets to create new automated deployments, I’ll show you how to take control of your Azure publish process. Step 0: Configuring your Azure Project in Visual Studio Before we can start looking at automating the deployment, we should make sure manual deployments from Visual Studio are working properly.  Detailed information for setting up deployments can be found at http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx#PublishAzure or by doing some quick Googling, but the basics are as follows: Install the prerequisite Windows Azure SDK Create an Azure project by right-clicking on your web project and choosing “Add Windows Azure Cloud Service Project” (or by manually adding that project type) Configure your Role and Service Configuration/Definition as desired Right-click on your azure project and choose “Publish,” create a publish profile, and push to your web role You don’t actually have to do step #4 and create a publish profile, but it’s a good exercise to make sure everything is working properly.  Once your Windows Azure project is setup correctly, we are ready to move on to understanding the Azure Publish process. Understanding the Azure Publish Process The actual Windows Azure project is fairly simple at its core—it builds your dependent roles (in our case, a web role) against a specific service and build configuration, and outputs two files: ServiceConfiguration.Cloud.cscfg: This is just the file containing your package configuration info, for example Instance Count, OsFamily, ConnectionString and other Setting information. ProjectName.Azure.cspkg: This is the package file that contains the guts of your deployment, including all deployable files. When you package your Azure project, these two files will be created within the directory ./[ProjectName].Azure/bin/[ConfigName]/app.publish/.  If you want to build your Azure Project from the command line, it’s as simple as calling MSBuild on the “Publish” target: msbuild.exe /target:Publish Windows Azure PowerShell Cmdlets The last pieces of the puzzle that make CI automation possible are the Azure PowerShell Cmdlets (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx).  These cmdlets are what will let us create deployments without Visual Studio or other user intervention. Preparing TeamCity for Azure Deployments Now we are ready to get our TeamCity server setup so it can build and deploy Windows Azure projects, which we now know requires the Azure SDK and the Windows Azure PowerShell Cmdlets. Installing the Azure SDK is easy enough, just go to https://www.windowsazure.com/en-us/develop/net/ and click “Install” Once this SDK is installed, I recommend running a test build to make sure your project is building correctly.  You’ll want to setup your build step using MSBuild with the “Publish” target against your solution file.  Mine looks like this: Assuming the build was successful, you will now have the two *.cspkg and *cscfg files within your build directory.  
If the build was red (failed), take a look at the build logs and keep an eye out for “unsupported project type” or other build errors, which will need to be addressed before the CI deployment can be completed. With a successful build we are now ready to install and configure the Windows Azure PowerShell Cmdlets: Follow the instructions at http://msdn.microsoft.com/en-us/library/windowsazure/jj554332 to install the Cmdlets and configure PowerShell After installing the Cmdlets, you’ll need to get your Azure Subscription Info using the Get-AzurePublishSettingsFile command. Store the resulting *.publishsettings file somewhere you can get to easily, like C:\TeamCity, because you will need to reference it later from your deploy script. Scripting the CI Deploy Process Now that the cmdlets are installed on our TeamCity server, we are ready to script the actual deployment using a TeamCity “PowerShell” build runner.  Before we look at any code, here’s a breakdown of our deployment algorithm: Setup your variables, including the location of the *.cspkg and *cscfg files produced in the earlier MSBuild step (remember, the folder is something like [ProjectName].Azure/bin/[ConfigName]/app.publish/ Import the Windows Azure PowerShell Cmdlets Import and set your Azure Subscription information (this is basically your authentication/authorization step, so protect your settings file Now look for a current deployment, and if you find one Upgrade it, else Create a new deployment Pretty simple and straightforward.  Now let’s look at the code (also available as a gist here: https://gist.github.com/3694398): $subscription = "[Your Subscription Name]" $service = "[Your Azure Service Name]" $slot = "staging" #staging or production $package = "[ProjectName]\bin\[BuildConfigName]\app.publish\[ProjectName].cspkg" $configuration = "[ProjectName]\bin\[BuildConfigName]\app.publish\ServiceConfiguration.Cloud.cscfg" $timeStampFormat = "g" $deploymentLabel = "ContinuousDeploy to $service v%build.number%"   Write-Output "Running Azure Imports" Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1" Import-AzurePublishSettingsFile "C:\TeamCity\[PSFileName].publishsettings" Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscription   function Publish(){ $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue   if ($a[0] -ne $null) { Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment. " } if ($deployment.Name -ne $null) { #Update deployment inplace (usually faster, cheaper, won't destroy VIP) Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $servicename. Upgrading deployment." 
UpgradeDeployment } else { CreateNewDeployment } }   function CreateNewDeployment() { write-progress -id 3 -activity "Creating New Deployment" -Status "In progress" Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"   $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service   $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot $completeDeploymentID = $completeDeployment.deploymentid   write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete" Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID" }   function UpgradeDeployment() { write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress" Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"   # perform Update-Deployment $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force   $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot $completeDeploymentID = $completeDeployment.deploymentid   write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete" Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID" }   Write-Output "Create Azure Deployment" Publish   Creating the TeamCity Build Step The only thing left is to create a second build step, after your MSBuild “Publish” step, with the build runner type “PowerShell”.  Then set your script to “Source Code,” the script execution mode to “Put script into PowerShell stdin with “-Command” arguments” and then copy/paste in the above script (replacing the placeholder sections with your values).  This should look like the following:   Wrap Up After combining the MSBuild /target:Publish step (which creates the necessary Windows Azure *.cspkg and *.cscfg files) and a PowerShell script step which utilizes the Azure PowerShell Cmdlets, we have a fully deployable build configuration in TeamCity.  You can configure this step to run whenever you’d like using build triggers – for example, you could even deploy whenever a new master branch deploy comes in and passes all required tests. In the script I’ve hardcoded that every deployment goes to the Staging environment on Azure, but you could deploy straight to Production if you want to, or even setup a deployment configuration variable and set it as desired. After your TeamCity Build Configuration is complete, you’ll see something that looks like this: Whenever you click the “Run” button, all of your code will be compiled, published, and deployed to Windows Azure! One additional enormous benefit of automating the process this way is that you can easily deploy any specific source control changeset by clicking the little ellipsis button next to "Run.”  This will bring up a dialog like the one below, where you can select the last change to use for your deployment.  Since Azure Web Role deployments don’t have any rollback functionality, this is a critical feature.   Enjoy!

    Read the article

  • RDP and New Accounts

    - by leeand00
    I created a new user account on the domain and added it to the Remote Desktop Users group. I could log in just fine locally, but when I logged in remotely I was basically told that I could not log in from there using that user. I could log in just fine as the administrator or anybody else other than that new account. So I researched it a bit more, checked what the setting looked like on the local machine, and changed it to "Allow connections only from computers running Remote Desktop with Network Level Authentication (NLA)". When I tried this down at my office, I connected with RDP just fine from another computer. But lo and behold, when I got home and simply tried to connect to the machine, I got an error message. There has to be some kind of in-between setting, or an additional setting I need to change on the user, that allows me to connect directly via Remote Desktop over the VPN. At the moment I can connect by connecting to another computer on the network and then RDPing from there into my machine, but this is not ideal.

    Read the article

  • [MINI HOW-TO] Update Your Zune Player Software

    - by Mysticgeek
    Keeping your computer and software up to date is very important in keeping everything running smoothly and securely. It’s also important to keep your geeky gadgets updated as well. Here we take a look at updating a Zune HD. Note: In this example we’re updating a Zune HD out of the box which hasn’t been updated yet. The first thing you’ll need to do if you haven’t already is download and install the latest Zune software (link below). Now plug your Zune into your computer with the included USB connection cable and give it a moment to be recognized. Next launch the Zune Desktop software and you should get the following screen. Just accept the EULA… Then the update kicks off. Make sure not to disconnect the Zune while the update takes place. The update will take a few minutes, and after it’s complete you should be good to go and can start using your Zune. To update your player in the future, go to General Settings then Player Update. Just like your computer’s hardware and software, you want to keep your other geeky gadgets updated as well. This will help the device run more smoothly, and sometimes add additional functionality. Download Zune 4.0

    Read the article

  • copSSH: how to restrict a user from going back above their main root

    - by minus4
    I have installed SFTP on a Windows server using copSSH, and all is good and it works well; however, users can go back above their main root. For example, when I use C:\copSSH\home\{username}, as that user I can go back up into copSSH and into those directories too. I also have a user set up whose home is actually C:\inetpub\wwwroot, but that user can go into the rest of the system too; I have this set as the path: /cygdrive/c/inetpub/wwwroot. It would be ideal if the user could only go forward from the start directory, rather than out and about. There is no write ability, but there is read and download. Thanks

    Read the article

  • Linux Flash Player with 2 Monitors: always full-screen on primary monitor

    - by CarlF
    My setup at home uses a laptop, with a larger external monitor in addition to the built-in LCD panel, which is primary. I can see the larger monitor from the rest of the room and use it as my TV, for playing DVDs and various types of web video. However, it isn't ideal for Flash video. For instance, if I watch a video from Hulu or any other Flash-based site, I can expand it to full-screen mode. However, no matter which monitor the browser window is on, the full-screen mode is always on the laptop LCD panel, which is both too small and not visible from most of the room. Does anyone know of a way to force the Flash video to play full-screen on the monitor I select instead of the primary? My video chipset is NVidia, using kernel 2.6.31 (Ubuntu). Thanks.

    Read the article

  • Setting up a chroot SFTP on a Debian server

    - by Kevin Duke
    I'm trying to allow a user "user" to access my server by either SFTP or SSH, and I want to jail them into a directory with chroot. I read the instructions here, but it does not work. I did the following:

        1. useradd user
        2. modified /etc/ssh/sshd_config and added the following to the bottom of the file:
               Match User user
                   ForceCommand internal-sftp
                   ChrootDirectory /home/duke/aa/smart
        3. changed the Subsystem line to: Subsystem sftp internal-sftp
        4. restarted sshd with /etc/init.d/ssh restart
        5. logged in over SSH as user "user" with PuTTY

    PuTTY says "Server unexpectedly closed the connection". Why is this, and how can it be fixed? EDIT: Following the suggestions below, I've made the bottom of sshd_config look like:

        Match User user
            ChrootDirectory /tmp

    Yet there is no change. The password is accepted, but I cannot connect via SSH or SFTP. What gives?
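
    The usual reason sshd drops the connection immediately after authentication in this setup is the chroot target's ownership: OpenSSH requires every component of the ChrootDirectory path to be owned by root and not group- or world-writable, and a full SSH shell (as opposed to SFTP-only) additionally needs a shell and devices inside the jail. A hedged sketch of the SFTP-only variant, using the paths from the question:

        # every path component of the chroot must be root-owned and not writable by others
        sudo chown root:root /home/duke /home/duke/aa /home/duke/aa/smart
        sudo chmod 755 /home/duke /home/duke/aa /home/duke/aa/smart

        # give the user somewhere writable inside the jail
        sudo mkdir -p /home/duke/aa/smart/upload
        sudo chown user:user /home/duke/aa/smart/upload

        # watch the server-side reason for the disconnect while testing
        sudo tail -f /var/log/auth.log

    With ForceCommand internal-sftp in place only SFTP will work; a chrooted interactive SSH login needs /bin, /lib, /dev and so on copied into the jail, which is a much bigger job.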

    Read the article

  • Architect Day Artifacts

    - by Bob Rhubart
    In the last eight days the Oracle Technology Network Architect Day tour has stopped in Dallas, Anaheim (Disneyland, to be precise), and at Oracle HQ in Redwood Shores, CA. I was on-scene for the Dallas event, where I pulled a TMZ-style ambush on Chris Benedict from the Oracle Enterprise Solutions Group to capture this short video. The other presenters escaped. But the slide decks from several of the presentations are now available on Slideshare:

        - IT Optimization: Reduce Data Center Costs and Set the Foundation for Future Growth, as presented by Alan Levine, Oracle Enterprise Architect Senior Director
        - Implementing Applications with SOA and Application Integration Architecture, as presented by Vish Gaitonde, Director, Ecosystem Strategy, Application Integration Architecture
        - Application Grid: Platform for Virtualization and Consolidation of Your Java Applications, as presented by Sam Shah, Director, SOA and Integration, Oracle Enterprise Solutions Group
        - Infrastructure Consolidation and Virtualization, as presented by Steve Bennett, also a Director with the Oracle Enterprise Solutions Group
        - Security in a Cloudy Architecture, as presented by Geri Born, Security Specialist with the Oracle Enterprise Solutions Group

    I'll post more Architect Day presentations as soon as I track them down. A special thank you to Oracle ACE Directors Jordan Braunstein, Billy Tong, and Kai Yu, who were on hand in Dallas, and to fellow ACE Directors Basheer Khan and Floyd Teter for their participation in the Anaheim event. (Floyd and his iPad came through again, allowing me to record the Anaheim panel discussion via Skype while sitting in my home office in Cleveland.) That audio, as well as audio from the panel discussion and a roundtable from the Dallas event, will be available soon as ArchBeat podcast programs. If you attended one of these events, a big thanks. Your active participation, your questions and input, are what these events are all about. As new cities are added to the tour, we expect more of the same from the OTN architect community. And did I mention that the food is free? So stay tuned... Cross-posted to the ArchBeat blog

    Read the article
