Search Results

Search found 4575 results on 183 pages for 'tutorial'.

  • Convert C# Silverlight App To AZURE CLOUD Platform?!?!

    - by Goober
    The Scenario: I've been following Brad Abrams' Silverlight tutorial on his blog.... I have tried following Brad's "How to deploy your app to the Cloud" tutorial, however I'm struggling with it, even though it is in the same context as the first tutorial.... The Question: Is the application structure essentially the same as the original "non-cloud based version"!? If not, which parts are different? (I get that there is a Cloud Service project added to the solution) - but what else?! Connection String Issue: In my "non-cloud based application", I make use of the ADO.NET Entity Framework to communicate with my database. The connection string in my web.config file looks like:

        <add name="InmZenEntities" connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=CHASEDIGITALWS3;Initial Catalog=InmarsatZenith;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /></connectionStrings>

    However, the connection string that I get from SQL Azure looks like:

        Server=tcp:k12ioy1rsi.ctp.database.windows.net;Database=master;User ID=simongilbert;Password=myPassword;Trusted_Connection=False;

    So how do I go about merging the two when I move the "non-cloud based application" to THE CLOUD?! Any help regarding converting a Silverlight application to a cloud service and deploying it would be greatly appreciated.
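
    A sketch of what the merged string might look like: the EF metadata portion stays as-is, and only the inner provider connection string swaps the on-premise server details for the SQL Azure ones (pointing Database at the application database rather than master; SQL Azure also expects the login as user@server and an encrypted connection). The values below are placeholders taken from the question, not a verified configuration:

        <connectionStrings>
          <add name="InmZenEntities"
               connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Server=tcp:k12ioy1rsi.ctp.database.windows.net;Database=InmarsatZenith;User ID=simongilbert@k12ioy1rsi;Password=myPassword;Trusted_Connection=False;Encrypt=True;&quot;"
               providerName="System.Data.EntityClient" />
        </connectionStrings>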

  • CakePHP ACL authentication issue - I'm locked out

    - by Baseer
    I've followed the CakePHP Cookbook ACL tutorial, and as of right now I'm just trying to add users using the scaffolding method. I'm trying to go to /users/add but it always redirects me to the login screen, even though I have added $this->Auth->allow('*'); in beforeFilter() temporarily to allow access to all pages. I've done this in both the UsersController and GroupsController as the tutorial asked. Below is my code for UsersController, which I think will be the most relevant of all the files. Let me know if any other piece of code is required.

        <?php
        class UsersController extends AppController {
            var $name = 'Users';
            var $scaffold;

            function beforeFilter() {
                parent::beforeFilter();
                $this->Auth->allow('*');
            }

            function login() {
                //Auth Magic
            }

            function logout() {
                //Leave empty for now.
            }
        }
        ?>

    I think I've pretty much followed the tutorial, any ideas as to what I may be missing? Thanks. I've been stuck on this for a while. =(
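
    As a debugging sketch (not the tutorial's code): replacing the wildcard with an explicit action list helps isolate whether the wildcard handling, or something later in the filter chain, is the problem:

        <?php
        class UsersController extends AppController {
            var $name = 'Users';
            var $scaffold;

            function beforeFilter() {
                parent::beforeFilter();
                // explicit list instead of '*' -- if this works, the wildcard
                // (or an override in AppController::beforeFilter) is the culprit
                $this->Auth->allow('add', 'index', 'view', 'login', 'logout');
            }
        }
        ?>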

  • UIWebView selection

    - by Infinity
    Hello guys! I subclassed the UIWindow to get the clicks from a UIWebView. I used a tutorial for this. Here is the link: Tutorial It is working, but afterwards I found another tutorial from the same writer, where he shows how to highlight text in a UIWebView. Here's the second link. But I can't get this second one to work. I tried to run the code in the - (void)userDidTapWebView:(id)tapPoint; delegate method. This delegate method is called when I tap in the UIWebView. Maybe somebody can help me correct the script? I would like to get the tapped anchor's link. I have tried many different JavaScript snippets, but so far I couldn't get the tapped anchor's link. One of my tries was adding a function to document.onclick, but then tapping an anchor needs a second tap to get the anchor, and I can't get the link with that method either.
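
    A sketch of one way to get the tapped anchor, assuming the tap point arrives in the web view's coordinate space (the webView ivar and the CGPoint wrapping are assumptions, and page zoom/scroll offsets may still need compensating): document.elementFromPoint() returns the DOM element at those coordinates, and walking up the parent chain finds the enclosing anchor.

        - (void)userDidTapWebView:(id)tapPoint {
            CGPoint pt = [tapPoint CGPointValue];  // assumption: tapPoint wraps a CGPoint
            NSString *js = [NSString stringWithFormat:
                @"(function(x, y) {"
                @"   var e = document.elementFromPoint(x, y);"
                @"   while (e && !e.href) { e = e.parentNode; }"
                @"   return e ? e.href : '';"
                @"})(%f, %f)", pt.x, pt.y];
            NSString *href = [webView stringByEvaluatingJavaScriptFromString:js];
            NSLog(@"Tapped link: %@", href);
        }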

  • iPhone OpenGL ES Texture2D Masking

    - by Robert Neagu
    What's the best choice when trying to mask a texture like ColorSplash or other apps like iSteam, etc? I started learning OpenGL ES like... 4 days ago (I'm a total rookie) and tried the following approach:

    1) I created a colored texture2D, a grayscale version of the first texture, and a third texture2D called mask.
    2) I also created a texture2D for the brush... which is grayscale and opaque (brush = black = 0,0,0,1 and surroundings = white = 1,1,1,1). My intention was to create an antialiased brush with smooth edges, but I'm fine with a normal one right now.
    3) I searched for masking techniques on the internet and found this tutorial about masking (ZeusCMD: OpenGL ES Programming Tutorials - Masking). The tutorial tells me to use blending to achieve masking... first draw colored, then mask with glBlendFunc(GL_DST_COLOR, GL_ZERO) and then grayscale with glBlendFunc(GL_ONE, GL_ONE)... and this gives me something close to what I want... but not exactly what I want. The result is masked but it's somehow overbrightened.
    4) For drawing to the mask texture I used an extra frame buffer object (FBO).

    I'm not really happy with the resulting image (the overbrightened picture) nor with the speed achieved with this method. I think the normal way would be to draw directly to the grayscale (overlay) texture2D, affecting only its alpha channel in the places where the brush hits. Is there a fast way to achieve this? I have searched a lot and never got an answer that's clear and understandable. Then, in the main draw loop I could simply draw the colored texture and then blend the grayscale on top with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I just want to learn to use OpenGL ES and it's driving me nuts because I can't get it to work properly. Advice, or a link to a tutorial, would be much appreciated.
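
    For the final compositing step described above, a sketch of the draw loop might look like this (drawTexture is a hypothetical helper that renders a full-screen textured quad):

        /* pass 1: the colored base needs no blending */
        glDisable(GL_BLEND);
        drawTexture(coloredTexture);

        /* pass 2: grayscale on top, weighted by the alpha the brush painted into it */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        drawTexture(grayscaleTexture);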

  • How to create a paging scrollView with space between views

    - by user1558819
    I followed this tutorial about how to create a scroll view with page control: http://www.iosdevnotes.com/2011/03/uiscrollview-paging/ The tutorial is really good and I implemented the code without problems. Here is my question: I want to put a space between my page views, but when the page changes, the space shows up at the edge of the next page. The scroll should stop after the space when I change the page. Can someone help me? I changed the tutorial code here:

        - (void)viewDidLoad {
            [super viewDidLoad];

            NSArray *colors = [NSArray arrayWithObjects:[UIColor redColor], [UIColor greenColor], [UIColor blueColor], nil];

        #define space 20

            for (int i = 0; i < colors.count; i++) {
                CGRect frame;
                frame.origin.x = (self.scrollView.frame.size.width + space) * i;
                frame.origin.y = 0;
                frame.size = self.scrollView.frame.size;

                UIView *subview = [[UIView alloc] initWithFrame:frame];
                subview.backgroundColor = [colors objectAtIndex:i];
                [self.scrollView addSubview:subview];
                [subview release];
            }

            self.scrollView.contentSize = CGSizeMake(self.scrollView.frame.size.width * colors.count + space * (colors.count - 1), self.scrollView.frame.size.height);
        }

    Thanks,
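
    One common sketch for paging with a gap (not the tutorial's code): make the scroll view itself one full gap wider than a visible page, disable its clipping, and nest it inside a clipping container view that forwards touches to it. The names below (pageWidth, space) are assumptions carried over from the question:

        // each paging "stop" is pageWidth + space wide, so the snap lands past the gap
        self.scrollView.frame = CGRectMake(-space / 2, 0,
                                           pageWidth + space,
                                           self.view.bounds.size.height);
        self.scrollView.pagingEnabled = YES;
        self.scrollView.clipsToBounds = NO;   // let neighbouring pages peek through
        // an enclosing container view with clipsToBounds = YES hides the overflow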

  • Why does PEAR get installed to my user directory?

    - by webworm
    I am new to Linux and I am attempting to install the PHP PEAR library on a virtual server which is running Ubuntu. I am following a tutorial that covers installing PEAR but have run up against an area where I am confused. When running the PEAR installation program I am prompted as to what I want the INSTALL_PREFIX to be. Evidently the INSTALL_PREFIX, among other things, determines where PEAR will be installed. The tutorial suggests the value of INSTALL_PREFIX be the following path: "/home/MY_USER_NAME/pear", where MY_USER_NAME = my user account. Having come from the Windows world, I am used to applications being installed on the system where everyone can use them. If I install PEAR underneath my user directory, will other developers on the system be able to make use of PEAR in their PHP scripts? I want to make PEAR available to all users and not just myself. Could someone explain to me the difference between installing for all users and installing just for myself? Does the install location matter? Should I be installing PEAR in a different location? Thanks for any suggestions. P.S. The tutorial I am following is located at the following URL: http://articles.sitepoint.com/article/getting-started-with-pear/2
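
    A sketch of the system-wide alternative (run as root), assuming /usr/local/pear as the shared prefix; the exact paths depend on how the installer was invoked:

        $ sudo pear config-set php_dir /usr/local/pear
        $ pear config-get php_dir      # verify the shared location

        # php.ini -- make the shared install visible to every user's scripts
        include_path = ".:/usr/local/pear"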

  • Learning Java Swing (GUI builder or not?)

    - by Paul
    Well, I know basic Java and wanted to learn Swing, so of course I looked at the Sun website first, where this tutorial is. I was going to start it but realised it relied heavily on NetBeans, which I'm not sure about. I'm not sure because it's learning that I want to achieve, not a nice-looking program. So I thought using NetBeans like this would be great once I know Swing, but I don't want to be building things without a clue what's going on underneath, and of course this could also cause problems later. My first question is: is this the right way to do it? Should I try not to rely heavily on an IDE? Looking through questions on the site, most people recommend using the Sun tutorial, and I've only seen one answer that agrees with what I'm thinking, and they linked to this resource, which looks promising. Or perhaps I'm getting the wrong idea of the Sun tutorial; perhaps it doesn't rely on the IDE, it just seemed like that. My second question is: if you agree with me, what resources (apart from the one above) would you recommend? Thanks for your answers.
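
    For reference, hand-coding Swing without a GUI builder is quite approachable; a minimal sketch (generic Swing, not code from the Sun tutorial) looks like this:

        import javax.swing.JButton;
        import javax.swing.JFrame;
        import javax.swing.SwingUtilities;

        public class HelloSwing {
            public static void main(String[] args) {
                // Swing components must be created on the Event Dispatch Thread
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        JFrame frame = new JFrame("Hello Swing");
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.add(new JButton("Click me"));
                        frame.pack();
                        frame.setVisible(true);
                    }
                });
            }
        }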

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure. These new capabilities include:

    • Backup Services: General Availability of Windows Azure Backup Services
    • Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager
    • Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration
    • Active Directory: Securely manage hundreds of SaaS applications
    • Enterprise Management: Use Active Directory to Better Manage Windows Azure
    • Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support

    All of these improvements are now available to use immediately. Below are more details about them.

    Backup Service: General Availability Release of Windows Azure Backup

    Today we are releasing Windows Azure Backup Service as a general availability service. This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios.

    Windows Azure Backup is a cloud-based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost.

    Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides the capability to back up data from System Center Data Protection Manager and Windows Server Essentials to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can't be decrypted by anyone but yourself).

    Getting Started

    To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal. Click New->Data Services->Recovery Services->Backup Vault to do this:

    Once the backup vault is created you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it:

    Once the servers you want to back up are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this:

    • Tutorial: Schedule Backups Using the Windows Azure Backup Agent. This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule.
    • Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent. This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks.

    Below are some of the key benefits the Windows Azure Backup Service provides:

    • Simple configuration and management.
    Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud.

    • Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changed blocks between these versions.

    • Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can set up throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information.

    • Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically.

    • Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs.

    Hyper-V Recovery Manager: Now Available in Public Preview

    I'm excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime.

    Application data in Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted.

    You can begin using Windows Azure Hyper-V Recovery Manager today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal. You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson's 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide.

    Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn

    Today's Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.
    These improvements include:

    Ability to Delete both VM Instances + Attached Disks in One Operation

    Prior to today's release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM. You had to manually delete these yourself from the storage account. With today's update we've added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:

    We've also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines. To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the "Delete" button:

    Warnings on Availability Sets with Only One Virtual Machine In Them

    One of the nice features that Windows Azure Virtual Machines supports is the concept of "Availability Sets". An "availability set" allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations. This enables you to deploy applications in a high availability way.

    One issue we've seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set). With today's release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this:

    You can learn more about configuring the availability of your virtual machines here.

    Configuring SQL Server Always On

    SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today's Windows Azure release makes it even easier to configure SQL Server Always On by enabling "Direct Server Return" endpoints to be configured and managed within the Windows Azure Management Portal. Previously, setting this up required using PowerShell to complete the endpoint configuration. Starting today you can enable this simply by checking the "Direct Server Return" checkbox:

    You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here.

    Active Directory: Application Access Enhancements

    This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory. This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013).

    Since the initial preview we've enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications:

    Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.
    In this blog post we announced our intention to release the service in general availability form by the end of the year. We also announced that the below features would be available in a free tier with it:

    • SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SAAS Apps and every app in our application gallery whether they use federation or password vaulting.

    • Application access assignment and removal – IT Admins can assign access privileges to web applications to the users in their active directory assuring that every employee has access to the SAAS Apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges assuring data security and minimizing IP loss.

    • User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.

    • Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.

    • Our Application Access Panel – Users are logging in from every type of device including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere.

    You can learn more about our plans for application management with Windows Azure Active Directory here. Try out the preview and start using it today.

    Enterprise Management: Use Active Directory to Better Manage Windows Azure

    Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have). With today's Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering. Specifically:

    1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them. You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.

    2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory. Both options are free. The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources. It also ensures that if an employee leaves an organization, his or her access control rights to the company's Windows Azure resources are immediately revoked.
    3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations. Prior to today's release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations. We still support this management certificate approach (don't worry – nothing will stop working). But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.

    4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials. This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.

    Below are some details on how all of this works:

    Subscriptions within a Directory

    As part of today's update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don't already have one). When you log in to the Windows Azure Management Portal you'll now see the directory name in the URI of the browser. For example, in the screen-shot below you can see that I have a "scottgu" directory that my subscriptions are hosted within:

    Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign into Windows Azure. These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don't want to. In the scenario above I'm actually logged in using my @hotmail.com based Microsoft ID which is now mapped to a "scottgu" active directory that was created for me. By default everything will continue to work just like it used to before.

    Manage your Directory

    You can manage an Active Directory (including the one we now create for you by default) by clicking the "Active Directory" tab in the left-hand side of the portal. This will list all of the directories in your account. Clicking one for the first time will display a getting started page that provides documentation and links to perform common tasks with it:

    You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this). You can also set up the directory to automatically sync with an on-premises Active Directory using the "Directory Integration" tab.

    Note that users within a directory by default do not have admin rights to log in or manage Windows Azure based resources. You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure. You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it.
    Sign-In Integration within Visual Studio

    If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates. You can now just right-click on the "Windows Azure" icon within the Server Explorer and choose the "Connect to Windows Azure" context menu option to do so:

    Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription):

    You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email. The dialog will update with an appropriate login prompt depending on which type of email address you enter:

    Once you sign in you'll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using:

    No downloading of management certificates required. All of the authentication was handled using your Windows Azure Active Directory!

    Manage Subscriptions across Multiple Directories

    If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today's update. If you don't like the default subscription-to-directory mapping we have done, you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it:

    If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the "Edit Directory" button to choose which directory to map it to. Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working. We've made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want.

    Filtering By Directory and Subscription

    Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions). If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant. This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account:

    Windows Azure SDK 2.2

    Today we are also releasing a major update of our Windows Azure SDK. The Windows Azure SDK 2.2 release adds some great new features including:

    • Visual Studio 2013 Support
    • Integrated Windows Azure Sign-In support within Visual Studio
    • Remote Debugging Cloud Services with Visual Studio
    • Firewall Management support within Visual Studio for SQL Databases
    • Visual Studio 2013 RTM VM Images for MSDN Subscribers
    • Windows Azure Management Libraries for .NET
    • Updated Windows Azure PowerShell Cmdlets and ScriptCenter

    I'll post a follow-up blog shortly with more details about all of the above.
    Additional Updates

    In addition to the above enhancements, today's release also includes a number of additional improvements:

    • AutoScale: Richer time and date based scheduling support (set different rules on different dates)
    • AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios)
    • AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules
    • Operation Logs: Auditing support for Service Bus management operations

    Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2. It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)

    Summary

    Today's Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering.

    If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
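
    For readers scripting the new Active Directory sign-in flow, a sketch using the updated PowerShell cmdlets (cmdlet names as shipped with the 2.2-era Windows Azure PowerShell module; the subscription name is a placeholder):

        Add-AzureAccount                 # browser prompt; accepts org or Microsoft accounts
        Get-AzureSubscription            # list the subscriptions the signed-in user can manage
        Select-AzureSubscription -SubscriptionName "MySubscription"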

  • WAMP, DD-WRT, Using DNSMasq to access multiple virtual hosts via aliases on local network

    - by christian
    I've updated my question to coincide with my progress. I am attempting to configure WAMP on my development machine. I would like to use it in conjunction with DD-WRT's DNSMasq feature to access multiple virtual hosts via aliases (dev1.local, dev2.local, etc.) over my local wireless network. I have followed the following tutorial, which allows me to set up virtual hosts and access them on the local network by IP address: http://www.logicspot.com/web-development-2/viewing-a-locally-hosted-website-with-your-smartphone/ I've got this up and running. For simplicity's sake I'd still like to set up DNSMasq to connect via aliases. I have followed the following tutorial: http://www.question-defense.com/2008/12/29/add-static-dns-entries-to-dd-wrt-router-firmware The aliases resolve on my development machine, but I cannot access them via the aliases on my mobile devices connected to the local network. I can, however, access via IPs. Thanks for any help you can provide.
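
    For reference, a sketch of the DNSMasq side on DD-WRT (Services -> DNSMasq -> Additional Options), assuming the development machine's LAN IP is 192.168.1.100; the mobile devices must also be using the router as their DNS server for these names to resolve:

        address=/dev1.local/192.168.1.100
        address=/dev2.local/192.168.1.100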

  • Problem virtualizing Ubuntu 10.04 32 bit on VirtualBox 3.1 on Windows Vista 64 bit

    - by Adam Siddhi
    Software & Hardware Setup
    Host System: Windows Vista Home Premium SP1 64 bit
    Guest: Ubuntu 10.04 (ubuntu-10.04-desktop-i386.iso) 32 bit
    VM: VirtualBox 3.1.8
    Hardware: Intel Core 2 Duo T6400, 4GB SDRAM

    What Happened
    I followed the tutorial called Installing Ubuntu inside Windows using VirtualBox, located here: www.psychocats.net/ubuntu/virtualbox At first I downloaded ubuntu-10.04-desktop-amd64.iso because I figured that it would be a perfect fit with my Vista 64 OS. I was wrong, because it turns out that my Intel Core 2 Duo T6400 CPU does not have Intel® Virtualization Technology. So I had to go with ubuntu-10.04-desktop-i386.iso, which is 32 bit. This got me to the point where I could actually create the Ubuntu VM. So I set up the VM in VirtualBox (according to the tutorial I was following) to prepare for the Ubuntu 10.04 virtualization. Please go to my Picasa web album to see the screenshots of my VM settings and the Ubuntu boot process so you can see what I experienced (they appear in the order that I experienced them in): www.picasaweb.google.com/rubysiddhi/ProblemVirtualizingUbuntu100432BitOnVirtualBox31OnWindowsVista64# The first 17 images show the VM settings. The last 8 show my attempt at virtualizing Ubuntu 10.04. You can see it booting up but ultimately failing.

    The Specifics
    The one error message I got was:

        (process:210): GLib-WARNING **: getpwuid_r(): failed due to unknown user id (0)

    It appeared on a black screen that sort of looked like a Windows console screen but without the c:\ or the ability to type. Then this error message got more complex when tons of text appeared on the screen. Pictures 23 - 25 in the album show this text. I should also mention that I found this post in the Ubuntu forums by zonination, who seemed to have similar problems to mine even though they had a different setup. The main issue I think zonination and I may be having is the fact that we cannot change the color mode to 32 bit while it is booting. I think the 16 bit color mode may be making Ubuntu fail. Not certain though. Well, I hope I explained my problem thoroughly and clearly. Thanks for the tutorial. It got me started, but now I hope to finish this process so I can start developing in Ubuntu. Oh, by the way, if you want to actually see what happened play by play (with some classical in the background), check out the video I made over here: http://www.youtube.com/watch?v=XMbbm5E_0Xw Thanks! Regards, Adam

  • Any software transforming broken lines into curves?

    - by user32931
    Hello, do you know of any software that would help me transform a broken line into a curved line? For example, I have an octagon or a heptagon and I want it to be transformed into something resembling a circle. If you know of such software, please let me know. Thank you!

    Update A: Here is an image from the tutorial given to me by Jamie Keeling (right now it's the first answer below). At least the picture there represents what I want. In that tutorial this process is called "flattening paths". I will try to put that image right here, but if it doesn't get displayed, you can find it at this URL: http://msdn.microsoft.com/en-us/library/ms536364%28v=VS.85%29.aspx The red line in the picture is what I would want to submit, and the blue line is what I would want to get in the end.
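
    If no off-the-shelf tool fits, the underlying operation is simple enough to script. A sketch of Chaikin's corner-cutting algorithm (a generic smoothing technique, not taken from the tutorial above): each pass replaces every corner with two points at 1/4 and 3/4 of the edge, so a regular polygon quickly approaches a circle.

        def chaikin(points, passes=3):
            # points: list of (x, y) tuples forming a closed polygon
            for _ in range(passes):
                smoothed = []
                n = len(points)
                for i in range(n):
                    (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
                    smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
                    smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
                points = smoothed
            return points

        # example: round off a unit octagon
        import math
        octagon = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
                   for k in range(8)]
        rounded = chaikin(octagon)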

  • Can't set-up Wifi Adhoc on my Raspberry Pi with an USB dongle

    - by Wouter
    I am trying to set up an access point (ad-hoc) for my Raspberry Pi. That means I'm trying to "share" the ethernet connection over Wi-Fi. I am doing this using my Ralink Technology, Corp. RT2501/RT2573 Wireless Adapter. When following a tutorial (or actually every tutorial), it immediately goes wrong:

        root@pinkypi:/home/pi# iwconfig wlan0 mode ad-hoc
        Error for wireless request "Set Mode" (8B06) :
            SET failed on device wlan0 ; Device or resource busy.

    I already tried ifdown and not having the adapter in the USB port at startup. If it helps, every action with the thing fails (or at least setting the mode does). I am using Debian. I'm sure I'm overlooking something, but I can't find out what. What is wrong?
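
    "Device or resource busy" usually means the interface is still up (or a network manager is holding it). A sketch of the usual sequence, taking the interface down first (the ESSID and channel are placeholders):

        sudo ifconfig wlan0 down
        sudo iwconfig wlan0 mode ad-hoc essid PiNet channel 6
        sudo ifconfig wlan0 up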

  • Device cannot be added on software-raid-1 array on Ubuntu 12.04

    - by George Pligor
    Unfortunately, all the tutorials I have found online so far on how to set up software RAID 1 are outdated for Ubuntu 12.04. My goal is to set it up on a system with a secondary disk drive that is already running. Formatting is not an option! I am trying to follow the following tutorial and adapt it from 11.10 to 12.04: http://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-lvm-system-incl-grub2-configuration-ubuntu-11.10-p2 In the above tutorial there is a successful command which creates a RAID 1 array by setting the first disk drive (the one with the installed system) as missing:

        mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1

    But when the time comes to add the first main drive, with the installed system, to the RAID array with this command:

        mdadm --add /dev/md0 /dev/sda1

    I receive an error message. The error message says that the device /dev/sda is busy (which makes sense)! Note: a hardware RAID solution is not available, since the system is a laptop with two disk drives! Thank you
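
    For what it's worth, /dev/sda1 is busy because it is the mounted root filesystem; the usual sequence in these tutorials is to copy the system onto the degraded array, boot from it, and only then add the old drive. A sketch, assuming that layout:

        # after booting from the degraded array (root now on /dev/md0):
        sfdisk --change-id /dev/sda 1 fd     # mark the old partition "Linux raid autodetect"
        mdadm --add /dev/md0 /dev/sda1
        watch cat /proc/mdstat               # follow the resync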

  • How to change MySQL data directory?

    - by Jonathan Frank
    I want to place my databases in another directory, so I can store them on an EBS volume (elastic block storage, just a fancy name for a virtualized harddisk) together with my web apps and other persistent data. I have tried to walk through the tutorial at http://crashmag.net/change-the-default-mysql-data-directory-with-selinux-enabled. Everything seems fine until I type this command:

        # semanage fcontext -a -t mysqld_db_t "/srv/mysql(/.*)?"

    Then the command fails and tells me that mysqld_db_t is an invalid SELinux context, even though the default MySQL data directory is labelled with this context. I am running Fedora 15 on VirtualBox (which behaves like an ordinary x86-compatible box) and Amazon EC2 (based on Xen), so the tutorial should be compatible. It is also worth mentioning that turning off SELinux globally or just for the MySQL process is not an option, because such a solution would decrease the security of the system if a hacker gained access to it via the MySQL server. I never saw this problem before I changed to the Redhat/Fedora architecture, so it could be a distribution-specific issue. Any help is highly appreciated.
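
    One hedged guess to check first: on Fedora, semanage ships in policycoreutils-python and the types known to the loaded policy can be listed with the setools package. A sketch of verifying that mysqld_db_t actually exists before retrying:

        sudo yum install policycoreutils-python setools-console
        seinfo -t | grep mysql                 # is mysqld_db_t defined in the loaded policy?
        sudo semanage fcontext -a -t mysqld_db_t "/srv/mysql(/.*)?"
        sudo restorecon -Rv /srv/mysql         # apply the new labels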

  • Postfix "warning: cannot get RSA private key from file"

    - by phew
    I just followed this tutorial to set up a Postfix mailserver with Dovecot and MySQL as the backend for virtual users. Now I have most parts working: I can connect to pop3, pop3s, imap and imaps. Using echo TEST-MAIL | mail [email protected] works fine; when I log into my hotmail account it shows the email. It also works in reverse now that my MX entry for mydomain.com has finally propagated, so I am able to receive emails sent from [email protected] to [email protected] and view them in Thunderbird using STARTTLS via IMAP. Doing a bit more research after I got the error message "5.7.1 : Relay access denied" when trying to send mails to [email protected] using Thunderbird while logged into [email protected], I figured out that my server was acting as an "open mail relay", which - of course - is a bad thing. Digging more into the optional parts of the tutorial, like workaround.org/comment/2536 and workaround.org/ispmail/squeeze/postfix-smtp-auth, I decided to complete these steps as well to be able to send mails via [email protected] through Mozilla Thunderbird without getting the error message "5.7.1 : Relay access denied" anymore (as common mailservers reject open-relayed emails). But now I have run into an error trying to get Postfix working with SMTPS; /var/log/mail.log reads:

        Sep 28 17:29:34 domain postfix/smtpd[20251]: warning: cannot get RSA private key from file /etc/ssl/certs/postfix.pem: disabling TLS support
        Sep 28 17:29:34 domain postfix/smtpd[20251]: warning: TLS library problem: 20251:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:650:Expecting: ANY PRIVATE KEY:
        Sep 28 17:29:34 domain postfix/smtpd[20251]: warning: TLS library problem: 20251:error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib:ssl_rsa.c:669:

    That error is logged right after I try to send a mail from my newly installed mailserver using SMTP SSL/TLS via port 465 in Thunderbird. Thunderbird then tells me a timeout occurred. Google has a few results concerning that problem, yet I couldn't get it working with any of those. I would link some of them here, but as a new user I am only allowed to use two hyperlinks. My /etc/postfix/master.cf looks like:

        smtp      inet  n       -       -       -       -       smtpd
        smtps     inet  n       -       -       -       -       smtpd
          -o smtpd_tls_wrappermode=yes
          -o smtpd_sasl_auth_enable=yes

    and nmap tells me:

        PORT    STATE SERVICE
        [...]
        465/tcp open  smtps
        [...]
    my /etc/postfix/main.cf looks like:

        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        #smtpd_tls_cert_file = /etc/ssl/certs/postfix.pem #default postfix generated
        #smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key #default postfix generated
        smtpd_tls_cert_file = /etc/ssl/certs/postfix.pem
        smptd_tls_key_file = /etc/ssl/private/postfix.pem
        smtpd_use_tls = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        smptd_sasl_auth_enable = yes
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        myhostname = mydomain.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = localhost.com, localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
        virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
        virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf
        virtual_transport = dovecot
        dovecot_destination_recipient_limit = 1
        mailbox_command = /usr/lib/dovecot/deliver

    The *.pem files were created as described in the tutorial above:

        To create a certificate to be used by Postfix use:
            openssl req -new -x509 -days 3650 -nodes -out /etc/ssl/certs/postfix.pem -keyout /etc/ssl/private/postfix.pem
        Do not forget to set the permissions on the private key so that no unauthorized people can read it:
            chmod o= /etc/ssl/private/postfix.pem
        You will have to tell Postfix where to find your certificate and private key because by default it will look for a dummy certificate file called "ssl-cert-snakeoil":
            postconf -e smtpd_tls_cert_file=/etc/ssl/certs/postfix.pem
            postconf -e smtpd_tls_key_file=/etc/ssl/private/postfix.pem

    I think I don't have to include /etc/dovecot/dovecot.conf here, as login via imaps and pop3s works fine according to the logs. The only problem is making Postfix properly use the self-generated, self-signed certificates. Any help appreciated!

    EDIT: I just tried this different tutorial on generating a self-signed certificate for Postfix, and am still getting the same error. I really don't know what else to test. I also checked the SSL libraries, but all seems to be fine:

        root@domain:~# ldd /usr/sbin/postfix
        linux-vdso.so.1 => (0x00007fff91b25000)
        libpostfix-global.so.1 => /usr/lib/libpostfix-global.so.1 (0x00007f6f8313d000)
        libpostfix-util.so.1 => /usr/lib/libpostfix-util.so.1 (0x00007f6f82f07000)
        libssl.so.0.9.8 => /usr/lib/libssl.so.0.9.8 (0x00007f6f82cb1000)
        libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007f6f82910000)
        libsasl2.so.2 => /usr/lib/libsasl2.so.2 (0x00007f6f826f7000)
        libdb-4.8.so => /usr/lib/libdb-4.8.so (0x00007f6f8237c000)
        libnsl.so.1 => /lib/libnsl.so.1 (0x00007f6f82164000)
        libresolv.so.2 => /lib/libresolv.so.2 (0x00007f6f81f4e000)
        libc.so.6 => /lib/libc.so.6 (0x00007f6f81beb000)
        libdl.so.2 => /lib/libdl.so.2 (0x00007f6f819e7000)
        libz.so.1 => /usr/lib/libz.so.1 (0x00007f6f817d0000)
        libpthread.so.0 => /lib/libpthread.so.0 (0x00007f6f815b3000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f6f83581000)

    After following Ansgar Wiechers' instructions it's finally working. postconf -n contained the lines it should.
    The certificate/key check via openssl showed that both files are valid. So it was indeed a permissions problem! I didn't know that chown'ing the /etc/ssl/*/postfix.pem files to postfix:postfix is not enough for postfix to read the files.
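
    For anyone hitting the same message, a sketch of the checks that narrowed it down (standard openssl invocations; paths as in the question):

        # does the key file actually parse as a private key?
        openssl rsa -in /etc/ssl/private/postfix.pem -check -noout

        # do certificate and key belong together? (the two digests must match)
        openssl x509 -noout -modulus -in /etc/ssl/certs/postfix.pem | openssl md5
        openssl rsa  -noout -modulus -in /etc/ssl/private/postfix.pem | openssl md5

        # and can the file be reached at all? check the directory as well as the key
        ls -ld /etc/ssl/private
        ls -l  /etc/ssl/private/postfix.pem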

  • Linux Mint Wireless doesn't connect [migrated]

    - by guisantogui
    I'm having a big problem. I installed Linux Mint Debian Edition (LMDE) and, following this tutorial http://community.linuxmint.com/tutorial/view/161, I installed the network driver. The available connections appear to me, but the first time I try to connect to my network, I get this message:

        "(4) Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken."

    On the following tries, I get this other message:

        "(32) Insufficient privileges."

    I'm accepting ideas. Thanks.

  • how do I match movement of an object from 2d video into a 3d package ?

    - by George Profenza
    I'm trying to add objects in a 3d package (Blender) using recorded footage. I've played with Icarus and it's great for capturing the camera movement; the Blender 2.41 importer script works in Blender 2.49 as well. The problem is I can't seem to get 3d coordinates for objects. I have tried Autodesk (RealViz) MatchMover 2011 and gone through the tutorials. Tutorial 3 shows how to link a vertex from a 3d mesh to a 2d trackpoint, but the setup is for camera movement. Tutorial 4 goes into motion capture, but it uses 2 videos of the same motion taken with 2 cameras from different viewpoints. I've tried to bypass that by using the same footage twice, but that failed, as the 3d coordinate system ends up messed up. What software do you recommend for this (mapping 3d coordinates to 2d tracked points and importing them into a 3d package)? What is the recommended technique? Any good examples out there? Thanks, George

  • Logging Apparently Not Working - Can't Set Configure Replication

    - by square_eyes
    I'm using a tutorial to set up a master/slave replication setup (with a single slave). When I run the command show master status; I get Empty set (0.00 sec). I have done a lot of research and believe that the log file I have set up isn't being used properly. The master is a Windows server, and I am administering it through Workbench. The log file (log-bin.log) I have configured in the options exists and is accessible by 'System', but when I open it as a text file it doesn't have any data, so I'm assuming it's not getting used properly. How can I get logging working so I can finish setting up this replication? Edit: Purely for reference, here is the tutorial. I followed all the steps and double-checked everything up until show master status.
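
    An empty SHOW MASTER STATUS almost always means the binary log is not enabled; creating log-bin.log by hand does nothing, since MySQL only writes to files it creates itself. A sketch of the relevant my.ini section (paths are placeholders; forward slashes avoid escaping issues on Windows):

        [mysqld]
        server-id = 1
        log-bin   = "C:/mysql/logs/log-bin"

    After restarting the MySQL service, a quick check from Workbench:

        SHOW VARIABLES LIKE 'log_bin';   -- must report ON
        SHOW MASTER STATUS;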

  • Mac OS X & Linux: mount_nfs: can't access /nfs: Permission denied

    - by MountainX
    I have an Ubuntu 12.04 NFS server and an iMac NFS client running OS X 10.6.8. I believe I have everything set up properly, yet I still get this error on the Mac:

        mount_nfs: can't access /nfs: Permission denied

    My exports file on the Linux server uses the insecure option like this:

        /export/home/me/ 192.168.100.132(rw,subtree_check,insecure,nohide)

    Where 192.168.100.132 is the address of my Mac. I have even tried using -o resvport on the Mac (in addition to insecure on Linux) and I still get the same error as above:

        $ sudo mount -t nfs -o resvport 192.168.100.1:/home/me /Users/me/mount

    Here is the output of showmount:

        # showmount -e 192.168.100.1
        Export list for 192.168.100.1:
        /export/home/me 192.168.100.132

    .... I have reviewed this similar question: How to mount NFS export on Mac OS X? And I have reviewed this frequently recommended tutorial: http://www.cyberciti.biz/faq/apple-mac-osx-nfs-mount-command-tutorial/ I still can't find a solution. Any ideas?
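
    One thing worth ruling out: showmount reports the export as /export/home/me, but the mount command above asks for /home/me. Unless the server remaps paths, the client has to request the path exactly as exported; a sketch:

        $ sudo mount -t nfs -o resvport 192.168.100.1:/export/home/me /Users/me/mount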

  • active directory servers synchronization

    - by Mit Naik
    I have 3 AD servers running Windows Server 2008 R2 at 3 different locations; the main server is in a datacenter and the other 2 are in our local offices, which are at 2 different places. I want to synchronize all 3 servers, where the datacenter server should be the central server and the other 2 should sync with it. Please provide the steps or a tutorial for doing this. We also want changes made on one of the AD servers to be propagated automatically to all the servers. For example, if I change a user's password on our local server, it should be updated on our main AD server and the other branch server too. Please provide the steps or a tutorial for doing this asap. One more question: I have already created the main datacenter AD as domain.local and the other domains as xyz.local and abc.local. How can I replicate the additional AD domains with the main datacenter DC? Also, do we require a VPN connection, or is there another way to replicate the servers without using a VPN connection?
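
    As a sketch of the standard tooling, assuming all three machines are domain controllers in the same domain/forest (built-in AD replication does not span separate forests such as domain.local, xyz.local and abc.local without restructuring or trusts); the DC name below is a placeholder:

        rem replication health across all DCs:
        repadmin /replsummary

        rem push changes out from the hub DC across all partitions and sites:
        repadmin /syncall dc-datacenter /AdeP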

  • Is JQuery UI meant to work only with Google Chrome??? (How about IE and Firefox??!)

    - by Richard77
    Hello, I'm using "Jquery UI 1./Dan Wellman/Packt Publishing" to learn jQuery UI. I'm working on the 'Dialog widget' chapter. After completing a series of exercises in order to build a Dialog widget (using Google Chrome), I then tried my work with Internet Explorer and Firefox. The result has been disappointing. Chrome was perfect. With Internet Explorer, (1) the title of the Dialog widget did not appear, and (2) the location of the dialog widget was not correct (given the position: ["center", "center"]); it was rather offset toward the left. With Firefox, the location was respected; however, only the outer container was visible. The content was missing, just a blank container. Also, using the options show: true and hide: true only worked with Chrome. I wonder now if jQuery UI was meant to be used only with Google Chrome. I just think that I might be missing some directives to make it work with the major browsers (as the author claimed in his book). Here's the code. Since I'm using ASP.NET MVC, certain parts, such as the <link> element for the CSS, do not appear. But, for the rest, all the functioning code is below.

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <p>
            The goal of this tutorial is to explain one method of creating model classes for an ASP.NET MVC application. In this tutorial, you learn how to build model classes and perform database access by taking advantage of Microsoft LINQ to SQL. In this tutorial, we build a basic Movie database application. We start by creating the Movie database application in the fastest and easiest way possible. We perform all of our data access directly from our controller actions.
            </p>
            <div style = "font-size:.7em" id = "myDialog" title = "This is the title">
            In this tutorial -- in order to illustrate how you can build model classes -- we build a simple Movie database application. The first step is to create a new database. Right-click the App_Data folder in the Solution Explorer window and select the menu option Add, New Item. Select the SQL Server Database template, give it the name MoviesDB.mdf, and click the Add button (see Figure 1).
            </div>
        </asp:Content>

        <asp:Content ID="Content3" ContentPlaceHolderID="ScriptContent" runat="server">
            <script src="../../Content/development-bundle/jquery-1.3.2.js" type="text/javascript"></script>
            <script src="../../Content/development-bundle/ui/ui.core.js" type="text/javascript"></script>
            <script src="../../Content/development-bundle/ui/ui.dialog.js" type="text/javascript"></script>
            <script src="../../Content/development-bundle/ui/ui.draggable.js" type="text/javascript"></script>
            <script src="../../Content/development-bundle/ui/ui.resizable.js" type="text/javascript"></script>
            <script src="../../Content/development-bundle/external/bgiframe/jquery.bgiframe.js" type="text/javascript"></script>
            <script type = "text/javascript">
                $(function() {
                    var execute = function() {
                    }
                    var cancel = function() {
                    }
                    var dialogOpts = {
                        position: ["center", "center"],
                        title: '<a href="/Home/About">A link title!<a>',
                        modal: true,
                        minWidth: 500,
                        minHeight: 500,
                        buttons: {
                            "OK": execute,
                            "Cancel": cancel
                        },
                        show: true,
                        hide: true,
                        bgiframe: true
                    };
                    $("#myDialog").dialog(dialogOpts);
                });
            </script>

    Thanks for helping.
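
    A hedged guess about the blank container in Firefox: the snippet above loads the widget scripts but no theme stylesheet, and an unstyled dialog can render as an empty shell in some browsers. In the book's development-bundle layout the base theme lives at a path like the following (the exact path is an assumption; adjust to the actual Content folder):

        <link rel="stylesheet" type="text/css"
              href="../../Content/development-bundle/themes/base/ui.all.css" />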

  • EC2 out of space on root disk, moving it to ephemeral

    - by Joseph Misiti
    I am spawning a few test servers on EC2 that happen to be m1.larges. I am using these test servers for load balancing testing. Anyways, most of the servers I have used before have been backed by EBS, but these instances (Ubuntu 11.04) obviously come with a lot of ephemeral space located @ /mnt. What I noticed happening is that I am running out of space on the root disk. I am trying out this tutorial http://www.turnkeylinux.org/docs/using-instance-storage moving my /home + /usr directories to /mnt and then remounting them. This works, except it does not survive a reboot. Am I missing something here, or is this tutorial not completely correct? How do I make space on my / drive so I can do stuff and survive reboots?
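
    Mounts done by hand never survive a reboot; they have to be declared in /etc/fstab. A sketch, assuming the instance store appears as /dev/xvdb and the tutorial's bind-mount layout (note that a stop/start, unlike a plain reboot, lands on new hardware and wipes the instance store entirely):

        # /etc/fstab
        /dev/xvdb   /mnt    auto    defaults    0   0
        /mnt/home   /home   none    bind        0   0
        /mnt/usr    /usr    none    bind        0   0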

  • Setting up MySQL database replication [without restarting mysql]

    - by FunkyChicken
    I'm trying to set up MySQL DB replication; it seems pretty straightforward. I was using this tutorial: http://www.howtoforge.com/mysql_database_replication Now, I run a rather large MySQL database for a very large website, and this tutorial asks me to restart MySQL to apply the new settings in the /etc/my.cnf file. I'm trying to avoid that step at all costs, as I know that restarting MySQL can take a few minutes on my machine (due to large logs/dbs), and I don't want any downtime. Is there a way to apply the necessary settings WITHOUT fully restarting MySQL?
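
    A partial answer as a sketch: some replication settings are dynamic and can be changed at runtime, but enabling the binary log is not one of them on MySQL 5.x, so if log_bin is currently OFF a restart is unavoidable:

        SHOW VARIABLES LIKE 'log_bin';   -- OFF means only a restart can enable it
        SET GLOBAL server_id = 1;        -- server_id, by contrast, is dynamic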

  • VERR_NOT_SUPPORTED when trying to create a windows 8 image in Virtualbox

    - by Bart Burg
    I'm trying to create a VirtualBox image of Windows 8 Consumer Preview. I tried this tutorial: http://www.addictivetips.com/windows-tips/how-to-install-windows-8-on-virtualbox/ I did exactly what was said in that tutorial. At the step "Now navigate to the Windows 8 developer build ISO file that you downloaded and select it." I get the error "Could not get the storage format. (VERR_NOT_SUPPORTED)". I also get this if I use the regular wizard. Host OS: Windows 7. VirtualBox version: 4.1.16. Both Windows 8 64-bit and 32-bit tested.

  • Varnish and plesk

    - by Raphaël
    I have an e-commerce site with 300,000 products and 20,000 categories. It is slow and currently in production. I decided to install Varnish to speed it up. The trouble is that during installation I got a Guru Meditation. Since the site is in production, I cannot afford to leave this error up for more than a second, and I worry that I have made an enormous blunder. I followed this tutorial: http://www.euperia.com/linux/setting-up-varnish-with-apache-tutorial I'm sure I followed it all without error. I suspect there may be some Plesk-specific configuration involved. Has anyone already installed Varnish on an Ubuntu 10.04 server with Plesk 10? Does anyone have a better resource? I know it is "very vague" as an error, but maybe some of you have had this problem. Sincerely,
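
    For context, the usual shape of a Varnish-in-front-of-Apache setup (as in that tutorial) is to move Apache off port 80 and point Varnish's backend at it; a sketch with Debian/Ubuntu defaults, not Plesk-verified values:

        # /etc/default/varnish
        DAEMON_OPTS="-a :80 \
                     -T localhost:6082 \
                     -f /etc/varnish/default.vcl \
                     -s malloc,256m"

        # /etc/varnish/default.vcl
        backend default {
            .host = "127.0.0.1";
            .port = "8080";     # assumes Apache now listens on 8080
        }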
