Search Results

Search found 26969 results on 1079 pages for 'prevent default'.


  • jQuery and Windows Azure

    - by Stephen Walther
    The goal of this blog entry is to describe how you can host a simple Ajax application created with jQuery in the Windows Azure cloud. In this blog entry, I make no assumptions. I assume that you have never used Windows Azure and I am going to walk through the steps required to host the application in the cloud in agonizing detail. Our application will consist of a single HTML page and a single service. The HTML page will contain jQuery code that invokes the service to retrieve and display set of records. There are five steps that you must complete to host the jQuery application: Sign up for Windows Azure Create a Hosted Service Install the Windows Azure Tools for Visual Studio Create a Windows Azure Cloud Service Deploy the Cloud Service Sign Up for Windows Azure Go to http://www.microsoft.com/windowsazure/ and click the Sign up Now button. Select one of the offers. I selected the Introductory Special offer because it is free and I just wanted to experiment with Windows Azure for the purposes of this blog entry.     To sign up, you will need a Windows Live ID and you will need to enter a credit card number. After you finish the sign up process, you will receive an email that explains how to activate your account. Accessing the Developer Portal After you create your account and your account is activated, you can access the Windows Azure developer portal by visiting the following URL: http://windows.azure.com/ When you first visit the developer portal, you will see the one project that you created when you set up your Windows Azure account (In a fit of creativity, I named my project StephenWalther).     Creating a New Windows Azure Hosted Service Before you can host an application in the cloud, you must first add a hosted service to your project. Click your project on the summary page and click the New Service link. You are presented with the option of creating either a new Storage Account or a new Hosted Services.     Because we have code that we want to run in the cloud – the WCF Service -- we want to select the Hosted Services option. After you select this option, you must provide a name and description for your service. This information is used on the developer portal so you can distinguish your services.     When you create a new hosted service, you must enter a unique name for your service (I selected jQueryApp) and you must select a region for this service (I selected Anywhere US). Click the Create button to create the new hosted service.   Install the Windows Azure Tools for Visual Studio We’ll use Visual Studio to create our jQuery project. Before you can use Visual Studio with Windows Azure, you must first install the Windows Azure Tools for Visual Studio. Go to http://www.microsoft.com/windowsazure/ and click the Get Tools and SDK button. The Windows Azure Tools for Visual Studio works with both Visual Studio 2008 and Visual Studio 2010.   Installation of the Windows Azure Tools for Visual Studio is painless. You just need to check some agreement checkboxes and click the Next button a few times and installation will begin:   Creating a Windows Azure Application After you install the Windows Azure Tools for Visual Studio, you can choose to create a Windows Azure Cloud Service by selecting the menu option File, New Project and selecting the Windows Azure Cloud Service project template. I named my new Cloud Service with the name jQueryApp.     Next, you need to select the type of Cloud Service project that you want to create from the New Cloud Service Project dialog.   
I selected the C# ASP.NET Web Role option. Alternatively, I could have picked the ASP.NET MVC 2 Web Role option if I wanted to use jQuery with ASP.NET MVC or even the CGI Web Role option if I wanted to use jQuery with PHP. After you complete these steps, you end up with two projects in your Visual Studio solution. The project named WebRole1 represents your ASP.NET application and we will use this project to create our jQuery application. Creating the jQuery Application in the Cloud We are now ready to create the jQuery application. We’ll create a super simple application that displays a list of records retrieved from a WCF service (hosted in the cloud). Create a new page in the WebRole1 project named Default.htm and add the following code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Products</title> <style type="text/css"> #productContainer div { border:solid 1px black; padding:5px; margin:5px; } </style> </head> <body> <h1>Product Catalog</h1> <div id="productContainer"></div> <script id="productTemplate" type="text/html"> <div> Name: {{= name }} <br /> Price: {{= price }} </div> </script> <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script> <script type="text/javascript"> var products = [ {name:"Milk", price:4.55}, {name:"Yogurt", price:2.99}, {name:"Steak", price:23.44} ]; $("#productTemplate").render(products).appendTo("#productContainer"); </script> </body> </html> The jQuery code in this page simply displays a list of products by using a template. I am using a jQuery template to format each product. You can learn more about using jQuery templates by reading the following blog entry by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2010/05/07/jquery-templates-and-data-linking-and-microsoft-contributing-to-jquery.aspx You can test whether the Default.htm page is working correctly by running your application (hit the F5 key). The first time that you run your application, a database is set up on your local machine to simulate cloud storage. You will see the following dialog: If the Default.htm page works as expected, you should see the list of three products: Adding an Ajax-Enabled WCF Service In the previous section, we created a simple jQuery application that displays an array by using a template. The application is a little too simple because the data is static. In this section, we’ll modify the page so that the data is retrieved from a WCF service instead of an array. First, we need to add a new Ajax-enabled WCF Service to the WebRole1 project. Select the menu option Project, Add New Item and select the Ajax-enabled WCF Service project item. Name the new service ProductService.svc. Modify the service so that it returns a static collection of products. 
The final code for the ProductService.svc should look like this: using System.Collections.Generic; using System.ServiceModel; using System.ServiceModel.Activation; namespace WebRole1 { public class Product { public string name { get; set; } public decimal price { get; set; } } [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class ProductService { [OperationContract] public IList<Product> SelectProducts() { var products = new List<Product>(); products.Add(new Product {name="Milk", price=4.55m} ); products.Add(new Product { name = "Yogurt", price = 2.99m }); products.Add(new Product { name = "Steak", price = 23.44m }); return products; } } }   In real life, you would want to retrieve the list of products from storage instead of a static array. We are being lazy here. Next you need to modify the Default.htm page to use the ProductService.svc. The jQuery script in the following updated Default.htm page makes an Ajax call to the WCF service. The data retrieved from the ProductService.svc is displayed in the client template. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Products</title> <style type="text/css"> #productContainer div { border:solid 1px black; padding:5px; margin:5px; } </style> </head> <body> <h1>Product Catalog</h1> <div id="productContainer"></div> <script id="productTemplate" type="text/html"> <div> Name: {{= name }} <br /> Price: {{= price }} </div> </script> <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script> <script type="text/javascript"> $.post("ProductService.svc/SelectProducts", function (results) { var products = results["d"]; $("#productTemplate").render(products).appendTo("#productContainer"); }); </script> </body> </html>   Deploying the jQuery Application to the Cloud Now that we have created our jQuery application, we are ready to deploy our application to the cloud so that the whole world can use it. Right-click your jQueryApp project in the Solution Explorer window and select the Publish menu option. When you select publish, your application and your application configuration information is packaged up into two files named jQueryApp.cspkg and ServiceConfiguration.cscfg. Visual Studio opens the directory that contains the two files. In order to deploy these files to the Windows Azure cloud, you must upload these files yourself. Return to the Windows Azure Developers Portal at the following address: http://windows.azure.com/ Select your project and select the jQueryApp service. You will see a mysterious cube. Click the Deploy button to upload your application.   Next, you need to browse to the location on your hard drive where the jQueryApp project was published and select both the packaged application and the packaged application configuration file. Supply the deployment with a name and click the Deploy button.     While your application is in the process of being deployed, you can view a progress bar.     Running the jQuery Application in the Cloud Finally, you can run your jQuery application in the cloud by clicking the Run button.   It might take several minutes for your application to initialize (go grab a coffee). 
After WebRole1 finishes initializing, you can navigate to the following URL to view your live jQuery application in the cloud: http://jqueryapp.cloudapp.net/default.htm The page is hosted on the Windows Azure cloud and the WCF service executes every time that you request the page to retrieve the list of products. Summary Because we started from scratch, we needed to complete several steps to create and deploy our jQuery application to the Windows Azure cloud. We needed to create a Windows Azure account, create a hosted service, install the Windows Azure Tools for Visual Studio, create the jQuery application, and deploy it to the cloud. Now that we have finished this process once, modifying our existing cloud application or creating a new cloud application is easy. jQuery and Windows Azure work nicely together. We can take advantage of jQuery to build applications that run in the browser and we can take advantage of Windows Azure to host the backend services required by our jQuery application. The big benefit of Windows Azure is that it enables us to scale. If, all of the sudden, our jQuery application explodes in popularity, Windows Azure enables us to easily scale up to meet the demand. We can handle anything that the Internet might throw at us.
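    One small aside that is not part of the original walkthrough: the $.post call shown above fails silently if the service call goes wrong. A hedged sketch of the same request written with $.ajax (which the jQuery 1.4.2 build used here supports) adds an error callback; the explicit JSON content type is an assumption that may or may not be required depending on how the Ajax-enabled WCF service is configured:

    $.ajax({
        type: "POST",
        url: "ProductService.svc/SelectProducts",
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        success: function (results) {
            // WCF wraps the return value in a "d" property, as in the original snippet
            var products = results["d"];
            $("#productTemplate").render(products).appendTo("#productContainer");
        },
        error: function (xhr, textStatus) {
            // hypothetical error handling -- adjust to whatever the page should do on failure
            $("#productContainer").text("Could not load products: " + textStatus);
        }
    });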

    Read the article

  • Flow-Design Cheat Sheet &ndash; Part I, Notation

    - by Ralf Westphal
    You want to avoid the pitfalls of object oriented design? Then this is the right place to start. Use Flow-Oriented Analysis (FOA) and –Design (FOD or just FD for Flow-Design) to understand a problem domain and design a software solution. Flow-Orientation as described here is related to Flow-Based Programming, Event-Based Programming, Business Process Modelling, and even Event-Driven Architectures. But even though “thinking in flows” is not new, I found it helpful to deviate from those precursors for several reasons. Some aim at too big systems for the average programmer, some are concerned with only asynchronous processing, some are even not very much concerned with programming at all. What I was looking for was a design method to help in software projects of any size, be they large or tiny, involing synchronous or asynchronous processing, being local or distributed, running on the web or on the desktop or on a smartphone. That´s why I took ideas from all of the above sources and some additional and came up with Event-Based Components which later got repositioned and renamed to Flow-Design. In the meantime this has generated some discussion (in the German developer community) and several teams have started to work with Flow-Design. Also I´ve conducted quite some trainings using Flow-Orientation for design. The results are very promising. Developers find it much easier to design software using Flow-Orientation than OOAD-based object orientation. Since Flow-Orientation is moving fast and is not covered completely by a single source like a book, demand has increased for at least an overview of the current state of its notation. This page is trying to answer this demand by briefly introducing/describing every notational element as well as their translation into C# source code. Take this as a cheat sheet to put next to your whiteboard when designing software. However, please do not expect any explanation as to the reasons behind Flow-Design elements. Details on why Flow-Design at all and why in this specific way you´ll find in the literature covering the topic. Here´s a resource page on Flow-Design/Event-Based Components, if you´re able to read German. Notation Connected Functional Units The basic element of any FOD are functional units (FU): Think of FUs as some kind of software code block processing data. For the moment forget about classes, methods, “components”, assemblies or whatever. See a FU as an abstract piece of code. Software then consists of just collaborating FUs. I´m using circles/ellipses to draw FUs. But if you like, use rectangles. Whatever suites your whiteboard needs best.   The purpose of FUs is to process input and produce output. FUs are transformational. However, FUs are not called and do not call other FUs. There is no dependency between FUs. Data just flows into a FU (input) and out of it (output). From where and where to is of no concern to a FU.   This way FUs can be concatenated in arbitrary ways:   Each FU can accept input from many sources and produce output for many sinks:   Flows Connected FUs form a flow with a start and an end. Data is entering a flow at a source, and it´s leaving it through a sink. Think of sources and sinks as special FUs which conntect wires to the environment of a network of FUs.   Wiring Details Data is flowing into/out of FUs through wires. This is to allude to electrical engineering which since long has been working with composable parts. Wires are attached to FUs usings pins. 
They are the entry/exit points for the data flowing along the wires. Input-/output pins currently need not be drawn explicitly. This is to keep designing on a whiteboard simple and quick.   Data flowing is of some type, so wires have a type attached to them. And pins have names. If there is only one input pin and output pin on a FU, though, you don´t need to mention them. The default is Process for a single input pin, and Result for a single output pin. But you´re free to give even single pins different names.   There is a shortcut in use to address a certain pin on a destination FU:   The type of the wire is put in parantheses for two reasons. 1. This way a “no-type” wire can be easily denoted, 2. this is a natural way to describe tuples of data.   To describe how much data is flowing, a star can be put next to the wire type:   Nesting – Boards and Parts If more than 5 to 10 FUs need to be put in a flow a FD starts to become hard to understand. To keep diagrams clutter free they can be nested. You can turn any FU into a flow: This leads to Flow-Designs with different levels of abstraction. A in the above illustration is a high level functional unit, A.1 and A.2 are lower level functional units. One of the purposes of Flow-Design is to be able to describe systems on different levels of abstraction and thus make it easier to understand them. Humans use abstraction/decomposition to get a grip on complexity. Flow-Design strives to support this and make levels of abstraction first class citizens for programming. You can read the above illustration like this: Functional units A.1 and A.2 detail what A is supposed to do. The whole of A´s responsibility is decomposed into smaller responsibilities A.1 and A.2. FU A thus does not do anything itself anymore! All A is responsible for is actually accomplished by the collaboration between A.1 and A.2. Since A now is not doing anything anymore except containing A.1 and A.2 functional units are devided into two categories: boards and parts. Boards are just containing other functional units; their sole responsibility is to wire them up. A is a board. Boards thus depend on the functional units nested within them. This dependency is not of a functional nature, though. Boards are not dependent on services provided by nested functional units. They are just concerned with their interface to be able to plug them together. Parts are the workhorses of flows. They contain the real domain logic. They actually transform input into output. However, they do not depend on other functional units. Please note the usage of source and sink in boards. They correspond to input-pins and output-pins of the board.   Implicit Dependencies Nesting functional units leads to a dependency tree. Boards depend on nested functional units, they are the inner nodes of the tree. Parts are independent, they are the leafs: Even though dependencies are the bane of software development, Flow-Design does not usually draw these dependencies. They are implicitly created by visually nesting functional units. And they are harmless. Boards are so simple in their functionality, they are little affected by changes in functional units they are depending on. But functional units are implicitly dependent on more than nested functional units. They are also dependent on the data types of the wires attached to them: This is also natural and thus does not need to be made explicit. And it pertains mainly to parts being dependent. 
Since boards don´t do anything with regard to a problem domain, they don´t care much about data types. Their infrastructural purpose just needs types of input/output-pins to match.   Explicit Dependencies You could say, Flow-Orientation is about tackling complexity at its root cause: that´s dependencies. “Natural” dependencies are depicted naturally, i.e. implicitly. And whereever possible dependencies are not even created. Functional units don´t know their collaborators within a flow. This is core to Flow-Orientation. That makes for high composability of functional units. A part is as independent of other functional units as a motor is from the rest of the car. And a board is as dependend on nested functional units as a motor is on a spark plug or a crank shaft. With Flow-Design software development moves closer to how hardware is constructed. Implicit dependencies are not enough, though. Sometimes explicit dependencies make designs easier – as counterintuitive this might sound. So FD notation needs a ways to denote explicit dependencies: Data flows along wires. But data does not flow along dependency relations. Instead dependency relations represent service calls. Functional unit C is depending on/calling services on functional unit S. If you want to be more specific, name the services next to the dependency relation: Although you should try to stay clear of explicit dependencies, they are fundamentally ok. See them as a way to add another dimension to a flow. Usually the functionality of the independent FU (“Customer repository” above) is orthogonal to the domain of the flow it is referenced by. If you like emphasize this by using different shapes for dependent and independent FUs like above. Such dependencies can be used to link in resources like databases or shared in-memory state. FUs can not only produce output but also can have side effects. A common pattern for using such explizit dependencies is to hook a GUI into a flow as the source and/or the sink of data: Which can be shortened to: Treat FUs others depend on as boards (with a special non-FD API the dependent part is connected to), but do not embed them in a flow in the diagram they are depended upon.   Attributes of Functional Units Creation and usage of functional units can be modified with attributes. So far the following have shown to be helpful: Singleton: FUs are by default multitons. FUs in the same of different flows with the same name refer to the same functionality, but to different instances. Think of functional units as objects that get instanciated anew whereever they appear in a design. Sometimes though it´s helpful to reuse the same instance of a functional unit; this is always due to valuable state it holds. Signify this by annotating the FU with a “(S)”. Multiton: FUs on which others depend are singletons by default. This is, because they usually are introduced where shared state comes into play. If you want to change them to be a singletons mark them with a “(M)”. Configurable: Some parts need to be configured before the can do they work in a flow. Annotate them with a “(C)” to have them initialized before any data items to be processed by them arrive. Do not assume any order in which FUs are configured. How such configuration is happening is an implementation detail. Entry point: In each design there needs to be a single part where “it all starts”. That´s the entry point for all processing. It´s like Program.Main() in C# programs. Mark the entry point part with an “(E)”. 
Quite often this will be the GUI part. How the entry point is started is an implementation detail. Just consider it the first FU to start do its job.   Patterns / Standard Parts If more than a single wire is attached to an output-pin that´s called a split (or fork). The same data is flowing on all of the wires. Remember: Flow-Designs are synchronous by default. So a split does not mean data is processed in parallel afterwards. Processing still happens synchronously and thus one branch after another. Do not assume any specific order of the processing on the different branches after the split.   It is common to do a split and let only parts of the original data flow on through the branches. This effectively means a map is needed after a split. This map can be implicit or explicit.   Although FUs can have multiple input-pins it is preferrable in most cases to combine input data from different branches using an explicit join: The default output of a join is a tuple of its input values. The default behavior of a join is to output a value whenever a new input is received. However, to produce its first output a join needs an input for all its input-pins. Other join behaviors can be: reset all inputs after an output only produce output if data arrives on certain input-pins
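    To make the notation concrete, here is a minimal C# sketch of how functional units can be translated into code. The naming convention (an input pin as a Process method, an output pin as a Result event) is an assumption drawn from the Event-Based Components material this cheat sheet grew out of, not something prescribed by the diagrams themselves:

    using System;

    // Part: contains domain logic, knows nothing about its collaborators.
    // Input pin = a method, output pin = an event.
    public class ToUpper
    {
        public void Process(string input)            // input pin "Process"
        {
            if (Result != null)
                Result(input.ToUpper());             // push the transformed data out
        }

        public event Action<string> Result;          // output pin "Result"
    }

    public class Print
    {
        public void Process(string text)             // a sink: input pin only
        {
            Console.WriteLine(text);
        }
    }

    // Board: contains parts and only wires them up; no domain logic of its own.
    public class A
    {
        private readonly ToUpper a1 = new ToUpper();
        private readonly Print a2 = new Print();

        public A()
        {
            a1.Result += a2.Process;                 // the wire between A.1 and A.2
        }

        public void Process(string input)            // the board's input pin just forwards
        {
            a1.Process(input);
        }
    }

    In this sketch a split is nothing more than subscribing two parts to the same Result event, and a join would be a small stateful part that buffers one value per input pin and fires its own output once every pin has been served.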

    Read the article

  • Looking into Enum Support in Entity Framework 5.0 Code First

    - by nikolaosk
    In this post I will show you with a hands-on demo the enum support that is available in Visual Studio 2012, .Net Framework 4.5 and Entity Framework 5.0. You can have a look at this post to learn about the support of multilple diagrams per model that exists in Entity Framework 5.0. We will demonstrate this with a step by step example. I will use Visual Studio 2012 Ultimate. You can also use Visual Studio 2012 Express Edition. Before I move on to the actual demo I must say that in EF 5.0 an enumeration can have the following types. Byte Int16 Int32 Int64 Sbyte Obviously I cannot go into much detail on what EF is and what it does. I will give again a short introduction.The .Net framework provides support for Object Relational Mapping through EF. So EF is a an ORM tool and it is now the main data access technology that microsoft works on. I use it quite extensively in my projects. Through EF we have many things out of the box provided for us. We have the automatic generation of SQL code.It maps relational data to strongly types objects.All the changes made to the objects in the memory are persisted in a transactional way back to the data store. You can find in this post an example on how to use the Entity Framework to retrieve data from an SQL Server Database using the "Database/Schema First" approach. In this approach we make all the changes at the database level and then we update the model with those changes. In this post you can see an example on how to use the "Model First" approach when working with ASP.Net and the Entity Framework. This model was firstly introduced in EF version 4.0 and we could start with a blank model and then create a database from that model.When we made changes to the model , we could recreate the database from the new model. You can search in my blog, because I have posted many posts regarding ASP.Net and EF. I assume you have a working knowledge of C# and know a few things about EF. The Code First approach is the more code-centric than the other two. Basically we write POCO classes and then we persist to a database using something called DBContext. Code First relies on DbContext. We create 2,3 classes (e.g Person,Product) with properties and then these classes interact with the DbContext class. We can create a new database based upon our POCOS classes and have tables generated from those classes.We do not have an .edmx file in this approach.By using this approach we can write much easier unit tests. DbContext is a new context class and is smaller,lightweight wrapper for the main context class which is ObjectContext (Schema First and Model First). Let's begin building our sample application. 1) Launch Visual Studio. Create an ASP.Net Empty Web application. Choose an appropriate name for your application. 2) Add a web form, default.aspx page to the application. 3) Now we need to make sure the Entity Framework is included in our project. Go to Solution Explorer, right-click on the project name.Then select Manage NuGet Packages...In the Manage NuGet Packages dialog, select the Online tab and choose the EntityFramework package.Finally click Install. Have a look at the picture below   4) Create a new folder. Name it CodeFirst . 5) Add a new item in your application, a class file. Name it Footballer.cs. This is going to be a simple POCO class.Place it in the CodeFirst folder. 
The code follows public class Footballer { public int FootballerID { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public double Weight { get; set; } public double Height { get; set; } public DateTime JoinedTheClub { get; set; } public int Age { get; set; } public List<Training> Trainings { get; set; } public FootballPositions Positions { get; set; } }    Now I am going to define my enum values in the same class file, Footballer.cs    public enum FootballPositions    {        Defender,        Midfielder,        Striker    } 6) Now we need to create the Training class. Add a new class to your application and place it in the CodeFirst folder.The code for the class follows.     public class Training     {         public int TrainingID { get; set; }         public int TrainingDuration { get; set; }         public string TrainingLocation { get; set; }     }   7) Then we need to create a context class that inherits from DbContext.Add a new class to the CodeFirst folder.Name it FootballerDBContext.Now that we have the entity classes created, we must let the model know.I will have to use the DbSet<T> property.The code for this class follows       public class FootballerDBContext:DbContext     {         public DbSet<Footballer> Footballers { get; set; }         public DbSet<Training> Trainings { get; set; }     } Do not forget to add  (using System.Data.Entity;) in the beginning of the class file 8) We must take care of the connection string. It is very easy to create one in the web.config.It does not matter that we do not have a database yet.When we run the DbContext and query against it,it will use a connection string in the web.config and will create the database based on the classes. In my case the connection string inside the web.config, looks like this      <connectionStrings>    <add name="CodeFirstDBContext"  connectionString="server=.\SqlExpress;integrated security=true;"  providerName="System.Data.SqlClient"/>                       </connectionStrings>   9) Now it is time to create Linq to Entities queries to retrieve data from the database . Add a new class to your application in the CodeFirst folder.Name the file DALfootballer.cs We will create a simple public method to retrieve the footballers. The code for the class follows public class DALfootballer     {         FootballerDBContext ctx = new FootballerDBContext();         public List<Footballer> GetFootballers()         {             var query = from player in ctx.Footballers where player.FirstName=="Jamie" select player;             return query.ToList();         }     }   10) Place a GridView control on the Default.aspx page and leave the default name.Add an ObjectDataSource control on the Default.aspx page and leave the default name. Set the DatasourceID property of the GridView control to the ID of the ObjectDataSource control.(DataSourceID="ObjectDataSource1" ). Let's configure the ObjectDataSource control. Click on the smart tag item of the ObjectDataSource control and select Configure Data Source. In the Wizzard that pops up select the DALFootballer class and then in the next step choose the GetFootballers() method.Click Finish to complete the steps of the wizzard. Build your application.  11)  Let's create an Insert method in order to insert data into the tables. I will create an Insert() method and for simplicity reasons I will place it in the Default.aspx.cs file. 
private void Insert()        {            var footballers = new List<Footballer>            {                new Footballer {                                 FirstName = "Steven",LastName="Gerrard", Height=1.85, Weight=85,Age=32, JoinedTheClub=DateTime.Parse("12/12/1999"),Positions=FootballPositions.Midfielder,                Trainings = new List<Training>                             {                                     new Training {TrainingDuration = 3, TrainingLocation="MelWood"},                    new Training {TrainingDuration = 2, TrainingLocation="Anfield"},                    new Training {TrainingDuration = 2, TrainingLocation="MelWood"},                }                            },                            new Footballer {                                  FirstName = "Jamie",LastName="Garragher", Height=1.89, Weight=89,Age=34, JoinedTheClub=DateTime.Parse("12/02/2000"),Positions=FootballPositions.Defender,                Trainings = new List<Training>                                             {                                 new Training {TrainingDuration = 3, TrainingLocation="MelWood"},                new Training {TrainingDuration = 5, TrainingLocation="Anfield"},                new Training {TrainingDuration = 6, TrainingLocation="Anfield"},                }                           }                    };            footballers.ForEach(foot => ctx.Footballers.Add(foot));            ctx.SaveChanges();        }   12) In the Page_Load() event handling routine I called the Insert() method.        protected void Page_Load(object sender, EventArgs e)        {                   Insert();                }  13) Run your application and you will see that the following result,hopefully. You can see clearly that the data is returned along with the enum value.  14) You must have also a look at the database.Launch SSMS and see the database and its objects (data) created from EF Code First.Have a look at the picture below. Hopefully now you have seen the support that exists in EF 5.0 for enums.Hope it helps !!!
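    Since the whole point of the post is enum support, here is one more short sketch (the class name DALfootballerByPosition is hypothetical; it assumes the FootballerDBContext from step 7 and would sit alongside DALfootballer in the CodeFirst folder). It shows that the enum can also be used directly in a LINQ to Entities filter; EF 5.0 translates the comparison into a filter on the underlying integer column:

    using System.Collections.Generic;
    using System.Linq;

    public class DALfootballerByPosition
    {
        FootballerDBContext ctx = new FootballerDBContext();

        // Filter on the enum property itself; no casting to int is required in EF 5.0
        public List<Footballer> GetDefenders()
        {
            var query = from player in ctx.Footballers
                        where player.Positions == FootballPositions.Defender
                        select player;
            return query.ToList();
        }
    }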

    Read the article

  • Unable to set up SSL support for Apache 2 on Debian

    - by Francesco
    I am trying to set up ssl support for Apache 2 on Debian. Versions are: Debian GNU/Linux 6.0 apache2 2.2.16-6+squeeze1 I followed a lot of how-tos for days but I couldn't make it work. Here are my steps and configuration files (ServerName and DocumentRoot are changed for privacy, in case tell me): # mkdir /etc/apache2/ssl # openssl req $@ -new -x509 -days 365 -nodes -out /etc/apache2/apache.pem -keyout /etc/apache2/apache.pem at this point I've a doubt about permissions on apache.pem, at this step they are -rw-r--r-- 1 root root Maybe it has to belong to www-data? Then I enable ssl-mod with # a2enmod ssl # /etc/init.d/apache2 restart I modify /etc/apache2/sites-available/default-ssl in this way (I put port 8080 because I need port 443 for another purpose): <VirtualHost *:8080> SSLEngine on SSLCertificateFile /etc/apache2/ssl/apache.pem SSLCertificateKeyFile /etc/apache2/ssl/apache.pem ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options Indexes FollowSymLinks AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> <VirtualHost *:8080> DocumentRoot /home/user1/public_html/ ServerName first.server.org # Other directives here </VirtualHost> <VirtualHost *:8080> DocumentRoot /home/user2/public_html/ ServerName second.server.org # Other directives here </VirtualHost> I have to point out that the same configuration works on http (it is a copy of /etc/apache2/sites-available/default with some differences - port and ssl support). My /etc/apache2/ports.conf is the following: # If you just change the port or add more ports here, you will likely also # have to change the VirtualHost statement in # /etc/apache2/sites-enabled/000-default # This is also true if you have upgraded from before 2.2.9-3 (i.e. from # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and # README.Debian.gz #NameVirtualHost *:80 Listen 80 <IfModule mod_ssl.c> # If you add NameVirtualHost *:443 here, you will also have to change # the VirtualHost statement in /etc/apache2/sites-available/default-ssl # to <VirtualHost *:443> # Server Name Indication for SSL named virtual hosts is currently not # supported by MSIE on Windows XP. #NameVirtualHost *:8080 Listen 8080 </IfModule> <IfModule mod_gnutls.c> Listen 8080 </IfModule> Any suggestion? Thanks

    Read the article

  • How to VPN on demand Mac OS X?

    - by Kami
    I'm trying to configure Snow Leopard's VPN on demand service, without success. I've tried the following domain+configuration pairs, but none of them have worked: domain.net default *.domain.net default My goal is that each time I go to www.domain.net with Safari, ssh to server1.domain.net, or do anything else on domain.net, the connection will be established through the VPN! I've tried plenty of different configs but it has never worked so far... Edit : R

    Read the article

  • Mono through FastCGI on nginx

    - by Stijn
    I'm going through http://www.mono-project.com/FastCGI_Nginx and can't get it to work. The FastCGI server seems to be running. The following is from the error log: upstream sent unexpected FastCGI record: 3 while reading response header from upstream, client: 192.168.1.125, server: arch, request: "GET /Default.aspx HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "arch" Command used to start the server (I've tried server2 and server4, using a simple .NET 2.0 or .NET 4.0 project): fastcgi-mono-server2 /applications=arch:/:/var/www/test/public/ /socket=tcp:127.0.0.1:9000 /stopable=True nginx config: server { listen 80; server_name arch; access_log /var/www/test/log/access.log; error_log /var/www/test/log/error.log; location / { root /var/www/test/public; index index.html index.htm default.aspx Default.aspx; fastcgi_index Default.aspx; fastcgi_pass 127.0.0.1:9000; fastcgi_param PATH_INFO ""; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } Using xsp4 works fine, I can browse the site. I've enabled FastCGI logging, this is the output: [2012-04-15 23:51:18Z] Debug Accepting an incoming connection. [2012-04-15 23:51:18Z] Notice Beginning to receive records on connection. [2012-04-15 23:51:18Z] Debug Record received. (Type: BeginRequest, ID: 1, Length: 8) [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 386) [2012-04-15 23:51:18Z] Debug Record received. (Type: Params, ID: 1, Length: 0) [2012-04-15 23:51:18Z] Debug Read parameter. (PATH_INFO = ) [2012-04-15 23:51:18Z] Debug Read parameter. (SCRIPT_FILENAME = /var/www/test/public/Home) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_HOST = arch) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_USER_AGENT = Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT = text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_LANGUAGE = en-gb,en;q=0.5) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_ACCEPT_ENCODING = gzip, deflate) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_CONNECTION = keep-alive) [2012-04-15 23:51:18Z] Debug Read parameter. (HTTP_COOKIE = ASP.NET_SessionId=2C3D702C9B0F23F69B80820B) [2012-04-15 23:51:18Z] Error Failed to process connection. Reason: Argument cannot be null. Parameter name: s [2012-04-15 23:51:18Z] Debug Record sent. (Type: EndRequest, ID: 1, Length: 8) [2012-04-15 23:51:18Z] Debug The FastCGI connection has been closed.

    Read the article

  • Load balancing with multiple gateways

    - by ttouch
    I have to different ISPs, each on each own network. The main connects via ethernet and the secondary via wifi. The two networks have no relation at all. I just connect to them simultaneously. The reason I want to load balance between them is to achieve higher Internet speeds. Note: I have no advanced network hardware. Just my pc and the two routers that I have no access... main network: if: eth0 gw: 192.168.178.1 my ip: 192.168.178.95 speed: 400 kbit/s secondary network: if: wlan0 gw: 192.168.1.1 my ip: 192.168.1.95 speed: 300 kbit/s A diagram to explain the situation: http://i.imgur.com/NZdsv.jpg I'm on Arch Linux x64. I use netcfg to configure the interfaces Configs: # /etc/network.d/main CONNECTION='ethernet' DESCRIPTION='A basic static ethernet connection using iproute' INTERFACE='eth0' IP='static' ADDR='192.168.178.95' # /etc/network.d/second CONNECTION='wireless' DESCRIPTION='A simple WEP encrypted wireless connection' INTERFACE='wlan0' SECURITY='wep' ESSID='wifi_essid' KEY='the_password' IP="static" ADDR='192.168.1.95' And I use iptables to load balance, rules: #!/bin/bash /usr/sbin/ip route flush table ISP1 2>/dev/null /usr/sbin/ip rule del fwmark 101 table ISP1 2>/dev/null /usr/sbin/ip route add table ISP1 192.168.178.0/24 dev eth0 proto kernel scope link src 192.168.178.95 metric 202 /usr/sbin/ip route add table ISP1 default via 192.168.178.1 dev eth0 /usr/sbin/ip rule add fwmark 101 table ISP1 /usr/sbin/ip route flush table ISP2 2>/dev/null /usr/sbin/ip rule del fwmark 102 table ISP2 2>/dev/null /usr/sbin/ip route add table ISP2 192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.95 metric 202 /usr/sbin/ip route add table ISP2 default via 192.168.1.1 dev wlan0 /usr/sbin/ip rule add fwmark 102 table ISP2 /usr/sbin/iptables -t mangle -F /usr/sbin/iptables -t mangle -X /usr/sbin/iptables -t mangle -N MARK-gw1 /usr/sbin/iptables -t mangle -A MARK-gw1 -m comment --comment 'send via 192.168.178.1' -j MARK --set-mark 101 /usr/sbin/iptables -t mangle -A MARK-gw1 -j CONNMARK --save-mark /usr/sbin/iptables -t mangle -A MARK-gw1 -j RETURN /usr/sbin/iptables -t mangle -N MARK-gw2 /usr/sbin/iptables -t mangle -A MARK-gw2 -m comment --comment 'send via 192.168.1.1' -j MARK --set-mark 102 /usr/sbin/iptables -t mangle -A MARK-gw2 -j CONNMARK --save-mark /usr/sbin/iptables -t mangle -A MARK-gw2 -j RETURN /usr/sbin/iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment "this stream is already marked; escape early" -m mark ! 
--mark 0 -j ACCEPT /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment 'prevent asynchronous routing' -i eth0 -m conntrack --ctstate NEW -j MARK-gw1 /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment 'prevent asynchronous routing' -i wlan0 -m conntrack --ctstate NEW -j MARK-gw2 /usr/sbin/iptables -t mangle -N DEF_POL /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'default balancing' -p tcp -m conntrack --ctstate ESTABLISHED,RELATED -j CONNMARK --restore-mark /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'default balancing' -p udp -m conntrack --ctstate ESTABLISHED,RELATED -j CONNMARK --restore-mark /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j MARK-gw1 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j ACCEPT /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j MARK-gw2 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j ACCEPT /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j MARK-gw1 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j ACCEPT /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j MARK-gw2 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j ACCEPT /usr/sbin/iptables -t mangle -A PREROUTING -j DEF_POL /usr/sbin/iptables -t nat -A POSTROUTING -m comment --comment 'snat outbound eth0' -o eth0 -s 192.168.0.0/16 -m mark --mark 101 -j SNAT --to-source 192.168.178.95 /usr/sbin/iptables -t nat -A POSTROUTING -m comment --comment 'snat outbound wlan0' -o wlan0 -s 192.168.0.0/16 -m mark --mark 102 -j SNAT --to-source 192.168.1.95 /usr/sbin/ip route flush cache (this script was made by fukawi2, I don't know how to use iptables) but I have no Internet connection... output of iptables -t mangle -nvL Chain PREROUTING (policy ACCEPT 1254K packets, 1519M bytes) pkts bytes target prot opt in out source destination 1278K 1535M CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK restore 21532 15M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* this stream is already marked; escape early */ mark match ! 
0x0 582 72579 MARK-gw1 all -- eth0 * 0.0.0.0/0 0.0.0.0/0 /* prevent asynchronous routing */ ctstate NEW 2376 696K MARK-gw2 all -- wlan0 * 0.0.0.0/0 0.0.0.0/0 /* prevent asynchronous routing */ ctstate NEW 1257K 1520M DEF_POL all -- * * 0.0.0.0/0 0.0.0.0/0 Chain INPUT (policy ACCEPT 1276K packets, 1535M bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 870K packets, 97M bytes) pkts bytes target prot opt in out source destination Chain POSTROUTING (policy ACCEPT 870K packets, 97M bytes) pkts bytes target prot opt in out source destination Chain DEF_POL (1 references) pkts bytes target prot opt in out source destination 1236K 1517M CONNMARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default balancing */ ctstate RELATED,ESTABLISHED CONNMARK restore 15163 2041K CONNMARK udp -- * * 0.0.0.0/0 0.0.0.0/0 /* default balancing */ ctstate RELATED,ESTABLISHED CONNMARK restore 555 33176 MARK-gw1 tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 tcp */ ctstate NEW statistic mode nth every 2 555 33176 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 tcp */ ctstate NEW statistic mode nth every 2 277 16516 MARK-gw2 tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 tcp */ ctstate NEW statistic mode nth every 2 packet 1 277 16516 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 tcp */ ctstate NEW statistic mode nth every 2 packet 1 1442 384K MARK-gw1 udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 udp */ ctstate NEW statistic mode nth every 2 1442 384K ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 udp */ ctstate NEW statistic mode nth every 2 720 189K MARK-gw2 udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 udp */ ctstate NEW statistic mode nth every 2 packet 1 720 189K ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 udp */ ctstate NEW statistic mode nth every 2 packet 1 Chain MARK-gw1 (3 references) pkts bytes target prot opt in out source destination 2579 490K MARK all -- * * 0.0.0.0/0 0.0.0.0/0 /* send via 192.168.178.1 */ MARK set 0x65 2579 490K CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK save 2579 490K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 Chain MARK-gw2 (3 references) pkts bytes target prot opt in out source destination 3373 901K MARK all -- * * 0.0.0.0/0 0.0.0.0/0 /* send via 192.168.1.1 */ MARK set 0x66 3373 901K CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK save 3373 901K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
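    Not part of the original question, but a few read-only commands are useful for checking whether the policy routing behind that script is actually in place; note that named tables such as ISP1 and ISP2 only work if they have entries in /etc/iproute2/rt_tables:

    # list the policy-routing rules; the fwmark 101/102 rules added by the script should appear here
    ip rule show

    # dump each custom table to confirm its default route leaves via the intended interface
    ip route show table ISP1
    ip route show table ISP2

    # ask the kernel which route a packet carrying a given mark would take (8.8.8.8 is just an example address)
    ip route get 8.8.8.8 mark 101
    ip route get 8.8.8.8 mark 102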

    Read the article

  • ASPX throws "404 The resource cannot be found"

    - by Diegoeche
    I'm deploying a website under a virtual directory using IIS. For some strange reason, Default.html works, but Default.aspx throws a 404. I have tried these: There's another virtual directory that contains an older version of the application, and that one just works. I checked the properties of each virtual directory and they looked the same. I checked that the root didn't have any extra backslashes

    Read the article

  • How to connect SAN from CentOS through two iSCSI Targets

    - by garconcn
    I had asked the similar question before. This time I want to use subnet for two iSCSI Targets, hence I start a new question. I have an old Promise VTrak M500i SAN Server. It comes with 2 iSCSI ports. I want to connect to two LUNs on the SAN server through two separate Targets from CentOS 5.7 64bits server. My network setup is as follows: CentOS server: Management network - 192.168.1.1 Storage network 1 - 192.168.5.2 Storage network 2 - 192.168.6.2 Promise SAN server: Management network - 192.168.1.2 iSCSI Port 1 - 192.168.5.1 iSCSI Port 2 - 192.168.6.1 I have two Logical Drives on this SAN and they are mapped as follows: Index Initiator Name LUN Mapping 0 iqn.2011-11:backup (LD0,0) 1 iqn.2011-11:template (LD1,1) Basically, I want the traffic to iqn.2011-11:backup LUN 0 through 192.168.5.1 network the traffic to iqn.2011-11:template LUN 1 through 192.168.6.1 network I don't use MPIO, just want to separate the traffic to avoid traffic jam. How do I achieve this? I am new to SAN stuff, please explain as much detail as you can. Thank you. The following are what I am doing now. After mapping the LUN to my pre-defined Initiators, the CentOS server can discover both Targets. [root@centos ~]# iscsiadm -m discovery -t sendtargets -p 192.168.5.1 192.168.5.1:3260,1 iscsi-1 192.168.6.1:3260,2 iscsi-1 [root@centos ~]# iscsiadm -m discovery -t sendtargets -p 192.168.6.1 192.168.6.1:3260,2 iscsi-1 192.168.5.1:3260,1 iscsi-1 [root@centos ~]# /etc/init.d/iscsi start iscsid is stopped Starting iSCSI daemon: [ OK ] [ OK ] Setting up iSCSI targets: Logging in to [iface: default, target: iscsi-1, portal: 192.168.6.1,3260] Logging in to [iface: default, target: iscsi-1, portal: 192.168.5.1,3260] Login to [iface: default, target: iscsi-1, portal: 192.168.6.1,3260] successful. Login to [iface: default, target: iscsi-1, portal: 192.168.5.1,3260] successful. [ OK ] [root@centos ~]# iscsiadm -m session tcp: [1] 192.168.6.1:3260,2 iscsi-1 tcp: [2] 192.168.5.1:3260,1 iscsi-1 When I check the LUN mapping on the SAN server for the two Logical Drives, both LUNs are connected through Port0-192.168.5.2 with the Initiator defined in CentOS. Assigned Initiator List: Initiator Name Alias IP Address LUN iqn.2011-11.centos centos.mydomain.com Port0-192.168.5.2 0 Initiator Name Alias IP Address LUN iqn.2011-11.centos centos.mydomain.com Port1-192.168.5.2 1 I assume the following is what I want: Initiator Name Alias IP Address LUN iqn.2011-11.backup centos.mydomain.com Port0-192.168.5.2 0 Initiator Name Alias IP Address LUN iqn.2011-11.template centos.mydomain.com Port0-192.168.6.2 1
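    This is not an authoritative answer, but one generic way to pin each session to a specific NIC with open-iscsi is interface binding. The sketch below assumes the two storage NICs on the CentOS box are named eth1 and eth2 (placeholder names) and that the open-iscsi version shipped with CentOS 5.7 supports iface bindings:

    # create one iface definition per storage NIC and bind it to the physical interface
    iscsiadm -m iface -I iface-storage1 --op=new
    iscsiadm -m iface -I iface-storage1 --op=update -n iface.net_ifacename -v eth1
    iscsiadm -m iface -I iface-storage2 --op=new
    iscsiadm -m iface -I iface-storage2 --op=update -n iface.net_ifacename -v eth2

    # discover and log in through one portal/iface pair at a time
    iscsiadm -m discovery -t sendtargets -p 192.168.5.1 -I iface-storage1
    iscsiadm -m discovery -t sendtargets -p 192.168.6.1 -I iface-storage2
    iscsiadm -m node -p 192.168.5.1:3260 -I iface-storage1 --login
    iscsiadm -m node -p 192.168.6.1:3260 -I iface-storage2 --login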

    Read the article

  • Modify styles in OneNote 2010 beta

    - by jay
    I'm running the OneNote 2010 beta and am wondering if there is a way to edit the default styles. If you change the default font, the styles won't be changed. Is there any template, registry key or file that allows the styles to be modified?

    Read the article

  • Decoding PAM configuration files ...

    - by Jamie
    Could someone point me to some (recent) documentation that would help me with decoding PAM configuration file lines like this: auth [success=2 default=ignore] pam_unix.so nullok_secure auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass I'm trying to get my Ubuntu box (testing 10.04 Server Beta 2) to use Active Directory, and the last step is to get PAM on the unix box to work, but I'm wary about making changes (and locking myself out) without understanding how to merge what I'm reading here with what ubuntu has implemented.
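    While waiting for proper documentation (pam.conf(5) covers this syntax), here is my reading of the bracketed control values in the two quoted lines, written as comments; treat it as an interpretation rather than an authoritative answer:

    # [success=2 default=ignore]:
    #   success=2      -> if pam_unix succeeds, count that as success and skip the next 2 modules in this stack
    #   default=ignore -> any other return code does not contribute to the stack's result; evaluation just continues
    auth [success=2 default=ignore] pam_unix.so nullok_secure
    # [success=1 default=ignore]:
    #   success=1      -> if pam_winbind succeeds, count that as success and skip the next 1 module
    auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass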

    Read the article

  • Connectivity issues with dual NIC machine in EC2

    - by Matt Sieker
    I'm trying to get some servers set up in EC2 in a Virtual Private Cloud. To do this, I have two subnets: 10.0.42.0/24 - Public subnet 10.0.83.0/24 - Private subnet To bridge these two, I have a Funtoo instance with a pair of NICs: eth0 10.0.42.10 eth1 10.0.83.10 Which has the following routing table: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.83.0 * 255.255.255.0 U 0 0 0 eth1 10.0.83.0 * 255.255.255.0 U 203 0 0 eth1 10.0.42.0 * 255.255.255.0 U 202 0 0 eth0 loopback * 255.0.0.0 U 0 0 0 lo default 10.0.42.1 0.0.0.0 UG 0 0 0 eth0 default 10.0.42.1 0.0.0.0 UG 202 0 0 eth0 An elastic IP is attached to the eth0 interface, and I can connect to it fine remotely. However, I cannot ping anything in the 10.0.83.0 subnet. For now iptables is not set up on the box, so there's no rules that would get in the way (Eventually this will be managed by Shorewall, but I should get basic connectivity done first) Subnet details from the VPC interface: CIDR: 10.0.83.0/24 Destination Target 10.0.0.0/16 local 0.0.0.0/0 [ID of eth1 on NAT box] Network ACL: Default Inbound: Rule # Port (Service) Protocol Source Allow/Deny 100 ALL ALL 0.0.0.0/0 ALLOW * ALL ALL 0.0.0.0/0 DENY Outbound: Rule # Port (Service) Protocol Destination Allow/Deny 100 ALL ALL 0.0.0.0/0 ALLOW * ALL ALL 0.0.0.0/0 DENY   CIDR: 10.0.83.0/24 VPC: Destination Target 10.0.0.0/16 local 0.0.0.0/0 [Internet Gateway ID] Network ACL: Default (replace) Inbound: Rule # Port (Service) Protocol Source Allow/Deny 100 ALL ALL 0.0.0.0/0 ALLOW * ALL ALL 0.0.0.0/0 DENY Outbound: Rule # Port (Service) Protocol Destination Allow/Deny 100 ALL ALL 0.0.0.0/0 ALLOW * ALL ALL 0.0.0.0/0 DENY I've been trying to work this out most of the evening, but I'm just stuck. I'm either missing something obvious, or am doing something very wrong. I would think I'd be able to ping from either interface on this box without issue. Hopefully some more pairs of eyes on this configuration will help. EDIT: I am an idiot. After I bothered to install nmap to run some more tests, I discover I can see the ports, and connect to them, pings are just being blocked.

    Read the article

  • Why did Preview stop working from Firefox?

    - by kloop
    I just installed Adobe Reader, and now when I click on a PDF in the browser, it is not opened in Preview. Adobe Reader doesn't work either, and I am stuck without the PDF being displayed at all. I right-clicked on a PDF, and Preview is still my default viewer. So this: http://www.ehow.com/how_6764524_make-preview-default-pdf-viewer.html is not helping. Any ideas? I want to be able to use Preview in Firefox.

    Read the article

  • TAB completion not working in ubuntu hardy heron

    - by Tutul
    I have recently installed Ubuntu Hardy and found that shell command completion with TAB doesn't work, even though the package 'bash-completion' is installed on my system. I guess it is related to dash being the default shell? Is there a way to use tab completion in dash? If there isn't, how can I change my default shell to bash?
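    A hedged sketch of the two things usually checked in this situation (not a confirmed fix for this particular box): whether the interactive login shell is bash at all, and whether the completion script is actually being sourced:

    # dash is normally only /bin/sh on Ubuntu; the login shell can be switched to bash with:
    chsh -s /bin/bash

    # with the bash-completion package installed, make sure this block is present
    # (and not commented out) in ~/.bashrc or /etc/bash.bashrc:
    if [ -f /etc/bash_completion ]; then
        . /etc/bash_completion
    fi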

    Read the article

  • radvd is not assigning prefix

    - by Samik
    I'm currently trying to setup IPv6 address auto-configuration with router advertisement daemon (radvd) on a virtual machine running CentOS 6.5. But the eth0 interface is not obtaining that prefix. I've obtained the ULA prefix from here. Contents of /etc/sysctl.conf # Kernel sysctl configuration file for Red Hat Linux # # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and # sysctl.conf(5) for more details. # Controls IP packet forwarding net.ipv4.ip_forward = 0 net.ipv6.conf.all.forwarding = 1 # Controls source route verification net.ipv4.conf.default.rp_filter = 1 # Do not accept source routing net.ipv4.conf.default.accept_source_route = 0 # Controls the System Request debugging functionality of the kernel kernel.sysrq = 0 # Controls whether core dumps will append the PID to the core filename. # Useful for debugging multi-threaded applications. kernel.core_uses_pid = 1 # Controls the use of TCP syncookies net.ipv4.tcp_syncookies = 1 # Disable netfilter on bridges. net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-arptables = 0 # Controls the default maxmimum size of a mesage queue kernel.msgmnb = 65536 # Controls the maximum size of a message, in bytes kernel.msgmax = 65536 # Controls the maximum shared segment size, in bytes kernel.shmmax = 68719476736 # Controls the maximum number of shared memory segments, in pages kernel.shmall = 4294967296 Contents of /etc/radvd.conf # NOTE: there is no such thing as a working "by-default" configuration file. # At least the prefix needs to be specified. Please consult the radvd.conf(5) # man page and/or /usr/share/doc/radvd-*/radvd.conf.example for help. # # interface eth0 { AdvSendAdvert on; MinRtrAdvInterval 3; MaxRtrAdvInterval 10; AdvDefaultPreference low; AdvHomeAgentFlag off; prefix fd8a:8d9d:808f:1::/64 { AdvOnLink on; AdvAutonomous on; AdvRouterAddr on; }; }; Contents of /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 HWADDR=52:54:00:74:d7:46 TYPE=Ethernet UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1 ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=dhcp IPV6INIT=yes IPV6_AUTOCONF=yes I've also enabled radvd at startup through chkconfig. Though I noticed that radvd is starting after interfaces are brought up. I've tried restarting the network service afterwards but still I get the following link-local address only #ip -6 addr show 1: lo: mtu 16436 inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: mtu 1500 qlen 1000 inet6 fe80::5054:ff:fe74:d746/64 scope link valid_lft forever preferred_lft forever Edit: Based on the answer given by Sander Steffann I still need clarification on some points but I'm posting here what worked. Contents of /etc/sysconfig/network NETWORKING=yes HOSTNAME=syslog-ng-server NETWORKING_IPV6=yes IPV6FORWARDING=yes Contents of /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 HWADDR=52:54:00:74:d7:46 TYPE=Ethernet UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1 ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=dhcp IPV6INIT=yes IPV6_AUTOCONF=yes IPV6FORWARDING=no Removed following line from /etc/sysctl.conf net.ipv6.conf.all.forwarding = 1 Contents of /etc/radvd.conf is as previous.
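    One small check that is not in the question: the radvd package ships a radvdump utility that prints router advertisements as they are seen on the host, which makes it easy to confirm whether the fd8a:8d9d:808f:1::/64 prefix is actually being announced (assuming radvdump is installed alongside radvd):

    # run on the CentOS box; it blocks and prints each router advertisement as it arrives
    radvdump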

    Read the article

  • How to set per user mail quota for postfix using policyd v2?

    - by ACHAL
    I have configured cluebringer 2.0.7 with MySQL and httpd, and all services are running well. Now I want to set a per-user quota for outgoing mail and restrict senders to a fixed number of messages. I have tried to set up a quota for my host r10.4reseller.org, but it is not working.

    Quota List
      Policy: Default Outbound
      Name: Default Outbound
      Track: Sender:user@domain
      Period: 60
      Verdict: REJECT
      Data:
      Disabled: no

    Quota Limits
      Type: MessageCount
      Counter Limit: 1
      Disabled: no

    Do I need to make any more settings for the quota?

    Read the article

  • Safari keeps asking permission to access the keychain.

    - by GameFreak
    Normally when I save a password in Safari it gets added to my login keychain without fuss (assuming it is already unlocked). But after I set a master password, the default keychain was changed to FileVaultMaster. When I set it back to login, Safari started asking for permission to access the keychain every time. To get back to the default behavior, should I choose "Always Allow", or is there something else I should do?
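    As an aside, the default keychain can also be inspected and switched back from Terminal with the security tool; a minimal sketch, assuming the standard login keychain file name:

      # show which keychain is currently the default
      security default-keychain
      # point the default back at the login keychain
      security default-keychain -s login.keychain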

    Read the article

  • Share 3G connection over WiFi-LAN network

    - by kush.impetus
    This is how I have established a network between my PC and my laptop at home (being a novice in networking, it took me a few days to achieve this), and it is working perfectly; I can easily share files between them.

    Laptop
      IP Address: 192.168.1.4
      Subnet mask: 255.255.255.0
      Default Gateway: 192.168.1.2

    Desktop
      IP Address: 192.168.1.5
      Subnet mask: 255.255.255.0
      Default Gateway: 192.168.1.2

    ASUS RT-N10+ Router
      IP Address: 192.168.1.4
      Default Gateway: 192.168.1.2

    I have connected the desktop PC to the router using a LAN cable, and the laptop to the router over WiFi. Both the PC and the laptop run Windows 7, are in the same HomeGroup, and have the same username/password. I have also connected the Ethernet cable to LAN port 1 of the router. Click here to view a graphical representation of the network (I can't post the image here because I don't have 10 reputation points).

    Now, what I want is to connect to the Internet using a 3G USB modem on one device and share it over the network with the other. I tried Huawei and Micromax 3G USB modems. Both obtain a new IP address whenever I connect to the Internet (that is, they have dynamic IPs); beyond that, both have a subnet mask of 255.255.255.255 and a default gateway of 0.0.0.0, so I cannot directly share Internet from the modem. The preferred DNS is blank for now on both the laptop and the PC.

    What I am planning to do is connect to the Internet on the laptop using the 3G modem and share the connection over the laptop's Wi-Fi (as a hotspot) using Connectify, which I have done already. That, I suppose, will broadcast a static IP to connect to. What I can't figure out is what changes I should make to the network settings of the router and the PC so that the PC connects to the Internet broadcast by Connectify. Is that possible in the first place? Please note that I am trying to implement the network without spending anything extra (for example on a USB WiFi adapter for the PC, which would of course have made life a lot easier for me). Thanks in advance.

    Read the article

  • AAC Sample Rate and Bit Rate for High Quality Audio?

    - by marco.ragogna
    What AAC sample rate and bit rate settings should I use to encode an audio track with quality comparable to 320 kbps MP3? I need to back up a DVD movie. The default settings for AAC are:

      Bitrate (KB/s): 128
      Sample Rate (HZ): 44100

    Should I set:

      Bitrate (KB/s): 320
      Sample Rate (HZ): 48000

    or are the defaults already good?
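    As a point of reference, here is a minimal sketch of encoding a track at those higher settings with ffmpeg; the file names are placeholders, and older ffmpeg builds may require "-strict experimental" for the built-in AAC encoder:

      # take only the audio stream and encode it as AAC at 320 kbps, resampled to 48 kHz
      ffmpeg -i input.vob -vn -c:a aac -b:a 320k -ar 48000 output.m4a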

    Read the article

  • Rsync and wildcards

    - by Jay White
    I am trying to back up both the "Last Session" and "Current Session" files for Google Chrome in one command, but using a wildcard doesn't seem to work. I am trying with the following command:

      rsync -e "ssh -i new.key" -r --verbose -tz --stats --progress --delete '/cygdrive/c/Users/jay/AppData/Local/Google/Chrome/User Data/Default/*Session' user@host:"/chrome\ sessions/"

    and get the following error:

      rsync: link_stat "/cygdrive/c/Users/jay/AppData/Local/Google/Chrome/User Data/Default/*Session" failed: No such file or directory (2)

    What am I doing wrong?
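    As an aside, single quotes prevent the shell from expanding the *, so rsync receives the literal pattern and looks for a file named "*Session". A minimal sketch of the same command with the wildcard left outside the quotes so the shell can expand it (paths as in the question):

      rsync -e "ssh -i new.key" -r --verbose -tz --stats --progress --delete \
        '/cygdrive/c/Users/jay/AppData/Local/Google/Chrome/User Data/Default/'*Session \
        user@host:"/chrome\ sessions/"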

    Read the article

  • Specifying network settings during SLES 11 auto installation

    - by banjer
    I'm setting up an autoinst.xml file for auto-installing SLES 11. I get prompted for the various interface settings per below, but they don't seem to stick once the server reboots. I don't think I have the XML defined correctly. I'm hoping someone has experience with this.

      <ask-list>
        <ask>
          <path>networking,dns,hostname</path>
          <question>Enter Hostname (server name)</question>
          <stage>initial</stage>
          <default>merkin</default>
        </ask>
        <ask>
          <path>networking,interfaces,interface,0,device</path>
          <question>Enter the primary ethernet device:</question>
          <stage>initial</stage>
          <default>eth0</default>
        </ask>
        <ask>
          <path>networking,interfaces,interface,0,ipaddr</path>
          <question>Enter the primary IP Address:</question>
          <stage>initial</stage>
        </ask>
        <ask>
          <path>networking,interfaces,interface,0,netmask</path>
          <question>Enter the Netmask Address:</question>
          <stage>initial</stage>
        </ask>
        <ask>
          <path>networking,routing,routes,route,0,gateway</path>
          <question>Enter the primary Gateway Address:</question>
          <stage>initial</stage>
        </ask>
      </ask-list>

    The first one, for the hostname, seems to stick just fine, but the rest do not. As an alternative, is there a way to stop the autoinstall at the section where you configure the network devices so that the user can take over? I was able to show the partition proposal, but I'm not sure how to do the same with the networking setup.

    Read the article

  • MacVim, Command-T: SEGV

    - by Ramon Tayag
    Details: OS X 10.7.4. I installed the latest MacVim via Homebrew:

      $ command-t brew install macvim
      ==> Downloading https://github.com/b4winckler/macvim/tarball/snapshot-64
      Already downloaded: /Library/Caches/Homebrew/macvim-7.3-64.tgz
      ==> ./configure --with-features=huge --with-tlib=ncurses --enable-multibyte --with-macarchs=x86_64 --enable-perlinterp --enable-pythoninterp --enable-rubyinterp --enable-t
      ==> make getenvy
      ==> make
      ==> Caveats
      MacVim.app installed to:
        /usr/local/Cellar/macvim/7.3-64
      To link the application to a normal Mac OS X location:
        brew linkapps
      or:
        ln -s /usr/local/Cellar/macvim/7.3-64/MacVim.app /Applications
      ==> Summary
      /usr/local/Cellar/macvim/7.3-64: 1733 files, 27M, built in 53 seconds

      $ command-t brew linkapps
      Linking /usr/local/Cellar/macvim/7.3-64/MacVim.app
      Finished linking. Find the links under ~/Applications.

      $ command-t ruby -v
      ruby 1.8.7 (2011-12-28 patchlevel 357) [universal-darwin11.0]

      $ command-t rvm list
      rvm rubies
         ree-1.8.7-2012.02 [ i686 ]
         ruby-1.8.7-p358 [ i686 ]
         ruby-1.9.2-p290 [ x86_64 ]
         ruby-1.9.2-p320 [ x86_64 ]
         ruby-1.9.3-p194 [ x86_64 ]
      # Default ruby not set. Try 'rvm alias create default <ruby>'.
      # => - current
      # =* - current && default
      #  * - default

      $ command-t cd ~/.vim/bundle/vim-command-t/ruby/command-t
      $ command-t ruby extconf.rb
      checking for ruby.h... yes
      creating Makefile

      $ command-t make
      cc -arch i386 -arch x86_64 -pipe -bundle -undefined dynamic_lookup -o ext.bundle ext.o match.o matcher.o -L. -L/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib -L. -arch i386 -arch x86_64 -lruby -lpthread -ldl -lobjc
      ld: warning: ignoring file ext.o, file was built for unsupported file format which is not the architecture being linked (i386)
      ld: warning: ignoring file match.o, file was built for unsupported file format which is not the architecture being linked (i386)
      ld: warning: ignoring file matcher.o, file was built for unsupported file format which is not the architecture being linked (i386)

      $ command-t mvim

    MacVim then opens here. But when I open Command-T, MacVim crashes and I see this in the command line:

      $ command-t
      dyld: lazy symbol binding failed: Symbol not found: _rb_intern2
        Referenced from: /Users/ramon/.vim/bundle/vim-command-t/ruby/command-t/ext.bundle
        Expected in: flat namespace
      dyld: Symbol not found: _rb_intern2
        Referenced from: /Users/ramon/.vim/bundle/vim-command-t/ruby/command-t/ext.bundle
        Expected in: flat namespace
      Vim: Caught deadly signal TRAP
      Vim: Finished.

    The problem I have is very similar to this, except that I switched to the system Ruby and still got the error.
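    As an aside, the "_rb_intern2" dyld error usually points at a C extension built against a different Ruby than the one the editor embeds; a minimal sketch of rebuilding Command-T's extension from scratch, assuming the system Ruby at /usr/bin/ruby is the one MacVim was linked with:

      cd ~/.vim/bundle/vim-command-t/ruby/command-t
      # regenerate the Makefile with the system Ruby, then rebuild cleanly
      /usr/bin/ruby extconf.rb
      make clean
      make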

    Read the article

  • Accessing guests on virtual network when connected to host via PPTP

    - by Viktor Elofsson
    I'm setting up a development machine which runs Ubuntu 12.04 and KVM for virtualization. I have a guest running Ubuntu 12.04 which can be accessed from the host via its IP address, which is assigned by libvirt. The guest can also access the internet, no problem there. However, now I want to set up PPTP so I can connect to the host (from my workstation running Windows 7) and directly access guests without relying on SSH port forwarding. I can connect from my W7 machine to the host (PPTP), but I cannot access any virtual machines (which are accessible from the host directly).

    Relevant configuration files:

    cat /etc/network/interfaces

      auto lo
      iface lo inet loopback

      # device: eth0
      auto eth0
      iface eth0 inet static
          address x.x.x.x
          broadcast x.x.x.x
          netmask x.x.x.x
          gateway x.x.x.x
          # default route to access subnet
          up route add -net x.x.x.x netmask x.x.x.x gw x.x.x.x eth0

    virsh net-edit default

      <network>
        <name>default</name>
        <uuid>xxxxxxxx-72ce-3c20-af0f-d3a010f1bef0</uuid>
        <forward mode='nat'/>
        <bridge name='virbr0' stp='on' delay='0' />
        <mac address='52:54:00:xx:xx:xx'/>
        <ip address='192.168.122.1' netmask='255.255.255.0'>
          <dhcp>
            <range start='192.168.122.2' end='192.168.122.254' />
            <host mac='52:54:00:yy:yy:yy' name='web1' ip='192.168.122.11' />
          </dhcp>
        </ip>
      </network>

    cat /etc/pptpd.conf (commented lines removed)

      # TAG: option
      #       Specifies the location of the PPP options file.
      #       By default PPP looks in '/etc/ppp/options'
      #
      option /etc/ppp/pptpd-options
      # TAG: logwtmp
      #       Use wtmp(5) to record client connections and disconnections.
      #
      logwtmp
      #(Recommended)
      localip 192.168.122.1
      remoteip 192.168.122.234-238,192.168.122.245

    cat /etc/ppp/chap-secrets*

      # Secrets for authentication using CHAP
      # client    server    secret        IP addresses
      xxxxx       *         yyyyyyyyyy    192.168.122.100

    I get the correct IP address when connecting from my W7 machine, but when I try to ping the virtual machine at 192.168.122.11 I get "Reply from 192.168.122.1: Destination port unreachable". It's probably something trivial I'm missing, but I can't for the life of me figure out what it is. So I'm turning to you, serverfault.
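    As an aside, one possibility worth testing (an assumption, not something stated in the question) is that the host's FORWARD chain does not cover traffic between the PPTP tunnel and the libvirt bridge; a minimal sketch of temporary rules and a forwarding check, using the interface names shown above:

      # allow forwarding between the PPTP tunnel and the libvirt bridge (non-persistent test rules)
      iptables -I FORWARD -i ppp+ -o virbr0 -j ACCEPT
      iptables -I FORWARD -i virbr0 -o ppp+ -j ACCEPT
      # confirm IPv4 forwarding is enabled on the host
      sysctl net.ipv4.ip_forward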

    Read the article

  • Cloudify: bootstrap-localcloud: operation failed?

    - by quanta
    OS: Gentoo, CentOS. Version: 2.1.0. Following the quick start guide, I got the error below when running bootstrap-localcloud:

      cloudify@default> bootstrap-localcloud
      STARTING CLOUDIFY MANAGEMENT
      2012-05-30 14:55:50,396 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - ;
      Caused by: org.cloudifysource.shell.commands.CLIException:
      Error while starting agent.
      Please make sure that another agent is not already running.
      Operation failed.

    What port is Cloudify using to check that the agent is running?

    PS: it's working fine when running on Windows.

    UPDATE: Wed May 30 22:37:30 ICT 2012

    Reply to @tamirkorem and @Itai Frenkel: I'm pretty sure, because this is the first time I have run that command on the 2 servers. More clearly, here's the output:

      cloudify@default> teardown-localcloud
      Teardown will uninstall all of the deployed services. Do you want to continue [y/n]?
      2012-05-30 22:43:33,145 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - Teardown failed. Failed to fetch the currently deployed applications list. For force teardown use the -force flag.
      Operation failed.

      cloudify@default> teardown-localcloud -force
      Teardown will uninstall all of the deployed services. Do you want to continue [y/n]?
      Failed to fetch the currently deployed applications list. Continuing teardown-localcloud.
      .2012-05-30 22:46:39,040 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - Teardown aborted, an agent was not found on the local machine.
      Operation failed.

    and this one is the detailed result:

      cloudify@default> bootstrap-localcloud --verbose
      NIC Address=127.0.0.1
      Lookup Locators=127.0.0.1:4172
      Lookup Groups=localcloud
      Starting agent and management processes:
      gs-agent.sh gsa.global.lus 0 gsa.lus 0 gsa.gsc 0 gsa.global.gsm 0 gsa.gsm_lus 1 gsa.global.esm 0 gsa.esm 1 >/dev/null 2>&1
      STARTING CLOUDIFY MANAGEMENT
      2012-05-30 22:36:12,870 WARNING [org.cloudifysource.shell.commands.AbstractGSCommand] - ;
      Caused by: org.cloudifysource.shell.commands.CLIException: Error while starting agent. Please make sure that another agent is not already running.
      Command executed: /usr/local/src/gigaspaces-cloudify-2.1.0-ga/bin/gs-agent.sh gsa.global.lus 0 gsa.lus 0 gsa.gsc 0 gsa.global.gsm 0 gsa.gsm_lus 1 gsa.global.esm 0 gsa.esm 1 >/dev/null 2>&1

    Reply to @Eliran Malka: there is no such process listening on port 4172:

      # netstat --protocol=inet -nlp
      Active Internet connections (only servers)
      Proto Recv-Q Send-Q Local Address        Foreign Address      State    PID/Program name
      tcp   0      0      127.0.0.1:9050       0.0.0.0:*            LISTEN   2363/tor
      tcp   0      0      127.0.0.1:3306       0.0.0.0:*            LISTEN   2331/mysqld
      tcp   0      0      127.0.0.1:631        0.0.0.0:*            LISTEN   2293/cupsd
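    As an aside, a minimal sketch of checking for a leftover agent before re-running the bootstrap, assuming the GigaSpaces agent script name and the lookup locator port shown in the verbose output above:

      # look for a leftover GigaSpaces/Cloudify agent process
      ps aux | grep -i "[g]s-agent"
      # check whether anything is listening on the lookup locator port (4172)
      netstat --protocol=inet -nlp | grep 4172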

    Read the article
