Search Results

Search found 4142 results on 166 pages for 'lost hobbit'.

Page 30/166 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • HTML5 on iPhone Safari - data stored by localStorage does not always persist. Why?

    - by Aerodyne
    Hi, I'm writing a simple iPhone web app that uses HTML5's localStorage. Tests on a 2G device show that data stored with localStorage does not persist after the Safari process is killed, even though the open Safari windows are remembered. The data is also lost in this case: I am on a different site in a different Safari window, then I switch back to the window showing my web app; Safari automatically refreshes the page when it loads, and the data is gone. Here is a simple test page:

        <html>
        <head>
        <meta name="viewport" content="height=device-height, width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" />
        </head>
        <body>
        <script>
        alert("1:" + localStorage.getItem("test"));
        localStorage.setItem("test", "123");
        alert("2:" + localStorage.getItem("test"));
        </script>
        </body>
        </html>

    As far as I understand, the data should persist. Can anyone shed some light on this behavior? What should I do to get persistence to work? Thanks! Tom.

    Read the article

  • How to "End Task" not "Kill" or "Terminate"?

    - by Luiscencio
    Hi community. I have a 3G card that provides internet to a remote computer. I have to run a program (provided with the card) to establish the connection. Since the connection is sometimes suddenly lost, I wrote a script that kills the program and reopens it so that the connection is re-established. However, certain versions of this program don't reset the connection when they are killed/terminated, only when they are closed properly. So I am looking for a script or program that "properly closes" a window, so I can close the program and reopen it whenever the connection is lost. This is the code that kills the program:

        Option Explicit
        Dim objWMIService, objProcess, colProcess
        Dim strComputer, strProcessKill
        strComputer = "."
        strProcessKill = "'Telcel3G.exe'"
        Set objWMIService = GetObject("winmgmts:" _
            & "{impersonationLevel=impersonate}!\\" _
            & strComputer & "\root\cimv2")
        Set colProcess = objWMIService.ExecQuery _
            ("Select * from Win32_Process Where Name = " & strProcessKill)
        For Each objProcess in colProcess
            objProcess.Terminate()
        Next
        WScript.Echo "Just killed process " & strProcessKill _
            & " on " & strComputer
        WScript.Quit
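    One approach worth sketching, as a suggestion rather than a confirmed fix: instead of WMI's Terminate() (a hard kill), ask the program's main window to close, which is what Task Manager's "End Task" does for a responsive GUI application. Below is a minimal C# sketch; it assumes Telcel3G.exe owns a visible main window, and the fallback to Kill() only fires when no window is available.

        using System.Diagnostics;

        class CloseTelcel
        {
            static void Main()
            {
                foreach (var p in Process.GetProcessesByName("Telcel3G"))
                {
                    // CloseMainWindow posts WM_CLOSE to the main window, asking the
                    // app to shut down properly, unlike Kill/Terminate.
                    if (!p.CloseMainWindow())
                        p.Kill(); // no main window to ask; fall back to a hard kill

                    p.WaitForExit(10000); // allow up to 10 seconds for a clean exit
                }
            }
        }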

    Read the article

  • Is OpenID too complicated?

    - by John Leidegren
    I'm beginning to seriously doubt the OpenID community, despite the fact that it works. I'm currently evaluating OpenID as an authentication service for 'this' site, and while the promises are great, I just can't get it to work, and I'm really lost. I ask the SO community to help me out here: give me answers and show me examples so I can leverage this the way it was meant to be used. My scenario is very typical. I want to authenticate users through a specific Google Apps domain: if you have access to this Google Apps domain, then you have access to my web application. Where I get lost is in all the prerequisites and dependencies involved. What is XRD? What is Yadis? Why do I need XRD and Yadis? What do I need to do to deploy OpenID authentication on my website? Also, this is really important to me: when I log in to SO, I use my Google account. When I click the login button I'm presented with a confirmation page where I grant SO the right to use my Google account credentials. Somehow, Google knows that it's "Stackoverflow.com" that's asking whether it's okay to log in, and I wish to know what manner of control I have over this little text. I intend to deploy OpenID on several different domains, and I would prefer that they all work without having to be individually configured with special parameters, such as secret API keys and whatnot. However, I don't know for sure whether such configuration is a prerequisite of OpenID, or of the Federated Login API that Google provides.

    Read the article

  • Windows Azure Evolution – Caching (Preview)

    - by Shaun
    Caching is a popular topic when we are building a high-performance, highly scalable system, on the cloud platform as well as in on-premise environments. In March 2011, Windows Azure AppFabric Caching was launched into production, providing an in-memory, distributed caching service in the cloud. Now, in the June 2012 update, the cache team announced a brand new caching solution on Windows Azure, called Windows Azure Caching (Preview), and the original Windows Azure AppFabric Caching was renamed Windows Azure Shared Caching.

    What Is Caching (Preview)?

    If you have used Shared Caching, you know that it is built from a pool of cache servers. To use it, you first create a cache account from the developer portal and specify the size you want, i.e. how much memory you can use to store the data you want cached. Then you can add, get and remove items from your code through the cache URL. Shared Caching is a multi-tenant system which hosts cached items across all users, so you don't know which server your data is located on. This caching mode works well and covers most cases, but it has some problems.

    The first is performance. Since Shared Caching is a multi-tenant system, every cache operation goes through the Shared Caching gateway and is then routed to the server that holds the data you are looking for. Even though there is some caching inside the Shared Caching system, it still takes time to get from your cloud service to the cache service.

    Second, the Shared Caching service is a black box to the developer. The only thing we know is our cache endpoint, and that's all. Some people are satisfied with this, since they don't want to care about anything underneath; but if you need to know more and want more control, that's impossible with Shared Caching.

    The last problem is price and cost-efficiency. You pay based on how much cache you request per month, but a web role or worker role seldom consumes all of the memory and CPU in its virtual machine (service instance). With Shared Caching we have to pay for the cache service while wasting some of our local memory and CPU.

    Because of the issues above, Microsoft now offers a new caching mode: Caching (Preview). Instead of a separate cache service, Caching (Preview) leverages the memory and CPU of our own cloud services (web roles and worker roles) as the cache cluster. Hence Caching (Preview) runs on virtual machines which host, or sit near, our cloud applications. Without any gateway or routing, and located in the same data center and the same racks, it provides much higher performance than Shared Caching. Caching (Preview) works side by side with our application, initialized and run as a Windows service on the virtual machines, invoked by startup tasks in our roles, so we get more information about it and more control over it. And since Caching (Preview) utilizes the memory and CPU of our existing cloud services, it's free: what we pay is the original compute price, and the resources on each machine are used more efficiently.

    Enable Caching (Preview)

    It's very simple to enable Caching (Preview) in a cloud service. Let's create a new Windows Azure cloud project in Visual Studio and add an ASP.NET web role. Then open the role settings and select the Caching page.
    This is where we enable and configure Caching (Preview) on a role. To enable it, just tick the "Enable Caching (Preview Release)" check box. Then we need to specify which caching cluster mode we want to use. There are two modes: co-located and dedicated. Co-located mode uses memory in the instances that run our cloud service (web role or worker role); with this mode we must specify what percentage of memory will be used as cache. The default value is 30%, so make sure it will not affect the role's business execution. Dedicated mode uses all the memory in the virtual machine as cache. In fact it reserves some for the operating system, Azure hosting, etc., but it tries to use as much of the available memory as possible.

    As you can see, Caching (Preview) is defined per role, which means all instances of a role apply the same setting and act as a single cache pool, and you consume the pool by specifying the name of the role, as I will demonstrate later. A Windows Azure project can also have more than one role with Caching (Preview) enabled, in which case we get more caches. For example, say I have a web role with 30% co-located caching and a worker role with dedicated caching. If I have 3 instances of the web role and 2 instances of the worker role, I have two caches: cache 1 is contributed by the three web role instances, and cache 2 by the two worker role instances. We can then add items into cache 1 and retrieve them from both web role code and worker role code, but items stored in cache 1 cannot be retrieved from cache 2, since the two caches are isolated.

    Back in Visual Studio, we specify 30% co-located cache and use the local storage emulator to store the cache cluster's runtime status. At the bottom we can specify named caches; for now we just use the default one. Now we have enabled Caching (Preview) in our web role settings. Next, let's have a look at how to consume our cache.

    Consume Caching (Preview)

    Caching (Preview) can only be consumed by roles in the same cloud service. As I mentioned earlier, a cache contributed by a web role can be connected from a worker role if they are in the same cloud service, but you cannot consume a Caching (Preview) cache from other cloud services. This is different from Shared Caching, which is open to all services that have the connection URL and authentication token.

    To consume Caching (Preview) we need to add some references to our project, as well as some configuration in the Web.config. NuGet makes our life easy: right-click the web role project and select "Manage NuGet Packages", then search for the package named "WindowsAzure.Caching" and install "Windows Azure Caching Preview" from the package list. It downloads all necessary references from the NuGet repository and updates our Web.config as well. Open the Web.config of the web role and find the "dataCacheClients" node. Under this node we specify the cache clients we are going to use; each cache client uses a role name to identify and find its cache. Since only this web role has Caching (Preview) enabled, I pasted the current role name into the configuration.
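    For reference, here is a sketch of what that Web.config section might look like after the NuGet package has updated it. Treat it as an illustration rather than the exact schema, and note that the role name "WebRole1" is a placeholder for whichever role has Caching (Preview) enabled:

        <dataCacheClients>
          <dataCacheClient name="default">
            <!-- autoDiscover locates the cache cluster by the name of the role hosting it -->
            <autoDiscover isEnabled="true" identifier="WebRole1" />
          </dataCacheClient>
        </dataCacheClients>

    Then, in the default page I will add some code to show how to use the cache.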
    I will have a textbox on the page where the user can enter his or her name, and a button that generates an email address for that name. In the backend code I check whether the name is already in the cache; if it is, I return the email immediately. Otherwise, I sleep the thread for 2 seconds to simulate the latency, then add the entry to the cache and return it to the page.

        protected void btnGenerate_Click(object sender, EventArgs e)
        {
            // check if a name is specified
            var name = txtName.Text;
            if (string.IsNullOrWhiteSpace(name))
            {
                lblResult.Text = "Error. Please specify name.";
                return;
            }

            bool cached;
            var sw = new Stopwatch();
            sw.Start();

            // create the cache factory and cache
            var factory = new DataCacheFactory();
            var cache = factory.GetDefaultCache();

            // check if the name specified is in the cache
            var email = cache.Get(name) as string;
            if (email != null)
            {
                cached = true;
                sw.Stop();
            }
            else
            {
                cached = false;
                // simulate the latency
                Thread.Sleep(2000);
                email = string.Format("{0}@igt.com", name);
                // add to cache
                cache.Add(name, email);
            }

            sw.Stop();
            lblResult.Text = string.Format(
                "Cached = {0}. Duration: {1}s. {2} => {3}",
                cached, sw.Elapsed.TotalSeconds.ToString("0.00"), name, email);
        }

    Caching (Preview) can be used on the local emulator, so we just press F5. The first time I enter my name it takes about 2 seconds to get the email back, since it is not yet in the cache; but if I enter the same name again it comes back at once, from the cache. Since Caching (Preview) is distributed across all instances of the role, we can scale it out by scaling out our web role. Use 2 instances, tweak the code a little to show the current instance ID on the page, and try again: the cached value can be retrieved even though it was added by another instance.

    Consume Caching (Preview) Across Roles

    As I mentioned, Caching (Preview) can be consumed by all other roles within the same cloud service. For example, let's add another web role to our cloud solution and add the same code to its default page. In its Web.config we point the cache client at the role enabled earlier, by specifying that role's name. Then we start the solution locally and go to web role 1, enter a name, and let it generate the email. Since there's no cache entry for this name, it takes about 2 seconds, but it saves the email into the cache. Then we go to web role 2 and enter the same name: it retrieves the email saved by web role 1 and returns it very quickly. Finally, we can upload our application to Windows Azure and test again. Make sure you change the cache cluster status storage account to a real Azure storage account.

    More Awesome Features

    As an in-memory distributed caching solution, Caching (Preview) has some fancy features I would like to highlight here. The first is high-availability support. This is the first time I have heard of a distributed cache supporting high availability. In the distributed-cache world, if a cache node fails, the data it stored is lost; this behavior was introduced by Memcached and is followed by almost all distributed cache products. But Caching (Preview) provides high availability, which means you can specify whether a named cache should be backed up automatically. If so, the data belonging to that named cache is replicated on another instance of the same role.
    Then, if one of the instances fails, the data can be retrieved from its backup instance. To enable backup, open the Caching page in Visual Studio and, for the named cache you want to back up, change the Backup Copies value from 0 to 1. The only valid values are 0 and 1: "0" means no backup and no high availability, while "1" means high availability with the data backed up onto another instance.

    There are some things to keep in mind when using the high-availability feature. First, high availability does NOT mean the cached data can never be lost, whatever the failure. For example, if a role with caching enabled has 10 instances and 9 of them fail, most of the cached data will be lost, since a primary and its backup instance may fail together. Normally this should not happen, as Microsoft guarantees that the backup cache is placed on an instance in a different fault domain. Second, enabling backup means storing two copies of your data: if you think 100MB of memory is enough for your cache, you need at least 200MB once backup is enabled.

    Besides high availability, Caching (Preview) supports more of the features introduced in Windows Server AppFabric Caching than Windows Azure Shared Caching does. It supports local cache with notifications, and it supports both absolute and sliding-window expiration types. Caching (Preview) also supports the Memcached protocol: if you have an application based on Memcached, you can use Caching (Preview) without any code changes; all you need to change is the configuration of how you connect to the cache. And, similar to Windows Azure Shared Caching, Microsoft also offers out-of-the-box ASP.NET session state and output cache providers on top of Caching (Preview).

    Summary

    Caching is a very important component when we build a cloud-based application. In the June 2012 update, Microsoft provides a new cache solution named Caching (Preview). Unlike the existing Windows Azure Shared Caching, Caching (Preview) runs the cache cluster within the role instances we have deployed to the cloud. It gives us more control, more performance and more cost-efficiency. So now we have two caching solutions in Windows Azure: Shared Caching and Caching (Preview). If you need a central cache service that can be used by many cloud services and web sites, you have to use Shared Caching; but if you only need a fast, nearby distributed cache, you'd be better off with Caching (Preview).

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Generating EF Code First model classes from an existing database

    - by Jon Galloway
    Entity Framework Code First is a lightweight way to "turn on" data access for a simple CLR class. As the name implies, the intended use is that you write the code first and think about the database later. However, I really like the way Entity Framework Code First works, and I want to use it in existing projects and projects with pre-existing databases. For example, MVC Music Store comes with a SQL Express database that's pre-loaded with a catalog of music (including genres, artists, and songs), and while it may eventually make sense to load that seed data from a different source, for the MVC 3 release we wanted to keep using the existing database. While I'm not getting the full benefit of Code First - writing code which drives the database schema - I can still benefit from the simplicity of the lightweight code approach.

    Scott Guthrie blogged about how to use Entity Framework with an existing database, looking at how you can override the Entity Framework Code First conventions so that it can work with a database which was created following other conventions. That gives you the information you need to create the model classes manually. However, it turns out that with Entity Framework 4 CTP 5, there's a way to generate the model classes from the database schema. Once the grunt work is done, of course, you can go in and modify the model classes as you'd like, but you can save the time and frustration of figuring out things like mapping SQL database types to .NET types. Note that this template requires Entity Framework 4 CTP 5 or later. You can install EF 4 CTP 5 here.

    Step One: Generate an EF Model from your existing database

    The code generation system in Entity Framework works from a model. You can add a model to your existing project and delete it when you're done, but I think it's simpler to just spin up a separate project to generate the model classes. When you're done, you can delete the project without affecting your application, or you may choose to keep it around in case you have other database schema updates which require model changes.

    I chose to add the model classes to the Models folder of a new MVC 3 application. Right-click the folder and select "Add / New Item...". Next, select ADO.NET Entity Data Model from the Data Templates list, and name it whatever you want (the name is unimportant). Next, select "Generate from database." This is important - it's what kicks off the next few steps, which read your database's schema.

    Now it's time to point the Entity Data Model Wizard at your existing database. I'll assume you know how to find your database - if not, I covered that a bit in the MVC Music Store tutorial section on Models and Data. Select your database, uncheck "Save entity connection settings in Web.config" (since we won't be using them within the application), and click Next. Now you can select the database objects you'd like modeled. I just selected all tables and clicked Finish. And there's your model. If you want, you can make additional changes here before going on to generate the code.

    Step Two: Add the DbContext Generator

    Like most code generation systems in Visual Studio lately, Entity Framework uses T4 templates, which allow some control over how the code is generated. K. Scott Allen recently wrote a detailed article on T4 Templates and the Entity Framework on MSDN, if you'd like to know more. Fortunately for us, there's already a template that does just what we need without any customization.
    Right-click a blank space in the Entity Framework model surface and select "Add Code Generation Item...". Select the Code group in the Installed Templates section and pick the ADO.NET DbContext Generator. If you don't see this listed, make sure you've got EF 4 CTP 5 installed and that you're looking at the Code templates group. Note that the DbContext Generator template is similar to the EF POCO template which came out last year, but with "fix up" code (unnecessary in EF Code First) removed.

    As soon as you do this, you'll see two terrifying Security Warnings - unless you click the "Do not show this message again" checkbox the first time. They will also be displayed (twice) every time you rebuild the project, so I checked the box and no immediate harm befell my computer (fingers crossed!).

    Here's the payoff: two templates (filenames ending with .tt) have been added to the project, and they've generated the code I needed. The "MusicStoreEntities.Context.tt" template built a DbContext class which holds the entity collections, and the "MusicStoreEntities.tt" template built a separate class for each table I selected earlier. We'll customize them in the next step. I recommend copying all the generated .cs files into your application at this point, since accidentally rebuilding the generation project will overwrite your changes if you leave them there.

    Step Three: Modify and use your POCO entity classes

    Note: I made a bunch of tweaks to my POCO classes after they were generated. You don't have to do any of this, but I think it's important that you can - they're your classes, and EF Code First respects that. Modify them as you need for your application, or don't.

    The Context class derives from DbContext, which is what turns on the EF Code First features. It holds a DbSet<T> for each entity. Think of DbSet<T> as a simple List<T>, but with Entity Framework features turned on.

        //------------------------------------------------------------------------------
        // <auto-generated>
        //     This code was generated from a template.
        //
        //     Changes to this file may cause incorrect behavior and will be lost if
        //     the code is regenerated.
        // </auto-generated>
        //------------------------------------------------------------------------------

        namespace EF_CodeFirst_From_Existing_Database.Models
        {
            using System;
            using System.Data.Entity;

            public partial class Entities : DbContext
            {
                public Entities()
                    : base("name=Entities")
                {
                }

                public DbSet<Album> Albums { get; set; }
                public DbSet<Artist> Artists { get; set; }
                public DbSet<Cart> Carts { get; set; }
                public DbSet<Genre> Genres { get; set; }
                public DbSet<OrderDetail> OrderDetails { get; set; }
                public DbSet<Order> Orders { get; set; }
            }
        }

    It's a pretty lightweight class as generated, so I just took out the comments, set the namespace, removed the constructor, and formatted it a bit. Done. If I wanted, though, I could have added or removed DbSets, overridden conventions, etc.

        using System.Data.Entity;

        namespace MvcMusicStore.Models
        {
            public class MusicStoreEntities : DbContext
            {
                public DbSet<Album> Albums { get; set; }
                public DbSet<Genre> Genres { get; set; }
                public DbSet<Artist> Artists { get; set; }
                public DbSet<Cart> Carts { get; set; }
                public DbSet<Order> Orders { get; set; }
                public DbSet<OrderDetail> OrderDetails { get; set; }
            }
        }

    Next, it's time to look at the individual classes. Some of mine were pretty simple - for the Cart class, I just needed to remove the header and clean up the namespace.

        //------------------------------------------------------------------------------
        // <auto-generated>
        //     This code was generated from a template.
        //
        //     Changes to this file may cause incorrect behavior and will be lost if
        //     the code is regenerated.
        // </auto-generated>
        //------------------------------------------------------------------------------

        namespace EF_CodeFirst_From_Existing_Database.Models
        {
            using System;
            using System.Collections.Generic;

            public partial class Cart
            {
                // Primitive properties
                public int RecordId { get; set; }
                public string CartId { get; set; }
                public int AlbumId { get; set; }
                public int Count { get; set; }
                public System.DateTime DateCreated { get; set; }

                // Navigation properties
                public virtual Album Album { get; set; }
            }
        }

    I did a bit more customization on the Album class. Here's what was generated:

        //------------------------------------------------------------------------------
        // <auto-generated>
        //     This code was generated from a template.
        //
        //     Changes to this file may cause incorrect behavior and will be lost if
        //     the code is regenerated.
        // </auto-generated>
        //------------------------------------------------------------------------------

        namespace EF_CodeFirst_From_Existing_Database.Models
        {
            using System;
            using System.Collections.Generic;

            public partial class Album
            {
                public Album()
                {
                    this.Carts = new HashSet<Cart>();
                    this.OrderDetails = new HashSet<OrderDetail>();
                }

                // Primitive properties
                public int AlbumId { get; set; }
                public int GenreId { get; set; }
                public int ArtistId { get; set; }
                public string Title { get; set; }
                public decimal Price { get; set; }
                public string AlbumArtUrl { get; set; }

                // Navigation properties
                public virtual Artist Artist { get; set; }
                public virtual Genre Genre { get; set; }
                public virtual ICollection<Cart> Carts { get; set; }
                public virtual ICollection<OrderDetail> OrderDetails { get; set; }
            }
        }

    I removed the header, changed the namespace, and removed some of the navigation properties. One nice thing about EF Code First is that you don't have to have a property for each database column or foreign key. In the Music Store sample, for instance, we build the app up using Code First and start with just a few columns, adding in fields and navigation properties as the application needs them. EF Code First handles the columns we've told it about and doesn't complain about the others. Here's the basic class:

        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;
        using System.Collections.Generic;

        namespace MvcMusicStore.Models
        {
            public class Album
            {
                public int AlbumId { get; set; }
                public int GenreId { get; set; }
                public int ArtistId { get; set; }
                public string Title { get; set; }
                public decimal Price { get; set; }
                public string AlbumArtUrl { get; set; }
                public virtual Genre Genre { get; set; }
                public virtual Artist Artist { get; set; }
                public virtual List<OrderDetail> OrderDetails { get; set; }
            }
        }

    It's my class, not Entity Framework's, so I'm free to do what I want with it.
    I added a bunch of MVC 3 annotations for scaffolding and validation support, as shown below:

        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;
        using System.Collections.Generic;

        namespace MvcMusicStore.Models
        {
            [Bind(Exclude = "AlbumId")]
            public class Album
            {
                [ScaffoldColumn(false)]
                public int AlbumId { get; set; }

                [DisplayName("Genre")]
                public int GenreId { get; set; }

                [DisplayName("Artist")]
                public int ArtistId { get; set; }

                [Required(ErrorMessage = "An Album Title is required")]
                [StringLength(160)]
                public string Title { get; set; }

                [Required(ErrorMessage = "Price is required")]
                [Range(0.01, 100.00, ErrorMessage = "Price must be between 0.01 and 100.00")]
                public decimal Price { get; set; }

                [DisplayName("Album Art URL")]
                [StringLength(1024)]
                public string AlbumArtUrl { get; set; }

                public virtual Genre Genre { get; set; }
                public virtual Artist Artist { get; set; }
                public virtual List<OrderDetail> OrderDetails { get; set; }
            }
        }

    The end result was that I had working EF Code First model code for the finished application. You can follow along through the tutorial to see how I built up to the finished model classes, starting with simple 2-3 property classes and building up to the full working schema. Thanks to Diego Vega (on the Entity Framework team) for pointing me to the DbContext template.

    Read the article

  • 'The RPC server is unavailable' when converting a physical ISA/Forefront TMG machine to virtual (P2V) in SCVMM

    - by Goran B.
    When I try to convert a physical ISA/TMG machine to virtual using SCVMM, I keep getting an error in the "Collect machine configuration" step ('Scan Now' button):

        VMM is unable to complete the request. The connection to the agent MACHINE_NAME was lost.
        Ensure that the computer MACHINE_NAME exists on the network, WMI service and the agent
        are installed and running and that a firewall is not blocking HTTP and WMI traffic.
        ID: 3157
        Details: The RPC server is unavailable (0x800706BA)

    Firewall rules allow RPC traffic from the SCVMM machine to the ISA/TMG machine.

    Read the article

  • ubuntu 12.04 is unresponsive

    - by Andreas Schiemann
    I've been using Ubuntu 12.04 for 3 weeks. Today it became unresponsive and I lost the work I had open. The cursor showed the hand icon, as if I had selected something, though in fact I never did and was just reading an article online. I finally powered down the machine and then turned it back on. In Windows there is the Task Manager, where one can see what's taking up CPU or RAM. Is there such a program in Ubuntu 12.04, and if not, can I install an app that serves the same purpose?

    Read the article

  • Convert info pages to man pages

    - by mbac32768
    I was invited to re-post this question with less opinion, so if it seems familiar, that's why. How can I convert info pages into man pages? I used to have a shell one-liner that flattened an entire info document into a single flat page, suitable for navigating with less, but I seem to have lost it. Thanks!
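    For what it's worth, here is a sketch of one such one-liner, assuming GNU texinfo's info reader ("coreutils" is just an example topic; --subnodes recursively dumps all the menu items, and -o - writes the result to stdout):

        info --subnodes -o - coreutils | less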

    Read the article

  • Two network adapters on Ubuntu Server 9.10 - Can't have both working at once?

    - by Rob
    I'm trying to set up two network adapters in Ubuntu (server edition) 9.10: one for the public internet, the other for a private LAN. During the install, I was asked to pick a primary network adapter (eth0 or eth1). I chose eth0, gave the installer the details listed below in the contents of /etc/network/interfaces, and carried on. I've been using this adapter with these settings for the last few days, and everything's been fine.

    Today, I decided it was time to set up the local adapter. I edited /etc/network/interfaces to add the details for eth1 (see below) and restarted networking with sudo /etc/init.d/networking restart. After this, attempting to ping the machine using its external IP address fails, but I can ping its local IP address. If I bring eth1 down using sudo ifdown eth1, I can successfully ping the machine via its external IP address again (but obviously not its internal IP address). Bringing eth1 back up returns us to the original problem state: external IP not working, internal IP working.

    Here's my /etc/network/interfaces (I've removed the external IP information, but these settings are unchanged from when it worked):

        rob@rhea:~$ cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary (public) network interface
        auto eth0
        iface eth0 inet static
            address xxx.xxx.xxx.xxx
            netmask xxx.xxx.xxx.xxx
            network xxx.xxx.xxx.xxx
            broadcast xxx.xxx.xxx.xxx
            gateway xxx.xxx.xxx.xxx

        # The secondary (private) network interface
        auto eth1
        iface eth1 inet static
            address 192.168.99.4
            netmask 255.255.255.0
            network 192.168.99.0
            broadcast 192.168.99.255
            gateway 192.168.99.254

    I then do this:

        rob@rhea:~$ sudo /etc/init.d/networking restart
         * Reconfiguring network interfaces...       [ OK ]
        rob@rhea:~$ sudo ifup eth0
        ifup: interface eth0 already configured
        rob@rhea:~$ sudo ifup eth1
        ifup: interface eth1 already configured

    Then, from another machine:

        C:\Documents and Settings\Rob>ping [external ip]
        Pinging [external ip] with 32 bytes of data:
        Request timed out.
        Request timed out.
        Request timed out.
        Request timed out.
        Ping statistics for [external ip]:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

    Back on the Ubuntu server in question:

        rob@rhea:~$ sudo ifdown eth1

    ... and again on the other machine:

        C:\Documents and Settings\Rob>ping [external ip]
        Pinging [external ip] with 32 bytes of data:
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Reply from [external ip]: bytes=32 time<1ms TTL=63
        Ping statistics for [external ip]:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

    So... what am I doing wrong?
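    A side note, offered as a guess rather than a confirmed diagnosis: the interfaces file above declares a gateway on both NICs, and a machine normally has only one default gateway, so the eth1 gateway can displace the default route used for external traffic whenever eth1 comes up. A sketch of the eth1 stanza with the gateway line dropped (the addresses are the ones from the question):

        # The secondary (private) network interface - no default gateway here
        auto eth1
        iface eth1 inet static
            address 192.168.99.4
            netmask 255.255.255.0
            network 192.168.99.0
            broadcast 192.168.99.255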

    Read the article

  • What features does enabling ACPI 2.0 support in BIOS enable?

    - by glenneroo
    So I was booting up my box and, for some odd reason, my BIOS had lost its settings (again), resetting everything to defaults. I was digging around making sure things were configured to my liking and noticed the option to enable ACPI 2.0 support. It was disabled by default, but I was wondering: do I need ACPI 2.0 support? The motherboard is an ASUS M3A79-T Deluxe. I should also note that I primarily use Windows 7.

    Read the article

  • HTTPS Stunnel and Haproxy

    - by panalbish
    I am trying to use stunnel in front of HAProxy for SSL support. The SSL certificates are in place per the stunnel configuration, and I am able to get an HTTPS connection, but every time I use HTTPS the session gets lost. I am not using Tomcat's 8443 port to serve the secure content. Is it possible to get an HTTPS connection using only stunnel and HAProxy? My requirement is to have an HTTPS connection once the user has logged in.

    Read the article

  • OneNote 2007 - recommendation(s) of a good place/tutorials to learn features

    - by studiohack23
    I'm pretty familiar with Office 2007; however, I have just recently acquired OneNote 2007. It is such a big and powerful tool that I'm pretty much lost on its features and how to use them, and I don't really know where to start. I'm looking for recommendations on a good place to learn more about OneNote, its features, and what it does. For what it's worth, I'm a student, so student perspectives on how to use/learn OneNote would be awesome! Thanks!

    Read the article

  • Wireless Router will not allow connection

    - by Chris
    I am working with a RangeMax(TM) Wireless Router WPN824 v2, and it will not allow a wireless connection, nor a wired one. I am at an actual loss: with no way into the router I cannot test settings. I have unplugged it, left it off for 32 hours, and reset it, with no luck. Any ideas? Could the router just be done-zo?

    Read the article

  • How do I use the awesome window manager?

    - by Jason Baker
    I've installed awesome on my Ubuntu laptop, and I like it. But I feel kind of lost. I don't know any keyboard shortcuts and the man pages aren't really any help (for instance, what does Mod4 mean?). Is there any kind of brief introduction to awesome I can read?

    Read the article

  • LTO(4) tape shelf life estimation?

    - by emilp
    LTO tapes, Maxell in this case, are often marketed as having 30 years or more of shelf life when stored under "optimal conditions". Is there a way to get a good estimate of the shelf life, given parameters such as relative humidity, temperature, etc.? Obsolescence of the tapes aside, is there a way of determining the impact on shelf life of any deviation from the optimal? In other words, how many years are lost when storing, say, 1 degree above the specified range? Regards, Emil

    Read the article

  • Removing duplicate files, keeping only the newest file

    - by pinkie_d_pie_0228
    I'm trying to clean up a photo dump folder, in which several files are duplicated but with different filenames or lost in subfolders. I've looked at tools like rmlint, duff and fdupes, but I can't seem to find a way to have them keep only the file with the most recent timestamp. I suspect I have to postprocess the results, but I don't even know where to start to do this. Can anyone guide me on how to get the duplicate files list and delete everything but the newest file?
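    If postprocessing does turn out to be necessary, here is a minimal Python sketch of the whole job (hash the files, group duplicates, keep only the newest modification time in each group). It is a sketch under the assumption that "newest" means latest mtime, and it only prints what it would delete until you swap the print for os.remove:

        import hashlib
        import os
        import sys
        from collections import defaultdict

        def file_hash(path, chunk=1 << 20):
            """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while True:
                    block = f.read(chunk)
                    if not block:
                        break
                    h.update(block)
            return h.hexdigest()

        # Group files by (size, content hash); size is a cheap first-pass key.
        groups = defaultdict(list)
        root = sys.argv[1]  # the photo dump folder to scan
        for dirpath, _, names in os.walk(root):
            for name in names:
                p = os.path.join(dirpath, name)
                groups[(os.path.getsize(p), file_hash(p))].append(p)

        for paths in groups.values():
            if len(paths) > 1:
                # newest mtime first; everything after it is a deletion candidate
                paths.sort(key=os.path.getmtime, reverse=True)
                for old in paths[1:]:
                    print("would remove:", old)  # replace with os.remove(old) once verified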

    Read the article

  • Unable to establish remote access to workplace PC (Windows 7)

    - by sam
    I am successfully connected to my workplace via VPN, but when I try to connect to my PC via Remote Desktop it keeps asking for credentials, even though I am providing the right ones. This works fine from Windows XP, but I am unable to connect using Windows 7. Also, after establishing the VPN I lose my internet access. Any ideas?

    Read the article

  • Mac OSX Mountain Lion Rails Postgres (wiped out a lot of stuff)

    - by kurtybot
    Having trouble with Postgres since I upgraded to Mountain Lion (and now regretting it). I've lost hours trying to fix it, to no avail. When running rails server and then visiting my app at 0.0.0.0:3000, I get this error:

        psql: could not connect to server: No such file or directory.
        Is the server running locally and accepting connections on
        Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?

    I've tried updating Xcode.
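    One commonly suggested workaround, offered here as a hedged sketch rather than a confirmed fix: the error shows the client looking for a Unix socket under /var/pgsql_socket, a path used by Apple's bundled PostgreSQL client, so forcing a TCP connection sidesteps the missing socket. A sketch of a database.yml entry (the database name is a placeholder):

        development:
          adapter: postgresql
          database: myapp_development
          host: localhost   # forces TCP instead of the /var/pgsql_socket Unix socket

    It may also be worth checking that your own PostgreSQL install (e.g. Homebrew's) comes before /usr/bin in PATH, so rails and psql aren't picking up Apple's client.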

    Read the article

  • Convert mp4 to flv. Adobe Media Encoder failing.

    - by Derek Adair
    Hi, I am trying to encode a .mp4 to a .flv. I've tried using the Adobe Media Encoder, but the video is lost in the conversion process, leaving me with a black screen and some audio. I have yet to find something that can do this successfully for free. I found ffmpeg, which looks promising... but I can't figure out how to get it to work. Any direction would be greatly appreciated. Thanks.
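    For what it's worth, a minimal ffmpeg invocation to try, sketched under the assumption of a stock ffmpeg build (the file names are placeholders; the FLV container only accepts certain audio sample rates, hence -ar 44100):

        ffmpeg -i input.mp4 -ar 44100 output.flv

    ffmpeg infers the FLV video and audio codecs from the .flv extension; if the video still comes out black, explicitly forcing the video codec with -vcodec flv is another thing to try.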

    Read the article

  • Intermittent loss of internet connectivity with good wireless signal

    - by rsheart
    I recently bought an HP TouchSmart running Windows 7. At random times, internet connectivity is lost while the wireless signal remains strong. The only way to restore connectivity is to restart the PC; nothing else works. I have many other PCs on the same router, also running Windows 7, with no problems there. I have changed routers and the problem remains, and I have reinstalled the network drivers with no help. Any thoughts?

    Read the article

  • Where can I buy a Stereo audio to 3.5mm adapter?

    - by iftrue
    I need a stereo (6.35mm) to PC audio (3.5mm) adapter, and I'd like it to have an inch or two of cable so that yanking the connector doesn't break the audio port the 3.5mm end is plugged into. I used to own one of these, but I lost the adapter. Where can I buy something like this online? I can only find solid adapters or 25' cables.

    Read the article

  • Deleting a tag from lots of images at once in Aperture?

    - by Bart B
    Aperture makes it easy to tag lots of pictures at once: just select all the images, then drag and drop tags from the tags palette onto the selected images. But when you need to do the reverse, I can't find a way other than editing each image individually. Is there a way I could select multiple images at once and strip a tag out of all of them? Thanks, Bart.

    Read the article
