Search Results

Search found 14593 results on 584 pages for 'pat inside'.


  • 1360x768x32 Resolution in Windows 8 in VirtualBox

    - by mbcrump
    My Lenovo ThinkPad's built-in screen maxes out at 1366x768x32. I wanted to use that same resolution with the Windows 8 Developer Preview inside of VirtualBox. So, what did I do?

    - Downloaded the latest build of VirtualBox, v4.1.6 (because it supports Windows 8 x64)
    - Installed the Windows 8 Developer Preview in VirtualBox as I did earlier this year
    - Installed Guest Additions
    - Ran the CustomVideoMode command described in this blog post

    …and quickly found out that I didn't have the option to use 1366x768x32 inside of VirtualBox, despite using the following command:

        VBoxManage.exe setextradata "[Virtual Machine Name]" CustomVideoMode1 1920x1080x32

    So how do you fix it? If you do a little research on this resolution, you will find it is a non-standard resolution. Even if you run the command:

        VBoxManage.exe setextradata "[Virtual Machine Name]" CustomVideoMode1 1366x768x32

    it will still not show that resolution inside of VirtualBox. You can fix this easily by using the following command instead:

        VBoxManage.exe setextradata "[Virtual Machine Name]" CustomVideoMode1 1360x768x32

    Notice that the command uses a width of 1360 instead of 1366: custom video modes need a width that is a multiple of 8, and 1366 is not, so 1360 is the nearest width that works. Now if you go to your display options for Windows 8 inside of VirtualBox you can select that resolution. Anyway, I hope this helps someone with a similar problem. I created this blog partially for myself, but it is always nice to help my fellow developers. Thanks for reading. Subscribe to my feed
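
    To double-check what the VM will pick up, you can also list the extra data you have set; a quick sketch (the VM name is a placeholder):

        VBoxManage.exe setextradata "[Virtual Machine Name]" CustomVideoMode1 1360x768x32
        VBoxManage.exe getextradata "[Virtual Machine Name]" enumerate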

    Read the article

  • forward same port but for two different IPs (cisco)

    - by Colin
    Hi! I have a Cisco router running IOS 12.0(25) responding to two different IP addresses: IP_A and IP_B. Behind this router I also have two different servers: server_A and server_B. What I want is to forward port 22 to both servers, so:

        IP_A, port 22 -> server_A, port 22
        IP_B, port 22 -> server_B, port 22

    At the moment this only works for one of them (server_A). This is my config:

        interface Ethernet0/0
         description Internet
         ip address IP_A 255.255.255.0
         ip address IP_B 255.255.255.0 secondary
         no ip directed-broadcast
         ip nat outside
         no ip mroute-cache
         no cdp enable

        ip nat pool pool_A IP_A IP_A netmask 255.255.255.0
        ip nat pool pool_B IP_B IP_B netmask 255.255.255.0
        ip nat inside source list A pool pool_A overload
        ip nat inside source list B pool pool_B overload
        ip nat inside source static tcp server_B 22 IP_B 22 extendable
        ip nat inside source static tcp server_A 22 IP_A 22 extendable
        access-list A permit server_A
        access-list B permit server_B

    Read the article

  • Query specific nameserver for a particular domain upon VPN connect

    - by MT
    Some background: I have a work laptop with Ubuntu 9.10 on it. I have a small network at home where I've been running some basic services (for myself/my family) for some 10 years. In my home network there is a nameserver (Fedora) running BIND 9 with two "views". One view is the "outside" view and it provides name resolution (to the Internet at large) for email, a wiki, and a couple of blogs. The "inside" view provides name resolution (to the internal RFC 1918 addresses of these servers) as well as for all the inside hosts, network equipment, etc. I connect with an OpenVPN client to my home network from outside (such as from work).

    What I'd like to be able to do is resolve names on my internal network across this VPN (so I get the RFC 1918 "inside" responses) without fully changing my resolver to the DNS server at my house. For example, if I connect to the VPN from work, I can change my resolver (by editing resolv.conf) to the DNS server at my house (across the VPN) and then successfully resolve all of the inside DNS names on my home network. The issue I have with this is that I'm then no longer able to resolve "inside" names provided by my work's DNS servers (because I'm using my home DNS server). Alternatively, I can connect to the VPN and access my home servers via IP addresses directly, but this is inconvenient and causes issues with Apache name-based hosting (among other things).

    In the end, the effect I'm trying to achieve is as follows: when I connect to the VPN I automatically start sending DNS requests for *.myhomedomain.com to my home nameserver, but any other requests continue to go to the nameserver I was using before (the one I received on my company LAN via DHCP). When I disconnect the VPN, requests for *.myhomedomain.com go back to the local LAN DNS server (i.e. all requests go there). I'm looking for suggestions as to how this can be accomplished.
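
    One common technique for this kind of split DNS, offered here only as a sketch since the question leaves the implementation open, is to run a local dnsmasq forwarder and point resolv.conf at 127.0.0.1; dnsmasq can send one domain to one upstream and everything else to another. The addresses below are placeholders:

        # /etc/dnsmasq.conf -- forward the home domain across the VPN,
        # send everything else to the work resolver learned via DHCP
        server=/myhomedomain.com/192.168.10.1   # home BIND server, reachable over the VPN
        server=10.0.0.53                        # work LAN resolver
        # an openvpn up/down script could add and remove the first line
        # on connect/disconnect and then restart dnsmasq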

    Read the article

  • Reverse Proxy Server SSL?

    - by valveLondon
    Context: we currently have an Apache web server in the DMZ set up as a reverse proxy and load balancer for two machines running Windows Server 2008 (IIS) inside. The Apache server has a genuine SSL certificate and serves up both HTTP and HTTPS; however, the balancer members in the load-balancing section are set to BalancerMember {https://server1} and {https://server2}. The IIS web servers have self-signed certificates in order to respond to the HTTPS requests. My question: do we need to forward requests from Apache (in the DMZ) to the inside using SSL? E.g. can the reverse proxy forward the requests using plain HTTP? And if so, why would I choose to forward them with SSL? (How secure is the HTTP link between the DMZ and the inside?) In other words, can I totally disable SSL on my inside web servers?

    Read the article

  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
    I have been looking into this lately for a specific reason. For some integration tests I want to write, I want to control the types of instances which are used inside the service layer, but I want that control from the test class instance. One of the problems with just referencing the service is that a lot of the time this will by default be done inside a different process. I am using StructureMap as my DI of choice, and one of the tools which I am using inline with RhinoMocks is StructureMap.AutoMocking. With StructureMap the main entry point is the ObjectFactory. This is process specific, so if I decide that I want a certain instance of a type to be used inside the service layer, I cannot configure the ObjectFactory from my test class, as that will only apply to the process it belongs to. This is where I started thinking about two things:

    - Running a WCF service in process
    - Being able to share mocked instances across processes

    A colleague at work pointed me to a project for the latter, but I thought that it would be a better solution if I could run the WCF service in process. One of the projects which I use when I think about WCF services is AGATHA, and it is the one which I have used to try and get my head around doing this. Another asset I have is a book called Programming WCF Services by Juval Lowy, and if you have not heard of it or read it I would definitely recommend it. One of the many topics inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view:

        <system.serviceModel>
          <services>
            <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor">
              <endpoint address="net.pipe://localhost/MyPipe"
                        binding="netNamedPipeBinding"
                        contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
            </service>
          </services>
          <client>
            <endpoint name="MyEndpoint"
                      address="net.pipe://localhost/MyPipe"
                      binding="netNamedPipeBinding"
                      contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
          </client>
        </system.serviceModel>

    You can see here that I am referencing the Agatha object and contract, but also that my binding and the address use something called named pipes. This is sort of the "magic" which makes it happen in the same process.

    Next I need to open the service prior to calling the methods on a proxy, which I also need. My initial attempt at the proxy did not use any Agatha-specific coding, and one of the pains I found was that you obviously need to give your proxy the known types which the serializer can be aware of. So we need to add to the known types of the proxy programmatically. I came across the following blog post which showed me how easy it was: http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx

    First Pass

    So with this in mind, and inside a console app, this was my first pass at consuming a service in process. First, here is the proxy which I made making use of the Agatha IWcfRequestProcessor contract:

        public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>,
                                   Agatha.Common.WCF.IWcfRequestProcessor
        {
            public InProcProxy() { }

            public InProcProxy(string configurationName) : base(configurationName) { }

            public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests)
            {
                return Channel.Process(requests);
            }

            public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests)
            {
                Channel.ProcessOneWayRequests(requests);
            }
        }

    So with the proxy in place, here is the code which I use inside the console app to open the service and make the request:

        static void Main(string[] args)
        {
            ComponentRegistration.Register();
            ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
            serviceHost.Open();
            Console.WriteLine("Service is running....");

            using (var proxy = new InProcProxy())
            {
                foreach (var operation in proxy.Endpoint.Contract.Operations)
                {
                    foreach (var t in KnownTypeProvider.GetKnownTypes(null))
                    {
                        operation.KnownTypes.Add(t);
                    }
                }

                var request = new GetProductsRequest();
                var responses = proxy.Process(new[] { request });
                var response = (GetProductsResponse)responses[0];
                Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
            }

            serviceHost.Close();
            Console.WriteLine("Finished");
            Console.ReadLine();
        }

    What I used here is the KnownTypeProvider of Agatha to easily get all the types I need for the service/proxy and add them to the proxy. My request handler for this was just a test one which always returns 2 products:

        public class GetProductsHandler : RequestHandler<GetProductsRequest, GetProductsResponse>
        {
            public override Agatha.Common.Response Handle(GetProductsRequest request)
            {
                return new GetProductsResponse
                {
                    Products = new List<ProductDto>
                    {
                        new ProductDto {},
                        new ProductDto {}
                    }
                };
            }
        }

    Second Pass

    After I did this I started reading up some more on some resources, including more by Davy Brion and others on Agatha. It turns out that the work I did above to create a derived class of ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary, due to a nice class which is present inside the Agatha code base, RequestProcessorProxy, which takes care of this for you! :-) So, disregarding the proxy class I made and changing my code to use it, I am now left with the following:

        static void Main(string[] args)
        {
            ComponentRegistration.Register();
            ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
            serviceHost.Open();
            Console.WriteLine("Service is running....");

            using (var proxy = new RequestProcessorProxy())
            {
                var request = new GetProductsRequest();
                var responses = proxy.Process(new[] { request });
                var response = (GetProductsResponse)responses[0];
                Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
            }

            serviceHost.Close();
            Console.WriteLine("Finished");
            Console.ReadLine();
        }

    Cheers for now, Andy

    References: Agatha WCF · InProcess Without WCF · StructureMap.AutoMocking · Cross Process Mocking · Agatha · Programming WCF Services by Juval Lowy

    Read the article

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    - by RoyOsherove
    One of the talks I did at QCon London was about a subject that I've come across fairly recently, when I was building SilverUnit – a "pure" unit test framework for Silverlight objects that depend on the Silverlight runtime to run. It is the concept of "cogs in the machine" – when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be:

    - your custom control running inside the Silverlight runtime in the browser
    - your plug-in running inside an IDE
    - your activity running inside a Windows Workflow
    - your code running inside a Java EE bean
    - your code inheriting from a COM+ (Enterprise Services) component
    - etc.

    Not all of these are necessarily testability problems. The main testability problem usually comes when your code actually inherits from something inside the system. For example, one of the biggest problems with testing objects like Silverlight controls is the way they depend on the Silverlight runtime – they don't implement some Silverlight interface, they don't just call external static methods against the framework runtime that surrounds them – they actually inherit parts of the framework: they all inherit (in this case) from the Silverlight DependencyObject.

    Wrapping it up?

    An inheritance dependency is uniquely challenging to bring under test, because "classic" methods such as wrapping the object under test with a framework wrapper will not work, and the only way to do it manually is to create parallel testable objects that get delegated all the possible actions from the dependencies. In Silverlight's case, that would mean creating your own custom logic class that would be called directly from controls that inherit from Silverlight, and would be tested independently of these controls. The pro side is that you get the benefit of understanding the "contract" and the "roles" your system plays against your logic, but unfortunately, more often than not, it can be very tedious to create, and may sometimes feel unnecessary or like code duplication.

    About perimeters

    A perimeter is that invisible line that you draw around your pieces of logic during a test, that separates the code under test from any dependencies that it uses. Most of the time, a test perimeter around an object will be the list of seams (dependencies that can be replaced, such as interfaces, virtual methods, etc.) that are actually replaced for that test or for all the tests.

    Role-based perimeters

    In the case of creating a wrapper around an object, one really creates a "role-based" perimeter around the logic that is being tested – that wrapper takes on roles that are required by the code under test, and also communicates with the host system to implement those roles and provide any inputs to the logic under test. In the image below, we have the code we want to test represented as a star. No perimeter is drawn yet (we haven't wrapped it up in anything yet). The next image shows what happens when you wrap your logic with a role-based wrapper – you get a role-based perimeter anywhere your code interacts with the system. There's another way to bring that code under test – using isolation frameworks like Typemock, Rhino Mocks and Moq (but if your code inherits from the system, Typemock might be the only way to isolate the code from the system interaction).

    Ad-hoc isolation perimeters

    The image below shows what I call an ad-hoc perimeter, which might be vastly different between different tests. This perimeter's surface is much smaller, because for that specific test, that is all the "change" that is required to the host system behavior. The third way of isolating the code from the host system is the main "meat" of this post:

    Subterranean perimeters

    Subterranean perimeters are deep-rooted perimeters – "always on" seams that can lie very deep in the heart of the host system, where they are fully invisible even to the test itself, not just to the code under test. Because they lie deep inside a system you can't control, the only way I've found to control them is with runtime (not compile-time) interception of method calls on the system. One way to get such abilities is by using aspect-oriented frameworks – for example, in SilverUnit, I've used the CThru AOP framework, based on Typemock hooks and CLR profilers, to intercept such system-level method calls and effectively turn them into seams that lie deep down at the heart of the Silverlight runtime. The image below depicts an example of what such a perimeter could look like: the actual seams can be very far away from the actual code under test, and as you'll discover, that's actually a very good thing. Here is only a partial list of examples of such deep-rooted seams:

    - disabling the constructor of a base class five levels below the code under test (this.base.base.base.base)
    - faking static methods of a type that's being called several levels down the stack: method x() calls y() calls z() calls SomeType.StaticMethod()
    - replacing an async mechanism with a synchronous one (replacing all timers with your own timer behavior that always ticks immediately upon calls to start() on the same caller thread, for example)
    - replacing event mechanisms with your own event mechanism (to allow "firing" system events)
    - changing the way the system saves information with your own saving behavior (in SilverUnit, I replaced all dependency property sets and gets with calls to an in-memory value store instead of using the one built into Silverlight, which threw exceptions without a browser)

    Several questions could jump in:

    - How do you know what to fake? (how do you discover the perimeter?)
    - How do you fake it?
    - Wouldn't this be problematic – to fake something you don't own? It might change in the future.

    How do you discover the perimeter to fake? To discover a perimeter, all you have to do is start with a wishful invocation. A wishful invocation is the act of trying to invoke a method (or even just create an instance) of an object using "regular" test code. You invoke the thing that you'd like to do in a real unit test, to see what happens: Can I even create an instance of this object without getting an exception? Can I invoke this method on that instance without getting an exception? Can I verify that some call into the system happened? You make the invocation, get an exception (because there is a dependency) and look at the stack trace. Choose a location in the stack trace and disable it. Then try the invocation again. If you don't get an exception, the perimeter is good for that invocation, so you can move on to trying out other methods on that object. In a future post I will show the process using CThru, and how you end up with something close to a domain-specific test framework after you're done creating the perimeter you need.
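
    A wishful invocation can be as small as a plain test method that attempts the call you wish you could make, then lets the exception's stack trace reveal the next seam to disable. A minimal sketch, assuming NUnit and a hypothetical SilverlightDatePicker control (both are placeholders, not from the talk):

        [Test]
        public void WishfulInvocation_DiscoverThePerimeter()
        {
            // Step 1: can we even construct the control outside the browser?
            // If this throws, the stack trace points at the first seam to disable.
            var picker = new SilverlightDatePicker();

            // Step 2: once construction succeeds, wish for real behavior.
            picker.SelectDate(DateTime.Now);
        }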

    Read the article

  • Examples of when to use PageAsyncTask (Asynchronous asp.net pages)

    - by Tony_Henrich
    From my understanding from reading about ASP.NET asynchronous pages, the method which executes when the asynchronous task begins ALWAYS EXECUTES between the PreRender and PreRenderComplete events. So, because the page's controls' events run between the page's Load and PreRender events, is it true that whatever the begin-task handler (the handler for BeginAsync below) produces can't be used in the controls' events? So, for example, if the handler gets data from a database, the data can't be used in any of the controls' postback events? Would you bind data to a data control after PreRender?

        PageAsyncTask pat = new PageAsyncTask(BeginAsync, EndAsync, null, null, true);
        this.RegisterAsyncTask(pat);
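
    For what it's worth, here is a minimal sketch of where binding can still happen (the service client and GridView names are hypothetical, not from the question): the end handler runs before PreRenderComplete, which is too late for postback event handlers but still early enough to bind a control before rendering and view-state save.

        Service1Client svc;   // hypothetical generated WCF client

        IAsyncResult BeginAsync(object sender, EventArgs e, AsyncCallback cb, object state)
        {
            svc = new Service1Client();
            return svc.BeginGetData(656, cb, state);   // starts the database/service call
        }

        void EndAsync(IAsyncResult ar)
        {
            // Runs between PreRender and PreRenderComplete: postback events
            // have already fired, but binding for rendering still works here.
            GridView1.DataSource = new[] { svc.EndGetData(ar) };
            GridView1.DataBind();
        }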

    Read the article

  • How to call a WCF Service Method Asynchronously from a Class file?

    - by stackuser1
    I've added a WCF service reference to my ASP.NET application and configured that reference to support asynchronous calls. From the ASP.NET code-behind files, I'm able to call the service methods asynchronously, like the sample code below:

        protected void Button1_Click(object sender, EventArgs e)
        {
            PageAsyncTask pat = new PageAsyncTask(BeiginGetDataAsync, EndDataRetrieveAsync, null, null);
            Page.RegisterAsyncTask(pat);
        }

        IAsyncResult BeiginGetDataAsync(object sender, EventArgs e, AsyncCallback async, object extractData)
        {
            svc = new Service1Client();
            return svc.BeginGetData(656, async, extractData);
        }

        void EndDataRetrieveAsync(IAsyncResult ar)
        {
            Label1.Text = svc.EndGetData(ar);
        }

    And in the page directive I added Async="true". In this scenario it is working fine. But from the UI I'm not supposed to call the service methods directly. I need to call all service methods from a static class, and from the code-behind file I need to invoke the static method. In this scenario what exactly do I need to do?
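
    One shape the static-class approach could take, purely as a sketch with an invented class name: keep the Begin/End pair on the static class and smuggle the client instance through the IAsyncResult so the End method can finish the call.

        public static class DataServiceGateway   // hypothetical name
        {
            public static IAsyncResult BeginGetData(object sender, EventArgs e,
                                                    AsyncCallback cb, object state)
            {
                var svc = new Service1Client();
                // pass the client as the asyncState so EndGetData can retrieve it
                return svc.BeginGetData(656, cb, svc);
            }

            public static string EndGetData(IAsyncResult ar)
            {
                var svc = (Service1Client)ar.AsyncState;
                return svc.EndGetData(ar);
            }
        }

    used from the code-behind as:

        Page.RegisterAsyncTask(new PageAsyncTask(
            DataServiceGateway.BeginGetData,
            ar => Label1.Text = DataServiceGateway.EndGetData(ar),
            null, null));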

    Read the article

  • How do I set the UI language in vim?

    - by Daren Thomas
    I saw this on reddit, and it reminded me of one of my vim gripes: it shows the UI in German. Damn you, vim! I want English, but since my OS is set up in German (the standard at our office), I guess vim is actually trying to be helpful. What magic incantations must I perform to get vim to switch the UI language? I have tried googling on various occasions, but can't seem to find an answer (no, Google, you're my friend *pat*, *pat*, but I already know how to change the syntax highlighting, thank you!)...

    Read the article

  • Python File Search Line And Return Specific Number of Lines after Match

    - by Simos Anderson
    I have a text file that has lines representing some data sets. The file itself is fairly long, but it contains certain sections of the following format:

        Series_Name INFO Number of teams : n1
        | Team       | #    | wins |
        | TeamName1  | x    | y    |
        .
        .
        .
        | TeamNamen1 | numn | numn |

        Some irrelevant lines

        Series_Name2 INFO Number of teams : n1
        | Team       | #    | wins |
        | TeamName1  | num1 | num2 |
        .

    where each section has a header that begins with the Series_Name. Each Series_Name is different. The header line also includes the number of teams in that series, n1. Following the header line is a set of lines that represents a table of data. For each series there are n1+1 rows in the table, where each row shows an individual team name and associated stats.

    I have been trying to implement a function that will allow the user to search for a team name and then print out the line in the table associated with that team. However, certain team names show up under multiple series. To resolve this, I am currently trying to write my code so that the user can search for the header line with the series name first, and then print out just the following n1+1 lines that represent the data associated with the series. Here's what I have come up with so far:

        import re

        fname = raw_input("Enter filename: ")
        seriesname = raw_input("Enter series: ")

        def findcounter(fname, seriesname):
            logfile = open(fname, "r")
            pat = 'INFO Number of teams :'
            for line in logfile:
                if seriesname in line:
                    if pat in line:
                        s = line
            pattern = re.compile(r"""(?P<name>.*?)         # starting name
                \s*INFO                                    # whitespace and success
                \s*Number\s*of\s*teams                     # whitespace and strings
                \s*\:\s*(?P<n1>.*)""", re.VERBOSE)
            match = pattern.match(s)
            name = match.group("name")
            n1 = int(match.group("n1"))
            print name + " has " + str(n1) + " teams"
            lcount = 0
            for line in logfile:
                if line.startswith(name):
                    if pat in line:
                        while lcount <= n1:
                            s.append(line)
                            lcount += 1
            return result

    The first part of my code works; it matches the header line that the person searches for, parses the line, and then prints out how many teams are in that series. Since the header line basically tells me how many lines are in the table, I thought that I could use that information to construct a loop that would continue printing each line until a set counter reached n1. But I've tried running it, and I realize that the way I've set it up so far isn't correct. So here's my question: how do you return a number of lines after a matched line, when given the number of desired lines that follow the match? I'm new to programming, and I apologize if this question seems silly. I have been working on this quite diligently with no luck and would appreciate any help.
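
    One way the counting idea could be finished, offered as a sketch against the file format above rather than a drop-in fix: once the header line has been matched, the file iterator has already consumed it, so the next n1+1 lines are exactly the table, and itertools.islice can print them.

        import re
        import itertools

        def print_series(fname, seriesname):
            pattern = re.compile(r"(?P<name>.*?)\s*INFO\s*Number\s*of\s*teams\s*:\s*(?P<n1>\d+)")
            with open(fname) as logfile:
                for line in logfile:
                    if seriesname in line and 'INFO Number of teams :' in line:
                        n1 = int(pattern.match(line).group("n1"))
                        # the next n1 + 1 lines are the table that follows the header
                        for row in itertools.islice(logfile, n1 + 1):
                            print row.rstrip()
                        return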

    Read the article

  • Validate zip and display error with onBlur event

    - by phil
    Check if the zip is a 5-digit number; if not, display 'zip is invalid'. I want to use the onBlur event to trigger the display, but it's not working.

        <script>
        $(function(){
            function valid_zip() {
                var pat = /^[0-9]{5}$/;
                if (!pat.test($('#zip').val())) {
                    $('#zip').after('<p>zip is invalid</p>');
                }
            }
        })
        </script>

        zip (US only)
        <input type="text" name='zip' id='zip' maxlength="5" onBlur="valid_zip()">
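
    A sketch of one likely culprit and fix: valid_zip is declared inside the ready callback, so it is not a global function and the inline onBlur="valid_zip()" cannot reach it. Binding the handler inside the callback avoids inline attributes entirely (the zip-error class is invented here so repeat messages can be cleared):

        <script>
        $(function () {
            $('#zip').blur(function () {
                var pat = /^[0-9]{5}$/;
                $(this).nextAll('p.zip-error').remove();   // drop any earlier message
                if (!pat.test($(this).val())) {
                    $(this).after('<p class="zip-error">zip is invalid</p>');
                }
            });
        });
        </script>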

    Read the article

  • How do I replace NOT EXISTS with JOIN?

    - by YelizavetaYR
    I've got the following query:

        select distinct a.id, a.name
        from Employee a
        join Dependencies b on a.id = b.eid
        where not exists (
            select * from Dependencies d
            where b.id = d.id and d.name = 'Apple'
        )
        and exists (
            select * from Dependencies c
            where b.id = c.id and c.name = 'Orange'
        );

    I have two tables, relatively simple. The first, Employee, has an id column and a name column. The second table, Dependencies, has 3 columns: an id, an eid (the employee id to link on) and a name (Apple, Orange, etc.). The data looks like this:

        Employee
        id | name
        ---------
        1  | Pat
        2  | Tom
        3  | Rob
        4  | Sam

        Dependencies
        id | eid | Name
        --------------------
        1  | 1   | Orange
        2  | 1   | Apple
        3  | 2   | Strawberry
        4  | 2   | Apple
        5  | 3   | Orange
        6  | 3   | Banana

    As you can see, Pat has both Orange and Apple, so he needs to be excluded. It has to be done via joins, and I can't seem to get it to work. Ultimately the query should return only Rob.
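
    For reference, a sketch of the standard LEFT JOIN / IS NULL rewrite of that filter (column names taken from the question, untested against the poster's data):

        select distinct a.id, a.name
        from Employee a
        join Dependencies o
          on o.eid = a.id and o.name = 'Orange'   -- must have Orange
        left join Dependencies x
          on x.eid = a.id and x.name = 'Apple'    -- probe for an Apple row
        where x.id is null;                       -- keep employees with no Apple match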

    Read the article

  • Cygwin diff won't exclude files if a directory is included in the pattern

    - by Dean Schulze
    I need to do a recursive diff using Cygwin that excludes all .xml files from certain directories, but not from other directories. According to the --help option I should be able to do this with the --exclude=PAT option, where PAT is a pattern describing the files I want to exclude from the diff. If I do this:

        diff -rw --exclude="classes/Services.xml"

    the diff does not ignore the Services.xml file in the classes directory. If I do this:

        diff -rw --exclude="Services.xml"

    the diff does ignore the Services.xml file in all directories. I need to do something like this:

        diff -rw --exclude="*/generated/resources/client/*.xml"

    to ignore all .xml files in the directory */generated/resources/client/. Whenever I add path information to the --exclude pattern, Cygwin does not ignore the file(s) I've specified in the pattern. Is there some way to make Cygwin diff recognize a pattern that identifies certain files in certain directories? It seems not to be able to handle any directory information at all.
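
    The observed behavior matches GNU diff, whose --exclude patterns are compared against base file names only, never against paths. One workaround sketch (one-sided: files that exist only in the second tree are not reported) is to let find do the path filtering:

        cd tree_a
        find . -type f ! -path '*/generated/resources/client/*.xml' |
        while read -r f; do
            diff -w "$f" "../tree_b/$f"
        done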

    Read the article

  • Convert hex to decimal keeping fractional part in Lua

    - by Zack Mulgrew
    Lua's tonumber function is nice, but it can only convert unsigned integers unless they are base 10. I have a situation where I have numbers like 01.4C that I would like to convert to decimal. I have a crummy solution:

        function split(str, pat)
            local t = {}
            local fpat = "(.-)" .. pat
            local last_end = 1
            local s, e, cap = str:find(fpat, 1)
            while s do
                if s ~= 1 or cap ~= "" then
                    table.insert(t, cap)
                end
                last_end = e + 1
                s, e, cap = str:find(fpat, last_end)
            end
            if last_end <= #str then
                cap = str:sub(last_end)
                table.insert(t, cap)
            end
            return t
        end
        -- taken from http://lua-users.org/wiki/SplitJoin

        function hex2dec(hexnum)
            local parts = split(hexnum, "[\.]")
            local sigpart = parts[1]
            local decpart = parts[2]
            sigpart = tonumber(sigpart, 16)
            decpart = tonumber(decpart, 16) / 256
            return sigpart + decpart
        end

        print(hex2dec("01.4C"))  -- output: 1.296875

    I'd be interested in a better solution for this if there is one.
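
    For what it's worth, a shorter sketch without the split helper: parse both parts with a single string.match and scale the fractional digits by 16^(their count). This also handles more or fewer than two fractional digits, which the hardcoded division by 256 above assumes.

        -- a sketch: integer and fractional hex parts captured in one match
        local function hex2dec(s)
            local int, frac = s:match("^(%x*)%.?(%x*)$")
            local value = tonumber(int, 16) or 0
            if frac ~= "" then
                value = value + tonumber(frac, 16) / 16 ^ #frac
            end
            return value
        end

        print(hex2dec("01.4C"))  --> 1.296875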

    Read the article

  • My Java regex isn't capturing the group

    - by Geo
    I'm trying to match the username with a regex. Please don't suggest a split.

        USERNAME=geo

    Here's my code:

        String input = "USERNAME=geo";
        Pattern pat = Pattern.compile("USERNAME=(\\w+)");
        Matcher mat = pat.matcher(input);
        if (mat.find()) {
            System.out.println(mat.group());
        }

    Why doesn't it find geo in the group? I noticed that if I use .group(1), it finds the username. However, the group() method returns USERNAME=geo. Why?

    Read the article

  • Linux-Containers — Part 1: Overview

    - by Lenz Grimmer
    "Containers" by Jean-Pierre Martineau (CC BY-NC-SA 2.0). Linux Containers (LXC) provide a means to isolate individual services or applications as well as of a complete Linux operating system from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or the host system are not visible from inside a container. Additionally, Linux Containers allow for fine granular control of resources like RAM, CPU or disk I/O. Generally speaking, Linux Containers use a completely different approach than "classicial" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based on). An application running inside a container will be executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O-resources. Linux containers can offer the best possible performance and several possibilities for managing and sharing the resources available. Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided it supports the version of the Linux kernel that runs on the host. This approach has one caveat, though - if any of the containers causes a kernel crash, it will bring down all other containers (and the host system) as well. For example, Oracle's Unbreakable Enterprise Kernel Release 2 (2.6.39) is supported for both Oracle Linux 5 and 6. This makes it possible to run Oracle Linux 5 and 6 container instances on top of an Oracle Linux 6 system. Since Linux Containers are fully implemented on the OS level (the Linux kernel), they can be easily combined with other virtualization technologies. It's certainly possible to set up Linux containers within a virtualized Linux instance that runs inside Oracle VM Server for Oracle VM Virtualbox. Some use cases for Linux Containers include: Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, to preserve energy and rack space. Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his "own" application instance, with a defined level of service/performance. This prevents that one user's application could hog the entire system and ensures, that each user only has access to his own data set. It also helps to save main memory — if multiple instances of a same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use, they are generally held in memory once and mapped to multiple processes. 
    A more detailed description of how Linux Containers can be created and managed on Oracle Linux will follow in the second part of this article.

    Additional links related to Linux Containers: OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy | Linux Containers on Wikipedia

    - Lenz Grimmer
    Follow me on: Personal Blog | Facebook | Twitter | Linux Blog

    Read the article

  • DNS question and Google PageRank from domains

    - by Beck
    I'm not so good at DNS at all :) Just the basics. A while ago I noticed that my blog has different PageRanks: PR 3 for the domain www.example.com and PR 1 for example.com. In my DNS records I have this setup:

        A - IP - www.example.com
        A - same IP - example.com

    Should I replace the record "A - same IP - example.com" with a row with CNAME instead of A? Like this:

        CNAME - same IP - example.com - alias www.example.com

    Will this combine the PageRank value of both domains? Or can I just create a 302 redirection inside the .htaccess file, verify example.com (without www) in Google Webmaster Tools as my domain, and inside the www.example.com options set it as the main domain? Thanks ;)

    Read the article

  • Ubuntu broke my Windows boot

    - by Then Enok
    I was installing Ubuntu on a pendrive, and once finished I needed to run Windows for a bit. Even though I chose "erase and install" ON THE PENDRIVE, it altered my HDD boot sector. When I remove the Ubuntu CD and pendrive it should just boot from the HDD (Windows), but it gives:

        Error: no such device : blablabla (numbers and letters)
        grub rescue> _

    If I put the pendrive in, it asks me whether to start Windows or Linux (Windows works here). I need to run Windows without the pendrive. How can I remove GRUB from the HDD and still run Ubuntu from the pendrive (once GRUB is removed from the HDD)?

    || THX ArK, your information helped me do wonders! :) || Now... it seems that without GRUB I cannot boot Ubuntu from the pendrive anymore; blank screen and nothing loads (I did check the Ubuntu install with the GRUB from the HDD, and everything inside it worked perfectly, except the clock, which didn't find my local hour...)

    New problem: it seems that GRUB, which is now on the pendrive, is always asking me whether to boot Windows or Ubuntu. F*ck, of course I want to boot Ubuntu, otherwise I wouldn't stick the pendrive inside the computer.

    Read the article

  • Running ADT bundle in Eclipse Indigo

    - by dgood1
    I have installed the ADT bundle for 32-bit, using Ubuntu 12.04. The instructions said that all I have to do is run the "eclipse" executable inside the bundle folders. It opens an instance of Eclipse Indigo (? it looks different but uses the same launcher icon) and it works fine. However, when I run my previously installed Eclipse Indigo, I don't get the option to make Android apps or to change into the perspectives that appear in the ADT bundle. I don't want to add another icon to my launcher. So I'm asking if, and how, I can run the ADT bundle inside Eclipse Indigo, or at least make and test Android apps inside Eclipse Indigo.

    Read the article

  • Unity3D: default parameters in C# script

    - by Heisenbug
    According to this thread, it seems that default parameters aren't supported by C# scripts in the Unity3D environment. Declaring an optional parameter in a C# script makes the Mono IDE complain about it:

        void Foo(int first, int second = 10) // this line is marked as wrong inside the Mono IDE

    Anyway, if I ignore the error message from Mono and run the script in Unity, it works without reporting any error in the Unity console. Could anyone clarify this issue a little? Particularly:

    - Are default parameters allowed inside C# scripts?
    - If yes, are they supported by all platforms?
    - Why does Mono complain about them if they actually work?

    Read the article

  • Super Secret Door Top Stash Hides Your Flash Drive and Cash [DIY]

    - by Jason Fitzpatrick
    Everyone needs a bit of spy-guy fun in their lives (or at least a way to hide your Sailor Moon photo collection from everyone). This clever and extremely well hidden DIY stash puts your contraband inside a door. At Make Projects, the user-contributed project blog at Make magazine, Sean Michael Ragan shares a really stealthy way to hide stuff: stashing it inside the top of a door. You'll need some power tools like a drill, files, and a countersink, as well as a cigar tube for the body of your hidden drop. When you're done you'll have an extremely well hidden stash in a place that next to nobody would think to look. Hit up the link for a picture-filled step-by-step guide to building your own stash. Door Top Stash [Make Projects]

    Read the article

  • There's no Sound Mixer menu, missing menu option in Sound Recorder

    - by AlexN
    I am using:

    - Ubuntu 11.10
    - Skype
    - a PS3 Eye Toy camera to input video and sound

    This setup worked properly in former Ubuntu releases. To use the mic built into the PS3 Eye Toy camera, I open the Sound Recorder app included in GNOME (note: not inside Skype; from inside Skype it is not possible to do this), then I go to File → Sound Mixer. From this menu I can tell GNOME to get the input audio from the PS3 Eye Toy instead of from the audio-in of the computer. Now in Ubuntu 11.10 this Sound Mixer menu inside Sound Recorder is missing; GNOME says something like this:

        gnome-volume-control is not installed in the proper directory

    Note: I have tried this on Unity, Unity 2D, GNOME Classic, GNOME Classic 2D and GNOME Shell. In all of them the problem is the same. What can I do? Basically what I want is to be able to tell the computer to get the audio input from the PS3 camera. Thanks in advance.

    Read the article

  • Partitioning with preseed help

    - by kostasp
    I have a server that has 4 HDs inside, all in standalone configuration (no hardware RAID). I want, using preseed, to create a "regular" partition on disk 1, on which I'll install Ubuntu, and to create a RAID 0 array with the remaining three disks. Is this possible? Can I use partman-auto/method twice inside the preseed file, once for regular and once for raid? I need to use this for unattended provisioning, so I need to set my disks inside the preseed file. Thanking you all in advance for your time. Costas

    Read the article

  • How can I do individual file encryption on Dropbox?

    - by Scaine
    I'd like to set up a single directory inside Dropbox in which files are encrypted on a file-by-file basis. At the moment, I use a 2 MB TrueCrypt container inside my Dropbox which I then have to mount manually, access/change the files within, then unmount manually. At that point, the entire 2 MB uploads to Dropbox. This is a pain for a number of reasons:

    - Dropbox sync will only occur when the TrueCrypt container is unmounted, because Dropbox only syncs files that aren't locked and mounting a container locks it.
    - A single-byte change to one file inside that container results in the whole 2 MB being uploaded again.
    - It doesn't scale: I was originally using a 10 MB container, but obviously the bigger the container, the longer it takes to sync when it's unmounted.

    I was wondering if I can somehow use LUKS to implement file-by-file encryption to get round the "container" issues.

    Read the article
