Search Results

Search found 16132 results on 646 pages for 'john mark high'.

  • Why do you have to mark a class with the attribute [Serializable]?

    - by Blankman
    Seeing as you can convert any document to a byte array, save it to disk, and then rebuild the file in its original form (as long as you have metadata such as its filename), why do you have to mark a class with [Serializable]? Is that just the same idea: "metadata"-type information so that when you cast the object back to its class, things are mapped properly?
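    For illustration, here is a minimal sketch of what the attribute gates, using the classic BinaryFormatter API; the Person type and its fields are hypothetical examples, not taken from the question. Without [Serializable], the Serialize call below throws a SerializationException instead of producing bytes.

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        // Hypothetical example type: opting in with [Serializable], opting one field out.
        [Serializable]
        public class Person
        {
            public string Name;
            [NonSerialized] public int CacheId;
        }

        public static class SerializationDemo
        {
            public static void Main()
            {
                var original = new Person { Name = "John", CacheId = 42 };
                var formatter = new BinaryFormatter();

                using (var stream = new MemoryStream())
                {
                    // Would throw SerializationException if Person lacked [Serializable].
                    formatter.Serialize(stream, original);
                    stream.Position = 0;

                    var copy = (Person)formatter.Deserialize(stream);
                    Console.WriteLine(copy.Name);    // "John"
                    Console.WriteLine(copy.CacheId); // 0 -- the [NonSerialized] field was skipped
                }
            }
        }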

    Read the article

  • Trying to reconcile global ip address and Vhosts

    - by puk
    I have been using my local machine as a web server for a while, and I have several websites set up locally on my machine, all with similar Vhost files like the one seen here (/etc/apache2/sites-available/john.smith.com):

        <VirtualHost *:80>
            RewriteEngine on
            RewriteOptions Inherit
            ServerAdmin [email protected]
            ServerName john.smith.com
            ServerAlias www.john.smith.com
            DocumentRoot /home/john/smith
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            LogFormat "%v %l %u %t \"%r\" %>s %b" comonvhost
            CustomLog /var/log/apache2/access.log comonvhost
        </VirtualHost>

    Then I set up the /etc/hosts file like so for every Vhost:

        192.168.1.100 www.john.smith.com john.smith.com
        192.168.1.100 www.jane.smith.com jane.smith.com
        192.168.1.100 www.joe.smith.com joe.smith.com
        192.168.1.100 www.jimbob.smith.com jimbob.smith.com

    Now I am hosting my friend's website until he gets a permanent domain. I have port forwarding set up to redirect port 80 to my machine, but I don't understand how the global IP fits into all of this. Do I, for example, use the following web site addresses (assume the global IP is 12.34.56.789)?

        12.34.56.789.john.smith
        12.34.56.789.jane.smith
        12.34.56.789.joe.smith
        12.34.56.789.jimbob.smith

    Read the article

  • High number of ethernet errors. Tool for testing the ethernet card?

    - by Fabio Dalla Libera
    I have an Asus Sabertooth X79. I often get corrupted files. I checked the RAM, but memtest finds no errors. To rule out disk errors, I tried copying the files to tmpfs. If I copy from the network, I get md5sum mismatches about once every 10 times using a 6Gb file. Copying from RAM to RAM, I didn't get mismatches. I get a very high number of errors in ifconfig (compared to other PCs I took as a reference, which show 0 errors with much more traffic). Here is an example: RX packets:13972848 errors:200 dropped:0 overruns:0 frame:101 The motherboard is new, but do you think there is a problem with it? What could I use to test the (integrated) network adapter? What else should I double-check? --edit-- I tried another NIC; it gives a lot of "Corrupted MAC on input. Disconnecting: Packet corrupt" lost connections. I noticed that another PC downloads at 11.1 MB/s without problems; this PC downloads at 66.0 MB/s. Is there any way to limit the speed?

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I am developing high-volume processing systems: mathematical models that calculate various parameters based on millions of records, calculated derived fields over millions of records, processing huge files of transactions, etc. I am well aware of unit testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes), or some SAS process. What approach do you use when developing such systems? I usually develop several tests as stored procedures in a dedicated schema (TEST), run them automatically overnight, and check the results. But this is only for T-SQL, and continuous integration is hard. The real problem is testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially if you need a lot of data initialization)? See the sketch below for the kind of thing I mean. I have an approach developed over the years, but maybe I am just not reading enough articles. So, banking, telecom, and risk developers out there: how do you test your mission-critical apps that process millions of records at day end, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct (as you develop it)? How do you achieve continuous integration in such an environment (personally, I never got there)? How do you test your map-reduce jobs, for example (I do not use Hadoop, but this is quite similar)? I hope this is not too open-ended a question. luke
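    For the "stubbing data into tables" part, here is a minimal sketch of one way to drive a T-SQL test from C#: seed a table with known rows, call the stored procedure through SqlClient, and assert on the result. The connection string, table, and procedure names are hypothetical, not taken from the question.

        using System.Data;
        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class SumTransactionAmountsTests
        {
            // Hypothetical connection string pointing at a dedicated TEST database.
            private const string ConnectionString =
                "Server=localhost;Database=RiskModels_Test;Integrated Security=true";

            [SetUp]
            public void SeedKnownData()
            {
                using (var conn = new SqlConnection(ConnectionString))
                {
                    conn.Open();
                    // Reset the input table to a small, known data set before each test.
                    var cmd = new SqlCommand(
                        @"TRUNCATE TABLE dbo.Transactions;
                          INSERT INTO dbo.Transactions (Id, Amount) VALUES (1, 100.0), (2, 250.0);",
                        conn);
                    cmd.ExecuteNonQuery();
                }
            }

            [Test]
            public void StoredProcedure_ComputesExpectedTotal()
            {
                using (var conn = new SqlConnection(ConnectionString))
                {
                    conn.Open();
                    var cmd = new SqlCommand("dbo.SumTransactionAmounts", conn)
                    {
                        CommandType = CommandType.StoredProcedure
                    };
                    // The hypothetical procedure returns a single aggregated value.
                    var total = (decimal)cmd.ExecuteScalar();
                    Assert.AreEqual(350.0m, total);
                }
            }
        }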

    Read the article

  • How do I make my character slide down high-angled slopes?

    - by keinabel
    I am currently working on my character's movement in Unity3D. I managed to make him move relative to the mouse cursor. I set a slope limit of 45°, which stops the character from walking up slopes steeper than that, but he can still jump up them. How do I make him slide back down again when he jumps onto a slope that is too steep? Thanks in advance. Edit: here is the code snippet of my basic movement.

        using UnityEngine;
        using System.Collections;

        public class BasicMovement : MonoBehaviour
        {
            private float speed;
            private float jumpSpeed;
            private float gravity;
            private float slopeLimit;
            private Vector3 moveDirection = Vector3.zero;

            void Start()
            {
                PlayerSettings settings = GetComponent<PlayerSettings>();
                speed = settings.GetSpeed();
                jumpSpeed = settings.GetJumpSpeed();
                gravity = settings.GetGravity();
                slopeLimit = settings.GetSlopeLimit();
            }

            void Update()
            {
                CharacterController controller = GetComponent<CharacterController>();
                controller.slopeLimit = slopeLimit;
                if (controller.isGrounded)
                {
                    moveDirection = new Vector3(Input.GetAxis("Horizontal"), 0, Input.GetAxis("Vertical"));
                    moveDirection = transform.TransformDirection(moveDirection);
                    moveDirection *= speed;
                    if (Input.GetButton("Jump"))
                    {
                        moveDirection.y = jumpSpeed;
                    }
                }
                moveDirection.y -= gravity * Time.deltaTime;
                controller.Move(moveDirection * Time.deltaTime);
            }
        }
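    One common approach, offered here only as a hedged sketch rather than the definitive fix: raycast down to the surface under the controller and, when its angle exceeds the slope limit, push the movement vector down the slope. The method below is meant to live inside the BasicMovement class above and be called from Update() before controller.Move(); the slideSpeed parameter is an illustrative assumption, not part of the original script.

        // Hedged sketch: call from Update() before controller.Move(moveDirection * Time.deltaTime).
        void ApplySlide(CharacterController controller, float slideSpeed)
        {
            RaycastHit hit;
            // Look straight down for the surface the character is standing on.
            if (Physics.Raycast(transform.position, Vector3.down, out hit, controller.height))
            {
                float slopeAngle = Vector3.Angle(hit.normal, Vector3.up);
                if (slopeAngle > slopeLimit)
                {
                    // Build a vector pointing down the slope and push the character along it.
                    Vector3 normal = hit.normal;
                    Vector3 slideDirection = new Vector3(normal.x, -normal.y, normal.z);
                    Vector3.OrthoNormalize(ref normal, ref slideDirection);
                    moveDirection += slideDirection * slideSpeed;
                }
            }
        }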

    Read the article

  • High-level description of how experimental C++ features are developed?

    - by Praxeolitic
    Herb Sutter, in a video, answers a question about the concepts proposal considered for C++11, and from his remarks it sounds like multiple groups offered prototype implementations, but all of them left concerns about slow compile times. The comment surprised me because it suggests that, at least in some cases, the prototypes being developed are not just proofs of concept -- they're even expected to perform. All the work that must take has me curious. For mature languages, especially C++, how are experimental language features developed? Is it much different from developing a compiler that implements a standard? Does a developer have a sense of whether it will work and perform, or even whether it ever could? What are the most time-consuming parts, and are any parts surprisingly easier than one might expect? The question is not what the C++ standards committee does, but rather the part that comes before. When an experimental implementation for a proposal is being put together and there aren't any completely solidified rules, how is the sausage made? I'm not a professional compiler developer, nor do I expect answers with step-by-step accounts. I'd like a high-level idea of how this would be done, or whether there are any general patterns at all. I don't know what to expect from the answers, but even if there are no rules to the process and the small number of people who do this just cowboy it and then, for the stuff that worked out, write up the "official version" as a proposal, that answer would still be informative.

    Read the article

  • question about mergesort

    - by davit-datuashvili
    I have written code for merge sort; here it is:

        public class mergesort {
            public static int a[];

            public static void merges(int work[], int low, int high) {
                if (low == high)
                    return;
                else {
                    int mid = (low + high) / 2;
                    merges(work, low, mid);
                    merges(work, mid + 1, high);
                    merge(work, low, mid + 1, high);
                }
            }

            public static void main(String[] args) {
                int a[] = new int[]{64, 21, 33, 70, 12, 85, 44, 99, 36, 108};
                merges(a, 0, a.length - 1);
                for (int i = 0; i < a.length; i++) {
                    System.out.println(a[i]);
                }
            }

            public static void merge(int work[], int low, int high, int upper) {
                int j = 0;
                int l = low;
                int mid = high - 1;
                int n = upper - l + 1;
                while (low <= mid && high <= upper)
                    if (a[low] < a[high])
                        work[j++] = a[low++];
                    else
                        work[j++] = a[high++];
                while (low <= mid)
                    work[j++] = a[low++];
                while (high <= upper)
                    work[j++] = a[high++];
                for (j = 0; j < n; j++)
                    a[l + j] = work[j];
            }
        }

    But it does not work. After compiling and running this code, I get this error:

        java.lang.NullPointerException
            at mergesort.merge(mergesort.java:45)
            at mergesort.merges(mergesort.java:12)
            at mergesort.merges(mergesort.java:10)
            at mergesort.merges(mergesort.java:10)
            at mergesort.merges(mergesort.java:10)
            at mergesort.main(mergesort.java:27)

    Read the article

  • apache2 + mod_fastcgi + suexec + php5.2 = unstable on high load...

    - by redguy..pl
    I am hosting several (~30) different sites on one server with apache2+mod_fastcgi+suexec+php5. The sites have different loads and different script execution times (some process a request for 5-7 seconds, some in under 1 second). Sometimes, when a single site receives very high load (all PHP instances of this site are created and in use), the whole Apache server hangs. Apache (worker MPM) creates new processes up to the upper limit. It looks like it starts to queue ALL new requests for EVERY site, not only the one that has high load and quickly reaches its process limits... A restart of Apache solves the problem... Config:

        FastCgiConfig -singleThreshold 1 -multiThreshold 10 -listen-queue-depth 30 -maxProcesses 80 -maxClassProcesses 12 -idle-timeout 30 -pass-header HTTP_AUTHORIZATION -pass-header If-Modified-Since -pass-header If-None-Match

    (Earlier I had the default -listen-queue-depth of 100, but it didn't change anything...) Any suggestions? Another question: how is this listen queue implemented? Is it one queue for the whole Apache server, or a separate queue for every defined PHP application (suexec site)? I would like to achieve something like this: when one site receives high load and its queue is full, the server bounces the next requests, but only for this one site. Other sites should keep working properly...

    Read the article

  • Very high-pitched noise when computer does something intense?

    - by Starkers
    "Intense" is the best word I can use to describe it because I'm not sure what it is, whether it's RAM, GPU or CPU. If I pan the camera in unity: A high pitched noise issues from the computer. The picosecond I start panning the sound starts. Stops the picosecond I stop panning. If I start an infinite loop: 2.0.0p247 :016 > x = 1 => 1 2.0.0p247 :017 > while x < 2 do 2.0.0p247 :018 > puts 'huzzah!' 2.0.0p247 :019?> end huzzah! huzzah! huzzah! An identical high pitched noise can be heard. I don't think it's the GPU due to this simple experiment. Or any monitor-weirdness (although the sound does sound like one of those old CRT monitors if you're old enough to be young when those things were about) The CPU? Or maybe my SSD? It's my first SSD and the first time I've heard this noise. Should I be worried? Regardless, what's causing this sound? I can't think what would cause such high frequency vibrations. I built the PC myself. Not enough heat paste on the CPU? Too much? Just no idea what's going on. Info: CPU Type QuadCore Intel Core i5-3570K, 3800 MHz (38 x 100) Motherboard Name Asus Maximus V Extreme Flash Memory Type Samsung 21nm TLC NAND Video Adapter Asus HD7770

    Read the article

  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database it is always great to announce when the MySQL Engineering team delivers another great product release. As a field DBA and developer it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked web sites on the web. That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here:

    New Self-Healing Replication Clusters
    The 5.6.5 DMR improves MySQL Replication by adding Global Transaction Ids and automated utilities for self-healing Replication clusters. Prior to 5.6.5 this has been somewhat of a pain point for MySQL users, with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities. With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools. You can learn all about the details of the great, problem-solving Replication features in MySQL 5.6 in Mat Keep's Developer Zone article.

    New Replication Administration and Failover Utilities
    As mentioned above, the new Replication features, Global Transaction Ids specifically, are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or auto failover to the most up-to-date slave (that is the default, but user configurable if needed) in the event of a master failure. The new utilities, along with links to Engineering related blogs, are discussed in detail in the DevZone article noted above.

    Better Query Optimization and Throughput
    The MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements:

    - Subquery Optimizations: Subqueries are now included in the Optimizer path for runtime optimization. Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit of work.
    - Optimizer now uses CURRENT_TIMESTAMP as default for DATETIME columns: For simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default.
    - Optimizations for range-based queries: Optimizer now uses ready statistics vs Index based scans for queries with multiple range values.
    - Optimizations for queries using filesort and ORDER BY: Optimization criteria/decision on execution method is now made at the optimization vs parsing stage.
    - Print EXPLAIN in JSON format for hierarchical readability and Enterprise tool consumption.

    You can learn the details about these new features, as well as all of the Optimizer-based improvements in MySQL 5.6, by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here (look under "Development Releases"). Please let us know what you think! The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here.

    Also New in MySQL Labs
    As has become our tradition when announcing DMRs, we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs. Today is no exception, as we are also releasing the following to Labs for you to download, try, and let us know your thoughts on where we need to improve:

    InnoDB Online Operations
    MySQL 5.6 now provides Online ADD Index, FK Drop and Online Column RENAME. These operations are non-blocking and will continue to evolve in future DMRs. You can learn the grainy details by following John Russell's blog.

    InnoDB data access via Memcached API ("NotOnlySQL") - Improved refresh of an earlier feature release
    Similar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications.

    Improved Transactional Performance, Scale
    The InnoDB Engineering team has once again under-promised and over-delivered in the area of improved performance and scale. These improvements are also included in the aggregated Spring 2012 labs release. InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise, with internal tests showing:

    - 2x throughput improvement for read-only activity
    - 6x throughput improvement for SELECT range
    - Read/Write benchmarks are in progress

    More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements and about related fixes to mysys mutex and hash sort by checking out the InnoDB team blog.

    MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever. It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation. You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!

    Read the article

  • Is there a way to mark up code to tell ReSharper not to format it?

    - by adrianbanks
    I quite often use the ReSharper "Clean Up Code" command to format my code to our coding style before checking it into source control. This works well in general, but some bits of code are better formatted manually (e.g. because of the indenting rules in ReSharper, things like chained LINQ methods or multi-line ternary operators get a strange indent that pushes them way to the right). Is there any way to mark up parts of a file to tell ReSharper not to format that area? I'm hoping for some kind of markup similar to how ReSharper suppresses other warnings/features. If not, is there some way of changing a combination of settings to get ReSharper to format the indenting correctly? EDIT: I have found this post from the ReSharper forums that says that generated code sections (as defined in the ReSharper options page) are ignored in code cleanup. Having tried it, though, it doesn't seem to get ignored.

    Read the article

  • SQL Join on a one-to-many relationship

    - by Harley
    Ok, here was my original question. Table one contains:

        ID | Name
        1  | Mary
        2  | John

    Table two contains:

        ID | Color
        1  | Red
        2  | Blue
        2  | Green
        2  | Black

    What I want to end up with is:

        ID | Name | Red | Blue | Green | Black
        1  | Mary | Y   | Y    |       |
        2  | John |     | Y    | Y     | Y

    It seems that because there are 11 unique values for color and 1000's upon 1000's of records in table one, there is no 'good' way to do this. So, two other questions. Is there an efficient way to get this result? I can then create a crosstab in my application to get the desired result.

        ID | Name | Color
        1  | Mary | Red
        1  | Mary | Blue
        2  | John | Blue
        2  | John | Green
        2  | John | Black

    If I wanted to limit the number of records returned, how could I do something like this?

        Where ((color='blue') AND (color<>'red' OR color<>'green'))

    So using the above example I would then get back:

        ID | Name | Color
        1  | Mary | Blue
        2  | John | Blue
        2  | John | Black

    I connect to Visual FoxPro tables via ADODB. Thanks!

    Read the article

  • How to detect if certain characters are at the end of an NSString?

    - by Sheehan Alam
    Let's assume I can have the following strings: "hey @john..." "@john, hello" "@john(hello)" I am tokenizing the string to get every word separated by a space: [myString componentsSeparatedByString:@" "]; My array of tokens now contains: @john... @john, @john(hello) For these cases, how can I make sure only @john is tokenized, while retaining the trailing characters: ... , (hello)? Note: I would like to be able to handle all cases of characters at the end of a string. The above are just three examples.
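    The question is about NSString, but the underlying idea (peel the @mention off the front of a token and keep whatever trails it) can be sketched language-agnostically; here it is in C# with a simple regex, and the pattern is an illustrative assumption:

        using System;
        using System.Text.RegularExpressions;

        class MentionSplitter
        {
            static void Main()
            {
                string[] tokens = { "@john...", "@john,", "@john(hello)" };
                foreach (string token in tokens)
                {
                    // "@" followed by word characters is treated as the mention;
                    // everything after the match is kept as the trailing characters.
                    Match m = Regex.Match(token, @"^@\w+");
                    string mention = m.Success ? m.Value : token;
                    string trailing = m.Success ? token.Substring(m.Length) : "";
                    Console.WriteLine(mention + " | " + trailing);
                }
            }
        }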

    Read the article

  • URL with question mark considered as a new HTTP request?

    - by Navin Leon
    I am optimizing my web page by implementing caching, so when I want the browser not to take data from the cache, I append a dynamic number as a query value, e.g. google.com?val=823746. But sometimes, when I do want to bring data from the cache for the URL below, the browser makes a new HTTP request to the server instead of taking the data from the cache. Is that because of the question mark in the URL? e.g. http://google.com? Please provide a reference document link. Thanks in advance. Regards, Navin
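    For illustration, a minimal sketch of the cache-busting idea described above (appending a changing query value so the browser treats each request as a new URL); the helper name is hypothetical:

        using System;

        static class CacheBusting
        {
            // Appends a unique query value so each generated URL differs from any cached one.
            static string WithCacheBuster(string url)
            {
                string separator = url.Contains("?") ? "&" : "?";
                return url + separator + "val=" + DateTime.UtcNow.Ticks;
            }

            static void Main()
            {
                Console.WriteLine(WithCacheBuster("http://google.com"));
                // e.g. http://google.com?val=638412345678901234
            }
        }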

    Read the article

  • JavaScript function binding (this keyword) is lost after assignment

    - by Ding
    This is one of the most mysterious features in JavaScript: after assigning an object method to another variable, the binding (the this keyword) is lost.

        var john = {
            name: 'John',
            greet: function(person) {
                alert("Hi " + person + ", my name is " + this.name);
            }
        };

        john.greet("Mark"); // Hi Mark, my name is John

        var fx = john.greet;
        fx("Mark");         // Hi Mark, my name is

    My questions are: 1) What is happening behind the assignment var fx = john.greet;? Is this copy by value or copy by reference? fx and john.greet point to two different functions, right? 2) Since fx is a global method, the scope chain contains only the global object. What is the value of the this property in the Variable object?

    Read the article

  • Is it valid to have more than one question mark in a URL?

    - by Bungle
    I came across the following URL today: http://www.sfgate.com/cgi-bin/blogs/inmarin/detail??blogid=122&entry_id=64497 Notice the doubled question mark at the beginning of the query string: ??blogid=122&entry_id=64497 My browser didn't seem to have any trouble with it, and running a quick bookmarklet: javascript:alert(document.location.search); just gave me the query string shown above. Is this a valid URL? The reason I'm being so pedantic (assuming that I am) is because I need to parse URLs like this for query parameters, and supporting doubled question marks would require some changes to my code. Obviously if they're in the wild, I'll need to support them; I'm mainly curious if it's my fault for not adhering to URL standards exactly, or if it's in fact a non-standard URL.
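    As an aside, a hedged sketch of the kind of lenient parsing the question alludes to: strip any run of leading question marks from the query portion before splitting it into key/value pairs. The helper is illustrative, not taken from the post.

        using System;
        using System.Collections.Generic;

        static class LenientQueryParser
        {
            static Dictionary<string, string> Parse(string url)
            {
                var result = new Dictionary<string, string>();
                int q = url.IndexOf('?');
                if (q < 0)
                    return result;

                // Tolerate one or more '?' characters at the start of the query string.
                string query = url.Substring(q).TrimStart('?');
                foreach (string pair in query.Split('&'))
                {
                    string[] parts = pair.Split(new[] { '=' }, 2);
                    if (parts.Length == 2)
                        result[Uri.UnescapeDataString(parts[0])] = Uri.UnescapeDataString(parts[1]);
                }
                return result;
            }

            static void Main()
            {
                var args = Parse("http://www.sfgate.com/cgi-bin/blogs/inmarin/detail??blogid=122&entry_id=64497");
                Console.WriteLine(args["blogid"]);   // 122
                Console.WriteLine(args["entry_id"]); // 64497
            }
        }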

    Read the article

  • How to configure grails and shiro to mark cookies secure?

    - by j4y
    I'm using Grails 2.2.4 with the Shiro plugin (v1.1.4) and would like to mark the cookies as secure so the session information won't be sent over http. This is the attribute I want to set:

        securityManager.sessionManager.sessionIdCookie.secure = true

    The Shiro source says to use the Grails bean property override mechanism, which is grails-app/conf/spring/resources.groovy. How can I override just the one setting?

        // If the legacy 'security.shiro.filter.config' option is set,
        // use our custom INI-based filter...
        if (application.config.security.shiro.filter.config) {
            log.warn "security.shiro.filter.config option is deprecated. Use Grails' bean property override mechanism instead."
            'filter-class'('org.apache.shiro.grails.LegacyShiroFilter')
            'init-param' {
                'param-name'('securityManagerBeanName')
                'param-value'('shiroSecurityManager')
            }

    Read the article

  • How to mark a method as "ignore all handled exception" + "step through"? Even when user has selected

    - by Wolf5
    I want to mark a method as "debug step through" even if an exception is thrown (and caught) within the function. This is because 99% of the time I know this function will throw an exception (Assembly.GetTypes), and since this function is in a library I wish to hide this normal exception. (Why did MS not add an exceptionless GetTypes() call?) I have tried this, but it still breaks into the code when debugging:

        [DebuggerStepThrough]
        [DebuggerStepperBoundary]
        private Type[] GetTypesIgnoreMissing(Assembly ass)
        {
            Type[] typs;
            try
            {
                typs = ass.GetTypes();
            }
            catch (ReflectionTypeLoadException ex)
            {
                typs = ex.Types;
            }
            var newlist = new List<Type>();
            foreach (var type in typs)
            {
                if (type != null)
                    newlist.Add(type);
            }
            return newlist.ToArray();
        }

    Does anyone know of a way to make this method 100% step-through even if ass.GetTypes() throws an exception in debug mode? It has to step through even when "Break on Thrown exceptions" is on. (I do not need to be told that I can explicitly choose to ignore that exact type of exception in the IDE.)
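    One avenue sometimes suggested, offered here only as a hedged sketch (whether the debugger actually skips the first-chance break depends on the Visual Studio version and the "Enable Just My Code" setting): isolate the throwing call in a small helper marked [DebuggerHidden]/[DebuggerNonUserCode], so the debugger treats it as non-user code. The helper name is illustrative, not from the question.

        // Hedged sketch -- needs using System; System.Diagnostics; System.Reflection.
        // With "Just My Code" enabled, the debugger may treat this method as non-user code and
        // not break on the exception that is thrown and caught entirely inside it. Not guaranteed.
        [DebuggerHidden]
        [DebuggerNonUserCode]
        private static Type[] GetTypesSafe(Assembly ass)
        {
            try
            {
                return ass.GetTypes();
            }
            catch (ReflectionTypeLoadException ex)
            {
                // ex.Types contains null entries for the types that failed to load.
                return Array.FindAll(ex.Types, t => t != null);
            }
        }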

    Read the article

  • Redirecting to a URL that has a question mark in it?

    - by dkmojo
    I have a somewhat strange problem. A client has moved their site to Wordpress. They use a service for link exchanges that has a Wordpress plugin. The issue is that the new links pages use a query string to display the correct content, and I cannot figure out how to redirect the old URLs correctly. Old URLs look like this:

        domain.com/link/category-name.html

    The plugin makes them look like this in WP:

        domain.com/links/?page=category-name.html

    How in the world can I get the redirect to work properly? Here's what I have tried:

        Redirect 301 /link/actors.html http://www.artisticimages.biz/links/?page=actors.html
        Redirect 301 /link/actors.html http://www.artisticimages.biz/links/%3Fpage=actors.html
        Redirect 301 /link/actors.html http://www.artisticimages.biz/links/\?page=actors.html

    But none of those have worked. Any help is greatly appreciated!

    Read the article

  • Are there cloud network drives that let users lock files or mark them as "in use"?

    - by Brandon Craig Rhodes
    Having spent several hours reading about the features and limitations of services like DropBox and Jungle Disk and the hundreds of competitors they seem to have (as though everyone with an AWS account these days goes ahead and writes a file sharing application just for fun), I have yet to find one that would let a team of people at a small business collaborate without stepping all over each other's toes. At a small business there are often many small documents per project — estimates, contracts, project plans, budgets — and team members frequently have to open and edit them, with all sorts of problems happening if two people edit a file at once. Even if a sharing service is smart enough to keep both versions of the file created, most small-business software (like word processors, spreadsheets, estimating software, or billing systems) has no way to compare — much less to merge! — the changes in two rival versions of a file that two people edited at the same time without each other's knowledge. So, my question: are there cloud-based file sharing solutions that not only provide a virtual network drive that people can access, but that also let users lock files — even if it's not a real lock but just a flag or indicator — that could possibly prevent remote workers from both editing the same file at once? Having one person wait for another person to finish editing is a very, very small inconvenience compared to the hour or more that it can take to compare two estimates by hand until you find and resolve the rival changes. Given this fact, I am surprised that almost none of the popular file sharing solutions seem to recognize this problem and provide some solution! Does anyone know of a service that does?

    Read the article

  • Thunderbird mail client marks inbox mail from unread to read?

    - by kumar kasimala
    Hi, I am using the Thunderbird mail client for Gmail accounts. It works fine and I enjoy the functionality. The problem I have with this mail client is that when I open the inbox, it has a list of unread mails, and once I view one, it becomes a read mail. That is not happening on the Gmail servers: if I log in to Gmail, I still find as unread the mails which I already read in Thunderbird. Please help me solve this problem. Is there any option, setting, or add-on which changes the status of a mail on the server when I read it? Thanks & Regards, kumar kasimala. Hyderabad, India.

    Read the article

  • SSH and Active Directory authentication

    - by disserman
    Is it possible to set up a Linux (and Solaris) SSH server to authenticate users in this way: user john is a member of the group Project1_Developers in Active Directory. On server A (running Linux, with access to the AD via e.g. LDAP) we have something in the SSH server's LDAP (or other module) authentication config like root=Project1_Developers,Company_NIX_Admins. When john connects to server A using his username "john" and domain password, the server checks john's group in the domain, and if the group is "Project1_Developers" or "Company_NIX_Admins", it logs him in locally as root with root privileges. The idea is also to have only "root" and the system users on the server, without adding user "john" to every server where John can log in. Any help or ideas on how to do the above, or something similar? AD is preferred, but any other similar solution is also possible. P.S. Please don't open a discussion about whether it is secure to log in via SSH as root or not, thanks :)

    Read the article
