Search Results

Search found 14771 results on 591 pages for 'security policy'.


  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThis is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide as this next article is common to both. ContentsWhy this is the most important part... Understanding the classification and standard rights model Identifying business use cases Creating an effective IRM classification modelOne single classification across the entire businessA context for each and every possible granular use caseWhat makes a good context? Deciding on the use of roles in the context Reviewing the features and security for context roles Summary Why this is the most important part...Now the real work begins, installing and getting an IRM system running is as simple as following instructions. However to actually have an IRM technology easily protecting your most sensitive information without interfering with your users existing daily work flows and be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports. No matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.Securing the content at the earliest point possible and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results a more secure deployment. K.I.S.S. (Keep It Simple Stupid) Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience. Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them the value of the technology to both your business and them. Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. 
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in difficult to use and manage, and ultimately insecure solutions. The advice and information that follows was born with Martin and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content and avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between.A group of related documents The people that use the documents The roles that these people perform The rights that these people need to perform their role The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, financial documents. For this and every subsequent use case you must understand how users create and work with documents, to who they are distributed and how the recipients should interact with them. 
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article) and then after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and if the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further. Creating an effective IRM classification model Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates. First task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity, the question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum... One single classification across the entire business Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in thinking about considering this. Document security classification decisions are simple. You only have one context to chose from! User provisioning is simple, just make sure everyone has a role in the only context in the business. Administration is very low, if you assign rights to groups from the business user repository you probably never have to touch IRM administration again. There are however some obvious downsides to this model.All users in have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is. You cannot delegate control of different documents to different parts of the business, this may not satisfy your regulatory requirements for the separation and delegation of duties. Changing a users role affects every single document ever secured. Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value. 
    Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, and imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

    A context for each and every possible granular use case

    Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity... restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group.

    Q1FY2010 Restricted Internal - Engineering Group 1 - Research
    Q1FY2010 Restricted Internal - Engineering Group 1 - Design
    Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
    Q1FY2010 Restricted External - Engineering Group 1 - Research
    Q1FY2010 Restricted External - Engineering Group 1 - Design
    Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
    Q1FY2010 Confidential Internal - Engineering Group 1 - Research
    Q1FY2010 Confidential Internal - Engineering Group 1 - Design
    Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
    Q1FY2010 Confidential External - Engineering Group 1 - Research
    Q1FY2010 Confidential External - Engineering Group 1 - Design
    Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
    Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
    Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
    Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
    Q1FY2010 Top Secret External - Engineering Group 1 - Research
    Q1FY2010 Top Secret External - Engineering Group 1 - Design
    Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

    Now multiply the above 18 contexts by 6 for each engineering group. You are then creating/reviewing another 18 every 3 months, so after a year you've got 72 contexts. What would be the advantages of such a complex classification model?

    - You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis.
    - Your business may have very complex rights requirements, and mapping this directly to IRM may be an obvious exercise.

    The disadvantages of such a classification model are significant...

    - Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
    - From an end user's perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
    They may be able to print content from one but not another, and be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
    - Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
    - Hard to understand who can do what with what. Imagine being the VP of engineering and, as part of an internal security audit, you are asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

    Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts do not give fine enough granularity.

    What makes a good context?

    Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:

    - What is the topic?
    - Who are the legitimate contributors on this topic?
    - Who are the authorized readership?

    If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors, therefore it is better for a document to be sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try and tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

    Deciding on the use of roles in the context

    Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
    For each context, identify:

    Administrative roles
    - Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
    - Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
    - Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

    Document related roles
    - Contributors: the people who create and edit documents in this context.
    - Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
    - Readers: the people who read documents from this context.

    Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM.

    Reviewing the features and security for context roles

    At this point we have decided on a classification of information, and understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First, think about why you are protecting the documents in the first place. It is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community.
    These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

    The big print issue...

    Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights.

    - Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
    - Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk.
    - Print activity is audited, therefore you can monitor and react to users abusing print rights.

    Summary

    In summary it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires that all information in the group is secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introduce the complexity to deliver them at the right level. In another article I'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • SQL SERVER – Fix – Agent Starting Error 15281 – SQL Server blocked access to procedure ‘dbo.sp_get_sqlagent_properties’ of component ‘Agent XPs’ because this component is turned off as part of the security configuration for this server

    - by Pinal Dave
    SQL Server Agent failing to start because of error 15281 is a very common problem. When you try to restart SQL Server Agent, it sometimes gives the following error:

    SQL Server blocked access to procedure 'dbo.sp_get_sqlagent_properties' of component 'Agent XPs' because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of 'Agent XPs' by using sp_configure. For more information about enabling 'Agent XPs', search for 'Agent XPs' in SQL Server Books Online. (Microsoft SQL Server, Error: 15281)

    To resolve this error, the following script has to be executed on the server:

        sp_configure 'show advanced options', 1;
        GO
        RECONFIGURE;
        GO
        sp_configure 'Agent XPs', 1;
        GO
        RECONFIGURE
        GO

    When you run the above script, it produces output very similar to the following on the screen. Now, if you try to restart SQL Server Agent, it will just work fine. That's it! Sometimes there is a simpler solution to a complicated error. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SQL Server Agent
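
    If you want to confirm the setting actually took effect before restarting the Agent, a quick check against sys.configurations works (a sketch; the view is available on SQL Server 2005 and later):

        SELECT name, value, value_in_use
        FROM sys.configurations
        WHERE name IN ('Agent XPs', 'show advanced options');
        -- value_in_use = 1 means the option is active for the running instance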

    Read the article

  • PHP, MySQL: Security concern; Page loads in a weird way

    - by Devner
    Hi all, I am testing the security of my website. I am using the following URL to load a PHP page in my website, on localhost: http://localhost/domain/user/index.php/apple.php When I do this, the page is not loading normally; instead, the images and icons used in the page simply vanish/disappear from the page. Only text appears. Also, any link I click on this page brings me to this same page again without navigating to the required page. So if I have hyperlinks to other pages, such as "SEARCH", which points to search.php, instead of navigating to the search.php page it refreshes the index.php page and just appends the name of the destination page to the end of the URL. For example, say I used the link above. It then loads the index.php page, minus the images. When I click on the "Search" link to navigate to the search page, I see the following in the URL: http://localhost/domain/user/index.php/search.php I have a redirection configured to a 404 error page in my .htaccess file, but the page does not redirect to the 404 error page. Notice the search.php towards the end of the URL above. Any other link that I click reloads the index.php page and just appends the destination page name to the end of the URL like I have shown above. I was expecting to see a 404 error but that does not happen. The URL should not even be able to load the page because I do NOT have an "index.php" folder in my website. What can I do to solve this? All help is appreciated. Thank you.
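
    This behaviour is consistent with PATH_INFO handling rather than a compromise: the web server serves index.php and hands it the trailing /apple.php or /search.php as extra path data, so the browser treats index.php/ as a directory and every relative image and link resolves against that phantom path. A minimal sketch of one way to refuse such URLs, assuming Apache and that your PHP handler honours the directive:

        # .htaccess sketch (assumption: Apache; directive support can depend on the PHP handler)
        AcceptPathInfo Off

    Alternatively, using absolute URLs or a <base href> tag in your pages keeps relative assets and links resolving correctly even when extra path data is present.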

    Read the article

  • .NET security mechanism to restrict access between two Types in the same project?

    - by jdk
    Question: Is there a mechanism in the .NET Framework to hide one custom Type from another without using separate projects/assemblies? I'm using C# with ASP.NET in a Website project. Note: I'm not talking about access modifiers to hide members of a Type from another type - I mean to hide the Type itself. Background: I'm working in an ASP.NET Website project and the team has decided not to use separate project assemblies for different software layers. Therefore I'm looking for a way to have, for example, a DataAccess/ folder of which I disallow its classes to access other Types in the same ASP.NET Website project. In other words I want to fake the layers and have some kind of security mechanism around each layer to prevent it from accessing another. Obviously there's not a way to enforce this restriction using language-specific OO keywords so I am looking for something else, for example: maybe a permission framework or code access mechanism, maybe something that uses meta data like Attributes. Even something that restricts one namespace from accessing another. I'm unsure the final form it might take. If this were C++ I'd likely be using friend to make as solution, which doesn't translate to C# internal in this case although they're often compared. I don't really care whether the solution actually hides Types from each other or just makes them inaccessible; however I don't want to lock down one Type from all others, another reason access modifiers are not a solution. A runtime or design time answer will suffice. Looking for something easy to implement otherwise it's not worth the effort ...
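
    There is no built-in way to hide one type from another inside the same assembly, but one lightweight convention-based option is a runtime guard that inspects the call stack and rejects callers from namespaces you have declared off-limits. The sketch below is purely illustrative; the LayerGuard class and the namespace names are hypothetical, and a stack-walk check is a development-time aid rather than a real security boundary:

        using System;
        using System.Diagnostics;

        // Hypothetical helper: call it at the top of members in a layer you want to fence off.
        public static class LayerGuard
        {
            public static void DenyCallersFrom(string forbiddenNamespace)
            {
                StackFrame[] frames = new StackTrace().GetFrames();
                if (frames == null) return;
                foreach (StackFrame frame in frames)
                {
                    var method = frame.GetMethod();
                    if (method == null || method.DeclaringType == null) continue;
                    string ns = method.DeclaringType.Namespace;
                    if (ns != null && ns.StartsWith(forbiddenNamespace, StringComparison.Ordinal))
                    {
                        throw new InvalidOperationException(
                            "Types in '" + forbiddenNamespace + "' may not call into this layer.");
                    }
                }
            }
        }

        // Example usage (namespace names are made up): a type that refuses to be
        // called from classes living under the data-access namespace.
        public class OrderService
        {
            public void PlaceOrder()
            {
                LayerGuard.DenyCallersFrom("MySite.DataAccess");
                // ... business logic ...
            }
        }

    It fails at runtime rather than compile time, but combined with a unit test that exercises the layers it gives you "fake" layering inside a single Website project without separate assemblies.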

    Read the article

  • Security benefits from a second opinion, are there flaws in my plan to hash & salt user passwords vi

    - by Tchalvak
    Here is my plan, and goals:

    Overall goals: Security with a certain amount of simplicity & database-to-database transferability, 'cause I'm no expert and could mess it up and I don't want to have to ask a lot of users to reset their passwords. Easy to wipe the passwords for publishing a "wiped" database of test data (e.g. I'd like to be able to use a PostgreSQL statement to simply reset all passwords to something simple so that testers can use that testing data for themselves).

    Plan: Hashing the passwords. Account creation records the original email that an account is created with, forever. A global salt is used, e.g. "90fb16b6901dfceb73781ba4d8585f0503ac9391". An account-specific salt, the original email the account was created with, is used, e.g. "[email protected]". The user's password is used, e.g. "password123" (I'll be warning against weak passwords in the signup form). The combination of the global salt, account-specific salt, and password is hashed via some hashing method in PostgreSQL (I haven't been able to find documentation for hashing functions in PostgreSQL, but being able to use SHA-2 or something like that would be nice if I could find it). The hash gets stored in the database.

    Recovering an account: To change their password, they have to go through a standard password reset (and that reset email gets sent to the original email as well as the most recent account email that they have set).

    Flaws? Are there any flaws with this that I need to address? And are there best practices for doing hashing fully within PostgreSQL?
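
    For the "hashing fully within PostgreSQL" part, the pgcrypto contrib module is the usual route: its crypt() and gen_salt() functions give you a salted, iterated hash (bcrypt) without managing salts by hand. A sketch, assuming pgcrypto is installed and a users table with email and pw_hash columns (both hypothetical names):

        -- enable the extension (PostgreSQL 9.1+; on older versions, run the pgcrypto contrib script instead)
        CREATE EXTENSION IF NOT EXISTS pgcrypto;

        -- store a bcrypt hash; gen_salt('bf', 8) creates a random per-password salt with work factor 8,
        -- which makes the manual global/per-account salts largely unnecessary
        UPDATE users SET pw_hash = crypt('password123', gen_salt('bf', 8)) WHERE email = '[email protected]';

        -- check a login attempt: crypt() re-uses the salt embedded in the stored hash
        SELECT pw_hash = crypt('password123', pw_hash) AS password_ok FROM users WHERE email = '[email protected]';

    Resetting every password for a published test dump then collapses to a single statement, e.g. UPDATE users SET pw_hash = crypt('test', gen_salt('bf', 8));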

    Read the article

  • User to be validated against nested security groups in Windows.

    - by user412272
    Hi, This is my first post here and after much looking around I have come here with my question. Will really appreciate a fast response. I am faced with a problem to validate the credentials of the currently logged on user against a group in Windows. The user's membership of a group can be through other groups as well, i.e. nested membership. E.g. user U is a part of group G1. Group G1 is a part of another group G2. The requirement is that when the user is validated against group G2, the validation should succeed. The user can be a local or AD user but the group will always be a local group (or domain local group if created directly on a DC). I have tried using the WindowsPrincipal.IsInRole() method, but it seems to be checking only for direct membership of a group. I also tried UserPrincipal.GetAuthorizationGroups() for the current user, but it also doesn't seem to be doing a recursive search. I am posting a snippet of the working code below, but this code is taking much more than acceptable time.

        bool CheckUserPermissions(string groupName)
        {
            WindowsIdentity currentUserIdentity = System.Security.Principal.WindowsIdentity.GetCurrent();
            bool found = false;
            PrincipalContext context = new PrincipalContext(ContextType.Machine);
            GroupPrincipal group = GroupPrincipal.FindByIdentity(context, IdentityType.Name, groupName);
            if (group != null)
            {
                // GetMembers(true) enumerates nested membership recursively, which is what makes this slow
                foreach (Principal p in group.GetMembers(true))
                {
                    if (p.Sid == currentUserIdentity.User)
                    {
                        found = true;
                        break;
                    }
                }
                group.Dispose();
            }
            return found;
        }

    Read the article

  • Task Scheduler Cannot Apply My Changes - Adding a User with Permissions

    - by Aaron
    I can log in to the server using a domain account without administrator privileges and create a task in the Task Scheduler. I am allowed to do an initial save of the task but unable to modify it with the same user account. When changes are complete, a message box prompts for the user password (same domain user I logged in with), then fails with the following message. Task Scheduler cannot apply your changes. The user account is unknown, the password is incorrect, or the account does not have permission to modify the task. When I check Log on as Batch Job Properties (found this from the Help documentation): This policy is accessible by opening the Control Panel, Administrative Tools, and then Local Security Policy. In the Local Security Policy window, click Local Policy, User Rights Assignment, and then Logon as batch job. Everything is grayed out, so I can't add a user. How can I add a user?
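
    When the machine is joined to a domain, the "Log on as a batch job" assignment is frequently owned by a domain GPO, which is why the Local Security Policy entry shows up grayed out. One way to see which policy is supplying the user-rights assignments, and to inspect the current values, is sketched below (commands apply to Server 2008/Windows 7; paths are examples):

        rem show the resultant policy, including which GPO wins for user-rights assignments
        gpresult /h C:\temp\rsop.html

        rem export the effective user-rights assignments for inspection
        secedit /export /cfg C:\temp\rights.inf /areas USER_RIGHTS

    If a domain GPO is the source, the account has to be added to SeBatchLogonRight in that GPO (or in a new GPO linked to the server's OU) rather than in Local Security Policy.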

    Read the article

  • configure a Cisco ASA to use MS-CHAP v2 for RADIUS authentication

    - by DrStalker
    Cisco ASA5505 8.2(2), Windows 2003 AD server. We want to configure our ASA (10.1.1.1) to authenticate remote VPN users through RADIUS on the Windows AD controller (10.1.1.200). We have the following entry on the ASA:

        aaa-server SYSCON-RADIUS protocol radius
        aaa-server SYSCON-RADIUS (inside) host 10.1.1.200
         key *****
         radius-common-pw *****

    When I test a login using the account COMPANY\username I see the user's credentials are correct in the security log, but I get the following in the Windows system logs:

        User COMPANY\myusername was denied access.
        Fully-Qualified-User-Name = company.com/CorpUsers/AU/My Name
        NAS-IP-Address = 10.1.1.1
        NAS-Identifier = <not present>
        Called-Station-Identifier = <not present>
        Calling-Station-Identifier = <not present>
        Client-Friendly-Name = ASA5510
        Client-IP-Address = 10.1.1.1
        NAS-Port-Type = Virtual
        NAS-Port = 7
        Proxy-Policy-Name = Use Windows authentication for all users
        Authentication-Provider = Windows
        Authentication-Server = <undetermined>
        Policy-Name = VPN Authentication
        Authentication-Type = PAP
        EAP-Type = <undetermined>
        Reason-Code = 66
        Reason = The user attempted to use an authentication method that is not enabled on the matching remote access policy.

    My assumption is that the ASA is using PAP authentication instead of MS-CHAP v2; the credentials are confirmed, the proper Remote Access Policy is being used, but this policy is set to only allow MS-CHAP v2. What do we need to do on the ASA to make it use MS-CHAP v2? In the ASDM GUI the "Microsoft CHAP v2 compatible" tickbox is enabled, but I don't know what this corresponds to in the config.
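
    For what it's worth, the ASA tends to fall back to PAP for RADIUS unless password management is enabled on the tunnel group; turning that on is the usual way to get MS-CHAPv2 requests sent to the Windows RADIUS server. A hedged sketch against a hypothetical tunnel-group name (verify against your own config and the 8.2 command reference):

        tunnel-group MY-VPN-GROUP general-attributes
         authentication-server-group SYSCON-RADIUS
         password-management
        !
        ! and confirm MS-CHAPv2 has not been disabled for the RADIUS host,
        ! i.e. "no mschapv2-capable" should not appear under this host entry
        aaa-server SYSCON-RADIUS (inside) host 10.1.1.200
         key *****

    The "Microsoft CHAP v2 compatible" tickbox in ASDM appears to correspond to the mschapv2-capable keyword on the aaa-server host, so having it ticked is necessary but not sufficient without password-management on the tunnel group.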

    Read the article

  • iptables (NAT/PAT) setup for SSH & Samba

    - by IanVaughan
    I need to access a Linux box via SSH & Samba that is hidden/connected behind another one. Setup :- A switch B C |----| |---| |----| |----| |eth0|----| |----|eth0| | | |----| |---| |eth1|----|eth1| |----| |----| Eg, SSH/Samba from A to C How does one go about this? I was thinking that it cannot be done via IP alone? Or can it? Could B say "hi on eth0, if your looking for 192.168.0.2, its here on eth1"? Is this NAT? This is a large private network, so what about if another PC has that IP?! More likely it would be PAT? A would say "hi 192.168.109.15:1234" B would say "hi on eth0, traffic for port 1234 goes on here eth1" How could that be done? And would the SSH/Samba demons see the correct packet header info and work?? IP info :- A - eth0 - 192.168.109.2 B - eth0 - B1 = 192.168.109.15 B2 = 172.24.40.130 - eth1 - 192.168.0.1 C - eth1 - 192.168.0.2 A, B & C are RHEL (RedHat) But Windows computers can be connected to the switch. I configured the 192.168.0.* IPs, they are changeable. Update after response from Eddie Few problems (and Machines' B IP is different!) From A :- ssh 172.24.40.130 works ok, (can get to B2) but ssh 172.24.40.130 -p 2022 -vv times out with :- OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 172.24.40.130 [172.24.40.130] port 2022. ...wait ages... debug1: connect to address 172.24.40.130 port 2022: Connection timed out ssh: connect to host 172.24.40.130 port 2022: Connection timed out From B2 :- $ service iptables status Table: filter Chain INPUT (policy ACCEPT) num target prot opt source destination Chain FORWARD (policy ACCEPT) num target prot opt source destination 1 ACCEPT tcp -- 0.0.0.0/0 192.168.0.2 tcp dpt:22 Chain OUTPUT (policy ACCEPT) num target prot opt source destination Table: nat Chain PREROUTING (policy ACCEPT) num target prot opt source destination 1 DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:2022 to:192.168.0.2:22 Chain POSTROUTING (policy ACCEPT) num target prot opt source destination Chain OUTPUT (policy ACCEPT) num target prot opt source destination And ssh from B2 to C works fine :- $ ssh 192.168.0.2 Route info :- $ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.0.0 * 255.255.255.0 U 0 0 0 eth1 172.24.40.0 * 255.255.255.0 U 0 0 0 eth0 169.254.0.0 * 255.255.0.0 U 0 0 0 eth1 default 172.24.40.1 0.0.0.0 UG 0 0 0 eth0 $ ip route 192.168.0.0/24 dev eth1 proto kernel scope link src 192.168.0.1 172.24.40.0/24 dev eth0 proto kernel scope link src 172.24.40.130 169.254.0.0/16 dev eth1 scope link default via 172.24.40.1 dev eth0 So I just dont know why the port forward doesnt work from A to B2?
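
    This is ordinary destination NAT (port forwarding) on B rather than anything SSH- or Samba-specific; the daemons on C never see the original destination address, so they work unchanged. A sketch of the pieces, two of which the update already has in place, so the usual remaining culprits are the forwarding sysctl on B and C's return route:

        # on B: packets are only routed between eth0 and eth1 if this is 1
        cat /proc/sys/net/ipv4/ip_forward
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # the NAT itself: rewrite "A -> B:2022" into "A -> C:22" (equivalent to the rules already shown)
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 2022 -j DNAT --to-destination 192.168.0.2:22
        iptables -A FORWARD -p tcp -d 192.168.0.2 --dport 22 -j ACCEPT

        # only needed if C's default gateway is not 192.168.0.1; it forces replies back through B
        iptables -t nat -A POSTROUTING -o eth1 -p tcp -d 192.168.0.2 --dport 22 -j MASQUERADE

        # Samba works the same way; forward TCP 445 (and 139 for older clients) to C as well
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 445 -j DNAT --to-destination 192.168.0.2:445

    Given that the DNAT and FORWARD rules are already present, checking ip_forward and C's default route is the most likely path to an answer for the A-to-B2 timeout.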

    Read the article

  • IP6tables blocks INPUT? can't connect with youtube API

    - by klaas
    I thought to have a simple ipv6 firewall, but it turned out to be hell. Somehow I really can't connect with any ipv6 from my machine unless I set INPUT Policy to ACCEPT. Below my current ip6tables ip6tables -L Chain INPUT (policy DROP) target prot opt source destination ACCEPT all anywhere anywhere state RELATED,ESTABLISHED ACCEPT ipv6-icmp anywhere anywhere ACCEPT tcp anywhere anywhere tcp dpt:http ACCEPT tcp anywhere anywhere tcp dpt:https Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination If I try to connect with any ipv6 adres it doesn't work? telnet gdata.youtube.com 80 Trying 2a00:1450:4013:c00::76... OR telnet gdata.youtube.com 443 Trying 2a00:1450:4013:c00::76... When I set: ip6tables -P INPUT ACCEPT It works.. but then.. well then everything is open? what is going on? Help?
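
    One hedged guess: with a default-DROP INPUT chain, outbound connections only work if the RELATED,ESTABLISHED rule actually matches, which on older kernels requires IPv6 connection tracking to be loaded; loopback traffic should also be accepted explicitly. A diagnostic sketch (the separate module name applies to older kernels; on recent ones conntrack is combined):

        # accept loopback, which is easy to forget with a DROP policy
        ip6tables -I INPUT 1 -i lo -j ACCEPT

        # make sure IPv6 connection tracking is present, otherwise ESTABLISHED never matches
        lsmod | grep conntrack
        modprobe nf_conntrack_ipv6

        # watch the counters while retrying the telnet to see which rule (or the policy) eats the packets
        ip6tables -L INPUT -n -v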

    Read the article

  • Port forwarding DD-WRT

    - by Pawel
    Hi, I'am runing locally service on port 81 (192.168.1.101) I would like to access server from outside MY.WAN.IP.ADDR:81. Everything is working fine on my local network, However can't access it from outside. Below iptables rules on the router. I am using dd-wrt and asus rt-n16 (everything is setup through standard port range forwarding in dd-wrt ) It might be something obvious, but I don't have any experience with routing. Any help will be really appreciated. Thanks. #iptables -t nat -vnL Chain PREROUTING (policy ACCEPT 1285 packets, 148K bytes) pkts bytes target prot opt in out source destination 3 252 DNAT icmp -- * * 0.0.0.0/0 MY.WAN.IP.ADDR to:192.168.1.1 5 300 DNAT tcp -- * * 0.0.0.0/0 MY.WAN.IP.ADDR tcp dpt:81 to:192.168.1.101 0 0 DNAT udp -- * * 0.0.0.0/0 MY.WAN.IP.ADDR udp dpt:81 to:192.168.1.101 298 39375 TRIGGER 0 -- * * 0.0.0.0/0 MY.WAN.IP.ADDR TRIGGER type:dnat match:0 relate:0 Chain POSTROUTING (policy ACCEPT 7 packets, 433 bytes) pkts bytes target prot opt in out source destination 747 91318 SNAT 0 -- * vlan2 0.0.0.0/0 0.0.0.0/0 to:MY.WAN.IP.ADDR 0 0 RETURN 0 -- * br0 0.0.0.0/0 0.0.0.0/0 PKTTYPE = broadcast Chain OUTPUT (policy ACCEPT 86 packets, 5673 bytes) pkts bytes target prot opt in out source destination # iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination DROP tcp -- anywhere anywhere tcp dpt:webcache DROP tcp -- anywhere anywhere tcp dpt:www DROP tcp -- anywhere anywhere tcp dpt:https DROP tcp -- anywhere anywhere tcp dpt:69 DROP tcp -- anywhere anywhere tcp dpt:ssh DROP tcp -- anywhere anywhere tcp dpt:ssh DROP tcp -- anywhere anywhere tcp dpt:telnet DROP tcp -- anywhere anywhere tcp dpt:telnet Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT 0 -- anywhere anywhere TCPMSS tcp -- anywhere anywhere tcp flags:SYN,RST/SYN TCPMSS clamp to PMTU lan2wan 0 -- anywhere anywhere ACCEPT 0 -- anywhere anywhere state RELATED,ESTABLISHED logaccept tcp -- anywhere pawel-ubuntu tcp dpt:81 logaccept udp -- anywhere pawel-ubuntu udp dpt:81 TRIGGER 0 -- anywhere anywhere TRIGGER type:in match:0 relate:0 trigger_out 0 -- anywhere anywhere logaccept 0 -- anywhere anywhere state NEW Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain advgrp_1 (0 references) target prot opt source destination Chain advgrp_10 (0 references) target prot opt source destination Chain advgrp_2 (0 references) target prot opt source destination Chain advgrp_3 (0 references) target prot opt source destination Chain advgrp_4 (0 references) target prot opt source destination Chain advgrp_5 (0 references) target prot opt source destination Chain advgrp_6 (0 references) target prot opt source destination Chain advgrp_7 (0 references) target prot opt source destination Chain advgrp_8 (0 references) target prot opt source destination Chain advgrp_9 (0 references) target prot opt source destination Chain grp_1 (0 references) target prot opt source destination Chain grp_10 (0 references) target prot opt source destination Chain grp_2 (0 references) target prot opt source destination Chain grp_3 (0 references) target prot opt source destination Chain grp_4 (0 references) target prot opt source destination Chain grp_5 (0 references) target prot opt source destination Chain grp_6 (0 references) target prot opt source destination Chain grp_7 (0 references) target prot opt source destination Chain grp_8 (0 references) target prot opt source destination Chain grp_9 (0 references) target prot opt source destination Chain lan2wan (1 references) target prot opt 
source destination Chain logaccept (3 references) target prot opt source destination ACCEPT 0 -- anywhere anywhere Chain logdrop (0 references) target prot opt source destination DROP 0 -- anywhere anywhere Chain logreject (0 references) target prot opt source destination REJECT tcp -- anywhere anywhere tcp reject-with tcp-reset Chain trigger_out (1 references) target prot opt source destination #iptables -vnL FORWARD Chain FORWARD (policy ACCEPT 130 packets, 5327 bytes) pkts bytes target prot opt in out source destination 15 900 ACCEPT 0 -- br0 br0 0.0.0.0/0 0.0.0.0/0 390 20708 TCPMSS tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:0x06/0x02 TCPMSS clamp to PMTU 182K 130M lan2wan 0 -- * * 0.0.0.0/0 0.0.0.0/0 179K 129M ACCEPT 0 -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 logaccept tcp -- * * 0.0.0.0/0 192.168.1.101 tcp dpt:81 0 0 logaccept udp -- * * 0.0.0.0/0 192.168.1.101 udp dpt:81 0 0 TRIGGER 0 -- vlan2 br0 0.0.0.0/0 0.0.0.0/0 TRIGGER type:in match:0 relate:0 2612 768K trigger_out 0 -- br0 * 0.0.0.0/0 0.0.0.0/0 2482 762K logaccept 0 -- br0 * 0.0.0.0/0 0.0.0.0/0 state NEW

    Read the article

  • Windows Server 2008 32 bit & windows 7 professional SP1

    - by Harry
    I'm testing my new Windows Server 2008 32 bit edition (2 servers) as servers and Windows 7 Professional 32 bit as a client. Let's say one is a primary domain controller (PDC) and the other a backup domain controller (BDC), like in the old days, to keep things simple. All setup was done on the PDC and just replicated to the BDC. I didn't configure anything special, just installed the servers with AD, DNS and DHCP, that's all. Then I used my Windows 7 Pro 32 bit machine to join the domain. It worked. After that I tried to change the password of a user (not administrator), but it always failed, saying it didn't meet the password complexity settings, while in fact there is no such setting at all, whether in the account policy, the Default Domain Policy or even local policy. I tried explicitly disabling password complexity in the Default Domain Policy instead of leaving it not configured, then tested again, but it still failed. I browsed around and found a suggestion to set the minimum and maximum password age to 0, but that also failed. I tried restarting the server and the client and then changing the password; it still failed with the same error about not meeting the password complexity settings. I tried looking in rsop.msc but didn't find anything. In fact, when I look at another system with Windows Server 2003 and Windows XP, rsop.msc shows settings under Computer Configuration > Windows Settings > Security Settings > Account Policies > Password Policy. I also have a Windows 7 Pro 32 bit machine in a Windows Server 2003 32 bit environment; I'm unable to find the same setting using rsop there either, but that Windows 7 machine works fine. Can anyone suggest what the problem is and what to do so I can change my Windows 7 Pro laptop password in a Windows Server 2008 environment? Another thing: is it the right assumption that we can see all the policy settings in Windows 7, whether it's in a Windows Server 2003 or 2008 environment? Thanks.
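
    A quick way to see the password policy the client is actually enforcing, and to push a fresh copy of the Default Domain Policy after editing it, is sketched below; in a 2008-level domain it's also worth checking for Fine-Grained Password Policies (Password Settings Objects), which apply per user or group and do not show up in gpedit or rsop:

        rem run on the Windows 7 client or a DC: shows the effective minimum length, history and max/min age
        net accounts /domain

        rem after changing the policy (edit the Default Domain Policy on the PDC emulator), refresh it
        gpupdate /force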

    Read the article

  • In Windows 7 Home Premium, is it possible to grant a user account the "log on as a service" right and if so, how?

    - by Ryan Johnson
    The title says it all. I need to have the ability for a local user account to log on as a service on a computer running Windows 7 Home Premium. In Windows 7 Ultimate, this is accomplished by going to Control Panel - Administrative Tools - Local Security Policy and adding the user to the "Log on as a service" policy. In Home Premium, there is no Local Security Policy in the Control Panel. Is there another way to add the user to that policy (i.e. a registry setting), or is my only recourse to upgrade the computer to Windows 7 Professional? Thanks in advance, Ryan
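
    Home Premium still has the underlying user-rights database, just not the MMC snap-in, so one commonly used workaround is the ntrights.exe tool from the Windows Server 2003 Resource Kit, which edits the assignment directly. A sketch (run from an elevated prompt; the account name is an example):

        ntrights +r SeServiceLogonRight -u MyServiceAccount

    If installing a resource-kit tool isn't acceptable, the remaining options are generally a third-party policy editor or the upgrade to Professional/Ultimate, since the stock Home Premium UI has no way to grant the right.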

    Read the article

  • Remote tuning of JBoss using VisualVM

    - by sagarzond
    Hi, I am using VisualVM for tuning JBoss remotely. I followed the steps below but am unable to get JVM information in VisualVM.

    Start the jstatd server on the remote machine where JBoss is running, using the command:

        jstatd -p 1234 -J-Djava.security.policy=tools.policy

    Here the tools.policy file is placed in the $JAVA_HOME/bin folder. The content of tools.policy is:

        grant codebase "file:${java.home}/../lib/tools.jar" {
            permission java.security.AllPermission;
        };

    Then I start a VisualVM remote connection on port 1234 using jstat, but I am unable to get information about JBoss. Please help.
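
    jstatd only feeds the basic sampled-monitoring side of VisualVM; detailed CPU/heap data usually comes over JMX, and the RMI traffic also has to be reachable from the VisualVM machine. A hedged sketch of the two things most often missing, with example port numbers (the properties would go into JBoss's JAVA_OPTS, e.g. in run.conf):

        # expose a JMX endpoint for VisualVM (append to JAVA_OPTS; disable auth/ssl only on a trusted network)
        -Dcom.sun.management.jmxremote.port=9010
        -Dcom.sun.management.jmxremote.authenticate=false
        -Dcom.sun.management.jmxremote.ssl=false
        -Djava.rmi.server.hostname=<address of the JBoss host as seen from the VisualVM machine>

        # on the JBoss host, confirm the jstatd/JMX ports are not firewalled
        iptables -L -n | grep -E '1234|9010'

    In VisualVM, add the remote host and then an explicit JMX connection to host:9010; the jstatd entry alone often shows the process but little of its internals.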

    Read the article

  • LSI MegaRAID LINUX got Optimal after degradation but strange POST message

    - by kesrut
    Linux server box with LSI MegaRAID controller got degraded. But after some time RAID status changed to Optimal. Adapter 0 -- Virtual Drive Information: Virtual Drive: 0 (Target Id: 0) Name : RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0 Size : 2.727 TB Mirror Data : 2.727 TB State : Optimal Strip Size : 256 KB Number Of Drives per span:2 Span Depth : 3 Default Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU Current Cache Policy: WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU Default Access Policy: Read/Write Current Access Policy: Read/Write Disk Cache Policy : Disk's Default Encryption Type : None Is VD Cached: No But now I'm getting RAID BIOS POST message: Your battery is either charging, bad or missing, and you have VDs configured for write-back mode. Because the battery is not currently usable, these VDs willl actually run in write-through mode until the battery is fully charged or replaced if it is bad or missing. (Image: http://cl.ly/image/1h1O093b1i2d) So may it be battery issue caused problem ? I get information about battery: BatteryType: iBBU Voltage: 4001 mV Current: 0 mA Temperature: 22 C Battery State : Operational BBU Firmware Status: Charging Status : None Voltage : OK Temperature : OK Learn Cycle Requested : No Learn Cycle Active : No Learn Cycle Status : OK Learn Cycle Timeout : No I2c Errors Detected : No Battery Pack Missing : No Battery Replacement required : No Remaining Capacity Low : No Periodic Learn Required : No Transparent Learn : No No space to cache offload : No Pack is about to fail & should be replaced : No Cache Offload premium feature required : No Module microcode update required : No Where can be problem ? I'm disabled alarms, but get them if enabled. But don't know how find root of problem.
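
    Since the controller has dropped from WriteBack to WriteThrough because of the BBU state, the battery is the first thing to interrogate. With the MegaCli utility (binary name varies: MegaCli, MegaCli64 or megacli depending on the package), the following checks are the usual starting point; a relearn cycle or a battery replacement is the typical outcome:

        # full battery state, charge level and capacity information
        MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL
        MegaCli64 -AdpBbuCmd -GetBbuCapacityInfo -aALL

        # kick off a manual learn cycle and watch whether the charge recovers
        MegaCli64 -AdpBbuCmd -BbuLearn -aALL

        # the controller event log usually records why the array degraded and then rebuilt in the first place
        MegaCli64 -AdpEventLog -GetEvents -f events.log -aALL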

    Read the article

  • Is there a security risk for allowing people to set their DNS so their own subdomains can be routed to my server?

    - by DantheMan
    Let's say that I have a web application, built in Django and deployed with Nginx. Is it a good idea to offer a service that allows customers to request that a subdomain be pointed at it? I figured this: if I don't allow this, then some companies won't want to access the service from http://mydjangoappmadeupname.com/bigcorporation/ They would rather access it through http://service.bigcorporation.com That would effectively mask that they are using an outside resource. Is there a significant risk that I am overlooking? Also, do you think it would be easier to just set things up in Django to handle it, allowing Nginx to accept all domains and then pushing them to Django, which would filter out whether they are allowed or not, or would it be better to just update my Nginx config each time a client wanted this changed?
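
    There is no inherent security problem with a customer pointing a CNAME at your server; the things to watch are Host-header handling (only route hostnames you actually recognize) and, eventually, TLS certificates for those custom domains. A sketch of the catch-all approach in Nginx, passing the Host header through so Django can whitelist it (server names and the upstream socket are made up):

        # default/catch-all server: any hostname a customer points at us lands here
        server {
            listen 80 default_server;
            server_name _;

            location / {
                proxy_set_header Host $host;           # let Django see which custom domain was used
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass http://127.0.0.1:8000;      # hypothetical upstream running the Django app
            }
        }

    Handling the allow/deny check in Django (e.g. middleware that looks the hostname up in your customer table) avoids editing the Nginx config each time a customer is added, which is usually the lower-maintenance choice.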

    Read the article

  • How to unblock outgoing HTTP and HTTPS traffic in iptables?

    - by EApubs
    With the following iptables rules, I was unable to do an apt update or ping a website. What's wrong with the rules? How do I fix it? What is the exact rule to fix it?

        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination
        ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:325
        DROP       all  --  anywhere             anywhere

        Chain FORWARD (policy DROP)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
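
    The outbound packets leave fine (OUTPUT is ACCEPT), but every reply coming back hits the INPUT chain, matches neither the dpt:325 rule nor anything else, and falls into the final DROP, so DNS answers, HTTP/HTTPS responses and ping replies all die there. The usual fix is a stateful rule ahead of the DROP; a sketch:

        # allow replies to connections this host initiated (inserted before the DROP rule)
        iptables -I INPUT 1 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # optional: allow ping replies explicitly if you prefer not to rely on conntrack for ICMP
        iptables -I INPUT 2 -p icmp --icmp-type echo-reply -j ACCEPT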

    Read the article

  • KVM + Cloudmin + IpTables

    - by Alex
    I have a KVM virtualization on a machine. I use Ubuntu Server + Cloudmin (in order to manage virtual machine instances). On a host system I have four network interfaces: ebadmin@saturn:/var/log$ ifconfig br0 Link encap:Ethernet HWaddr 10:78:d2:ec:16:38 inet addr:192.168.0.253 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::1278:d2ff:feec:1638/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:589337 errors:0 dropped:0 overruns:0 frame:0 TX packets:334357 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:753652448 (753.6 MB) TX bytes:43385198 (43.3 MB) br1 Link encap:Ethernet HWaddr 6e:a4:06:39:26:60 inet addr:192.168.10.1 Bcast:192.168.10.255 Mask:255.255.255.0 inet6 addr: fe80::6ca4:6ff:fe39:2660/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:16995 errors:0 dropped:0 overruns:0 frame:0 TX packets:13309 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2059264 (2.0 MB) TX bytes:1763980 (1.7 MB) eth0 Link encap:Ethernet HWaddr 10:78:d2:ec:16:38 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:610558 errors:0 dropped:0 overruns:0 frame:0 TX packets:332382 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:769477564 (769.4 MB) TX bytes:44360402 (44.3 MB) Interrupt:20 Memory:fe400000-fe420000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:239632 errors:0 dropped:0 overruns:0 frame:0 TX packets:239632 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:50738052 (50.7 MB) TX bytes:50738052 (50.7 MB) tap0 Link encap:Ethernet HWaddr 6e:a4:06:39:26:60 inet6 addr: fe80::6ca4:6ff:fe39:2660/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:17821 errors:0 dropped:0 overruns:0 frame:0 TX packets:13703 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:2370468 (2.3 MB) TX bytes:1782356 (1.7 MB) br0 is connected to a real network, br1 is used to create a private network shared between guest systems. Now I need to configure iptables for network access. First of all I allow ssh sessions on port 8022 on the host system, then I allow all connections in state RELATED, ESTABLISHED. This is working ok. I install another system as guest, it's IP address is 192.168.10.2, and now I have two problems: I want to allow the access from this host to the outside world, cannot accomplish this. I can ssh from the host. I want to be able to ssh to the guest from the outside world using 8023 port. Cannot accomplish this. 
Full iptables configuration is following: ebadmin@saturn:/var/log$ sudo iptables --list [sudo] password for ebadmin: Chain INPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere tcp dpt:8022 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED LOG all -- anywhere anywhere LOG level warning Chain FORWARD (policy ACCEPT) target prot opt source destination LOG all -- anywhere anywhere LOG level warning Chain OUTPUT (policy ACCEPT) target prot opt source destination LOG all -- anywhere anywhere LOG level warning ebadmin@saturn:/var/log$ sudo iptables -t nat --list Chain PREROUTING (policy ACCEPT) target prot opt source destination DNAT tcp -- anywhere anywhere tcp spt:8023 to:192.168.10.2:22 Chain INPUT (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination The worst of all is that I don't know how to interpret iptables logs. I don't see the final decision of the firewall. Need help urgently.
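
    Two separate things appear to be needed, and both can be sketched with a handful of rules (interface names and ports taken from the question; verify against your bridge setup). For the guests' outgoing traffic there is currently no source NAT at all, and for inbound SSH the existing PREROUTING rule matches spt:2022 (source port) where a destination-port match was presumably intended:

        # 1. outbound access for the 10.0.0.0/24 guests: NAT them out of the host's public bridge
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o br0 -j MASQUERADE

        # 2. inbound SSH on public port 2022: replace the source-port DNAT with a destination-port one
        iptables -t nat -L PREROUTING --line-numbers
        iptables -t nat -D PREROUTING 1        # delete the old spt:2022 rule by the number shown above
        iptables -t nat -A PREROUTING -i br0 -p tcp --dport 2022 -j DNAT --to-destination 10.0.0.2:22
        iptables -A FORWARD -p tcp -d 10.0.0.2 --dport 22 -j ACCEPT

        # 3. the LOG rules write to the kernel log; follow it while testing to see what gets dropped
        tail -f /var/log/kern.log

    The FORWARD policy is already ACCEPT, so rule 2's explicit accept is mostly documentation; the MASQUERADE and the dpt/spt correction are the likely fixes.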

    Read the article

  • iptables not writing rules.

    - by Darkmage
    I'm running these two rules as root, but when doing an iptables -L it doesn't show any rules. Anyone have an idea of what the problem can be?

        iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 --source 84.244.145.135 -j REDIRECT --to-port 1222
        iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 --source 243.134.97.194 -j REDIRECT --to-port 1222

        duno@Virtual-Box:/home/glennwiz# iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination
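
    The rules were almost certainly written; they just live in the nat table, and iptables -L without -t only lists the filter table. To see them:

        # list the nat table, with numeric addresses and packet counters
        iptables -t nat -L -n -v

        # or, on reasonably recent iptables, dump the rules in the same syntax they were added with
        iptables -t nat -S PREROUTING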

    Read the article

  • Security tips for adding wireless AP to domain network?

    - by Cy
    I am researching best-practices for adding wireless to our existing domain network. My DHCP server is running Windows Server 03 Standard (not sure if thats useful). I am familiar with simple home networking but I thought I'd get some expert advice for the more advanced stuff. Any tips and / or best-practices? Is this Cisco Wireless Access Point a good option? Are there any additional hardware recommendations? Thank you in advance for your help.

    Read the article

  • How do you get around security warnings when redirecting AppData?

    - by Oliver Salzburg
    I've recently set up folder redirection for my user profile in a domain. For now, I've redirected AppData, Desktop, Pictures, Documents and Favorites. So far, so good. But now I've noticed a quite disturbing side effect of the whole thing. Whenever I click any of my pinned elements on the task bar, I get the following warning: The shortcuts get synced as well and are no longer trusted. They're located at \\DFS\UserData\Profiles\OliverSalzburg\AppData\Roaming\Microsoft\Internet Explorer\Quick Launch\User Pinned\TaskBar That seems like it could be a problem when rolling it out to the whole company.
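
    The warning appears because the shortcuts now live on a UNC path that Windows places in the Internet zone rather than the Local Intranet zone. The usual cure is to tell clients, via Group Policy, that the DFS namespace is intranet; a hedged sketch of the setting (the exact location and value syntax can vary by IE version):

        Policy:  User Configuration > Administrative Templates > Windows Components >
                 Internet Explorer > Internet Control Panel > Security Page >
                 Site to Zone Assignment List
        Entry:   file://dfs        Value: 1    (1 = Local Intranet zone)

    After a gpupdate /force, the pinned shortcuts under \\DFS\UserData\... should open without the open-file security prompt, which makes this workable before a company-wide rollout.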

    Read the article

  • How do I Implement VLAN Rate Limiting or QOS for a Cisco 2960?

    - by evolvd
    I have a 2960 that I need to limit the uplink port to 50Mbps for 3 vlans and 350Mbps for another vlan. Would the following config achieve that or is this even possible for the 2960?

        class-map match-any VLAN50-51-52
         match vlan 50-52
        class-map match-any VLAN53
         match vlan 53
        policy-map 50MB_RATE_LIMIT
         class VLAN50-51-52
          police 50000000 5000000 exceed-action drop
         class VLAN53
          police 350000000 35000000 exceed-action drop
        !
        interface GigabitEthernet0/23
         service-policy output 50MB_RATE_LIMIT
         service-policy input 50MB_RATE_LIMIT

    Read the article

  • How to forward OpenVPN Port to NAT'd XEN domU

    - by John
    I want to install a OpenVPN domU on XEN. Dom0 and domU are running Debian Squeeze, all domU are on a NAT'd privat network 10.0.0.1/24 My VPN-Gate is von 10.0.0.1 and running. How can I make it accessible under the dom0 public IP? I tried forwarding the port using iptables, but without any success. Here is what i did: ~ # iptables -L -n -v Chain INPUT (policy ACCEPT 1397 packets, 118K bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 930 packets, 133K bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif5.0 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in vif5.0 udp spt:68 dpt:67 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif5.0 0 0 ACCEPT all -- * * 10.0.0.1 0.0.0.0/0 PHYSDEV match --physdev-in vif5.0 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif3.0 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in vif3.0 udp spt:68 dpt:67 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif3.0 0 0 ACCEPT all -- * * 10.0.0.5 0.0.0.0/0 PHYSDEV match --physdev-in vif3.0 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif2.0 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in vif2.0 udp spt:68 dpt:67 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED PHYSDEV match --physdev-out vif2.0 0 0 ACCEPT all -- * * 10.0.0.2 0.0.0.0/0 PHYSDEV match --physdev-in vif2.0 147 8236 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 13 546 ACCEPT udp -- eth0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:1194 Chain OUTPUT (policy ACCEPT 1000 packets, 99240 bytes) pkts bytes target prot opt in out source destination ~ # iptables -L -t nat -n -v Chain PREROUTING (policy ACCEPT 324 packets, 23925 bytes) pkts bytes target prot opt in out source destination 139 7824 DNAT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:10.0.0.5:80 1 42 DNAT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:1194 to:10.0.0.1:1194 Chain POSTROUTING (policy ACCEPT 92 packets, 5030 bytes) pkts bytes target prot opt in out source destination 863 64983 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0 0 0 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0 0 0 MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 180 packets, 13953 bytes) pkts bytes target prot opt in out source destination

    Read the article

  • Computers displaying an unwanted password change prompt

    - by evesirim
    We run a small network of users from a central SBS 2008 server that handles group policy & AD. Most of our users operate under a policy that propts them for a password change every 6 months as a security measure, with a few administrator accounts & terminal machines not using the policy for the sake of ease as they are needed all the time. Recently all machines regardless of policy have started asking for a password change out of schedule. Some PCs run Windows 7 & some XP, though the password prompts don't seem to discriminate between OS. What could this be down to? Many thanks

    Read the article

  • What happens if I run caspol.exe multiple times?

    - by Maclovin
    Hi there! Caspol.exe is used to modify security policy for the machine policy level, the user policy level, and the enterprise policy level. What I use it for is setting up a trust between the client and an area on some server. I went through the scripts on the server, and found an interesting script that sets up full trust via caspol between a client in one zone and an application on the server. That script has been running every day, for every logon, since it was implemented. Can someone tell me the consequences? I guess there are about 500 trusts between the client computer and the server, all of which point to the same thing.
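
    A quick way to see what those repeated runs actually left behind is to list the code groups at the relevant policy level; if the script calls caspol -ag without checking for an existing group, each run has likely added another identical child group. A sketch (the group label is an example taken from a typical listing):

        rem list all code groups at the machine policy level
        caspol -m -listgroups

        rem remove a duplicate group by its label once you have identified it in the listing
        caspol -m -remgroup 1.6

    Functionally the duplicates all grant the same trust, so the main consequences are clutter and slightly slower policy evaluation rather than a change in effective permissions.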

    Read the article
