Search Results

Search found 14878 results on 596 pages for 'mod security'.

Page 45/596 | < Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >

  • The use of mod operators in Ada

    - by maddy
    Hi all, can anyone please tell me the usage of the declarations shown below? I am a beginner in the Ada language; I have tried searching the internet, but that was not clear enough.

        type Unsigned_4 is mod 2 ** 4;
        for Unsigned_4'Size use 4;
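    For context, a minimal sketch (an editorial illustration, not from the original question) of what such a declaration gives you: "mod 2 ** 4" declares a modular (unsigned) integer type whose values run from 0 to 15 and whose arithmetic wraps around modulo 16, while the 'Size clause asks the compiler to represent the type in 4 bits.

        with Ada.Text_IO; use Ada.Text_IO;

        procedure Modular_Demo is
           type Unsigned_4 is mod 2 ** 4;      --  values 0 .. 15; arithmetic wraps modulo 16
           for Unsigned_4'Size use 4;          --  request a 4-bit representation for the type
           X : Unsigned_4 := 15;
        begin
           X := X + 1;                         --  wraps to 0 instead of raising Constraint_Error
           Put_Line (Unsigned_4'Image (X));    --  prints " 0"
        end Modular_Demo;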

    Read the article

  • Is Windows Server 2008 R2's NAP solution for NAC (endpoint security) valuable enough to be worth the hassles?

    - by Warren P
    I'm learning about Windows Server 2008 R2's NAP features. I understand what network access control (NAC) is and what role NAP plays in that, but I would like to know what limitations and problems it has that people wish they had known before they rolled it out. Secondly, I'd like to know if anyone has had success rolling it out in a mid-size environment (a multi-city corporate network with around 15 servers and 200 desktops) with mostly (99%) Windows XP SP3 and newer Windows clients (Vista and Win7). Did it work with your anti-virus? (I'm guessing NAP works well with the big-name anti-virus products, but we're using Trend Micro.) Let's assume that the servers are all Windows Server 2008 R2. Our VPNs are Cisco gear and have their own NAC features. Has NAP actually benefited your organization, and was it wise to roll it out, or is it yet another item in the long list of things that Windows Server 2008 R2 does but that you're probably not going to want to use? In what particular ways might the built-in NAP solution be the best one, and in what particular ways might no solution at all (the status quo pre-NAP) or a third-party endpoint security or NAC solution be considered a better fit? I found an article in which a panel of security experts in 2007 said NAC is maybe "not worth it". Are things better now in 2010 with Windows Server 2008 R2?

    Read the article

  • Is encryption really needed for network security? [closed]

    - by Cawas
    I welcome better keywording here, both on tags and title. I'm trying to conceive of a free, open and secure network environment that would work anywhere, from big enterprises to small home networks of just one machine. Since wireless access points are the biggest, if not the only, true weak point of a local area network (let's not consider every other security aspect of having internet), there would be basically two points to consider here: (1) having an open AP for anyone to use the internet through, and (2) leaving the whole LAN also open for guests to be able to easily read (only) files on it, and even giving them a place to drop files. Considering these two aspects, once everything is done properly, what is the more secure option: having that, or having just an encrypted, password-protected wifi? Of course "both" would seem more secure, but it shouldn't actually add anything substantial. I've always had the feeling that using any of the so-called "wireless security" methods is actually a bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even consider hiding the SSID or MAC filtering. I understand it's a natural way of thinking: with cable networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical sense. In a way, encrypting is for wireless what building walls is for cables, and giving pass-phrases would be adding a door with a key. So, what do you think?

    Read the article

  • How to disable irritating Office File Validation security alert?

    - by Rabarberski
    I have Microsoft Office 2007 running on Windows 7. Yesterday I updated Office to the latest service pack, i.e. SP3. This morning, when opening an MS Word document (.doc format, a document I created myself some months ago), I was greeted with a new dialog box saying: "Security Alert - Office File Validation. WARNING: Office File Validation detected a problem while trying to open this file. Opening this is probably dangerous, and may allow a malicious user to take over your computer. Contact the sender and ask them to re-save and re-send the file. For more security, verify in person or via the phone that they sent the file." It includes two links to some Microsoft blabla webpage. Obviously the document is safe, as I created it myself some months ago. How do I disable this irritating dialog box? (On a side note, a rhetorical question: will Microsoft never learn? I consider myself a power user in Word, but I have no clue what could be wrong with my document so that it is considered dangerous, let alone more basic users of Word. Sigh....)
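    One workaround often mentioned for Office 2007 (an editorial note, not from the original post; the exact key path and value should be verified against Microsoft's Office File Validation documentation) is to turn the validation add-in off for Word through a registry policy value, for example:

        reg add "HKCU\Software\Policies\Microsoft\Office\12.0\Word\Security\FileValidation" /v EnableOnLoad /t REG_DWORD /d 0 /f

    Setting EnableOnLoad back to 1, or deleting the value, is said to re-enable validation; similar keys reportedly exist for Excel and PowerPoint.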

    Read the article

  • SharePoint extranet security concerns, am I right to be worried?

    - by LukeR
    We are currently running MOSS 2007 internally, and have been doing so for about 12 months with no major issues. There has now been a request from management to provide access from the internet for small groups (initially) made up of members from other community organisations like ours, committees and the like. My first reaction when presented with this request was not joy; however, I'd like to make sure the apprehension is warranted. I have read a few docs on TechNet about security hardening with regard to SharePoint, but I'm interested to know what others have done. I've spoken with another organisation that has already implemented something similar, and they have essentially port-forwarded from the internet to their internal production MOSS server. I don't really like the sound of this. Is it advisable/necessary to run a DMZ-type configuration, with a separate web front-end on a contained network segment? Does that even offer me any greater security than their setup? Some of the configurations from a TechNet doc aren't really feasible, given our current network budget. I've already made my concerns known to management, but it appears it will go ahead in some form or another. I'm tempted to run a completely isolated, separate install just for these types of users. Should I even be concerned about it? Any thoughts or comments would be most welcome at this point.

    Read the article

  • Security implications of adding www-data to /etc/sudoers to run php-cgi as a different user

    - by BMiner
    What I really want to do is allow the 'www-data' user to launch php-cgi as another user; I just want to make sure that I fully understand the security implications. The server should support a shared hosting environment where various (possibly untrusted) users have chroot'ed FTP access to the server to store their HTML and PHP files. Then, since PHP scripts can be malicious and read/write other users' files, I'd like to ensure that each user's PHP scripts run with that user's permissions (instead of running as www-data). Long story short, I have added the following line to my /etc/sudoers file, and I wanted to run it past the community as a sanity check:

        www-data ALL = (%www-data) NOPASSWD: /usr/bin/php-cgi

    This line should only allow www-data to run a command like the following (without a password prompt), where some_user is a user in the group www-data:

        sudo -u some_user /usr/bin/php-cgi

    What are the security implications of this? This should then allow me to modify my Lighttpd configuration like this, letting me spawn new FastCGI server instances for each user:

        fastcgi.server += ( ".php" =>
          ((
            "bin-path" => "sudo -u some_user /usr/bin/php-cgi",
            "socket" => "/tmp/php.socket",
            "max-procs" => 1,
            "bin-environment" => (
              "PHP_FCGI_CHILDREN" => "4",
              "PHP_FCGI_MAX_REQUESTS" => "10000"
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
          ))
        )

    Read the article

  • Using Plesk for webhosting on Ubuntu - Security risk or reasonably safe?

    - by user66952
    Sorry for this newb question; I'm pretty clueless about Plesk and only have limited Debian (without Plesk) experience. If the question is too dumb, telling me how to ask a smarter one, or what kind of info I should read first to improve the question, would be appreciated as well. I want to offer a program for download on my website, hosted on an Ubuntu 8.04.4 VPS using Plesk 9.3.0 for web hosting. I have limited SSH access to the server to key-only authentication.

    1. When setting up the web hosting with Plesk, it created an FTP login and user. Is that a potential security risk that could bypass the key-only access?
    2. I think Plesk itself (even without the FTP user account), through its web interface, could be a risk. Is that correct, or are my concerns exaggerated?
    3. Would you say this solution makes a difference if I'm just using it for the next two weeks and then change servers to a system where I know more about security? In other words, is one less likely to get hacked within the first two weeks of having a new site up and running than in weeks 14 and 15 (due to appearing in fewer search results in the beginning, perhaps, or for whatever reason...)?

    Read the article

  • CIFS - Default security mechanism requested (Mounted Share)

    - by André Faria
    The following message appears every time I boot or reboot my Ubuntu 12.04.1 machine: "CIFS VFS: default security mechanism requested. The default security mechanism will be upgraded from ntlm to ntlmv2 in kernel release 3.3". I am searching for a solution, if there is one for this message; I really don't understand it. Here is my fstab entry:

        //192.168.0.10/D$/ /mnt/winshare/ cifs user,file_mode=0777,dir_mode=0777,rw,gid=1000,credentials=/root/creds 0 0

    I can use my mounted folder with no problem; I just want to know why this message is appearing and whether there is something I can do to fix the problem or hide the warning. Thanks
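    One commonly suggested way to quiet this warning (an editorial note, not part of the original question) is to request a security mechanism explicitly in the mount options, e.g. sec=ntlmv2, so the CIFS client no longer has to fall back to its default; this assumes the server at 192.168.0.10 accepts NTLMv2:

        //192.168.0.10/D$/ /mnt/winshare/ cifs user,sec=ntlmv2,file_mode=0777,dir_mode=0777,rw,gid=1000,credentials=/root/creds 0 0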

    Read the article

  • Annoying security "feature" in Windows 2008 R2 burns me, but not DVDs

    - by Stan Spotts
    This stuff drives me nuts. I'm all for hardening servers, and reducing security footprints, but I always want the option to allow me to get work done versus securing my system. I use Windows Server 2008 R2 as my laptop OS for a number of reasons I don't need to review here. It's pimped out to work like Windows 7 for most things. But my DVD writer is crippled, and evidently it's on purpose: http://blogs.technet.com/askcore/archive/2010/02/19/windows-server-2008-r2-no-recording-tab-for-cd-dvd-burner.aspx I don't WANT to log in as the local administrator to burn a damned DVD.  WTF isn't this configurable through the registry, or better yet, group policy? There are no security settings that I should not have the option to enable or disable.

    Read the article

  • Oracle SOA Security for OUAF Web Services

    - by Anthony Shorten
    With the ability to use Oracle SOA Suite 11g with the Oracle Utilities Application Framework based products, an additional consideration needs to be configured to ensure correct integration: security. By default, SOA Suite propagates any credentials from the calling application through to the interfacing applications. In most cases, this behavior is not appropriate, as the calling application may use different credential stores, and some interfaces are "disconnected" from a calling application (for example, a file-based load using the File Adapter). These situations require that the Web Service calls to the Oracle Utilities Application Framework based products have their own valid credentials. To do this, the credentials must be attached at design time or at run time to provide the necessary credentials for the call. There are a number of techniques that can be used to do this:

    At design time, when integrating a Web Service from an Oracle Utilities Application Framework based product, you can attach the security policy "oracle/wss_username_token_client_policy" in the composite.xml view. In this view, select the Web Service you want to attach the policy to, right-click to display the context menu, select "Configure WS Policies", and select the above policy from the list. If you are using SSL, you can use "oracle/wss_username_token_over_ssl_client_policy" instead.

    At design time, you can also specify the credential key (csf-key) associated with the above policy by selecting the policy and clicking "Edit Config Override Properties". Name the key appropriately. Every time the SOA components are deployed, the credential configuration is also sent.

    You can also do this after deployment, or what I call at "runtime", by specifying the policy and credential key in Fusion Middleware Control. Refer to the Fusion Middleware Control documentation on how to do this.

    To complete the configuration, you need to add a map and the key specified earlier to the credential store in the Oracle WebLogic instance used for Oracle SOA Suite. From Fusion Middleware Control, you do this by selecting the domain the SOA Suite is installed in and selecting "Credentials" from the context menu. You now need to add the credentials by adding the map "oracle.wsm.security" (the name is IMPORTANT) and creating a key with the necessary valid credentials. The example below added a key called "mdm.key". The name I used is for example only; you can name the key anything you like, as long as it corresponds to the key you specified in the design-time component. Note: I used SYSUSER as example credentials; in real life you would use another credential, as SYSUSER is not appropriate for production use. This key can be reused for other Oracle Utilities Application Framework Web Service integrations, or you can use other keys for individual Web Service calls. Once the key is created and the SOA Suite components are deployed, the transactions should be able to be called as necessary. If you need to change the password for the credentials, it can be done using the Fusion Middleware Control functionality.
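    As a rough illustration only (this fragment is not from the original post; the element names and the override property are assumptions based on standard OWSM policy attachment, and the JDeveloper "Configure WS Policies" dialog described above generates the real entries for you), the design-time result in composite.xml ends up looking roughly like a policy reference plus a csf-key override on the web service binding:

        <!-- sketch of a composite.xml fragment; names such as OUAFService are placeholders -->
        <reference name="OUAFService">
          <binding.ws port="..." location="...">
            <!-- attach the OWSM username token client policy -->
            <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"
                                 orawsp:category="security" orawsp:status="enabled"/>
            <!-- point the policy at the credential key created in the oracle.wsm.security map -->
            <property name="csf-key" type="xs:string" many="false" override="may">mdm.key</property>
          </binding.ws>
        </reference>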

    Read the article

  • .NET Security Part 2

    - by Simon Cooper
    So, how do you create partial-trust appdomains? Where do you come across them? There are two main situations in which your assembly runs as partially-trusted using the Microsoft .NET stack:

    1. Creating a CLR assembly in SQL Server with anything other than the UNSAFE permission set. The permissions available in each permission set are given here.
    2. Loading an assembly in ASP.NET in any trust level other than Full. Information on ASP.NET trust levels can be found here. You can configure the specific permissions available to assemblies using ASP.NET policy files.

    Alternatively, you can create your own partially-trusted appdomain in code and directly control the permissions and the full-trust API available to the assemblies you load into the appdomain. This is the scenario I'll be concentrating on in this post.

    Creating a partially-trusted appdomain

    There is a single overload of AppDomain.CreateDomain that allows you to specify the permissions granted to assemblies in that appdomain – this one. This is the only call that allows you to specify a PermissionSet for the domain. All the other calls simply use the permissions of the calling code. If the permissions are restricted, then the resulting appdomain is referred to as a sandboxed domain. There are three things you need to create a sandboxed domain:

    1. The specific permissions granted to all assemblies in the domain.
    2. The application base (aka working directory) of the domain.
    3. The list of assemblies that have full-trust if they are loaded into the sandboxed domain.

    The third item is what allows us to have a fully-trusted API that is callable by partially-trusted code. I'll be looking at the details of this in a later post.

    Granting permissions to the appdomain

    Firstly, the permissions granted to the appdomain. This is encapsulated in a PermissionSet object, initialized either with no permissions or full-trust permissions. For sandboxed appdomains, the PermissionSet is initialized with no permissions, then you add the permissions you want assemblies loaded into that appdomain to have by default:

        PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);

        // all assemblies need Execution permission to run at all
        restrictedPerms.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));

        // grant general read access to C:\config.xml
        restrictedPerms.AddPermission(
            new FileIOPermission(FileIOPermissionAccess.Read, @"C:\config.xml"));

        // grant permission to perform DNS lookups
        restrictedPerms.AddPermission(
            new DnsPermission(PermissionState.Unrestricted));

    It's important to point out that the permissions granted to an appdomain, and so to all assemblies loaded into that appdomain, are usable without needing to go through any SafeCritical code (see my last post if you're unsure what SafeCritical code is). That is, partially-trusted code loaded into an appdomain with the above permissions (and so running under the Transparent security level) is able to create and manipulate a FileStream object to read from C:\config.xml directly. It is only for operations requiring permissions that are not granted to the appdomain that partially-trusted code is required to call a SafeCritical method that then asserts the missing permissions and performs the operation safely on behalf of the partially-trusted code.
    The application base of the domain

    This is simply set as a property on an AppDomainSetup object, and is used as the default directory assemblies are loaded from:

        AppDomainSetup appDomainSetup = new AppDomainSetup {
            ApplicationBase = @"C:\temp\sandbox",
        };

    If you've read the documentation around sandboxed appdomains, you'll notice that it mentions a security hole if this parameter is set incorrectly. I'll be looking at this, and other pitfalls that will break the sandbox when using sandboxed appdomains, in a later post.

    Full-trust assemblies in the appdomain

    Finally, we need the strong names of the assemblies that, when loaded into the appdomain, will be run as full-trust, regardless of the permissions specified on the appdomain. These assemblies will contain methods and classes decorated with SafeCritical and Critical attributes. I'll be covering the details of creating full-trust APIs for partial-trust appdomains in a later post. This is how you get the strong name of an assembly to be executed as full-trust in the sandbox:

        // get the Assembly object for the assembly
        Assembly assemblyWithApi = ...

        // get the StrongName from the assembly's collection of evidence
        StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();

    Creating the sandboxed appdomain

    So, putting these three together, you create the appdomain like so:

        AppDomain sandbox = AppDomain.CreateDomain(
            "Sandbox",
            null,
            appDomainSetup,
            restrictedPerms,
            apiStrongName);

    You can then load and execute assemblies in this appdomain like any other. For example, to load an assembly into the appdomain and get an instance of the Sandboxed.Entrypoint class, implementing IEntrypoint, you do this:

        IEntrypoint o = (IEntrypoint)sandbox.CreateInstanceFromAndUnwrap(
            @"C:\temp\sandbox\SandboxedAssembly.dll",
            "Sandboxed.Entrypoint");

        // call the Execute method on this object within the sandbox
        o.Execute();

    The second parameter to CreateDomain is for security evidence used in the appdomain. This was a feature of the .NET 2 security model, and has been (mostly) obsoleted in the .NET 4 model. Unless the evidence is needed elsewhere (eg. isolated storage), you can pass in null for this parameter.

    Conclusion

    That's the basics of sandboxed appdomains. The most important object is the PermissionSet that defines the permissions available to assemblies running in the appdomain; it is this object that defines the appdomain as full or partial-trust. The appdomain also needs a default directory used for assembly lookups as the ApplicationBase parameter, and you can specify an optional list of the strong names of assemblies that will be given full-trust permissions if they are loaded into the sandboxed appdomain. Next time, I'll be looking closer at full-trust assemblies running in a sandboxed appdomain, and what you need to do to make an API available to partial-trust code.

    Read the article

  • Google I/O 2012 - Security and Privacy in Android Apps

    Google I/O 2012 - Security and Privacy in Android Apps, presented by Jon Larimer and Kenny Root. Android provides features and APIs that allow development of secure applications, and you should be using them. This session will start with an overview of Android platform security features, then dig into the ways that you can leverage them to protect your users and avoid introducing vulnerabilities. You'll also learn the best practices for protecting user privacy in your apps. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers. Time: 01:01:03

    Read the article

  • Copy machine security issues.

    - by David Nudelman
    I am involved in a project to talk to communities about the risks of posting online content in social networks. But this time I was really impressed by how far security concerns can go. This video from CBS News talks about security risks related to corporate fax machines, printers and scanners. It was very clear that when they got the machines they selected them by previous owner rather than at random, but still, I will never scan from my company machine again. I guess the price of multifunction printers will go up if this video goes viral. Regards, David Nudelman

    Read the article

  • Oracle Security Webcast Slides and Replay now available

    - by Alex Blyth
    Hi everyone, Thanks for attending the "Oracle Database Security" webcast last week. Slides are available here: Oracle Database Security Overview (view more presentations from Oracle Australia). You can download the replay here. Next week's session is on Oracle Application Express. APEX is one of the best-kept secrets in the Oracle database and can be used to build anything from very simple apps, such as phone directories, all the way to complex knowledge-base-style apps that are driven heavily by data. You can enroll for this session here. Thanks again. Cheers, Alex

    Read the article

  • Tackling Security and Compliance Barriers with a Platform Approach to IDM: Featuring SuperValu

    - by Darin Pendergraft
    On October 25, 2012 ISACA and Oracle sponsored a webcast discussing how SUPERVALU has embraced the platform approach to IDM. Scott Bonnell, Sr. Director of Product Management at Oracle, and Phil Black, Security Director for IAM at SUPERVALU, discussed how a platform strategy could be used to formulate an upgrade plan for a large Sun IDM installation. See the webcast replay here: ISACA Webcast Replay (requires Internet Explorer or Chrome). Some of the main points discussed in the webcast include:

    Getting support for an upgrade project by aligning with corporate initiatives
    How to leverage an existing IDM investment while planning for future growth
    How Sun and Oracle IDM architectures can be used in a coexistence strategy
    Advantages of a rationalized, modern IDM platform architecture

    ISACA Webcast Featuring SuperValu - Tackling Security and Compliance Barriers with a Platform Approach to Identity Management, from OracleIDM

    Read the article
