Search Results

Search found 14267 results on 571 pages for 'security certificate'.


  • InternalsVisibleTo attribute and security vulnerability

    - by Sergey Litvinov
    I found one issue with InternalsVisibleTo attribute usage. The idea of the InternalsVisibleTo attribute is to allow specific other assemblies to use the internal classes/methods of this assembly. To make it work you need to sign your assemblies, so if another assembly isn't listed in the main assembly, or if it has an incorrect public key, it can't use the internal members. But there is an issue with Reflection.Emit type generation. For example, we have a CorpLibrary1 assembly and it has this class:

        public class TestApi
        {
            internal virtual void DoSomething()
            {
                Console.WriteLine("Base DoSomething");
            }

            public void DoApiTest()
            {
                // some internal logic
                // ...
                // call internal method
                DoSomething();
            }
        }

    This assembly is marked with the following attribute to allow another assembly, CorpLibrary2, to inherit TestApi and override the behaviour of the DoSomething method:

        [assembly: InternalsVisibleTo("CorpLibrary2, PublicKey=0024000004800000940000000602000000240000525341310004000001000100434D9C5E1F9055BF7970B0C106AAA447271ECE0F8FC56F6AF3A906353F0B848A8346DC13C42A6530B4ED2E6CB8A1E56278E664E61C0D633A6F58643A7B8448CB0B15E31218FB8FE17F63906D3BF7E20B9D1A9F7B1C8CD11877C0AF079D454C21F24D5A85A8765395E5CC5252F0BE85CFEB65896EC69FCC75201E09795AAA07D0")]

    The issue is that I'm able to override this internal DoSomething method and break the class logic. My steps to do it:

        1. Generate a new assembly at runtime via AssemblyBuilder.
        2. Get the AssemblyName from CorpLibrary1 and copy its PublicKey to the new assembly.
        3. In the generated assembly, create a type that inherits TestApi.
        4. As the PublicKey and name of the generated assembly are the same as in InternalsVisibleTo, we can generate a new DoSomething method that overrides the internal method in the TestApi assembly.

    Then we have another assembly that isn't related to CorpLibrary1 and can't use its internal members. We have this test code in it:

        class Program
        {
            static void Main(string[] args)
            {
                var builder = new FakeBuilder<TestApi>(InjectBadCode, "DoSomething", true);
                TestApi fakeType = builder.CreateFake();
                fakeType.DoApiTest();
                // it will display:
                // Inject bad code
                // Base DoSomething
                Console.ReadLine();
            }

            public static void InjectBadCode()
            {
                Console.WriteLine("Inject bad code");
            }
        }

    And the FakeBuilder class has this code:

        /// <summary>
        /// Builder that generates an inheritor for the specified type and overrides the specified internal virtual method
        /// </summary>
        /// <typeparam name="TFakeType">Target type</typeparam>
        public class FakeBuilder<TFakeType>
        {
            private readonly Action _callback;
            private readonly Type _targetType;
            private readonly string _targetMethodName;
            private readonly string _slotName;
            private readonly bool _callBaseMethod;

            public FakeBuilder(Action callback, string targetMethodName, bool callBaseMethod)
            {
                int randomId = new Random((int)DateTime.Now.Ticks).Next();
                _slotName = string.Format("FakeSlot_{0}", randomId);
                _callback = callback;
                _targetType = typeof(TFakeType);
                _targetMethodName = targetMethodName;
                _callBaseMethod = callBaseMethod;
            }

            public TFakeType CreateFake()
            {
                // as CorpLibrary1 can't use code from unreferenced assemblies, we need to store this Action
                // somewhere, and a Thread named data slot is not a bad place for that. It's not the best place,
                // as it won't work in a multithreaded application, but it's just a sample.
                LocalDataStoreSlot slot = Thread.AllocateNamedDataSlot(_slotName);
                Thread.SetData(slot, _callback);

                // then we generate a new assembly with the same name and public key that the target assembly
                // trusts via its InternalsVisibleTo attribute
                var newTypeName = _targetType.Name + "Fake";
                var targetAssembly = Assembly.GetAssembly(_targetType);
                AssemblyName an = new AssemblyName();
                an.Name = GetFakeAssemblyName(targetAssembly);

                // copying the public key to the newly generated assembly
                var assemblyName = targetAssembly.GetName();
                an.SetPublicKey(assemblyName.GetPublicKey());
                an.SetPublicKeyToken(assemblyName.GetPublicKeyToken());

                AssemblyBuilder assemblyBuilder = Thread.GetDomain().DefineDynamicAssembly(an, AssemblyBuilderAccess.RunAndSave);
                ModuleBuilder moduleBuilder = assemblyBuilder.DefineDynamicModule(assemblyBuilder.GetName().Name, true);

                // create an inheritor for the specified type
                TypeBuilder typeBuilder = moduleBuilder.DefineType(newTypeName, TypeAttributes.Public | TypeAttributes.Class, _targetType);

                // LambdaExpression.CompileToMethod can be used only with static methods, so we need to create
                // another method that will call our Inject method. We could do the same via ILGenerator, but
                // expression trees are easier to use.
                MethodInfo methodInfo = CreateMethodInfo(moduleBuilder);

                MethodBuilder methodBuilder = typeBuilder.DefineMethod(_targetMethodName, MethodAttributes.Public | MethodAttributes.Virtual);
                ILGenerator ilGenerator = methodBuilder.GetILGenerator();

                // call our static method that will call the inject method
                ilGenerator.EmitCall(OpCodes.Call, methodInfo, null);

                // in case we need it, emit a call to the base method
                if (_callBaseMethod)
                {
                    var baseMethodInfo = _targetType.GetMethod(_targetMethodName, BindingFlags.NonPublic | BindingFlags.Instance);
                    // place this on the stack
                    ilGenerator.Emit(OpCodes.Ldarg_0);
                    // call the base method
                    ilGenerator.EmitCall(OpCodes.Call, baseMethodInfo, new Type[0]);
                }
                // return (emitted unconditionally so the method body is always valid IL)
                ilGenerator.Emit(OpCodes.Ret);

                // generate the type, create an instance and return it to the caller
                Type cheatType = typeBuilder.CreateType();
                object type = Activator.CreateInstance(cheatType);
                return (TFakeType)type;
            }

            /// <summary>
            /// Get the assembly name from the InternalsVisibleTo attribute's AssemblyName
            /// </summary>
            private static string GetFakeAssemblyName(Assembly assembly)
            {
                var internalsVisibleAttr = assembly.GetCustomAttributes(typeof(InternalsVisibleToAttribute), true).FirstOrDefault() as InternalsVisibleToAttribute;
                if (internalsVisibleAttr == null)
                {
                    throw new InvalidOperationException("Assembly has no InternalsVisibleTo attribute");
                }
                var ind = internalsVisibleAttr.AssemblyName.IndexOf(",");
                var name = internalsVisibleAttr.AssemblyName.Substring(0, ind);
                return name;
            }

            /// <summary>
            /// Generates this code:
            /// ((Action)Thread.GetData(Thread.GetNamedDataSlot(_slotName))).Invoke();
            /// </summary>
            private LambdaExpression MakeStaticExpressionMethod()
            {
                var allocateMethod = typeof(Thread).GetMethod("GetNamedDataSlot", BindingFlags.Static | BindingFlags.Public);
                var getDataMethod = typeof(Thread).GetMethod("GetData", BindingFlags.Static | BindingFlags.Public);
                var call = Expression.Call(allocateMethod, Expression.Constant(_slotName));
                var getCall = Expression.Call(getDataMethod, call);
                var convCall = Expression.Convert(getCall, typeof(Action));
                var invokExpr = Expression.Invoke(convCall);
                var lambda = Expression.Lambda(invokExpr);
                return lambda;
            }

            /// <summary>
            /// Generate a static class with one static function that executes the Action from the Thread named data slot
            /// </summary>
            private MethodInfo CreateMethodInfo(ModuleBuilder moduleBuilder)
            {
                var methodName = "_StaticTestMethod_" + _slotName;
                var className = "_StaticClass_" + _slotName;
                TypeBuilder typeBuilder = moduleBuilder.DefineType(className, TypeAttributes.Public | TypeAttributes.Class);
                MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName, MethodAttributes.Static | MethodAttributes.Public);
                LambdaExpression expression = MakeStaticExpressionMethod();
                expression.CompileToMethod(methodBuilder);
                var type = typeBuilder.CreateType();
                return type.GetMethod(methodName, BindingFlags.Static | BindingFlags.Public);
            }
        }

    Remarks about the sample: as we need to execute code from another assembly, and CorpLibrary1 has no access to it, we need to store the delegate somewhere. Just for testing I stored it in a Thread named data slot. It won't work in multithreaded applications, but it's just a sample. I know that we can use Reflection to get private/internal members of any class, but with reflection we can't override them. This issue, however, allows anyone to override an internal class/method if the assembly has an InternalsVisibleTo attribute. I tested it on .NET 3.5 and 4 and it works on both of them. How is it possible to just copy the PublicKey, without the private key, and use it at runtime? The whole sample can be found there - https://github.com/sergey-litvinov/Tests_InternalsVisibleTo

    UPDATE1: The test code in the Program and FakeBuilder classes has no access to the key.sn file, and that library isn't signed, so it has no public key at all. It just copies it from CorpLibrary1 by using Reflection.Emit.
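    A short aside on the final question (a minimal sketch, not from the original post): a strong-name public key is, by design, readable by anyone who can load the assembly, and assemblies generated in memory via Reflection.Emit are never signature-verified by the runtime, so a dynamic assembly can claim any identity. Reading a signed assembly's key needs no private key at all, which is exactly what FakeBuilder copies:

        using System;
        using System.Reflection;

        // Sketch: extract a signed assembly's public key without its private key.
        class ShowPublicKey
        {
            static void Main()
            {
                // mscorlib is used here only because it is always strong-name signed.
                AssemblyName an = typeof(object).Assembly.GetName();
                byte[] publicKey = an.GetPublicKey();
                Console.WriteLine(BitConverter.ToString(publicKey));
            }
        }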

    Read the article

  • php security holes POCs

    - by Flavius
    Hi. Please provide examples for all of these: XSS, CSRF, and SQL injection, with both the source code and the attack steps for each. Other attack vectors are welcome. The most complete answer gets accepted. The configuration is a fairly standard one, as of PHP 5.3.2. Core settings (each shown as local value => master value):

        allow_call_time_pass_reference => Off => Off
        allow_url_fopen => On => On
        allow_url_include => Off => Off
        always_populate_raw_post_data => Off => Off
        arg_separator.input => & => &
        arg_separator.output => & => &
        asp_tags => Off => Off
        auto_append_file => no value => no value
        auto_globals_jit => On => On
        auto_prepend_file => no value => no value
        browscap => no value => no value
        default_charset => no value => no value
        default_mimetype => text/html => text/html
        define_syslog_variables => Off => Off
        disable_classes => no value => no value
        disable_functions => no value => no value
        display_errors => STDOUT => STDOUT
        display_startup_errors => On => On
        doc_root => no value => no value
        docref_ext => no value => no value
        docref_root => no value => no value
        enable_dl => Off => Off
        error_append_string => no value => no value
        error_log => syslog => syslog
        error_prepend_string => no value => no value
        error_reporting => 32767 => 32767
        exit_on_timeout => Off => Off
        expose_php => On => On
        extension_dir => /usr/lib/php/modules/ => /usr/lib/php/modules/
        file_uploads => On => On
        highlight.bg => #FFFFFF => #FFFFFF
        highlight.comment => #FF8000 => #FF8000
        highlight.default => #0000BB => #0000BB
        highlight.html => #000000 => #000000
        highlight.keyword => #007700 => #007700
        highlight.string => #DD0000 => #DD0000
        html_errors => Off => Off
        ignore_repeated_errors => Off => Off
        ignore_repeated_source => Off => Off
        ignore_user_abort => Off => Off
        implicit_flush => On => On
        include_path => .:/usr/share/pear => .:/usr/share/pear
        log_errors => On => On
        log_errors_max_len => 1024 => 1024
        magic_quotes_gpc => Off => Off
        magic_quotes_runtime => Off => Off
        magic_quotes_sybase => Off => Off
        mail.add_x_header => On => On
        mail.force_extra_parameters => no value => no value
        mail.log => no value => no value
        max_execution_time => 0 => 0
        max_file_uploads => 20 => 20
        max_input_nesting_level => 64 => 64
        max_input_time => -1 => -1
        memory_limit => 128M => 128M
        open_basedir => no value => no value
        output_buffering => 0 => 0
        output_handler => no value => no value
        post_max_size => 8M => 8M
        precision => 14 => 14
        realpath_cache_size => 16K => 16K
        realpath_cache_ttl => 120 => 120
        register_argc_argv => On => On
        register_globals => Off => Off
        register_long_arrays => Off => Off
        report_memleaks => On => On
        report_zend_debug => Off => Off
        request_order => GP => GP
        safe_mode => Off => Off
        safe_mode_exec_dir => no value => no value
        safe_mode_gid => Off => Off
        safe_mode_include_dir => no value => no value
        sendmail_from => no value => no value
        sendmail_path => /usr/sbin/sendmail -t -i => /usr/sbin/sendmail -t -i
        serialize_precision => 100 => 100
        short_open_tag => Off => Off
        SMTP => localhost => localhost
        smtp_port => 25 => 25
        sql.safe_mode => Off => Off
        track_errors => Off => Off
        unserialize_callback_func => no value => no value
        upload_max_filesize => 2M => 2M
        upload_tmp_dir => no value => no value
        user_dir => no value => no value
        user_ini.cache_ttl => 300 => 300
        user_ini.filename => .user.ini => .user.ini
        variables_order => GPCS => GPCS
        xmlrpc_error_number => 0 => 0
        xmlrpc_errors => Off => Off
        y2k_compliance => On => On
        zend.enable_gc => On => On

    Read the article

  • Flash, parameters, security

    - by Quandary
    Hi, I have a question: in Flash, I have the ability to save certain info onto the server. The problem is that the user needs to be authenticated as admin in order to do so. I can't use sessions, since if you work longer than 20 minutes in the Flash application, the session is gone. The way I see it, I have two possibilities: 1. pass a parameter (bIsAdmin) to Flash from the website; 2. launch an HTTP GET request to get this value (bIsAdmin) from an ashx handler on application startup, when the session has not yet expired. In my opinion, both possibilities are not really secure... So, which one is safer, 1 or 2? Or does anybody have a better idea? In my opinion, 1 is safer, because with 2, you can just switch a packet tamperer in between, and bang, you're admin, with permission to save (or overwrite, i.e. delete) anything.
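    A hedged sketch of hardening option 2, assuming an ASP.NET backend like the ashx handler mentioned above (the handler name, secret, role name, and lifetime below are illustrative, not from the question): have the handler return bIsAdmin together with an expiring HMAC, and have the save endpoint recompute the HMAC before honoring a save, so a flag flipped by a packet tamperer fails verification.

        using System;
        using System.Security.Cryptography;
        using System.Text;
        using System.Web;

        // Hypothetical handler: issues "isAdmin|expiresTicks|signature".
        public class AdminFlagHandler : IHttpHandler
        {
            // Assumption: the key lives only in server-side config, never in the SWF.
            private static readonly byte[] Key = Encoding.UTF8.GetBytes("server-side-secret");

            public void ProcessRequest(HttpContext context)
            {
                bool isAdmin = context.User != null && context.User.IsInRole("Admin");
                long expires = DateTime.UtcNow.AddHours(8).Ticks; // outlives the 20-minute session
                string payload = isAdmin + "|" + expires;
                using (var hmac = new HMACSHA256(Key))
                {
                    string sig = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
                    context.Response.ContentType = "text/plain";
                    context.Response.Write(payload + "|" + sig);
                }
            }

            public bool IsReusable { get { return true; } }
        }

    The save handler re-derives the HMAC over the received payload (and checks the expiry) before writing anything, so neither a passed parameter nor a tampered reply can forge admin rights.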

    Read the article

  • Windows Server 2008 R2 creating a multi-year client certificate using the IIS certsrv page while deploying SSTP VPN

    - by Warren P
    I am trying to follow instructions on TechNet about deploying a standard (non-enterprise) SSTP-based VPN that were originally written for Server 2008, but I am using Server 2008 R2. I have gotten as far as the part where it asks you to request a Server Authentication certificate. I have deployed IIS and Active Directory Certificate Services, and chose a "Standalone", "Standard" (non-enterprise) Certificate Authority because I don't have an OID and don't think I should have to get one for a simple deployment of SSTP. The certificates produced by the Certification Authority's "Issue" command only have a one-year validity period; I want a multi-year certificate. At no point in this process is there any way to input this information, unless it's through the Attributes text input area on the Advanced Certificate Request page, which appears to be generated using an old ActiveX control, which means I can only do this using the workarounds in the article that I linked at the top, and only using Internet Explorer. Update: it may be that this question is pointless, since self-signed keys do not appear to work when I try them using Windows 8 as the VPN client. The problem is that the certificates self-created by the technique shown here do not have any Certificate Revocation Server URLs, so you get the error "The revocation function was unable to check revocation" and the VPN connection fails.
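    For reference (a commonly cited workaround, untested against this exact setup): on a standalone CA the validity period of issued certificates is controlled by registry settings on the CA itself, not by anything in the request, so it can be raised with certutil and a service restart. The value 5 below is only an example:

        certutil -setreg ca\ValidityPeriodUnits 5
        certutil -setreg ca\ValidityPeriod "Years"
        net stop certsvc
        net start certsvc

    Certificates issued after the restart should pick up the longer lifetime; certificates already issued are unaffected.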

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference
    San Diego, CA, http://www.toorcon.org/
    Dan Anderson, October 2012

    It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks, below, which I attended and which interested me.

    MalAndroid—the Crux of Android Infections, Aditya K. Sood
    Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
    Privacy at the Handset: New FCC Rules?, Valkyrie
    Hacking Measured Boot and UEFI, Dan Griffin
    You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
    What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
    Accessibility and Security, Anna Shubina
    Stop Patching, for Stronger PCI Compliance, Adam Brand
    McAfee Secure & Trustmarks—a Hacker's Best Friend, Jay James & Shane MacDougall

    MalAndroid—the Crux of Android Infections
    Aditya K. Sood, IOActive, Michigan State PhD candidate

    Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most of it has unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static-analysis app checker (Bouncer) and a vulnerability checker.

    What security problems does Android have? User-centric security, which depends on the user to grant permissions and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, mobility. Android had no "proper" encryption before Android 3.0, and there's no built-in protection against social engineering and web tricks. Alternative Android app markets are unsafe—simply visiting some markets can infect Android.

    Aditya classified Android malware types as: Type A—Apps, which interact with the Android app framework, for example a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location; Type K—Kernel, which exploits underlying Linux libraries or the kernel; and Type H—Hybrid, which uses multiple layers (app framework, libraries, kernel). Hybrids are most commonly used by Android botnets, which are popular with Chinese botnet authors.

    What are the threats from Android malware? These include leaking info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes—for example, promoting specific leaders of the Tunisian or Iranian revolutions). Android malware is frequently "masqueraded", that is, repackaged inside a legit app. To avoid detection, the hidden malware is not unwrapped until runtime. The malware payload can be hidden in, for example, PNG files. Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons—and then delete themselves. For example, the DKF Bootkit (China).

    Android app problems: no code signing! All apps are self-signed; native code execution; an all-or-none permission sandbox; alternate marketplaces; no robust Android malware detection at the network level; and a delayed patch process.

    Programming Weird Machines with ELF Metadata
    Rebecca "bx" Shapiro, Dartmouth College, NH
    https://github.com/bx/elf-bf-tools
    @bxsays on twitter

    Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap.

    Bx then talked about using ELF metadata as an (unintended) "weird machine". Some ELF background: a compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Table—works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

    For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions. Think of the ELF symbol table as an "assembly language" interpreter. It has these elements: instructions (add, move, jump if not 0 (jnz)); symbol table entries as "registers"; the symbol table value as "contents"; immediate values as constants; and direct values as addresses (e.g., 0xdeadbeef). A move instruction is a relocation table entry; an add instruction is a relocation table "addend" entry; a jnz instruction takes multiple relocation table entries.

    The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at the "end" and uses IFUNC table entries (containing function pointer addresses). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer increment, decrement, increment indirect, decrement indirect, jump forward, jump backward, print) and 3 registers.

    Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so it remained root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

        $ whoami
        bx
        $ ping localhost -t backdoor.sh    # executes backdoor
        $ whoami
        root

    The modified code increased from 285948 bytes to 290209 bytes. A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

    Privacy at the Handset: New FCC Rules?
    "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

    Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, through which the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries! Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only one month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated, but Verizon recommends, as an aggregation workaround, "recollating" data with other databases to identify customers indirectly.

    The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not FCC). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier/customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result, iPhones may be more regulated.

    Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant. What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control, for those who want it, by being held liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon; enforcement will be a problem, but consumers will still see some benefit.

    Hacking Measured Boot and UEFI
    Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

    Dan talked about hacking measured UEFI boot. First some terms: UEFI is a boot technology that is replacing BIOS (it has whitelisting and blacklisting). UEFI protects devices against rootkits. TPM: a hardware security device that stores hashes and hardware-protected keys. "Secure boot" can control, at the firmware level, which boot images can boot. "Measured boot" is an OS feature that tracks hashes (from the BIOS, boot loader, kernel, and early drivers). "Remote attestation" allows remote validation and control, based on policy, from a remote attestation server. Microsoft is pushing TPM (required for Windows 8), but Google is not. Intel TianoCore is the only open source implementation of UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.

    UEFI toolkits are evolving rapidly, but UEFI has weaknesses: it assumes the user is an ally; it trusts the TPM implicitly, and that it's attached to the computer; the hibernate file is unprotected (disk encryption protects against this); protection is migrating from hardware to firmware; there are delays in patching and whitelist updates; and will UEFI really be adopted by the mainstream (smartphone hardware support, bank support, apathetic consumer support)?

    You Can't Buy Security: Building the Open Source InfoSec Program
    Boris Sverdlik, ISDPodcast.com co-host

    Boris talked about problems typical of current security audits. "IT security" is an oxymoron—IT exists to enable business, uptime, utilization, and reporting, but doesn't care about security—IT has a conflict of interest. There's no magic bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing are not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—you need soft skills.

    Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team), and meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., AV × EF = SLE). Learn the different business units and their legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn what the sensitive information is (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering).

    Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, and physical security.

    Step 3: Implementation: keep it simple, stupid. Open source, although useful, is not free (implementation cost). Access controls with authentication & authorization for local and remote access: MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc. Application security: everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run apps with different risk levels on the same system. Assume the host/client is compromised and use app-level security controls. Network security: a VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end-user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth, and SSL does not mean "safe". Operational controls: have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the logs (as they have everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVAS and Seccubus are free. Security awareness: the reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture. Pen test by crowd-sourcing—test with logging. COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project.

    What Journalists Want: The Investigative Reporters' Perspective on Hacking
    Dave Maas, San Diego CityBeat
    Jason Leopold, Truthout.org

    The difference between hackers and investigative journalists: for hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills. J-School in 60 seconds: the generic formula is a person or issue of public interest, new info, or an angle; the generic criteria are proximity, prominence, timeliness, human interest, oddity, or consequence.

    Media awareness of hackers and trends: journalists are becoming extremely aware of hackers, with congressional debates (privacy, data breaches), demand for data mining by journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch). Info gathering by investigative journalists includes public records laws: the federal Freedom of Information Act (FOIA) is good, but slow, while the California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. You often need to sue (especially the FBI). CPRA is faster, and requests can be vague. There are also dumps and leaks (a la Wikileaks).

    Journalists want: leads, protection for themselves and their sources, and adapted tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web logs). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

    Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem
    Anna Shubina, Dartmouth College

    Anna talked about how accessibility and security are related. Accessibility of digital content (not real-world accessibility) mostly refers, for our purpose, to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice, because someone may want to read what you write. Google, for example, is very particular about the web browser you use and is bad at supporting other browsers; it uses JavaScript instead of links, often requiring mouseover to display content.

    PDF is a security nightmare: an executable format with embedded Flash, JavaScript, etc.—15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

    The "halting problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic security says complicated data formats are hard to parse and cannot be solved, due to the halting problem.

    The W3C Web Accessibility Guidelines—"Perceivable, Operable, Understandable, Robust"—are not much help, except for "Robust", but here are some gems: all information should be parsable (paraphrasing); if it is not parsable, it cannot be converted to alternate formats; maximize compatibility in new document formats.

    Executable web pages are bad for security and accessibility. They say it's for a better web experience, but is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input. Solutions: accessibility and security problems come from the same source; expose the "better user experience" myth; keep your corner of the Internet parsable; remember the halting problem—recognize false solutions (checking and verifying tools).

    Stop Patching, for Stronger PCI Compliance
    Adam Brand, Protiviti, @adamrbrand, http://www.picfun.com/

    Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's (an IT guy) time—960 hours/year—was spent patching POSs in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are default passwords, cleartext authentication, and misconfiguration (open firewall ports).

    Patching myths: Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead). Myth 2: the vendor decides what's critical (also PCI §6.1; §6.2 requires user ranking of vulnerabilities instead). Myth 3: scan and rescan until it passes (but PCI §11.2.1b says this applies only to high-risk vulnerabilities).

    Adam says good recommendations come from NIST 800-40. Instead, use sane patching and focus on what's really important. From NIST 800-40: Proactive: use a proactive vulnerability management process: use change control and configuration management, and monitor file integrity. Monitor: start with the NVD and other vulnerability alerts, not scanner results. Evaluate: public-facing system? workstation? internal server? (risk rank). Decide on an action and a timeline. Test: pre-test patches (stability, functionality, rollback) for change control. Install: notify, change control, tickets.

    McAfee Secure & Trustmarks—a Hacker's Best Friend
    Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

    The "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes McAfee's remote scanning. The problem is that removal of the trustmark acts as a flag that you're vulnerable, and it's easy to view status changes by viewing the McAfee list on its website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1-pixel image, it is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in trustmark status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag saying whether you're vulnerable.

    Read the article

  • Securely sending data from shared hosted PHP script to local MSSQL

    - by user329488
    I'm trying to add data from a webhook (from a web cart) to a local Microsoft SQL Server. It seems like the best route for me is to use a PHP script to listen for new data (POSTed as JSON), parse it, then run a query to add it to MSSQL. I'm not familiar with the security concerns around the connection between the PHP script (which would sit on a shared-host website) and the local MSSQL database. I would just keep the PHP script running on the same localhost (I have Apache running on Windows), but the URI for the webhook needs to be publicly accessible. Alternately, I assume that I could just schedule a script on the localhost to check periodically for updates through the web cart's API, though webhooks seem to be more fool-proof for an amateur programmer like myself. What steps can I take to ensure security when using PHP on a remote, shared host to connect to MSSQL on my local machine?

    Read the article

  • Protect Data and Save Money? Learn How Best-in-Class Organizations do Both

    - by roxana.bradescu
    Databases contain nearly two-thirds of the sensitive information that must be protected as part of any organization's overall approach to security, risk management, and compliance. Solutions for protecting data housed in databases vary from encrypting data at the application level to defense-in-depth protection of the database itself. So is there a difference? Absolutely! According to new research from the Aberdeen Group, Best-in-Class organizations experience fewer data breaches and audit deficiencies, at lower cost, by deploying database security solutions. And the results are dramatic: Aberdeen found that organizations encrypting data within their databases achieved 30% fewer data breaches and 15% greater audit efficiency with 34% less total cost when compared to organizations encrypting data within applications. Join us for a live webcast with Derek Brink, Vice President and Research Fellow at the Aberdeen Group, next week to learn how your organization can become Best-in-Class.

    Read the article

  • Developing payment gateway.

    - by kmaxat
    Hello, I have an idea of developing an internet payment gateway similar to PayPal or WebMoney. Since I'm only a sophomore in Computer Science and I've only taken intermediate programming classes, I've no idea where to search for general information about this topic. I do understand that this kind of project is clearly too much to handle for a sophomore. Since this is a forum for pro webmasters, probably some of you can point me in a direction of study. What book/source/article would you suggest I read to understand the fundamentals of internet payments? What book/source/article would you suggest I read to understand the fundamentals of internet security? What language is most commonly used for developing the payment security of a website? I appreciate any help. Thank you.

    Read the article

  • Android: debug certificate expired error

    - by Bill Osuch
    I started up Eclipse today, created a new project, and immediately had an error before I had changed a single line: Error generating final archive: Debug Certificate expired on 11/12/11. When installed, the Android SDK generates a "debug" signing certificate for you in a file called "debug.keystore". Eclipse uses this certificate rather than forcing you to create a new one for every project. In older versions of Eclipse, the certificate was only valid for 365 days, but as I understand it the default has been changed to 30 years in newer versions. If for whatever reason you don't want to upgrade Eclipse, you can manually delete the certificate to force Eclipse to generate a new one. You can find the location in Preferences -> Android -> Build -> Default debug keystore (mine was in C:\Users\myUserName\.android\); just delete the "debug.keystore" file, then go back into Eclipse and Clean the project to generate a new file.
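    As a quick check before deleting anything, the certificate's actual expiry date can be read straight from the keystore with the SDK's stock debug credentials (alias androiddebugkey, password android); the Windows path from the post is assumed here and will differ per machine:

        keytool -list -v -keystore C:\Users\myUserName\.android\debug.keystore -alias androiddebugkey -storepass android -keypass android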

    Read the article

  • how to get rid of certificate error: navigation blocked in ie8

    - by Radek
    When I access our intranet via https I get this "certificate error: navigation blocked" error in IE8 on Windows XP SP3. I can click "Continue to this website (not recommended)", but I use IE for automation testing, so I have to avoid these extra clicks. Any idea? I have tried: setting "Turn off the Security Settings Check feature" to enabled; setting "Display Mixed Content" to enabled; lowering security levels to minimum; and adding the web server address to the trusted zone.
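    One more avenue that usually removes the prompt entirely for automation (an aside, not from the original question): export the intranet server's certificate from the browser, then add it to the machine's Trusted Root store, e.g.

        certutil -addstore -f Root intranet.cer

    where intranet.cer is a placeholder for the exported certificate file. Note the certificate's subject name must also match the host name used in the tests, or a name-mismatch warning remains.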

    Read the article

  • certificate issues running app in windows 7 ?

    - by Jurjen
    Hi, I'm having some problems with my app. I'm using the 'org.mentalis.security' assembly to create a certificate object from a 'pfx' file; this is the line of code where the exception occurs:

        Certificate cert = Certificate.CreateFromPfxFile(publicKey, certificatePassword);

    This has always worked, and still does in production, but for some reason it throws an exception when run on Windows 7 (tried it on 2 machines):

        CertificateException : Unable to import the PFX file! [error code = -2146893792]

    I can't find much on this message via Google, but when checking the Event Viewer I get an 'Audit Failure' every time this exception occurs:

        Event ID = 5061
        Source = Microsoft Windows Security
        Task Category = System Integrity
        Keywords = Audit Failure

        Cryptographic operation.
        Subject:
            Security ID: NT AUTHORITY\IUSR
            Account Name: IUSR
            Account Domain: NT AUTHORITY
            Logon ID: 0x3e3
        Cryptographic Parameters:
            Provider Name: Microsoft Software Key Storage Provider
            Algorithm Name: Not Available.
            Key Name: VriendelijkeNaam
            Key Type: User key.
        Cryptographic Operation:
            Operation: Open Key.
            Return Code: 0x2

    I'm not sure why this isn't working on Windows 7; I never had problems when I was running on Vista. I am running VS2008 as administrator, but I guess that maybe the ASP.NET user doesn't have sufficient rights or something. It's pretty strange that the 'Algorithm Name' is 'Not Available'. Can anyone help me with this... TIA, Jurjen de Groot
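    One thing worth ruling out (a guess based on the IUSR identity in the audit entry, not a confirmed fix for the mentalis library): when a PFX is imported under an application-pool account, .NET defaults to the user-profile key store, which fails for service accounts that have no loaded profile. With the built-in X509Certificate2 class the usual workaround looks like this; publicKey and certificatePassword mirror the question's variables:

        using System.Security.Cryptography.X509Certificates;

        // Sketch: import the PFX into the machine key store so the
        // worker-process identity doesn't need a user profile.
        var cert = new X509Certificate2(
            publicKey,                // path to the .pfx file
            certificatePassword,
            X509KeyStorageFlags.MachineKeySet |
            X509KeyStorageFlags.PersistKeySet);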

    Read the article

  • Transmission and verification of certificate (openssl) with socket in c

    - by allenzzzxd
    Hello guys, I have to write this code in C. I have already generated the certificate of one terminal, t1 (t1.pem, generated by openssl), and communication between the terminals t1 and t2 has been established via sockets in C. Now I want to send this certificate to the other terminal, t2. I want t2 to receive the certificate, verify it, and answer t1 with an acceptance. When t1 gets this acceptance, it will do the rest of the work. But I don't know how to do these things. For example, do I transmit t1.pem as a string? And on the t2 side, what do I do to verify it? I know there are functions in openssl to do so, but I'm not so clear about them. Finally, what should the acceptance normally look like? @_@ A lot of questions here.. sorry... if someone could give me some guidance.. Thanks a lot in advance!

    Read the article

  • How to give ASP.NET access to a private key in a certificate in the certificate store?

    - by thames
    I have an ASP.NET application that accesses a private key in a certificate in the certificate store. On Windows Server 2003 I was able to use winhttpcertcfg.exe to give private key access to the NETWORK SERVICE account. How do I give permissions to access a private key in a certificate in the certificate store (Local Computer\Personal) on Windows Server 2008 R2 in an IIS 7.5 website? I've tried giving Full Trust access to "Everyone", "IIS AppPool\DefaultAppPool", "IIS_IUSRS", and every other security account I could find using the Certificates MMC (Server 2008 R2). However, the code below demonstrates that it does not have access to the private key of a certificate that was imported with its private key; it instead throws an error every time the private key property is accessed.

    Default.aspx:

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
        <%@ Import Namespace="System.Security.Cryptography.X509Certificates" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:Repeater ID="repeater1" runat="server">
                    <HeaderTemplate>
                        <table>
                            <tr>
                                <td>Cert</td>
                                <td>Public Key</td>
                                <td>Private Key</td>
                            </tr>
                    </HeaderTemplate>
                    <ItemTemplate>
                        <tr>
                            <td><%#((X509Certificate2)Container.DataItem).GetNameInfo(X509NameType.SimpleName, false) %></td>
                            <td><%#((X509Certificate2)Container.DataItem).HasPublicKeyAccess() %></td>
                            <td><%#((X509Certificate2)Container.DataItem).HasPrivateKeyAccess() %></td>
                        </tr>
                    </ItemTemplate>
                    <FooterTemplate>
                        </table>
                    </FooterTemplate>
                </asp:Repeater>
            </div>
            </form>
        </body>
        </html>

    Default.aspx.cs:

        using System;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;
        using System.Web.UI;

        public partial class _Default : Page
        {
            public X509Certificate2Collection Certificates;

            protected void Page_Load(object sender, EventArgs e)
            {
                // Local Computer\Personal
                var store = new X509Store(StoreLocation.LocalMachine);
                // create and open store for read-only access
                store.Open(OpenFlags.ReadOnly);
                Certificates = store.Certificates;
                repeater1.DataSource = Certificates;
                repeater1.DataBind();
            }
        }

        public static class Extensions
        {
            public static string HasPublicKeyAccess(this X509Certificate2 cert)
            {
                try
                {
                    AsymmetricAlgorithm algorithm = cert.PublicKey.Key;
                }
                catch (Exception ex)
                {
                    return "No";
                }
                return "Yes";
            }

            public static string HasPrivateKeyAccess(this X509Certificate2 cert)
            {
                try
                {
                    string algorithm = cert.PrivateKey.KeyExchangeAlgorithm;
                }
                catch (Exception ex)
                {
                    return "No";
                }
                return "Yes";
            }
        }
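    For comparison, one approach that has worked elsewhere (a sketch, not a verified fix for this machine): locate the key-container file behind the certificate and grant the application-pool group read access on it directly. The MachineKeys path and the IIS_IUSRS group are the conventional defaults and may differ; the snippet assumes a CAPI RSA key and must run elevated:

        using System;
        using System.IO;
        using System.Security.AccessControl;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;

        public static class PrivateKeyAcl
        {
            // Grants read access on the file that backs a machine-store RSA key.
            public static void GrantReadAccess(X509Certificate2 cert, string account)
            {
                var rsa = cert.PrivateKey as RSACryptoServiceProvider;
                if (rsa == null)
                    throw new InvalidOperationException("No CAPI RSA private key.");

                // Machine keys live under %ProgramData%\Microsoft\Crypto\RSA\MachineKeys.
                string keyFile = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
                    @"Microsoft\Crypto\RSA\MachineKeys",
                    rsa.CspKeyContainerInfo.UniqueKeyContainerName);

                FileSecurity acl = File.GetAccessControl(keyFile);
                acl.AddAccessRule(new FileSystemAccessRule(
                    account, FileSystemRights.Read, AccessControlType.Allow));
                File.SetAccessControl(keyFile, acl);
            }
        }

        // Usage (run once, as administrator):
        // PrivateKeyAcl.GrantReadAccess(cert, @"IIS_IUSRS");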

    Read the article

  • Another developer revoked and re-created my client's iOS Distribution Certificate - does this mean I can never update my client's existing app?

    - by Schnapple
    Here is the story so far: A client hired us to do an iPhone app for them. This client had never done an iPhone app before and as part of the arrangement we handled all aspects for them, including app store submission, and we handle some level of future development (new features, bug/security fixes, etc.) We created a Distribution certificate and key pair on the client's behalf We developed the app, published it to the App Store without incident Some time later the client hired a second developer to do a different app for them This second developer, it appears, has revoked the existing Distribution certificate and created a new one with a new key pair on their system This second developer shared the new Distribution certificate and key pair with us for future reference. Due to user error, this new certificate and key pair has now been imported onto the Macintosh where the original certificate and key pair for the original app we developed were created and the originals were not backed up. So we have App #1 on the App Store with Distribution certificate/key pair #1 App #2 either on the App Store or soon to be using Distribution certificate/key pair #2 Distribution certificate/key pair #1 appears to be lost now So my question is: if we ever need to update App #1, will we be able to, using Distribution certificate/key pair #2? Or will we have to upload it as a new app?

    Read the article

  • Spring Security 3.1 xsd and jars mismatch issue

    - by kmansoor
    I'm trying to migrate from Spring Framework 3.0.5 to 3.1 and Spring Security 3.0.5 to 3.1 (not to mention Hibernate 3.6 to 4.1), using Apache Ivy. I'm getting the following error trying to start Tomcat 7.23 within Eclipse Helios (among a host of others; however, this is the last in the console):

        org.springframework.beans.factory.BeanDefinitionStoreException: Line 7 in XML document from ServletContext resource [/WEB-INF/focus-security.xml] is invalid; nested exception is org.xml.sax.SAXParseException: Document root element "beans:beans", must match DOCTYPE root "null".
        org.xml.sax.SAXParseException: Document root element "beans:beans", must match DOCTYPE root "null".

    My security config file looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans:beans xmlns="http://www.springframework.org/schema/security"
            xmlns:beans="http://www.springframework.org/schema/beans"
            xmlns:jdbc="http://www.springframework.org/schema/jdbc"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.1.xsd
                http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-3.1.xsd">

    ivy.xml looks like this:

        <dependencies>
            <dependency org="org.hibernate" name="hibernate-core" rev="4.1.7.Final"/>
            <dependency org="org.hibernate" name="com.springsource.org.hibernate.validator" rev="4.2.0.Final"/>
            <dependency org="org.hibernate.javax.persistence" name="hibernate-jpa-2.0-api" rev="1.0.1.Final"/>
            <dependency org="org.hibernate" name="hibernate-entitymanager" rev="4.1.7.Final"/>
            <dependency org="org.hibernate" name="hibernate-validator" rev="4.3.0.Final"/>
            <dependency org="org.springframework" name="spring-context" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework" name="spring-web" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework" name="spring-tx" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework" name="spring-webmvc" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework" name="spring-test" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework.security" name="spring-security-core" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework.security" name="spring-security-web" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework.security" name="spring-security-config" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework.security" name="spring-security-taglibs" rev="3.1.2.RELEASE"/>
            <dependency org="net.sf.dozer" name="dozer" rev="5.3.2"/>
            <dependency org="org.apache.poi" name="poi" rev="3.8"/>
            <dependency org="commons-io" name="commons-io" rev="2.4"/>
            <dependency org="org.slf4j" name="slf4j-api" rev="1.6.6"/>
            <dependency org="org.slf4j" name="slf4j-log4j12" rev="1.6.6"/>
            <dependency org="org.slf4j" name="slf4j-ext" rev="1.6.6"/>
            <dependency org="log4j" name="log4j" rev="1.2.17"/>
            <dependency org="org.testng" name="testng" rev="6.8"/>
            <dependency org="org.dbunit" name="dbunit" rev="2.4.8"/>
            <dependency org="org.easymock" name="easymock" rev="3.1"/>
        </dependencies>

    I understand (hope) this error is due to a mismatch between the declared XSDs and the jars on the classpath. Any pointers will be greatly appreciated.

    Read the article

  • Standalone firewall + antivirus or combined security tools?

    - by pukipuki
    For years I've been using some antivirus software and a separate firewall. Now every antivirus has gained some firewall features, and there are complete "internet security" suites... and every firewall has gained some antivirus functionality, and there are "internet security" versions of those too. Firstly, it is hard and sometimes impossible to install and use a standalone AV and FW together. Sometimes I can't avoid conflicts (I can't install KAV 2010 without removing Outpost Firewall, etc.). Secondly, the combined solutions have some imbalance: the firewall from a famous antivirus brand is so user-friendly that it is not suitable for me (lack of detail in Norton Internet Security, for example), and the antiviruses from famous firewall brands are still weak, as tests have shown. What are today's best practices in terms of functionality and security? A combined internet security suite, or two standalone applications from different vendors?

    Read the article

  • Cisco ASA - Enable communication between same security level

    - by Conor
    I have recently inherited a network with a Cisco ASA (running version 8.2). I am trying to configure it to allow communication between two interfaces configured with the same security level (DMZ-DMZ). "same-security-traffic permit inter-interface" has been set, but hosts are unable to communicate between the interfaces. I am assuming that some NAT settings are causing my issue. Below is my running config:

    ASA Version 8.2(3)
    !
    hostname asa
    enable password XXXXXXXX encrypted
    passwd XXXXXXXX encrypted
    names
    !
    interface Ethernet0/0
     switchport access vlan 400
    !
    interface Ethernet0/1
     switchport access vlan 400
    !
    interface Ethernet0/2
     switchport access vlan 420
    !
    interface Ethernet0/3
     switchport access vlan 420
    !
    interface Ethernet0/4
     switchport access vlan 450
    !
    interface Ethernet0/5
     switchport access vlan 450
    !
    interface Ethernet0/6
     switchport access vlan 500
    !
    interface Ethernet0/7
     switchport access vlan 500
    !
    interface Vlan400
     nameif outside
     security-level 0
     ip address XX.XX.XX.10 255.255.255.248
    !
    interface Vlan420
     nameif public
     security-level 20
     ip address 192.168.20.1 255.255.255.0
    !
    interface Vlan450
     nameif dmz
     security-level 50
     ip address 192.168.10.1 255.255.255.0
    !
    interface Vlan500
     nameif inside
     security-level 100
     ip address 192.168.0.1 255.255.255.0
    !
    ftp mode passive
    clock timezone JST 9
    same-security-traffic permit inter-interface
    same-security-traffic permit intra-interface
    object-group network DM_INLINE_NETWORK_1
     network-object host XX.XX.XX.11
     network-object host XX.XX.XX.13
    object-group service ssh_2220 tcp
     port-object eq 2220
    object-group service ssh_2251 tcp
     port-object eq 2251
    object-group service ssh_2229 tcp
     port-object eq 2229
    object-group service ssh_2210 tcp
     port-object eq 2210
    object-group service DM_INLINE_TCP_1 tcp
     group-object ssh_2210
     group-object ssh_2220
    object-group service zabbix tcp
     port-object range 10050 10051
    object-group service DM_INLINE_TCP_2 tcp
     port-object eq www
     group-object zabbix
    object-group protocol TCPUDP
     protocol-object udp
     protocol-object tcp
    object-group service http_8029 tcp
     port-object eq 8029
    object-group network DM_INLINE_NETWORK_2
     network-object host 192.168.20.10
     network-object host 192.168.20.30
     network-object host 192.168.20.60
    object-group service imaps_993 tcp
     description Secure IMAP
     port-object eq 993
    object-group service public_wifi_group
     description Service allowed on the Public Wifi Group. Allows Web and Email.
     service-object tcp-udp eq domain
     service-object tcp-udp eq www
     service-object tcp eq https
     service-object tcp-udp eq 993
     service-object tcp eq imap4
     service-object tcp eq 587
     service-object tcp eq pop3
     service-object tcp eq smtp
    access-list outside_access_in remark http traffic from outside
    access-list outside_access_in extended permit tcp any object-group DM_INLINE_NETWORK_1 eq www
    access-list outside_access_in remark ssh from outside to web1
    access-list outside_access_in extended permit tcp any host XX.XX.XX.11 object-group ssh_2251
    access-list outside_access_in remark ssh from outside to penguin
    access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group ssh_2229
    access-list outside_access_in remark http from outside to penguin
    access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group http_8029
    access-list outside_access_in remark ssh from outside to internal hosts
    access-list outside_access_in extended permit tcp any host XX.XX.XX.13 object-group DM_INLINE_TCP_1
    access-list outside_access_in remark dns service to internal host
    access-list outside_access_in extended permit object-group TCPUDP any host XX.XX.XX.13 eq domain
    access-list dmz_access_in extended permit ip 192.168.10.0 255.255.255.0 any
    access-list dmz_access_in extended permit tcp any host 192.168.10.29 object-group DM_INLINE_TCP_2
    access-list public_access_in remark Web access to DMZ websites
    access-list public_access_in extended permit object-group TCPUDP any object-group DM_INLINE_NETWORK_2 eq www
    access-list public_access_in remark General web access. (HTTP, DNS & ICMP and Email)
    access-list public_access_in extended permit object-group public_wifi_group any any
    pager lines 24
    logging enable
    logging asdm informational
    mtu outside 1500
    mtu public 1500
    mtu dmz 1500
    mtu inside 1500
    no failover
    icmp unreachable rate-limit 1 burst-size 1
    no asdm history enable
    arp timeout 60
    global (outside) 1 interface
    global (dmz) 2 interface
    nat (public) 1 0.0.0.0 0.0.0.0
    nat (dmz) 1 0.0.0.0 0.0.0.0
    nat (inside) 1 0.0.0.0 0.0.0.0
    static (inside,outside) tcp interface 2229 192.168.0.29 2229 netmask 255.255.255.255
    static (inside,outside) tcp interface 8029 192.168.0.29 www netmask 255.255.255.255
    static (dmz,outside) XX.XX.XX.13 192.168.10.10 netmask 255.255.255.255 dns
    static (dmz,outside) XX.XX.XX.11 192.168.10.30 netmask 255.255.255.255 dns
    static (dmz,inside) 192.168.0.29 192.168.10.29 netmask 255.255.255.255
    static (dmz,public) 192.168.20.30 192.168.10.30 netmask 255.255.255.255 dns
    static (dmz,public) 192.168.20.10 192.168.10.10 netmask 255.255.255.255 dns
    static (inside,dmz) 192.168.10.0 192.168.0.0 netmask 255.255.255.0 dns
    access-group outside_access_in in interface outside
    access-group public_access_in in interface public
    access-group dmz_access_in in interface dmz
    route outside 0.0.0.0 0.0.0.0 XX.XX.XX.9 1
    timeout xlate 3:00:00
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
    timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
    timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
    timeout tcp-proxy-reassembly 0:01:00
    dynamic-access-policy-record DfltAccessPolicy
    http server enable
    http 192.168.0.0 255.255.255.0 inside
    no snmp-server location
    no snmp-server contact
    snmp-server enable traps snmp authentication linkup linkdown coldstart
    crypto ipsec security-association lifetime seconds 28800
    crypto ipsec security-association lifetime kilobytes 4608000
    telnet timeout 5
    ssh 192.168.0.0 255.255.255.0 inside
    ssh timeout 20
    console timeout 0
    dhcpd dns 61.122.112.97 61.122.112.1
    dhcpd auto_config outside
    !
    dhcpd address 192.168.20.200-192.168.20.254 public
    dhcpd enable public
    !
    dhcpd address 192.168.0.200-192.168.0.254 inside
    dhcpd enable inside
    !
    threat-detection basic-threat
    threat-detection statistics host
    threat-detection statistics access-list
    no threat-detection statistics tcp-intercept
    ntp server 130.54.208.201 source public
    webvpn
    !
    class-map inspection_default
     match default-inspection-traffic
    !
    !
    policy-map type inspect dns preset_dns_map
     parameters
      message-length maximum client auto
      message-length maximum 512
    policy-map global_policy
     class inspection_default
      inspect dns preset_dns_map
      inspect ftp
      inspect h323 h225
      inspect h323 ras
      inspect ip-options
      inspect netbios
      inspect rsh
      inspect rtsp
      inspect skinny
      inspect esmtp
      inspect sqlnet
      inspect sunrpc
      inspect tftp
      inspect sip
      inspect xdmcp
    !
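
    Update: my current guess is that a NAT exemption between the two same-security interfaces is missing, since on 8.2 inter-interface traffic still has to satisfy the NAT rules. A sketch of what I plan to try, where dmz2 and 192.168.30.0/24 are placeholders for my second DMZ interface and its subnet (which aren't shown in the config above):

    access-list dmz_nonat extended permit ip 192.168.10.0 255.255.255.0 192.168.30.0 255.255.255.0
    nat (dmz) 0 access-list dmz_nonat
    access-list dmz2_nonat extended permit ip 192.168.30.0 255.255.255.0 192.168.10.0 255.255.255.0
    nat (dmz2) 0 access-list dmz2_nonat

    If anyone can confirm whether nat 0 (exemption) is actually required here given the global/nat pairs already in place, that would help.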

    Read the article

  • Android – Finding your SDK debug certificate MD5 fingerprint using Keytool

    - by Bill Osuch
    I recently upgraded to a new development machine, which means the certificate used to sign my applications during debug changed. Under most circumstances you'll never notice a difference, but if you're developing apps with Google's Maps API you'll find that your old API key no longer works with the new certificate fingerprint. Google's instructions walk you through retrieving the MD5 fingerprint of your SDK debug certificate (the certificate you're probably signing your apps with before publishing), but they don't say much about the Keytool command itself. The thing to remember is that Keytool is part of Java, not the Android SDK, so you'll never find it by searching your Android and Eclipse directories. Mine is located in C:\Program Files\Java\jdk1.7.0_02\bin, so you should find yours somewhere similar. From a command prompt, navigate to that directory and type:

    keytool -v -list -keystore "C:/Documents and Settings/<user name>/.android/debug.keystore"

    That's assuming your debug certificate is in the typical location. If it isn't, you can find out where it lives in Eclipse by clicking Window -> Preferences -> Android -> Build. There's no need to use the additional commands shown on Google's page. You'll be prompted for a password; just hit Enter. The last line shown, Certificate fingerprint, is the key you'll give Google to generate your new Maps API key.
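
    For reference, after you press Enter at the password prompt, the tail of the output looks roughly like this (the fingerprint values below are made up, and older JDKs may instead print a single "Certificate fingerprint (MD5)" line):

    Certificate fingerprints:
         MD5:  94:1E:43:49:87:73:BB:E6:A6:88:D7:20:F1:8E:B5:98
         SHA1: 6C:38:9A:11:2F:04:5E:D1:B2:40:C8:AA:77:90:13:DE:F2:55:01:6B
         Signature algorithm name: SHA1withRSA

    The MD5 line is the value to paste into Google's Maps API key signup page.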

    Read the article

  • Security Pattern to store SSH Keys

    - by Mehdi Sadeghi
    I am writing a simple Flask application to submit scientific tasks to remote HPC resources. In the background, my application talks to the remote machines via SSH (because SSH is widely available on HPC resources). To maintain this connection in the background I need either to use the user's SSH keys on the machine running the app (when the user has passwordless SSH access to the remote machine) or to store the user's credentials for the remote machines. I am not sure which path to take: should I store the remote machine's username/password, or should I store the user's SSH key pair in the database? I want to know the correct and safe way to connect to remote servers in the background in the context of a web application.
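
    For context, the background connection currently looks roughly like this sketch (paramiko; the host, user and key path are placeholders), which is why I need either a key file or a password available server-side:

    import paramiko

    def run_remote(host, user, key_path, command):
        # Sketch only: a real deployment must verify host keys instead of auto-adding them.
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, key_filename=key_path)
        _, stdout, _ = client.exec_command(command)
        output = stdout.read().decode()
        client.close()
        return output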

    Read the article

  • Online Password Security Tactics

    - by BuckWoody
    Recently two more large databases were attacked and compromised, one at the popular Gawker Media sites and the other at McDonald's. Every time this kind of thing happens (which is FAR too often) it should remind the technical professional to secure their systems correctly. If you write software that stores passwords, they should be heavily encrypted and never human-readable in any storage. I advocate separate stores for the login and the password, so that if one is compromised, the other is not. I also advocate setting a bit flag when a user changes their password, and sending out a reminder to change passwords if that bit isn't changed every three or six months. But this post is about the *other* side: what to do to secure your own passwords, especially those you use online, either in a cloud service or at a provider. While you're not in control of these breaches, there are some things you can do to help protect yourself. Most of these are obvious, but they contain a few little twists that make the process easier.

    Use Complex Passwords
    This is easily stated, and probably one of the most unheeded pieces of advice. There are three main concepts here: don't use a dictionary-based word, use mixed case, and use punctuation, special characters and so on. So this: password isn't nearly as safe as this: P@ssw03d. Of course, this only helps if the site that stores your password encrypts it. Gawker does, so with the second password you're at least theoretically in better shape than with the first. Dictionary words are quickly broken, regardless of the encryption, so the more unusual characters you use, and the farther from dictionary words you get, the better. Of course, none of this helps, not even a little, if the site stores the passwords in clear text, or if the key to its encryption is broken. In that case...

    Use a Different Password at Every Site
    What? I have hundreds of sites! Are you kidding me? Nope, I'm not. If you use the same password at every site, then when a site gets attacked, the attacker will keep your name and password for attacks on other sites. So the only safe thing to do is to use different names or passwords (or both) at each site. Of course, most sites use your e-mail as a username, so you're kind of hosed there. So even though you have hundreds of sites you visit, you need at least a different password at each one. But it's easier than you think, if you use an algorithm. What I'm describing is to pick a "root" password, and then modify that based on the site or purpose. That way, if one site is compromised, you can still use the root password for the other sites. Let's take that second password: P@ssw03d. Now you can append, prepend or intersperse other characters to make it unique to the site, so you can easily remember the root while keeping each site's password unique. For instance, perhaps you read a lot of information on Gawker; how about these: P@ssw03dRead, ReadP@ssw03d, PR@esasdw03d. If you have lots of sites, tracking even this can be difficult, so I recommend password software such as Password Safe or some other tool that keeps a secure database of your passwords for each site. DO NOT store this on the web. DO NOT use an Office document (Microsoft or otherwise) that is "encrypted": the encryption office automation packages use is trivial and easily broken. A quick web search for tools that break it should show you how bad a choice this is.

    Change Your Password on a Schedule
    I know. It's a real pain. And it doesn't seem worth it... until your account gets hacked. A quick note here: whenever a site gets hacked (and I find out about it) I change the password at that site immediately (or quit doing business with them), and then change the root password on every site as quickly as I can. If you follow the tip above, it's not as hard. Just add another number, year, month, day, something like that, into the mix. It's not unlike making a Primary Key in an RDBMS: P@ssw03dRead10242010. Change it at the site, then update your password database. I do this about once a month, on the first or last day, during staff meetings. :) If you have other tips, post them here. We can all learn from each other on this.
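
    To make the algorithm concrete, here is the same recipe as a sketch in Python; it's just the manual scheme above (root + site word + date stamp), not a suggestion to generate passwords in code:

    def site_password(root, site_word, date_stamp):
        # e.g. site_password("P@ssw03d", "Read", "10242010") -> "P@ssw03dRead10242010"
        return root + site_word + date_stamp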

    Read the article

  • File Upload Forms: Security

    - by Snow_Mac
    So I'm building an application for uploading files. We're paying scientists to contribute information on pests, diseases and bugs (for plants). We need the ability to drag and drop a file to upload it. The question becomes: since the users will be authenticated and set up by us, is it necessary to include a virus scanner to prevent the upload and insertion of malicious files? How important is this?
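
    For what it's worth, the kind of check I have in mind is a server-side scan of each file before it is stored; a minimal sketch, assuming ClamAV's clamscan CLI is installed on the server (the file path is a placeholder):

    import subprocess

    def is_clean(path):
        # clamscan exit codes: 0 = clean, 1 = infected, 2 = error
        result = subprocess.run(["clamscan", "--no-summary", path])
        if result.returncode == 2:
            raise RuntimeError("clamscan failed on " + path)
        return result.returncode == 0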

    Read the article

  • Productivity vs Security [closed]

    - by nerijus
    I really don't know if this is the right place to ask such a question, but it is about programming, in a different light. So: I'm currently contracting with a company which pretends to be a big corporation. Everyone is so important that all small issues, like developers, are ignored. To give you a sample: the company VPN is configured so that while you are on VPN, HTTP traffic is banned. Bearing this in mind, can you imagine my workflow? Morning. OK, time to get the latest source. Oops, no VPN. Let's connect. Click-click. 3 sec. wait time. OK, getting source. Do I have emails? Oops, VPN is on, can't check my emails. Need to wait for the source to come down. Finally, here it is! OK, click-click, VPN is gone. What is in my email? Someone reported a bug. Good, let's track it down. It is in TFS already. Oh damn, I need VPN. Click-click. OK, there is the description. Yeah, I have seen this issue on stackoverflow.com. Let's go there. Oops, no internet. Click-click. No internet. What? ipconfig... The DHCP server kicked me out. Damn. Renew IP. 1..2..3. OK, internet is back. Google: site:stackoverflow.com. 3 min. I have a solution. Great, I love stackoverflow.com. I don't want to remember the days when there was no stackoverflow.com. OK. Copy-paste this into Studio. Damn, Studio has stalled, it can't reach the files on TFS. Click-click. VPN is back. Get the source out, paste my code. Grand. Let's see what the other comments about the issue on stackoverflow.com say. Hmm... there is a link. Click. Dammit! No internet. Click-click. No internet. DHCP kicked me out. Dammit. Now it is even worse: this happens 3-4 times a day. After a certain number of VPN connections are opened and closed, my internet goes down solid. The only way to get it back is to reboot. All my browser tabs, SQL windows and Studio will be gone. This happened just now, while I was typing this. Back to the issue I am solving right now: I am getting frustrated. I no longer care about the better solution for this issue; let's do it somehow and forget. This click-click barrier between the internet and TFS kills me... Sounds familiar? You could say there are VPN settings to change. No! This is a company laptop; I'm not allowed to make changes. I am very, very lucky to have admin privileges on my machine; most of our developers don't. So I just learned to live with this frustration. It takes away 40-60 minutes daily. I tried to email company support and the admins. They are too important and too busy with something, so my little man's problem was politely ignored. The question is: is this normal in the corporate world? (I have been in the States, Canada and Germany. Never seen this.)

    Read the article

  • Latest Edition of Security Inside Out Newsletter Now Available

    - by Troy Kitch
    The latest edition of the Security Inside Out newsletter is now available. If you don't get this bi-monthly security newsletter in your inbox, subscribe to receive the latest database security news. This edition includes: Q&A: Oracle CSO Mary Ann Davidson on Meeting Tomorrow's Security Threats. Oracle Chief Security Officer Mary Ann Davidson shares her thoughts on next-generation security threats. Read More. New Study: Increased Security Spending Still Not Protecting Right Assets. Despite widespread belief that database breaches represent the greatest security risk to their business, organizations continue to devote a far greater share of their security resources to network assets rather than database assets, according to a new report issued by CSO and sponsored by Oracle. Read More

    Read the article
