Search Results

Search found 22139 results on 886 pages for 'security testing'.


  • What is the security risk of object reflection?

    - by Legend
    So after a few hours of working around the limitation that Reflection is currently disabled on Google App Engine, I was wondering if someone could help me understand why object reflection can be a threat. Is it because I can inspect the private variables of a class, or are there other, deeper reasons?
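
    Editor's note: the core risk is that reflection bypasses access modifiers. A minimal sketch of what it permits, written in C# for consistency with this page (the post itself concerns App Engine's Java sandbox; the class and field names are invented):

      using System;
      using System.Reflection;

      class Account
      {
          private string secret = "s3cr3t"; // private by design
      }

      class ReflectionDemo
      {
          static void Main()
          {
              var account = new Account();
              // Reflection reads the private field despite the access modifier.
              FieldInfo field = typeof(Account).GetField(
                  "secret", BindingFlags.NonPublic | BindingFlags.Instance);
              Console.WriteLine(field.GetValue(account)); // prints "s3cr3t"
          }
      }

    Beyond exposing private state, unrestricted reflection inside a shared sandbox can reach internal runtime types, which is the deeper reason hosts like App Engine disable it.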

    Read the article

  • Security issue with TinyBrowser

    - by jasmine
    I have used TinyBrowser with TinyMCE as a plugin (my panel is PHP based). When uploading, there is a link like this: www.****.com/dashboard/tiny_mce/plugins/tinybrowser/tinybrowser.php?type=image. This link can be opened in any browser without any permission check. What is the solution in this case? Could I use the admin panel's session control in the TinyMCE plugins? Thanks in advance.

    Read the article

  • Common vulnerabilities for WinForms applications

    - by David Stratton
    I'm not sure if this is on-topic here or not, but it's so specific to .NET WinForms that I believe it makes more sense here than at the Security Stack Exchange site. (Also, it's related strictly to secure coding, and I think it's as on-topic as any question asking about common website vulnerabilities that I see all over the site.)

    For years, our team has been doing threat modeling on website projects. Part of our template includes the OWASP Top 10 plus other well-known vulnerabilities, so that when we're doing threat modeling, we always make sure that we have a documented process for addressing each of those common vulnerabilities. Example:

    SQL Injection (OWASP A-1), standard practice:

    - Use stored parameterized procedures for data access where feasible
    - Use parameterized queries if stored procedures are not feasible (e.g. a third-party DB that we can't modify)
    - Escape single quotes only when the above options are not feasible
    - Database permissions must be designed with the least-privilege principle
    - By default, users/groups have no access
    - While developing, document the access needed to each object (table/view/stored procedure) and the business need for that access

    [snip]

    At any rate, we used the OWASP Top 10 as the starting point for commonly known vulnerabilities specific to websites. (Finally, the question:) On rare occasions, we develop WinForms or Windows Service applications when a web app doesn't meet the needs. I'm wondering if there is an equivalent list of commonly known security vulnerabilities for WinForms apps. Off the top of my head, I can think of a few:

    - SQL injection is still a concern
    - Buffer overflow is normally prevented by the CLR, but is more possible if unmanaged code is mixed in with managed code
    - .NET code can be decompiled, so storing sensitive info in code, as opposed to encrypted in the app.config...

    Is there such a list, or even several versions of such a list, from which we can borrow to create our own? If so, where can I find it? I haven't been able to find one, but if there is one, it would be a great help to us, and also to other WinForms developers.
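
    Editor's note: a minimal sketch of the parameterized-query practice the checklist above describes (the table, columns, and connection string are invented for the example):

      using System.Data;
      using System.Data.SqlClient;

      static DataTable FindUser(string connectionString, string userName)
      {
          using (var connection = new SqlConnection(connectionString))
          using (var command = new SqlCommand(
              "SELECT Id, Name FROM Users WHERE Name = @name", connection))
          {
              // The value travels as a parameter and is never concatenated into
              // the SQL text, so input like "'; DROP TABLE Users;--" stays inert.
              command.Parameters.AddWithValue("@name", userName);
              var table = new DataTable();
              new SqlDataAdapter(command).Fill(table);
              return table;
          }
      }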

    Read the article

  • Integrating Dynamics CRM with SharePoint ASCX SecurityException Issue

    - by Gavin
    Hi, I have an ASCX control (WebParts aren't used in this solution) which interrogates CRM 4's data via the API provided by Microsoft.Crm.Sdk and Microsoft.Crm.SdkTypeProxy. The solution works until it's deployed to SharePoint. Initially I received the following error:

      [SecurityException: That assembly does not allow partially trusted callers.]
        MyApp.SharePoint.Web.Applications.MyAppUtilities.RefreshUserFromCrm(String login) +0
        MyApp.SharePoint.Web.Applications.MyApp_LoginForm.btnLogin_Click(Object sender, EventArgs e) +30
        System.Web.UI.WebControls.Button.OnClick(EventArgs e) +111

    Then I tried wrapping the calling code in the ASCX with SPSecurity.RunWithElevatedPrivileges:

      SPSecurity.RunWithElevatedPrivileges(delegate()
      {
          // FBA user may not exist yet or may require refreshing
          MyAppUtilities.RefreshUserFromCrm(txtUser.Text);
      });

    But this resulted in the following error:

      [SecurityException: Request for the permission of type 'Microsoft.SharePoint.Security.SharePointPermission, Microsoft.SharePoint.Security, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c' failed.]
        MyApp.SharePoint.Web.Applications.MyApp_LoginForm.btnLogin_Click(Object sender, EventArgs e) +0
        System.Web.UI.WebControls.Button.OnClick(EventArgs e) +111

    When I elevate the trust level in the SharePoint site to Full, everything works fine; however, I need to come up with a solution that uses minimal trust (or a customized minimal trust). I'm also trying to steer clear of adding anything to the GAC. Any ideas? I assume the issue occurs when calling functionality from Microsoft.Crm.*. Thanks in advance for any help anyone can provide. Cheers, Gavin

    Read the article

  • Password Cracking Windows Accounts

    - by Kevin
    At work we have laptops with encrypted hard drives. Most developers here (on occasion I have been guilty of it too) leave their laptops in hibernate mode when they take them home at night. Obviously, Windows (i.e. a program running in the background on behalf of Windows) must have a method to decrypt the data on the drive, or it wouldn't be able to access it. That being said, I always thought that leaving a Windows machine in hibernate mode in a non-secure place (not locked up at work) is a security threat, because someone could take the machine, leave it running, crack the Windows accounts and use them to decrypt the data and steal the information. When I got to thinking about how I would go about breaking into the Windows system without restarting it, I couldn't figure out if it was possible. I know it is possible to write a program to crack Windows passwords once you have access to the appropriate file(s). But is it possible to execute a program from a locked Windows system that would do this? I don't know of a way to do it, but I am not a Windows expert. If so, is there a way to prevent it? I don't want to expose security vulnerabilities about how to do it, so I would ask that no one post the necessary steps in detail, but if someone could say something like "Yes, it's possible; the USB drive allows arbitrary execution," that would be great!

    EDIT: The idea behind the encryption is that you can't reboot the system, because once you do, the disk encryption requires a login before Windows can start. With the machine in hibernate, the system owner has already bypassed the encryption for the attacker, leaving Windows as the only line of defense protecting the data.

    Read the article

  • HTTP requests and Apache modules: Creative attack vectors

    - by pinkgothic
    Slightly unorthodox question here: I'm currently trying to break an Apache server with a handful of custom modules. What spawned the testing is that Apache internally forwards requests it considers too large (e.g. 1 MB of trash) to modules hooked in appropriately, forcing them to deal with the garbage data, and the lack of handling in the custom modules caused Apache in its entirety to go up in flames. Ouch, ouch, ouch. That particular issue was fortunately fixed, but the question has arisen whether or not there may be other similar vulnerabilities. Right now I have a tool at my disposal that lets me send a raw HTTP request to the server (or rather, raw data through an established TCP connection that could be interpreted as an HTTP request if it followed the form of one, e.g. "GET ..."), and I'm trying to come up with other ideas. (TCP-level attacks like Slowloris and Nkiller2 are not my focus at the moment.) Does anyone have a few nice ideas how to confuse the server and/or its modules to the point of self-immolation?

    - Broken UTF-8? (Though I doubt Apache cares about encoding; I imagine it just juggles raw bytes.)
    - Stuff that is only barely too long, followed by a 0-byte, followed by junk?
    - Et cetera.

    I don't consider myself a very good tester (I'm doing this by necessity and lack of manpower; I unfortunately don't even have more than a basic grasp of Apache internals to help me along), which is why I'm hoping for an insightful response or two or three. Maybe some of you have done similar testing for your own projects? (If Stack Overflow is not the right place for this question, I apologise. Not sure where else to put it.)
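
    Editor's note: a sketch of the kind of raw-bytes probe described above, in C# for consistency with this page (host, port, and payload are assumptions, not the poster's tool):

      using System;
      using System.Net.Sockets;
      using System.Text;

      class RawProbe
      {
          static void Main()
          {
              using (var client = new TcpClient("localhost", 80))
              using (var stream = client.GetStream())
              {
                  // A header value just under a typical size limit...
                  string request = "GET / HTTP/1.1\r\nHost: localhost\r\n"
                                 + "X-Junk: " + new string('A', 8190) + "\r\n\r\n";
                  byte[] bytes = Encoding.ASCII.GetBytes(request);
                  stream.Write(bytes, 0, bytes.Length);
                  // ...followed by a 0-byte and junk that is not valid HTTP.
                  stream.Write(new byte[] { 0x00, 0xFF, 0xFE }, 0, 3);
                  byte[] buffer = new byte[4096];
                  int read = stream.Read(buffer, 0, buffer.Length);
                  Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, read));
              }
          }
      }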

    Read the article

  • How do I securely authenticate the calling assembly of a WCF service method?

    - by Tim
    The current situation is as follows: We have a production .NET 3.5 WCF service, used by several applications throughout the organization, over wsHttpBinding or netTcpBinding. User authentication is done at the transport level, using Windows integrated security. This service has a method Foo(string parameter), which can only be called by members of given AD groups. The string parameter is obligatory. A new client application has come into play (.NET 3.5, C# console app) which eliminates the necessity of the string parameter. However, only calls from this particular application should be allowed to omit the string parameter. The identity of the caller of the client application should still be known by the server, because the AD group limitation still applies (ruling out impersonation on the client side). I found a way to pass the "evidence" of the calling (strong-named) assembly in the message headers, but this method is clearly not secure, because the "evidence" can easily be spoofed. Also, CAS (code access security) seems like a possible solution, but I can't seem to figure out how to make use of CAS in this particular scenario. Does anyone have a suggestion on how to solve this issue?

    Edit: I found another thread on this subject; apparently the conclusion there is that it is simply impossible to implement in a secure fashion.
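
    Editor's note: a hedged sketch of the message-header approach the poster mentions (the header name, namespace, and proxy type are invented; as noted above, such a header is spoofable unless it is signed or otherwise verified):

      using System.ServiceModel;
      using System.ServiceModel.Channels;

      // "FooServiceClient" stands in for the generated ClientBase<T> proxy.
      static void CallFooWithEvidence(FooServiceClient client)
      {
          using (new OperationContextScope(client.InnerChannel))
          {
              MessageHeader header = MessageHeader.CreateHeader(
                  "CallerEvidence", "http://example.org/headers", "opaque-token");
              OperationContext.Current.OutgoingMessageHeaders.Add(header);
              client.Foo(null); // this caller omits the string parameter
          }
      }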

    Read the article

  • Relay WCF Service

    - by Matt Ruwe
    This is more of an architectural and security question than anything else. I'm trying to determine if a suggested architecture is necessary. Let me explain my configuration. We have a standard DMZ established that essentially has two firewalls: one that's external facing and the other that connects to the internal LAN. The following describes where each application tier is currently running.

    - Outside the firewall: Silverlight application
    - In the DMZ: WCF service (business logic & data access layer)
    - Inside the LAN: database

    I'm receiving input that the architecture is not correct. Specifically, it has been suggested that because "a web server is easily hacked" we should place a relay server inside the DMZ that communicates with another WCF service inside the LAN, which will then communicate with the database. The external firewall is currently configured to only allow port 443 (https) to the WCF service. The internal firewall is configured to allow SQL connections from the WCF service in the DMZ. Ignoring the obvious performance implications, I don't see the security benefit either. I'm going to reserve my judgement of this suggestion to avoid polluting the answers with my bias. Any input is appreciated. Thanks, Matt

    Read the article

  • WCF Custom Delegation/Authentication without Kerberos

    - by MichaelGG
    I'm building a simple WCF service, probably exposed via HTTPS, using NTLM security. Since not all users are going to be capable of using the service directly, we're writing a simple web front-end for the service. Users will authenticate (HTML forms) to the web front-end. What we want is a way to delegate the identity of the web-site user all the way to the WCF service. I understand Kerberos delegation can do this, but that's not available to us. What I want to do is make the web front-end account a specially trusted account, so that if a request hits the WCF service authenticated as "DOMAIN\WebApp", we read a WCF message header containing the real identity, then switch the principal to that and continue as normal. Is there any "simple" way of achieving this? Should I give up entirely on this idea and instead make users "sign in" to the WCF app and then do completely custom auth? The WCF extensibility and security options seem so vast, I'd like to get a heads-up on which path to start heading down.
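
    Editor's note: a hedged server-side sketch of the trusted-account idea described above (the header name and namespace are invented; the check on the transport identity is what keeps an ordinary caller from asserting someone else's name):

      using System;
      using System.Security.Principal;
      using System.ServiceModel;
      using System.Threading;

      static void ApplyDelegatedIdentity()
      {
          WindowsIdentity caller = ServiceSecurityContext.Current.WindowsIdentity;
          if (caller != null &&
              caller.Name.Equals(@"DOMAIN\WebApp", StringComparison.OrdinalIgnoreCase))
          {
              // Only the trusted front-end account may assert the real user.
              string realUser = OperationContext.Current.IncomingMessageHeaders
                  .GetHeader<string>("RealUser", "http://example.org/headers");
              Thread.CurrentPrincipal = new GenericPrincipal(
                  new GenericIdentity(realUser), new string[0]);
          }
      }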

    Read the article

  • How to detect hidden field tampering?

    - by Myron
    On a form of my web app, I've got a hidden field that I need to protect from tampering, for security reasons. I'm trying to come up with a solution whereby I can detect whether the value of the hidden field has been changed, and react appropriately (i.e. with a generic "Something went wrong, please try again" error message). The solution should be secure enough that brute-force attacks are infeasible. I've got a basic solution that I think will work, but I'm no security expert and I may be totally missing something here. My idea is to render two hidden inputs: one named "important_value", containing the value I need to protect, and one named "important_value_hash", containing the SHA hash of the important value concatenated with a constant long random string (i.e. the same string will be used every time). When the form is submitted, the server will re-compute the SHA hash and compare it against the submitted value of important_value_hash. If they are not the same, important_value has been tampered with. I could also concatenate additional values into the SHA's input string (maybe the user's IP address?), but I don't know if that really gains me anything. Will this be secure? Anyone have any insight into how it might be broken, and what could/should be done to improve it? Thanks!
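
    Editor's note: a minimal sketch of the scheme as described (the secret is a stand-in for a value kept in server-side configuration). An HMAC keyed with the secret is the standard construction for this and is preferable to ad-hoc concatenation:

      using System;
      using System.Security.Cryptography;
      using System.Text;

      static class HiddenFieldGuard
      {
          // Assumption: in practice this lives in protected server config.
          private static readonly byte[] Secret =
              Encoding.UTF8.GetBytes("long-random-constant-known-only-to-server");

          // Emit this next to important_value when rendering the form.
          public static string Sign(string importantValue)
          {
              using (var hmac = new HMACSHA256(Secret))
              {
                  return Convert.ToBase64String(
                      hmac.ComputeHash(Encoding.UTF8.GetBytes(importantValue)));
              }
          }

          // On postback, recompute and compare against important_value_hash.
          public static bool IsUntampered(string submittedValue, string submittedHash)
          {
              return Sign(submittedValue) == submittedHash;
          }
      }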

    Read the article

  • Isolated storage misunderstanding

    - by Costa
    Hi, this is a discussion between me and myself, trying to understand the isolated storage issue. Can you help convince me about isolated storage?

    1. Below is code written in a Windows Forms app (the reader) that reads the isolated storage of another WinForms app (the writer), which is signed. Where is the security if the reader can read the writer's file? I thought only signed code could access the file!
    2. If all .NET applications are born equal and have full permission to access isolated storage, where is the security then?
    3. If I can install and run an EXE from isolated storage, why couldn't I install a virus and run it? I am trusted to access this area, but the virus or whatever will not be trusted to access the rest of the file system; it can only access the memory, and that is dangerous enough.
    4. I cannot see any difference between using the app data folder to save state and using isolated storage, except a long nasty path!
    5. I want to try giving low trust to the reader code and retest, but they said "isolated storage was actually created to give low-trust applications the right to save their state".

    Reader code:

      private void button1_Click(object sender, EventArgs e)
      {
          String path = @"C:\Documents and Settings\All Users\Application Data\IsolatedStorage\efv5cmbz.ewt\2ehuny0c.qvv\StrongName.5v3airc2lkv0onfrhsm2h3uiio35oarw\AssemFiles\toto12\ABC.txt";
          StreamReader reader = new StreamReader(path);
          var test = reader.ReadLine();
          reader.Close();
      }

    Writer:

      private void button1_Click(object sender, EventArgs e)
      {
          IsolatedStorageFile isolatedFile = IsolatedStorageFile.GetMachineStoreForAssembly();
          isolatedFile.CreateDirectory("toto12");
          IsolatedStorageFileStream isolatedStorage = new IsolatedStorageFileStream(
              @"toto12\ABC.txt", System.IO.FileMode.Create, isolatedFile);
          StreamWriter writer = new StreamWriter(isolatedStorage);
          writer.WriteLine("Ana 2akol we ashrab kai a3eesh wa akbora");
          writer.Close();
          writer.Dispose();
      }

    Read the article

  • AspNetMembership provider with WCF service

    - by Sly
    I'm trying to configure the AspNetMembershipProvider to be used for authentication in my WCF service, which is using basicHttpBinding. I have the following configuration:

      <system.serviceModel>
        <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
        <bindings>
          <basicHttpBinding>
            <binding name="basicSecureBinding">
              <security mode="Message"></security>
            </binding>
          </basicHttpBinding>
        </bindings>
        <behaviors>
          <serviceBehaviors>
            <behavior name="MyApp.Services.ComputersServiceBehavior">
              <serviceAuthorization roleProviderName="AspNetSqlRoleProvider"
                                    principalPermissionMode="UseAspNetRoles" />
              <serviceCredentials>
                <userNameAuthentication userNamePasswordValidationMode="MembershipProvider"
                                        membershipProviderName="AspNetSqlMembershipProvider"/>
              </serviceCredentials>
              <serviceMetadata httpGetEnabled="true" />
              <serviceDebug includeExceptionDetailInFaults="true" />
            </behavior>
          </serviceBehaviors>
        </behaviors>
        <services>
          <service behaviorConfiguration="MyApp.Services.ComputersServiceBehavior"
                   name="MyApp.Services.ComputersService">
            <endpoint binding="basicHttpBinding" contract="MyApp.Services.IComputersService" />
            <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
          </service>
        </services>
      </system.serviceModel>

    Roles are enabled and the membership provider is configured (it's working for the web site), but the authentication process is not fired at all. There are no calls to the database during the request, and when I try to set the following attribute on a method:

      [PrincipalPermission(SecurityAction.Demand, Authenticated = true)]
      public bool Test()
      {
          return true;
      }

    I get an access denied exception. Any thoughts on how to fix it?

    Read the article

  • OAuth secrets in mobile apps

    - by Felixyz
    When using the OAuth protocol, you need a secret string obtained from the service you want to delegate to. If you are doing this in a web app, you can simply store the secret in your database or on the file system, but what is the best way to handle it in a mobile app (or a desktop app, for that matter)? Storing the string in the app is obviously not good, as someone could easily find it and abuse it. Another approach would be to store it on your server and have the app fetch it on every run, never storing it on the phone. This is almost as bad, because you have to include the URL in the app. I don't believe using https is any help. The only workable solution I can come up with is to first obtain the access token as normal (preferably using a web view inside the app), and then route all further communication through our server, where a script would append the secret to the request data and communicate with the provider. Then again, I'm a security noob, so I'd really like to hear some knowledgeable people's opinions on this. It doesn't seem to me that most apps go to these lengths to guarantee security (for example, Facebook Connect seems to assume that you put the secret into a string right in your app). Another thing: I don't believe the secret is involved in initially requesting the access token, so that could be done without involving our own server. Am I correct?

    Read the article

  • What harm can javascript do?

    - by The King
    I just happened to read Joel's blog here... So for example if you have a web page that says "What is your name?" with an edit box, and then submitting that page takes you to another page that says, Hello, Elmer! (assuming the user's name is Elmer), well, that's a security vulnerability, because the user could type in all kinds of weird HTML and JavaScript instead of "Elmer" and their weird JavaScript could do narsty things, and now those narsty things appear to come from you, so for example they can read cookies that you put there and forward them on to Dr. Evil's evil site. Since JavaScript runs on the client, all it can access or do is on the client. It can read information stored in hidden fields and change it. It can read, write or manipulate cookies... But I feel this information is available to the user anyway (if he is smart enough to pass JavaScript in a textbox), so we are not empowering him with new information or providing him undue access to our server... Just curious to know whether I am missing something. Can you list the things that a malicious user could do with this security hole?

    Edit: Thanks to all for enlightening me. As kizzx2 pointed out in one of the comments, I was overlooking the fact that JavaScript written by user A may get executed in the browser of user B under numerous circumstances, in which case it becomes a great risk.
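
    Editor's note: the standard fix for the hole Joel describes is to encode user-supplied text before echoing it back. A sketch in ASP.NET/C#, for consistency with this page (the helper and parameter names are invented):

      using System.Web;

      public class GreetingHelper
      {
          // Echo the submitted name as inert text rather than live markup.
          public static void RenderGreeting(HttpRequest request, HttpResponse response)
          {
              string name = request.QueryString["name"]; // may contain <script>...
              // HtmlEncode turns < > & " into entities, so injected markup
              // renders as visible text instead of executing in another
              // user's browser.
              response.Write("Hello, " + HttpUtility.HtmlEncode(name) + "!");
          }
      }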

    Read the article

  • Simple implementation of an admin/staff panel?

    - by Michael Mao
    Hi all: A new project requires a simple panel (page) for admin and staff members that:

    - Preferably will not use SSL or any digital certification stuff; a simple login form via http will be just fine.
    - Has basic authentication which allows only the admin to log in as admin, and any staff member as part of the group "staff". Ideally, the "credentials" (username and hashed-password pairs) will be stored in MySQL.
    - Is simple to configure if there is a package, or follows a strategy that is simple to code.
    - Somewhere (PHP session?), somehow (include a script at the beginning of each page to check the user group before doing anything?), will detect any invalid user attempting to access a protected page and redirect him/her to the login form.
    - All while still keeping quality high in security, the thing I worry about the most.

    Frankly, I have little knowledge about Internet security and how modern CMSes such as WordPress/Joomla implement this. I only have one thing in mind: that I need to use a salt when hashing the password (SHA1?) to make sure a hacker who gets the username and password pair across the net cannot use it to log into the system. And that is what the client wants to ensure. But I am really not sure where to start; any ideas? Thanks a lot in advance.

    Read the article

  • How to do a JavaScript redirection to a ClickOnce deployment URL?

    - by jerem
    I have a ClickOnce application used to view some documents on a website. When connected, the user sees a list of documents as links to http://server/myapp.application?document=docname. It worked fine until I had to integrate the website's authentication/security system into my application. The website uses a ticketing system to grant access to its users: a ticket is generated by a web application and needs to be added to the deployment URL querystring, and then I have to check at application startup that the ticket given in the querystring is valid by making another request to the web application. So the deployment URL becomes something like: http://server/myapp.application?document=docname&ticket=ticketnumber. The problem is that the ticket is valid for only 10 seconds, so I have to get it only after the user has clicked a link. My first try was to have some JavaScript do the request to get the ticket, generate the proper deployment URL and then redirect the user to this URL with "window.location = deploymentUrl;". It works fine in Firefox, but IE does not prompt the user for installation. I guess it is a ClickOnce security constraint, but I am able to do the redirection when running on localhost, so I hope there is a workaround. I have also added the server to the "trusted sites" list in the IE options. Is it possible to have that working in IE? What are my other options to achieve this?

    Read the article

  • SecurityManager StackOverflowError

    - by Tom Brito
    Running the following code, I get a StackOverflowError at the getPackage() line. How can I grant permission only to classes inside the packages I want, if I can't call getPackage() to check the package?

      package myPkg.security;

      import java.security.Permission;
      import javax.swing.JOptionPane;

      public class SimpleSecurityManager extends SecurityManager {

          @Override
          public void checkPermission(Permission perm) {
              Class<?>[] contextArray = getClassContext();
              for (Class<?> c : contextArray) {
                  checkPermission(perm, c);
              }
          }

          @Override
          public void checkPermission(Permission perm, Object context) {
              if (context instanceof Class) {
                  Class clazz = (Class) context;
                  Package pkg = clazz.getPackage(); // StackOverflowError
                  String name = pkg.getName();
                  if (name.startsWith("java.")) {
                      return; // permission granted
                  }
                  if (name.startsWith("sun.")) {
                      return; // permission granted
                  }
                  if (name.startsWith("myPkg.")) {
                      return; // permission granted
                  }
              }
              // permission denied
              throw new SecurityException("Permission denied for " + context);
          }

          public static void main(String[] args) {
              System.setSecurityManager(new SimpleSecurityManager());
              JOptionPane.showMessageDialog(null, "test");
          }
      }

    Read the article

  • ASP.NET Membership C# - How to compare existing password/hash

    - by Steve
    I have been on this problem for a while. I need to compare a password that the user enters to a password that is in the membership DB. The password is hashed and has a salt. Because of the lack of documentation, I do not know whether the salt is appended to the password and then hashed, or how it is created. I am unable to get the hashes to match. The hash returned from the function never matches the hash in the DB, and I know for a fact it is the same password. Microsoft seems to hash the password in a different way than I am. I hope someone has some insights. Here is my code:

      protected void Button1_Click(object sender, EventArgs e)
      {
          // HERE IS THE PASSWORD I USE; THE SAME ONE IS HASHED IN THE DB
          string pwd = "Letmein44";
          // HERE IS THE SALT FROM THE DB
          string saltVar = "SuY4cf8wJXJAVEr3xjz4Dg==";
          // HERE IS THE PASSWORD THE WAY IT IS STORED IN THE DB AS A HASH
          string bdPwd = "mPrDArrWt1+tybrjA0OZuEG1P5w=";
          // FOR COMPARISON I DISPLAY IT
          TextBox1.Text = bdPwd;
          // HERE I DISPLAY THE RETURN FROM THE FUNCTION; IT SHOULD MATCH THE HASH FROM THE DB
          TextBox2.Text = getHashedPassUsingUserIdAsSalt(pwd, saltVar);
      }

      private string getHashedPassUsingUserIdAsSalt(string vPass, string vSalt)
      {
          string vSourceText = vPass + vSalt;
          System.Text.UnicodeEncoding vUe = new System.Text.UnicodeEncoding();
          byte[] vSourceBytes = vUe.GetBytes(vSourceText);
          System.Security.Cryptography.SHA1CryptoServiceProvider vSHA =
              new System.Security.Cryptography.SHA1CryptoServiceProvider();
          byte[] vHashBytes = vSHA.ComputeHash(vSourceBytes);
          return Convert.ToBase64String(vHashBytes);
      }
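
    Editor's note, hedged: SqlMembershipProvider's EncodePassword is widely reported (from inspections of the framework internals) to base64-decode the salt and prepend the raw salt bytes to the UTF-16 password bytes before hashing, rather than concatenating the two strings as the code above does. A sketch of that order of operations:

      using System;
      using System.Security.Cryptography;
      using System.Text;

      static string EncodePasswordLikeMembership(string password, string saltBase64)
      {
          byte[] passwordBytes = Encoding.Unicode.GetBytes(password);
          byte[] saltBytes = Convert.FromBase64String(saltBase64); // raw bytes, not text
          // Salt first, then password, then SHA1 over the combined buffer.
          byte[] all = new byte[saltBytes.Length + passwordBytes.Length];
          Buffer.BlockCopy(saltBytes, 0, all, 0, saltBytes.Length);
          Buffer.BlockCopy(passwordBytes, 0, all, saltBytes.Length, passwordBytes.Length);
          using (var sha1 = new SHA1CryptoServiceProvider())
          {
              return Convert.ToBase64String(sha1.ComputeHash(all));
          }
      }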

    Read the article

  • What are the most time-consuming checks performed by .NET when executing a managed application?

    - by ltorje
    I've developed a .NET based Windows service that uses partly managed (C#) and partly unmanaged code (C/C++ libraries). In some domain environments (e.g. a Win 2k3 32-bit server inside the domain abc.com), the service sometimes takes more than 30 seconds to start (especially after an OS restart), thus failing to start at all. I suspect that it has something to do with enterprise-level security, but I do not know for sure: http://msdn.microsoft.com/en-us/library/aa720255%28VS.71%29.aspx

    I've tried the following without success:

    - Delaying the loading of references by moving the using directives as far as possible from the ServiceBase implementation (especially the XML namespace, known to cause delays in loading)
    - Delaying the loading and configuring of log4net
    - Precompiling the code with ngen
    - Delaying the start of the worker thread
    - Adding/removing a manifest with the dependencies set inside
    - Signing/unsigning the binaries
    - Using the configuration settings (there are a lot of settings, and the scope level for all of them is set to Application) as late as possible
    - Adding all dependencies to the GAC

    I haven't yet tried adding security demands to the class that implements the Main method. I didn't try implementing my own configuration loader, because after inspecting the autogenerated code I noticed that the settings class is a singleton and gets its instance on call. By completely removing the log4net dependency it worked, but this is not an option. When the network card is disabled, the service starts immediately. Any suggestions/comments/solutions you have would be most welcome.
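
    Editor's note, hedged: starting immediately once the network card is disabled is the classic signature of the CLR's startup Authenticode/CRL check on signed assemblies (generating Publisher evidence), which can hang for 30+ seconds when the certificate revocation server is unreachable. The commonly cited fix (Microsoft KB 936707) is to turn that check off in the service's app.config; verify against your environment before relying on it:

      <configuration>
        <runtime>
          <!-- Skip Authenticode verification (Publisher evidence) at startup. -->
          <generatePublisherEvidence enabled="false"/>
        </runtime>
      </configuration>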

    Read the article

  • Flash doesn't connect to socket even though policy allows it

    - by Bart van Heukelom
    In my Flash app, I'm connecting to my server like this:

      Security.loadPolicyFile("xmlsocket://example.com:12860");
      socket = new Socket("example.com", 12869);
      socket.writeByte(...);
      ...
      socket.flush();

    At port 12860 I'm running a socket policy server, which (according to this document) correctly serves up my policy. A packet capture shows the policy request and the null-terminated response:

      <policy-file-request/>

      <cross-domain-policy><site-control permitted-cross-domain-policies="master-only" /><allow-access-from domain="*" to-ports="12869" /></cross-domain-policy>

    I get no security warnings, which I used to get before the policy server was in place. Still, the connection to port 12869 doesn't work. It's made (I can see it with Wireshark and on the server), but no data is sent by Flash. It might be worth knowing that the SWF itself is served from example.com as well.

    Read the article

  • bin-deploying DLLs banned in lieu of GAC on shared IIS 6 servers

    - by craigmoliver
    I need to solicit feedback about a recent security policy change at an organization I work with. They have recently banned the bin-deployment of DLLs to shared IIS 6 application servers. These servers host many isolated web application pools. The new rules require all DLLs to be installed in the GAC. This is a problem for me because I bin-deploy several DLLs, including the ASP.NET MVC framework, HTML Agility Pack, ELMAH, and my own shared class libraries. I do this because it:

    - Eliminates web application server dependencies on the Global Assembly Cache
    - Allows me (the developer) to keep control of what goes on inside my application
    - Enables the application to be deployed as a "package"
    - Removes the application deployment burden from the server administrators

    Now, here are my questions. From a security perspective, what are the advantages of using the GAC vs. bin-deployment? Is it possible to host multiple versions of the same DLL in the GAC? Has anyone run into similar restrictions?

    Read the article

  • What are the common compliance standards for software products?

    - by Jay
    This is a very generic question about software products. I would like to know what compliance standards are applicable to any software product. I know that question gives away nothing, so here is an example of what I am referring to: CiSecurity Security Certification/Compliance lists products certified by them as compliant with the standards published at their website, i.e. cisecurity.org. Compliance could be as simple as answering a questionnaire for your product that is approved by a third party like CiSecurity, or it could apply to your whole organization, as with, for instance, PCI-DSS compliance. I would be very interested in knowing the standards that products you know/designed/created comply with. To give you the context behind this question: I am the developer of a data-masking tool. The said tool helps mask on-screen HTML text in a banking web application using filters. So, for instance, if the bank application lists out user information with an SSN, my product, when integrated with the banking product, automatically identifies the SSN pattern and masks it into a pre-defined format. My product marketing team wants more buzzwords, like compliance, to be able to sell it to more banking clients. Hence, understanding the compliance standards that apply to products, by which I mean security compliance, is a key research item for me at this point. I appreciate all your help and suggestions.

    Read the article

  • Still don't understand file upload-folder permissions

    - by Camran
    I have checked out articles and tutorials, but I still don't know what to do about the security of my picture upload folder. It holds pictures for classifieds, which are uploaded into the folder. This is what I want:

    - Anybody may upload images to the folder.
    - The images will be moved to another folder later on by another PHP script (automatically).
    - Only I may manually remove them, along with another PHP file on the server which automatically empties the folder every x days.

    What should I do here? The images are uploaded via a PHP upload script. This script checks whether the extension of the file is actually a valid image extension. When I try this: chmod 755 images, the images won't be uploaded. But like this it works: chmod 777 images. But 777 is a security risk, right? Please give me detailed information. The question is what to do to solve this problem, not info about what permissions exist, etc. Thanks. If you need more info, let me know.
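
    Editor's note, hedged: 777 does make the directory world-writable. The usual alternative is to make the web-server account the owner of the upload directory and then tighten the mode, e.g. chown www-data:www-data images followed by chmod 750 images (assuming an Apache/PHP setup whose worker runs as www-data; the account name varies by distribution). PHP, running as that user, can then create and delete files, while other local accounts get no access, and the automatic cleanup script inherits the same rights.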

    Read the article
