Search Results

Search found 13810 results on 553 pages for 'security roles'.


  • MODX based site has been compromised, and tagged by Google as malware

    - by JAG2007
    I'm the webmaster (I inherited the site from the developer) for a site called kenbrook.org. The site is currently being tagged as malware-infected by Google, which gives the following details: http://www.google.com/safebrowsing/diagnostic?site=kenbrook.org Sadly, this is the second time it has occurred. I originally posted the issue on Stack Overflow in this post when it happened last year, shortly after I inherited the site. At the time the fix was a simple removal of a few lines of code from a .js file, but I never did discover or resolve the underlying vulnerability. The site is built on MODX, which neither I nor the original builder have any familiarity with. I've tried to check for security updates from MODX, but updating that software has been a real pain as well. Sooo...what's my next step (or steps) to getting this whole issue resolved?

    Read the article

  • Is reliance on parametrized queries the only way to protect against SQL injection?

    - by Chris Walton
    All I have seen on SQL injection attacks seems to suggest that parametrized queries, particularly ones in stored procedures, are the only way to protect against such attacks. While I was working (back in the Dark Ages), stored procedures were viewed as poor practice, mainly because they were seen as less maintainable, less testable, highly coupled, and locking a system into one vendor (this question covers some other reasons). Although the projects I worked on were virtually unaware of the possibility of such attacks, various rules were adopted to secure the database against corruption of various sorts. These rules can be summarised as:

    - No client/application had direct access to the database tables.
    - All accesses to all tables were through views (and all updates to the base tables were done through triggers).
    - All data items had a domain specified.
    - No data item was permitted to be nullable - this had implications that had the DBAs grinding their teeth on occasion, but it was enforced.
    - Roles and permissions were set up appropriately - for instance, a restricted role gave only views the right to change the data.

    So is a set of (enforced) rules such as this (though not necessarily this particular set) an appropriate alternative to parametrized queries in preventing SQL injection attacks? If not, why not? Can a database be secured against such attacks by database-only measures?

    EDIT: The emphasis of the question has changed slightly, in light of the initial responses received. The base question is unchanged.

    EDIT2: The approach of relying on parametrized queries seems to be only a peripheral step in defending against attacks on systems. It seems to me that more fundamental defences are both desirable and may render reliance on such queries unnecessary, or less critical, even to defend specifically against injection attacks. The approach implicit in my question was based on "armouring" the database, and I had no idea whether it was a viable option. Further research has suggested that there are such approaches. I have found the following sources that provide some pointers to this type of approach: http://database-programmer.blogspot.com and http://thehelsinkideclaration.blogspot.com. The principal features I have taken from these sources are:

    - An extensive data dictionary, combined with an extensive security data dictionary
    - Generation of triggers, queries and constraints from the data dictionary
    - Minimize code and maximize data

    While the answers I have had so far are very useful and point out difficulties arising from disregarding parametrized queries, ultimately they do not answer my original question(s).
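    For readers who want to see the technique the question refers to, here is a minimal sketch of a parametrized query in Java using plain JDBC (the table and column names are illustrative assumptions, not taken from the question):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class UserLookup {
            // Unsafe alternative, for contrast: "SELECT ... WHERE name = '" + name + "'"
            // lets the input rewrite the SQL itself. With a parametrized query the driver
            // sends the statement and the value separately, so the value is always data.
            public static ResultSet findByName(Connection conn, String name) throws SQLException {
                PreparedStatement stmt = conn.prepareStatement(
                        "SELECT id, name FROM users WHERE name = ?");
                stmt.setString(1, name); // bound as a value, never parsed as SQL
                return stmt.executeQuery();
            }
        }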

    Read the article

  • Ask the Readers: How Do You Browse Securely Away From Home?

    - by Jason Fitzpatrick
    When you’re browsing away from home, be it on your smartphone, tablet, or laptop, how do you keep your browsing sessions secure? This week we’re interested in hearing all about your mobile security tips and tricks. When you’re out and about, you often need, out of necessity or convenience, to connect to open Wi-Fi hotspots and otherwise put your data out there in ways that you don’t when you’re at home. This week we want to hear about your tips, tricks, and applications for keeping your data secure and private when you’re away from your home network. Sound off in the comments with your tips and then check back on Friday for the What You Said roundup.

    Read the article

  • What are the downsides of leaving automation tags in production code?

    - by joshin4colours
    I've been setting up debug tags for automated testing of a GWT-based web application. This involves turning on custom debug id tags/attributes for elements in the source of the app. It's a non-trivial task, particularly for larger, more complex web applications. Recently there's been some discussion of whether enabling such debug ids across the board is a good idea. Currently the debug ids are only turned on on development and testing servers, not in production. Points have been raised that enabling debug ids causes performance to take a hit, and that debug ids in production may lead to security issues. What are the benefits of doing this? Are there any significant risks in turning on debug tags in production code?

    Read the article

  • How does one block unsupported web browsers?

    - by Sn3akyP3t3
    Web browsers that have reached end of life no longer receive security updates, which not only leaves them vulnerable for the end user, but I imagine isn't safe for the servers they visit either. Is it practical to block such browsers, or to enforce a policy and notify the end user that their browser is unsafe and unsupported? If so, how would one achieve that? I don't know of any official or crowd-sourced listing with that information to parse and keep up to date. I'm aware that the practice can be custom-built with User-Agent parsing and feature detection for HTML5-enabled browsers.
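    As a rough illustration of the User-Agent approach, here is a sketch of a Java servlet filter; the pattern of blocked browsers and the redirect target are illustrative assumptions, and a real deployment would need a maintained list of unsupported agents:

        import java.io.IOException;
        import java.util.regex.Pattern;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class LegacyBrowserFilter implements Filter {
            // Illustrative only: matches very old Internet Explorer versions.
            private static final Pattern UNSUPPORTED = Pattern.compile("MSIE [1-6]\\.");

            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                String ua = ((HttpServletRequest) req).getHeader("User-Agent");
                if (ua != null && UNSUPPORTED.matcher(ua).find()) {
                    // Notify rather than silently block: explain why the browser is refused.
                    ((HttpServletResponse) res).sendRedirect("/unsupported-browser.html");
                    return;
                }
                chain.doFilter(req, res);
            }

            @Override public void init(FilterConfig config) { }
            @Override public void destroy() { }
        }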

    Read the article

  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by Alfredo Delsors
    Imagine you have a web application using an in-memory collection that changes occasionally but is used very often. The collection gets loaded from storage in the Application_Start global.asax event and is updated whenever its content changes. If you want to deploy this application on Azure, you need to keep in mind that more than one instance of the application can be running at any time, and therefore you need to provide some mechanism to keep all instances informed of the latest changes. Because communication through internal endpoints between Azure role instances is at no cost, a good solution can be to maintain the information in Azure Storage Tables, read its contents in the Application_Start event, and propagate changes to all other instances using the internal HTTP port available on Azure Web Roles. You need to follow these steps to leverage the internal HTTP endpoint available on Azure web roles to keep all instances up to date.

    1. Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint.

    2. Add a new WCF service to the Web Role, for example NotificationService.svc.

    3. Disable multiple site bindings in web.config:

        <serviceHostingEnvironment multipleSiteBindingsEnabled="false">

    4. Add a method on the new service to receive notifications from other role instances:

        namespace Service
        {
            [ServiceContract]
            public interface INotificationService
            {
                [OperationContract(IsOneWay = true)]
                void Notify(Information info);
            }
        }

    5. Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override the CreateServiceHost method to host the internal endpoint:

        public class InternalServiceFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                var internalEndpointAddress = string.Format(
                    "http://{0}/NotificationService.svc",
                    RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InternalHttpEndpoint"].IPEndpoint);
                ServiceHost host = new ServiceHost(
                    typeof(NotificationService),
                    new Uri(internalEndpointAddress));
                BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None);
                host.AddServiceEndpoint(
                    typeof(INotificationService),
                    binding,
                    internalEndpointAddress);
                return host;
            }
        }

    Note that you can use SecurityMode.None because the internal endpoint is private to the instances of the service.

    6. Edit the markup of the service (right-click the .svc file and select "View markup") to set the new factory as the factory used to create the service:

        <%@ ServiceHost Language="C#" Debug="true"
            Factory="Service.InternalServiceFactory"
            Service="Service.NotificationService"
            CodeBehind="NotificationService.svc.cs" %>

    7. Now you can notify changes to other instances using this code:

        var current = RoleEnvironment.CurrentRoleInstance;
        var endPoints = current.Role.Instances
            .Where(instance => instance != current)
            .Select(instance => instance.InstanceEndpoints["InternalHttpEndpoint"]);
        foreach (var ep in endPoints)
        {
            EndpointAddress address = new EndpointAddress(
                String.Format("http://{0}/NotificationService.svc", ep.IPEndpoint));
            BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None);
            var factory = new ChannelFactory<INotificationService>(binding);
            INotificationService instance = factory.CreateChannel(address);
            instance.Notify(changedinfo);
        }

    Read the article

  • Apt-get takes a long time to update/upgrade

    - by ShockwaveNN
    On my work network any apt-get (or aptitude) commands take a very long time; it looks like the admins have blocked some port for it (for an unknown reason). For example, sudo apt-get update takes something like 2 days, and all I get is a very long list of responses like:

        Get: 36 http://security.ubuntu.com precise-security/universe amd64 Packages [11.6 kB]
        Get: 37 http://security.ubuntu.com precise-security/universe amd64 Packages [11.6 kB]
        Get: 38 http://security.ubuntu.com precise-security/universe amd64 Packages [11.6 kB]
        Get: 39 http://security.ubuntu.com precise-security/universe amd64 Packages [11.6 kB]
        Get: 40 http://security.ubuntu.com precise-security/universe amd64 Packages [11.6 kB]

    Same situation when I try to download software:

        Get:1 http://archive.ubuntu.com/ubuntu/ precise/main dash i386 0.5.7-2ubuntu2 [85.8 kB]
        Get:2 http://archive.ubuntu.com/ubuntu/ precise/main dash i386 0.5.7-2ubuntu2 [85.8 kB]
        Get:3 http://archive.ubuntu.com/ubuntu/ precise/main dash i386 0.5.7-2ubuntu2 [85.8 kB]
        Get:4 http://archive.ubuntu.com/ubuntu/ precise/main dash i386 0.5.7-2ubuntu2 [85.8 kB]
        Get:5 http://archive.ubuntu.com/ubuntu/ precise/main dash i386 0.5.7-2ubuntu2 [85.8 kB]

    Is there something I can do to change the port apt-get uses, or something else?

    Read the article

  • Are programming languages perfect?

    - by mohabitar
    I'm not sure if I'm being naive, as I'm still a student, but a curious question came to my mind. In another thread here, a user stated that in order to protect against piracy of your software, you must have perfect software. So is it possible to have perfect software? This is an extremely silly hypothetical situation, but if you were to gather the most talented and gifted programmers in the world and have them spend years trying to create 'perfect' software, could they be successful? Could it be that not a single exploitable bug could be created? Or are there flaws in programming languages that can still, no matter how hard you try, cause bugs that allow your program to be hijacked? As you can tell, I know nothing about security, but essentially what I'm asking is: is the reason why software is easily exploitable the fact that imperfect human beings create it, or that imperfect programming languages are being used?

    Read the article

  • Why is this bypassing the sudo password?

    - by John Isaacks
    I have a bash script I am using to automate a SVN checkout. The contents of the file were:

        #!/bin/bash
        cd /var/www-cake
        sudo svn checkout file:///usr/local/svn/bash_repo/repo/

    Then when I double click the file it would ask me what to do, I would click the button "Run In Terminal" and then a terminal would pop up and ask me for the SUDO password. I would enter it, the script would execute and the terminal would close. I wanted to give some sort of indication that the script ran successfully so I edited my file to look like:

        #!/bin/bash
        cd /var/www-cake
        sudo svn checkout file:///usr/local/svn/bash_repo/repo/
        echo "Head revision has been pushed to live server"

    I expected the terminal to now stay open and tell me the message afterwards. To my surprise it now opens and immediately closes. The script does execute and I no longer have to put in the SUDO password. Is this right? I do not understand why this is happening, seems like a security issue.

    Read the article

  • Where did I miss boot.properties?

    - by Dyade, Shailesh M
    Today one of my customers was trying to start a WebLogic Server (production instance). Though he was trying to start the server in the standard way, it was failing with the error below:

        ####<Oct 22, 2012 12:14:43 PM BST> <Warning> <Security> <BanifB1> <> <main> <> <> <> <1350904483998> <BEA-090066> <Problem handling boot identity. The following exception was generated: weblogic.security.internal.encryption.EncryptionServiceException: weblogic.security.internal.encryption.EncryptionServiceException: [Security:090219]Error decrypting Secret Key java.security.ProviderException: setSeed() failed>

    It then failed with the cause below:

        ####<Oct 22, 2012 12:16:45 PM BST> <Critical> <WebLogicServer> <BanifB1> <AdminServer> <main> <<WLS Kernel>> <> <> <1350904605837> <BEA-000386> <Server subsystem failed. Reason: java.lang.AssertionError: java.lang.reflect.InvocationTargetException
        java.lang.AssertionError: java.lang.reflect.InvocationTargetException
        weblogic.security.internal.encryption.EncryptionServiceException: weblogic.security.internal.encryption.EncryptionServiceException: [Security:090219]Error decrypting Secret Key java.security.ProviderException: setSeed() failed
        weblogic.security.internal.encryption.EncryptionServiceException: [Security:090219]Error decrypting Secret Key java.security.ProviderException: setSeed() failed
            at weblogic.security.internal.encryption.JSafeSecretKeyEncryptor.decryptSecretKey(JSafeSecretKeyEncryptor.java:121)

    The customer was facing this issue without any changes to the system; it had been stable and suddenly started showing this issue last night. When we checked, the customer was manually entering the username and password, and config.xml had the entries encrypted. However, when we verified further, the customer had boot.properties in the Servers/AdminServer/security folder, while DomainName/security didn't have this file. Adding boot.properties fixed the issue. Regards, Shailesh Dyade

    Read the article

  • How to learn PHP effectively?

    - by Goma
    There are dozens of bad tutorials out there that teach you bad habits, especially when we speak about PHP. I want to learn how to avoid the things that can lead me to develop inefficient web applications. I like to learn from videos, but most videos I've found on the internet are provided by people who do not follow good practices. My second option is to learn from books, but I have not found a good book for starters in PHP! It would be very helpful for me if you could tell me about your story of learning PHP. What are the things that I should avoid? How can I learn about PHP security from the beginning, so I don't have to unlearn things later on? Please provide links to books and websites that provide high-quality video tutorials for PHP, and your tips for a good start!

    Read the article

  • Open-source package for securely allowing users to log in and provide information

    - by JTS
    I have a site written mostly in PHP and HTML. I also have a SQL database of personal information like names and addresses. I would like my users to be able to log in to my website with a login I can email or snail-mail to them, and to view and edit their information in my database. Users can currently enter information online and I store it in my database, but they can't view or edit the stored information. I can add the code to do this, but when I give users the ability to view information I suddenly have a lot more security concerns. Is there an open-source package to deal with allowing users to do something like this? Or is there an established convention for this? I know this is a pretty basic question, and there might be some good literature about it that I have yet to find, so if someone can just point me in the direction of some of that information, or better yet give me firsthand some information about this, that would be great.

    Read the article

  • Is there any good reason I would want my website to be framed?

    - by minitech
    I'm building a website that's not security-critical in any way at all, so having somebody put a page in an <iframe> is not particularly dangerous to its users. However, as my website doesn't have script plugins that will be used anywhere else, is there any reason why I shouldn't just apply: X-Frame-Options: Deny to every page on my website? Is there any valid reason for any other website to embed mine? I've seen plenty of content-stealing ones and attempts to hijack user accounts, but never an actual good usage of frames that's not an explicit feature of the website.

    Read the article

  • Why don't smart phones have an auto-forget password feature? [closed]

    - by Kelvin
    Storing passwords to external services (e.g. corporate email servers) on smart phones is very insecure, since phones are more easily stolen. Has any vendor implemented a feature to only cache a password in memory for a limited amount of time? After the time period has elapsed, the app would ask for the password again. EDIT: I should've clarified - I'm aware that many (most?) users are lazy and want to just "set it and forget it". The always-remember feature will probably always be present. I was curious about an option to enable auto-forget for the security-conscious.
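    For what it's worth, the idea is straightforward to sketch in ordinary Java: hold the password in memory, wipe it after a fixed interval, and force a fresh prompt afterwards. The class and the timeout below are illustrative, not taken from any vendor's implementation:

        import java.util.Arrays;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.ScheduledFuture;
        import java.util.concurrent.TimeUnit;

        public class ExpiringCredential {
            private final ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();
            private char[] password;
            private ScheduledFuture<?> pendingForget;

            public synchronized void remember(char[] pw, long minutes) {
                forget();                                    // clear any previous value
                if (pendingForget != null) {
                    pendingForget.cancel(false);             // drop the old expiry timer
                }
                password = pw.clone();
                pendingForget = scheduler.schedule(this::forget, minutes, TimeUnit.MINUTES);
            }

            public synchronized char[] get() {
                if (password == null) {
                    throw new IllegalStateException("Password expired; prompt the user again");
                }
                return password.clone();
            }

            public synchronized void forget() {
                if (password != null) {
                    Arrays.fill(password, '\0');             // overwrite before dropping the reference
                    password = null;
                }
            }
        }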

    Read the article

  • Is the escaping provided by the Google-Gson library enough to ensure a safe JSON payload?

    - by Lifetime_Learner
    I am currently using the Google-Gson library to convert Java objects into JSON inside a web service. Once the object has been converted to JSON, it is returned to the client to be converted into a JSON object using the JavaScript eval() function. Is the character escaping provided by the Gson library enough to ensure that nothing nasty will happen when I run the eval() function on the JSON payload? Do I need to HTML Encode the Strings in the Java Objects before passing them to the Gson library? Are there any other security concerns that I should be aware of?
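    A quick way to see what Gson's default escaping actually does is to serialize a string containing HTML-sensitive characters; this sketch assumes Gson 2.x on the classpath and a hypothetical Message class:

        import com.google.gson.Gson;
        import com.google.gson.GsonBuilder;

        public class GsonEscapingCheck {
            static class Message {
                String text = "</script><script>alert(1)</script>";
            }

            public static void main(String[] args) {
                // By default Gson escapes HTML-sensitive characters such as < and >
                // as \u003c and \u003e in the JSON it produces.
                Gson gson = new Gson();
                System.out.println(gson.toJson(new Message()));

                // With HTML escaping disabled, the raw characters pass through unchanged.
                Gson raw = new GsonBuilder().disableHtmlEscaping().create();
                System.out.println(raw.toJson(new Message()));
            }
        }

    Comparing the two outputs side by side makes it easier to reason about what the client-side eval() would actually receive.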

    Read the article

  • How to get rid of crawling errors due to the URL Encoded Slashes (%2F) problem in Apache

    - by user14198
    The Google web crawler has indexed a whole set of URLs with encoded slashes (%2F) for our site. I assume it has picked up the pages from our XML sitemap file. The problem is that the live pages will actually result in a failure because of the URL-encoded-slashes problem in Apache. Some solutions are mentioned here. We are implementing a 301 redirect scheme for all the error pages. This should make the Googlebot delete the pages from the crawling errors (no more failing pages). Does implementing the 301s require the pages to be "live"? In that case we may be forced to implement solution 1 in the article. The problem is that solution 1 would pose a security vulnerability.

    Read the article

  • Is this fix for Avast Antivirus crashing safe to use?

    - by TmRn
    Well, I have installed Avast antivirus on Ubuntu 12.04, but after updating, it crashes! So I have made some tweaks like the ones below.

    Press Ctrl+Alt+T to open the Terminal. When it opens, run the command below:

        sudo gedit /etc/init.d/rcS

    Type your password and hit Enter. When the text file opens, add the line:

        sysctl -w kernel.shmmax=128000000

    Make sure the line you added is before:

        exec /etc/init.d/rc S

    This is what it should look like:

        #! /bin/sh
        # rcS
        #
        # Call all S??* scripts in /etc/rcS.d/ in numerical/alphabetical order
        #
        sysctl -w kernel.shmmax=128000000
        exec /etc/init.d/rc S

    Save the file. Reboot. My question is: did I do anything wrong? I mean, as I have made some tweaks, will it lower Avast's security the way viruses do? Please, if you are a programmer, check whether this contains bugs or harmful intentions... Thanks.

    Read the article

  • Drive By Download Issue

    - by mprototype
    I'm getting a drive-by download issue reported on www.cottonsandwichquiltshop.com/catalog/index.php?manufacturers_id=19&sort=2a&filterid=61 by safeweb.norton.com when I scan the root URL. I have dug through the entire site architecture and code base and removed a few files that were malicious, upgraded the site's framework, and fixed the security holes (mostly SQL injection concerns). However, this one threat still exists and I can't locate it for the life of me, or find any valid research or information on removing this type of threat at the server level - mostly just a bunch of anti-virus software wanting to sell you on their ability to manage it on the client end. PLEASE HELP. Thanks.

    Read the article

  • Apache: DoS with mod_deflate & range requests, Tomcat also? [migrated]

    - by VextoR
    I know that Apache has a security bug: http://seclists.org/fulldisclosure/2011/Aug/175. So if you run this command:

        curl -I -H "Range: bytes=0-1,0-2" -s www.yandex.ru/robots.txt

    and it says HTTP/1.1 206 Partial Content, it means the problem exists. But the fact is that for Apache Tomcat (our server), curl says 206 Partial Content as well. So we need to fix it. I found a solution for the Apache HTTP Server (.htaccess, mod_headers) but not for Tomcat. I'm very much a newbie at server things, so I can't understand most of it - please help.
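    One possible mitigation on the Tomcat side - purely a sketch, not an official fix, and the threshold is an illustrative assumption - is a servlet filter that refuses requests whose Range header lists an excessive number of byte ranges:

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class RangeHeaderFilter implements Filter {
            private static final int MAX_RANGES = 5; // illustrative threshold

            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                String range = ((HttpServletRequest) req).getHeader("Range");
                if (range != null && range.split(",").length > MAX_RANGES) {
                    // Refuse abusive multi-range requests outright.
                    ((HttpServletResponse) res).sendError(
                            HttpServletResponse.SC_REQUESTED_RANGE_NOT_SATISFIABLE);
                    return;
                }
                chain.doFilter(req, res);
            }

            @Override public void init(FilterConfig config) { }
            @Override public void destroy() { }
        }

    Normal single-range downloads still pass through; only requests shaped like the ones in the advisory are rejected.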

    Read the article

  • Combining a content management system with ASP.NET

    - by Ek0nomik
    I am going to be creating a site that seems like it requires a blend of a content management system (CMS) and some custom web development (which is done in ASP.NET MVC). I have plenty of web development experience to understand the ASP.NET MVC side of the fence, but, I don't have a lot of CMS knowledge aside from getting one stood up. Right now my biggest question is around integrating security from ASP.NET with the CMS. I currently have an ASP.NET MVC site that handles the authentication for multiple production sites and creates an authentication cookie under our domain (*.example.com). The page acts like a single sign on page since the cookie is a wildcard and can be used in any other applications of the same domain. I'd really like to avoid having users put in their credentials twice. Is there a CMS that will play well with the ASP.NET Forms Authentication given how I have these existing applications structured? As an aside, right now I am leaning towards Drupal, but, that isn't finalized.

    Read the article

  • Where can I hire a trustworthy professional PHP programmer?

    - by JJ22
    I wrote a PHP application for my website that really needs to work well and be as secure as possible. I'm a novice PHP programmer, so while my application seems to work well, there may be inefficiencies or security vulnerabilities. I feel that I should have someone look over my code before making the application publicly available, but I'm hesitant to just post it online because it handles some rather sensitive things. Where can I find a competent, trustworthy, and relatively inexpensive PHP programmer who would be willing to review a few thousand lines of well-commented, easy-to-read PHP code? Thank you!

    Read the article

  • Need private personal access to ~three PHP pages

    - by Roger
    I would like secure access to the text output by three PHP scripts (the text output is JavaScript and HTML). The security level is much less than for financial data, but important nonetheless. I have considered purchasing AND studying HTTPS and SSL certificates. HostGator charges an extra $2/month for a private IP plus $50+ annually for a certificate. This is more than I want to spend for this project (time + money). Is there a simpler solution that is less expensive and easier to implement? I'm open to different approaches.

    Read the article

  • What to do about this gnome-keyring message?

    - by arroy_0209
    I upgraded from Ubuntu 10.04 to 12.04 and installed LXDE. Since then, whenever I try to print a file (or use the command lpstat), I get this message in the terminal: "WARNING: gnome-keyring:: couldn't connect to: /tmp/keyring-SZ59jJ/pkcs11: No such file or directory". This is beyond my knowledge, and from searching I only gather that this may be related to security (as learned from the gnome-keyring article on Wikipedia). I have no idea what to do about this warning. Can anybody please suggest something? Evidently, as stated, I am not using the GNOME desktop; I choose an LXDE session at login.

    Read the article

  • Access Token Verification

    - by DecafCoder
    I have spent quite a few days reading up on OAuth and token-based security measures for REST APIs, and I am currently looking at implementing an OAuth-based authentication approach almost exactly like the one described in this post (OAuth alternative for a 2 party system). From what I understand, the token is to be verified upon each request to the resource server. This means the resource server would need to retrieve the token from a datastore to verify the client's token. Given that this would have to happen upon every request, I am concerned about the speed implications of hitting a datastore like MySQL or NoSQL on every request just to verify the token. Is the standard way to verify tokens to have them stored in an RDBMS or NoSQL database and retrieved upon each request? Or is it a suitable solution to have them cached (bearing in mind that we are talking about millions of users)?
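    As a rough sketch of the caching option - plain Java, with the names and the TTL chosen purely for illustration - the resource server can keep a small in-memory map in front of the datastore lookup:

        import java.util.Optional;
        import java.util.concurrent.ConcurrentHashMap;

        public class TokenCache {
            /** The RDBMS/NoSQL-backed lookup; this interface is an illustrative stand-in. */
            public interface TokenStore {
                Optional<String> lookupUserId(String token);
            }

            private static final long TTL_MILLIS = 5 * 60 * 1000; // 5 minutes, illustrative

            private static final class Entry {
                final String userId;
                final long cachedAt;
                Entry(String userId, long cachedAt) { this.userId = userId; this.cachedAt = cachedAt; }
            }

            private final ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();
            private final TokenStore store;

            public TokenCache(TokenStore store) { this.store = store; }

            public Optional<String> verify(String token) {
                long now = System.currentTimeMillis();
                Entry e = cache.get(token);
                if (e != null && now - e.cachedAt < TTL_MILLIS) {
                    return Optional.of(e.userId);        // cache hit: no datastore round trip
                }
                Optional<String> fromStore = store.lookupUserId(token);
                fromStore.ifPresent(id -> cache.put(token, new Entry(id, now)));
                return fromStore;
            }
        }

    The obvious trade-off is revocation lag: a token revoked in the datastore can still be accepted until its cached entry expires, so the TTL bounds that window.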

    Read the article

  • Setting up FastCGI on an Ubuntu server (socket file permissions issue)

    - by gray alien
    I am trying to set up mod_fcgid on my server. Part of the requirement is that Apache needs to create a socket file for mod_fcgid. I specified the folder for Apache to write the socket data to: /var/run/apache2/fcgid. I then specified this in my fcgid.conf file as follows:

        SocketPath /var/run/apache2/fcgid/sock

    I then changed the owner of the folder to www-data (the Apache user) and gave the owner full permissions on the folder and its contents. I was then able to run my test FCGI app. When I rebooted the machine, my FastCGI app no longer worked. After some investigation, I found that ownership of /var/run/apache2/fcgid had been reset to root, with permissions reset to 700. I have the following questions: Is there something specific about the /var/run folder? Why are the permissions being reset after a reboot? Should I move my socket file to another location (in case root automatically takes ownership of contents in this folder for security reasons)? I am running Ubuntu 10.04 LTS 64-bit.

    Read the article
