Search Results

Search found 16593 results on 664 pages for 'adf security deploy'.


  • Windows 7: "Replace All Child Object Permissions" Doesn't Stay Checked

    - by raywood
    I right-click on a top-level folder in Windows Explorer. I choose Properties > Security tab > Advanced > Change Permissions. I check "Replace all child object permissions with inheritable permissions from this object" and click Apply. I get a Windows Security dialog that says "Setting security information on ..." with the list of objects flashing by. But now the "Replace all child object permissions" box is unchecked. What is happening here?
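    For what it's worth, that checkbox is a one-shot action rather than a persistent setting: Windows pushes the inherited permissions down the tree and then clears the box again. The command-line equivalent (the path is a placeholder) performs the same one-time push:

        icacls "C:\TopFolder" /reset /T /C

    Here /reset replaces each child's ACL with the inherited ACL, /T recurses into subfolders and files, and /C continues past errors.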

    Read the article

  • When to use an MS SQL instance vs. different database on same instance

    - by BoxerBucks
    We have some MS SQL servers that are set up with different instances on the same server to separate application DBs, as well as some servers that are set up with all DBs on the same instance, just separated with security settings. When is it advisable to create a new instance for SQL Server and install your DBs in that instance, as opposed to just creating a new DB on the same instance and putting security around the database itself? Is there more to the decision than just the security aspect?
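    For the single-instance approach, the per-database isolation being described is usually done with a dedicated login and database user; a minimal T-SQL sketch (names and password are placeholders, and the role can be narrowed to db_datareader/db_datawriter rather than db_owner):

        CREATE DATABASE AppOneDb;
        GO
        CREATE LOGIN AppOneLogin WITH PASSWORD = 'ChangeMe!123';
        GO
        USE AppOneDb;
        GO
        CREATE USER AppOneUser FOR LOGIN AppOneLogin;
        EXEC sp_addrolemember 'db_owner', 'AppOneUser';
        GO

    A separate instance buys stronger separation than this (its own service account, memory, tempdb, patch level and restart schedule), which is usually what tips the decision beyond pure security.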

    Read the article

  • How would I template an SLS using saltstack

    - by user180041
    I'm trying to do a proof of concept with MongoDB (sharding) and I'd like to run a command every time I spin up a new cluster, without having to add lines in all my SLS files. My current init is as follows:

        mongo Replica4:27000 /usr/lib/mongo/init_addshard.js:
          cmd:
            - run
            - user: present

    The word Replica4 is not templated; I'd like to know a way to template it so that when I spin up a new cluster I wouldn't have to touch anything in this file.
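    One way to template it (a sketch, not from the original post: the pillar key shard_host, its default, and the newer cmd.run state syntax are my assumptions) is to pull the host name out of pillar with Jinja so the SLS itself never changes:

        {% set shard = pillar.get('shard_host', 'Replica4') %}

        add_shard_{{ shard }}:
          cmd.run:
            - name: mongo {{ shard }}:27000 /usr/lib/mongo/init_addshard.js

    The value can then be supplied per cluster from a pillar file, or ad hoc on the command line, e.g. salt 'newnode*' state.sls mongo pillar='{"shard_host": "Replica5"}'.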

    Read the article

  • Allow members of a group to be unlocked by a specific account on AD

    - by JohnLBevan
    Background I'm creating a service to allow support staff to enable their firecall accounts out of hours (i.e. if there's an issue in the night and we can't get hold of someone with admin rights, another member of the support team can enable their personal firecall account on AD, which has previously been set up with admin rights). This service also logs a reason for the change, alerts key people, and a bunch of other bits to ensure that this change of access is audited / so we can ensure these temporary admin rights are used in the proper way. To do this I need the service account which my service runs under to have permissions to enable users on Active Directory. Ideally I'd like to lock this down so that the service account can only enable/disable users in a particular AD security group. Question How do you grant access to an account to enable/disable users who are members of a particular security group in AD? Backup Question If it's not possible to do this by security group, is there a suitable alternative? i.e. could it be done by OU, or would it be best to write a script to loop through all members of the security group and update the permissions on the objects (firecall accounts) themselves? Thanks in advance. Additional Tags (I don't yet have access to create new tags here, so listing below to help with keyword searches until it can be tagged & this bit edited/removed) DSACLS, DSACLS.EXE, FIRECALL, ACCOUNT, SECURITY-GROUP
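    On the backup question: as far as I know, AD ACEs are scoped by OU/object and object class, not by the target user's group membership, so a dedicated OU for the firecall accounts is the usual route. A rough dsacls sketch follows (the OU, domain and service account names are hypothetical; enabling/disabling an account is a write to the userAccountControl attribute):

        rem let the service account read/write userAccountControl on user objects under the OU
        dsacls "OU=Firecall Accounts,DC=example,DC=local" /I:S /G "EXAMPLE\svc-firecall:RPWP;userAccountControl;user"

        rem unlocking (as opposed to enabling) is a write to lockoutTime instead
        dsacls "OU=Firecall Accounts,DC=example,DC=local" /I:S /G "EXAMPLE\svc-firecall:RPWP;lockoutTime;user"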

    Read the article

  • All HTTPS, or is it OK to accept HTTP and redirect (secure vs. user friendly)

    - by tharrison
    Our site currently redirects requests sent to http://example.com to https://example.com -- everything beyond this is served over SSL. For now, the redirect is done with an Apache rewrite rule. Our site is dealing with money, however, so security is pretty important. Does allowing HTTP in this way pose any greater security risk than just not opening or listening on port 80? Ideally, it's a little more user-friendly to redirect. (I am aware that SSL is only one of a large set of security considerations, so please make the generous assumption that we have done at least a "very good" job of covering various security bases.)
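    For reference, a minimal sketch of the kind of rewrite being described (assuming a standard name-based vhost with mod_rewrite enabled; not the poster's actual config), which keeps port 80 open only to issue a permanent redirect:

        <VirtualHost *:80>
            ServerName example.com
            RewriteEngine On
            RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
        </VirtualHost>

    The redirect itself only adds exposure on that first plaintext request (nothing sensitive should be sent yet); adding a Strict-Transport-Security header on the HTTPS side makes returning browsers skip the insecure hop entirely.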

    Read the article

  • Vlad the deployer on Dreamhost - initial script

    - by xmariachi
    Hi, I'm trying to deploy an app with SVN and Vlad the deployer. Vlad and its dependencies are installed and seem OK. I'm trying the following:

        rake prod vlad:update

    My config/deploy.rb file is:

        task :prod do
          set :application, "xxx"
          set :deploy_timestamped, "false"
          set :user, "username"
          set :scm_user, "scmusername"
          set :repository, "http://domain.com/svn/app"
          set :domain, "domain.com"
          set :deploy_to, "/home/username/deployments/app"
          puts "Production deployment to #{deploy_to}"
        end

    I have already done "rake prod vlad:setup", and that's fine. But when calling "rake prod vlad:update", I get the following:

        A ...file
        Exported revision 14.
        ln: creating symbolic link `/home/username/deployments/drupalgestalt/releases/20100503164225/public/system' to `/home/username/deployments/drupalgestalt/shared/system': No such file or directory
        rake aborted!
        execution failed with status 1: ssh domain.com ln -s /home/username/deployments/app/shared/log /home/username/deployments/app/releases/20100503164225/log && ln -s /home/username/deployments/app/shared/system /home/username/deployments/app/releases/20100503164225/public/system && ln -s /home/username/deployments/app/shared/pids /home/username/deployments/app/releases/20100503164225/tmp/pids

    Apparently it complains when creating the ln, but the permissions are all set up fine. Am I doing anything wrong? I'm just starting with Vlad on the assumption that it was super-easy to set up. I had played a bit with cap in the past, and I do like Vlad's idea.
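    One detail worth knowing when reading that error (a generic illustration with made-up paths, not a diagnosis of this exact setup): ln -s fails with "No such file or directory" when the directory that is supposed to contain the link is missing, not when the link target is missing, so the message usually points at the releases/.../public directory rather than at shared/system or at permissions:

        mkdir -p /tmp/demo/shared/system
        ln -s /tmp/demo/shared/system /tmp/demo/release/public/system
        # fails: the parent directory /tmp/demo/release/public does not exist yet
        mkdir -p /tmp/demo/release/public
        ln -s /tmp/demo/shared/system /tmp/demo/release/public/system
        # succeeds once the parent directory exists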

    Read the article

  • vs10 not deploying all required files - then not over-writing updated files

    - by justSteve
    I'm in the habit of deploying to alternating folders (/inetpub/wwwroot/mySite & /inetpub/wwwroot/mySite2) so if something unexpected happens with the deploy I can quickly swap back to a previous version just by changing the path in IIS. So I was deploying an MVC2 webapp to an empty folder, figuring that VS would send up all the files it needs. Not even close. Initially, it didn't even upload a couple of required nHibernate .dlls. Later, after manually copying the files referenced in the thrown exceptions, I just copied all the files from the previous compile and then re-published over the top, expecting VS to over-write the changed files. It failed at that too. No errors reported by VS... it just failed to over-write a number of pre-existing (but changed/updated) files. It's hard to believe these kinds of errors (and the lack of feedback that errors were encountered) in a state-of-the-art tool like VS. Clearly, I'm doing something wrong. I'm using VisualSVN for source control and connect to my colocated server via a VPN-based mapped network drive (so I can use File System to publish), both of which can complicate file properties. VS08 had more choices for which files it would send up - I found I needed to use 'All files in source' on an initial deployment, then 'Replace Matching' afterwards. If I chose 'delete all existing...' I'd be back to square 1 and have to deploy with 'All files in source project folder'. But VS10 doesn't have 'All files in source project folder'. I ended up manually copying the files - which seems not right in the extreme. Are these known issues others have to deal with? What's best practice for deploying a web app? thx
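    As an aside, and as a different tool rather than a fix for the publish dialog (a sketch from general Web Deploy usage; the paths are made up): msdeploy's sync verb copies exactly what differs between a build output folder and the target folder, which makes full redeploys to an alternate folder repeatable:

        msdeploy.exe -verb:sync ^
            -source:contentPath="C:\builds\MySite" ^
            -dest:contentPath="Z:\inetpub\wwwroot\mySite2" ^
            -whatif

    The -whatif flag previews the changes; drop it to actually perform the sync.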

    Read the article

  • Configure Django project in a subdirectory using mod_python. Admin not working.

    - by David
    Hi guys. I was trying to configure my Django project in a subdirectory of the root, but didn't get things working (locally it works perfectly). I followed the official Django documentation to deploy a project with mod_python. The real problem is that I am getting "Page not found" errors whenever I try to go to the admin or any view of my apps. Here is my python.conf file, located in /etc/httpd/conf.d/ on Fedora 7:

        LoadModule python_module modules/mod_python.so
        <Location "/mysite">
            SetHandler python-program
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE mysite.settings
            PythonOption django.root /mysite
            PythonDebug On
            PythonPath "['/var/www/vhosts/mysite.com/httpdocs','/var/www/vhosts/mysite.com/httpdocs/mysite'] + sys.path"
        </Location>

    I know /var/www/ is not the best place to put my Django project, but I just want to send a demo of my work in progress to my customer; later I will change the location. For example, if I go to www.domain.com/mysite/ I get the index view I configured in mysite.urls. But I cannot access my app.urls (www.domain.com/mysite/app/) or any of the admin.urls (www.domain.com/mysite/admin/). Here is mysite.urls:

        urlpatterns = patterns('',
            url(r'^admin/password_reset/$', 'django.contrib.auth.views.password_reset', name='password_reset'),
            (r'^password_reset/done/$', 'django.contrib.auth.views.password_reset_done'),
            (r'^reset/(?P<uidb36>[0-9A-Za-z]+)-(?P<token>.+)/$', 'django.contrib.auth.views.password_reset_confirm'),
            (r'^reset/done/$', 'django.contrib.auth.views.password_reset_complete'),
            (r'^$', 'app.views.index'),
            (r'^admin/', include(admin.site.urls)),
            (r'^app/', include('mysite.app.urls')),
            (r'^photologue/', include('photologue.urls')),
        )

    I also tried changing admin.site.urls to 'django.contrib.admin.urls', but it didn't work. I googled a lot to solve this problem and read how other developers configure their Django projects, but didn't find much information on deploying Django in a subdirectory. I have the admin enabled in INSTALLED_APPS and settings.py is OK. Please, if you have any guide or can tell me what I am doing wrong, it would be much appreciated. Thanks.

    Read the article

  • Are there any security vulnerabilities in this PHP code?

    - by skorned
    Hi. I just got a site to manage, but am not too sure about the code the previous guy wrote. I'm pasting the login procedure below; could you have a look and tell me if there are any security vulnerabilities? At first glance, it seems like one could get in through SQL injection or manipulating cookies and the ?m= parameter.

        define('CURRENT_TIME', time());                              // Current time.
        define('ONLINE_TIME_MIN', (CURRENT_TIME - BOTNET_TIMEOUT));  // Minimum time for the status of "Online".
        define('DEFAULT_LANGUAGE', 'en');                            // Default language.
        define('THEME_PATH', 'theme');                               // Folder for the theme.

        // HTTP requests.
        define('QUERY_SCRIPT', basename($_SERVER['PHP_SELF']));
        define('QUERY_SCRIPT_HTML', QUERY_SCRIPT);
        define('QUERY_VAR_MODULE', 'm');                             // Variable that contains the current module.
        define('QUERY_STRING_BLANK', QUERY_SCRIPT.'?m=');            // An empty query string.
        define('QUERY_STRING_BLANK_HTML', QUERY_SCRIPT_HTML.'?m=');  // Empty query string in HTML.
        define('CP_HTTP_ROOT', str_replace('\\', '/', (!empty($_SERVER['SCRIPT_NAME']) ? dirname($_SERVER['SCRIPT_NAME']) : '/'))); // Root of CP.

        // The session cookies.
        define('COOKIE_USER', 'p');                                  // Username in the cookies.
        define('COOKIE_PASS', 'u');                                  // User password in the cookies.
        define('COOKIE_LIVETIME', CURRENT_TIME + 2592000);           // Lifetime of cookies.
        define('COOKIE_SESSION', 'ref');                             // Variable to store the session.
        define('SESSION_LIVETIME', CURRENT_TIME + 1300);             // Lifetime of the session.

        ///////////////////////////////////////////////////////////////////////////////
        // Initialize.
        ///////////////////////////////////////////////////////////////////////////////

        // Connect to the database.
        if (!ConnectToDB()) die(mysql_error_ex());

        // Connecting theme.
        require_once(THEME_PATH.'/index.php');

        // Manage login.
        if (!empty($_GET[QUERY_VAR_MODULE]))
        {
            // Login form.
            if (strcmp($_GET[QUERY_VAR_MODULE], 'login') === 0)
            {
                UnlockSessionAndDestroyAllCokies();

                if (isset($_POST['user']) && isset($_POST['pass']))
                {
                    $user = $_POST['user'];
                    $pass = md5($_POST['pass']);

                    // Check login.
                    if (@mysql_query("SELECT id FROM cp_users WHERE name='".addslashes($user)."' AND pass='".addslashes($pass)."' AND flag_enabled='1' LIMIT 1") && @mysql_affected_rows() == 1)
                    {
                        if (isset($_POST['remember']) && $_POST['remember'] == 1)
                        {
                            setcookie(COOKIE_USER, md5($user), COOKIE_LIVETIME, CP_HTTP_ROOT);
                            setcookie(COOKIE_PASS, $pass, COOKIE_LIVETIME, CP_HTTP_ROOT);
                        }

                        LockSession();
                        $_SESSION['Name'] = $user;
                        $_SESSION['Pass'] = $pass;
                        //UnlockSession();

                        header('Location: '.QUERY_STRING_BLANK.'home');
                    }
                    else ShowLoginForm(true);
                    die();
                }

                ShowLoginForm(false);
                die();
            }

            // Logout.
            if (strcmp($_GET['m'], 'logout') === 0)
            {
                UnlockSessionAndDestroyAllCokies();
                header('Location: '.QUERY_STRING_BLANK.'login');
                die();
            }
        }

        ///////////////////////////////////////////////////////////////////////////////
        // Check the login data.
        ///////////////////////////////////////////////////////////////////////////////

        $logined = 0; // Flag: are we logged in.

        // Log in via session.
        LockSession();
        if (!empty($_SESSION['name']) && !empty($_SESSION['pass']))
        {
            if (($r = @mysql_query("SELECT * FROM cp_users WHERE name='".addslashes($_SESSION['name'])."' AND pass='".addslashes($_SESSION['pass'])."' AND flag_enabled='1' LIMIT 1"))) $logined = @mysql_affected_rows();
        }

        // Login through cookies.
        if ($logined !== 1 && !empty($_COOKIE[COOKIE_USER]) && !empty($_COOKIE[COOKIE_PASS]))
        {
            if (($r = @mysql_query("SELECT * FROM cp_users WHERE MD5(name)='".addslashes($_COOKIE[COOKIE_USER])."' AND pass='".addslashes($_COOKIE[COOKIE_PASS])."' AND flag_enabled='1' LIMIT 1"))) $logined = @mysql_affected_rows();
        }

        // Unable to login.
        if ($logined !== 1)
        {
            UnlockSessionAndDestroyAllCokies();
            header('Location: '.QUERY_STRING_BLANK.'login');
            die();
        }

        // Get the user data.
        $_USER_DATA = @mysql_fetch_assoc($r);
        if ($_USER_DATA === false) die(mysql_error_ex());

        $_SESSION['Name'] = $_USER_DATA['name'];
        $_SESSION['Pass'] = $_USER_DATA['pass'];

        // Connecting language.
        if (@strlen($_USER_DATA['language']) != 2 || !SafePath($_USER_DATA['language']) || !file_exists('system/lng.'.$_USER_DATA['language'].'.php')) $_USER_DATA['language'] = DEFAULT_LANGUAGE;
        require_once('system/lng.'.$_USER_DATA['language'].'.php');

        UnlockSession();

    Read the article

  • lucid 10.04 LTS => Precise 12.04.1 : upgrade doesn't work

    - by Rastom
    I googled and looked into all known issues on the Ubuntu forums, but I can't figure out why a 10.04 LTS server won't detect the latest LTS, 12.04.1. I guess since 12.04 is a fresh dist, not much is reported for related issues. Here is what I did:

        apt-get update
        apt-get upgrade
        apt-get install update-manager-core

    It was already installed, so no update for this package. I checked /etc/update-manager/release-upgrades:

        [DEFAULT]
        # Default prompting behavior, valid options:
        #
        # never  - Never check for a new release.
        # normal - Check to see if a new release is available.  If more than one new
        #          release is found, the release upgrader will attempt to upgrade to
        #          the release that immediately succeeds the currently-running
        #          release.
        # lts    - Check to see if a new LTS release is available.  The upgrader
        #          will attempt to upgrade to the first LTS release available after
        #          the currently-running one.  Note that this option should not be
        #          used if the currently-running release is not itself an LTS
        #          release, since in that case the upgrader won't be able to
        #          determine if a newer release is available.
        Prompt=lts

    I also checked my /etc/apt/sources.list before running apt-get:

        deb http://archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-security main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted
        deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted
        deb http://security.ubuntu.com/ubuntu lucid-security universe
        deb-src http://security.ubuntu.com/ubuntu lucid-security universe
        deb http://security.ubuntu.com/ubuntu lucid-security multiverse
        deb-src http://security.ubuntu.com/ubuntu lucid-security multiverse
        # deb http://landscape.canonical.com/packages/hardy ./
        # deb-src http://landscape.canonical.com/packages/hardy ./

    Then, following the Ubuntu guide for the Precise upgrade, the command below should work:

        root@xxxxxxxxx:/etc/apt# do-release-upgrade -d
        Checking for a new ubuntu release
        No new release found

    So am I missing something? The server was accessing the outside through a proxy, but I granted direct access to this server to avoid any Internet access problem or redirection, and still no clue. Any help would be appreciated.
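    One low-level check worth trying (a suggestion, not a confirmed fix): with Prompt=lts, do-release-upgrade decides what is available from the LTS meta-release file on changelogs.ubuntu.com, so verifying that the server can actually fetch it, and that it lists precise, separates a connectivity/proxy problem from an upgrade-path problem:

        # can this server see the LTS meta-release data, and does it list precise?
        wget -qO- http://changelogs.ubuntu.com/meta-release-lts | grep "^Dist:"

    Also note that the -d switch asks for the development release; for an already-released upgrade path, plain do-release-upgrade is the usual invocation, so it is worth trying without -d as well.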

    Read the article

  • Windows XP - Security Update for Windows XP (KB923561) (KB946648) (KB956572) (KB958644)

    - by leeand00
    My father's computer has Windows XP, but when I try to install the service packs it always fails. What gives? Here are the errors that I get in the event log:

        Date: 2/6/2010   Time: 12:02:18 AM   Type: Error   User: N/A   Computer: EVO
        Source: Windows Update Agent   Category: Installation   Event ID: 20
        Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB946648). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        0000: 57 69 6e 33 32 48 52 65  Win32HRe
        0008: 73 75 6c 74 3d 30 78 38  sult=0x8
        0010: 30 30 37 30 30 30 32 20  0070002
        0018: 55 70 64 61 74 65 49 44  UpdateID
        0020: 3d 7b 38 33 44 31 41 44  ={83D1AD
        0028: 46 35 2d 37 37 39 44 2d  F5-779D-
        0030: 34 30 31 36 2d 38 43 33  4016-8C3
        0038: 31 2d 35 34 39 32 37 30  1-549270
        0040: 46 36 37 42 33 46 7d 20  F67B3F}
        0048: 52 65 76 69 73 69 6f 6e  Revision
        0050: 4e 75 6d 62 65 72 3d 31  Number=1
        0058: 30 34 20 00              04 .

        Date: 2/6/2010   Time: 12:02:18 AM   Type: Error   User: N/A   Computer: EVO
        Source: Windows Update Agent   Category: Installation   Event ID: 20
        Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB956572). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        0000: 57 69 6e 33 32 48 52 65  Win32HRe
        0008: 73 75 6c 74 3d 30 78 38  sult=0x8
        0010: 30 30 37 30 30 30 32 20  0070002
        0018: 55 70 64 61 74 65 49 44  UpdateID
        0020: 3d 7b 44 46 32 46 30 41  ={DF2F0A
        0028: 39 38 2d 36 45 33 35 2d  98-6E35-
        0030: 34 33 37 39 2d 41 42 33  4379-AB3
        0038: 33 2d 41 30 33 30 33 45  3-A0303E
        0040: 46 37 34 42 32 41 7d 20  F74B2A}
        0048: 52 65 76 69 73 69 6f 6e  Revision
        0050: 4e 75 6d 62 65 72 3d 31  Number=1
        0058: 30 32 20 00              02 .

        Date: 2/6/2010   Time: 12:02:18 AM   Type: Error   User: N/A   Computer: EVO
        Source: Windows Update Agent   Event ID: 20
        Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB958644). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        0000: 57 69 6e 33 32 48 52 65  Win32HRe
        0008: 73 75 6c 74 3d 30 78 38  sult=0x8
        0010: 30 30 37 30 30 30 32 20  0070002
        0018: 55 70 64 61 74 65 49 44  UpdateID
        0020: 3d 7b 39 33 39 37 41 32  ={9397A2
        0028: 31 46 2d 32 34 36 43 2d  1F-246C-
        0030: 34 35 33 42 2d 41 43 30  453B-AC0
        0038: 35 2d 36 35 42 46 34 46  5-65BF4F
        0040: 43 36 42 36 38 42 7d 20  C6B68B}
        0048: 52 65 76 69 73 69 6f 6e  Revision
        0050: 4e 75 6d 62 65 72 3d 31  Number=1
        0058: 30 31 20 00              01 .

        Date: 2/6/2010   Time: 12:02:18 AM   Type: Error   User: N/A   Computer: EVO
        Source: Windows Update Agent   Category: Installation   Event ID: 20
        Installation Failure: Windows failed to install the following update with error 0x80070002: Security Update for Windows XP (KB923561). For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        0000: 57 69 6e 33 32 48 52 65  Win32HRe
        0008: 73 75 6c 74 3d 30 78 38  sult=0x8
        0010: 30 30 37 30 30 30 32 20  0070002
        0018: 55 70 64 61 74 65 49 44  UpdateID
        0020: 3d 7b 33 31 30 41 34 43  ={310A4C
        0028: 30 38 2d 35 39 33 44 2d  08-593D-
        0030: 34 31 41 33 2d 42 42 35  41A3-BB5
        0038: 37 2d 38 33 42 33 38 36  7-83B386
        0040: 44 37 37 33 42 35 7d 20  D773B5}
        0048: 52 65 76 69 73 69 6f 6e  Revision
        0050: 4e 75 6d 62 65 72 3d 31  Number=1
        0058: 30 33 20 00              03 .

    Thank you, Andrew
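    For what it's worth, 0x80070002 is the Win32 "file not found" error code; a commonly suggested remediation (not a guaranteed fix) when every update fails with it is to stop the update services and let Windows rebuild its download cache, roughly as follows, then retry the updates from Microsoft Update:

        net stop wuauserv
        net stop bits
        ren %windir%\SoftwareDistribution SoftwareDistribution.old
        net start bits
        net start wuauserv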

    Read the article

  • Database users in the Oracle Utilities Application Framework

    - by Anthony Shorten
    I mentioned the product database users fleetingly in the last blog post and they deserve a better mention. This applies to all versions of the Oracle Utilities Application Framework. The Oracle Utilities Application Framework uses up to three users initially as part of the base operations of the product. The type of database supported (the framework supports Oracle, IBM DB2 and Microsoft SQL Server) dictates the number of users used and their permissions. For publishing brevity I will outline what is available for the Oracle database and, in summary, mention where it differs for the other databases supported. For Oracle database customers we ship three distinct database users:

    - Administration User (SPLADM or CISADM by default) - This is the database user that actually owns the schema. This user is not used by the product to do any DML (Data Manipulation Language) SQL other than what is necessary for maintenance of the database. This database user performs all the DCL (Data Control Language) and DDL (Data Definition Language) against the database. It is typically reserved for Database Administration use only.
    - Product Read Write User (SPLUSER or CISUSER by default) - This is the database user used by the product itself to execute DML (Data Manipulation Language) statements against the schema owned by the Administration user. This user has the appropriate read and write permissions to objects within the schema owned by the Administration user. For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user.
    - Product Read User (SPLREAD or CISREAD by default) - This is the database user that has read-only permission to the schema owned by the Administration user. It is used for reporting or any part of the product or interface that requires read permissions to the database (for example, products that have ConfigLab and Archiving use this user for remote access). For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user.

    You may notice the words "by default" in the list above. The values supplied with the installer are the defaults and can be changed to whatever the site standard or implementation wants to use (as long as they conform to the standards supported by the underlying database). You can even create multiples of each within the same database, pointing to the same schema. To manage the permissions for the users, there is a utility provided with the installation (oragensec for Oracle, db2gensec for DB2 or msqlgensec for SQL Server) that generates the security definitions for the above users. That can be executed a number of times for each schema to give users appropriate permissions. For example, it is possible to define more than one read/write user to access the database. This is a common technique used by implementations to have a different user per access mode (to separate online and batch). In fact you can also allocate additional security (such as resource profiles in Oracle) to limit the impact of specific users at the database. To facilitate users and permissions, in Oracle for example, we create a CISREAD role (read-only role) and a CISUSER role (read/write role) that can be allocated to the appropriate database user. When the security permissions utility, oragensec in this case, is executed it uses the role to determine the permissions.

    To give you a case study, my underpowered laptop has multiple installations of multiple products on it, but I have one database. I create a different schema for each product and each version (with my own naming convention to help me manage the databases). I create individual users on each schema and run oragensec to maintain the permissions for each appropriately. It works fine as long as I have set up the userids appropriately. This means:

    - Creating the users with the appropriate roles. I use the common CISUSER and CISREAD roles across versions and across Oracle Utilities Application Framework products. Just remember to associate the CISUSER role with the database user you want to use for read/write operations and the CISREAD role with the user you wish to use for the read-only operations. The role is treated as a tag that tells the oragensec utility which permissions to assign to the user. The utilities for the other database types essentially do the same, obviously using the technology available within those databases.
    - Running oragensec for the read/write user and the read-only user against the appropriate administration user (I will abbreviate it to the ADM user). This ensures the right permissions are allocated to the right users for the right products. To help me there, I use the same prefix on the user name for the same product. For example, my Oracle Utilities Application Framework V4 environment has the administration user set to FW4ADM and the associated FW4USER and FW4READ as the users for the product to use. For my MWM environment I used MWMADM for the administration user and MWMUSER and MWMREAD for my associated users. You get the picture. When I run oragensec (once for each ADM user), I know which other users to associate with it.
    - Remembering to rerun oragensec against the users if I run upgrades, service packs or database-based single fixes. This ensures that the users stay in synchronization with the ADM user.

    As a side note, for those who do not understand the difference between DML, DCL and DDL:

    - DDL (Data Definition Language) - These are SQL statements that define the database schema and the structures within it. SQL statements such as CREATE and DROP are examples of DDL SQL statements.
    - DCL (Data Control Language) - These are the SQL statements that define the database-level permissions to DDL-maintained objects within the database. SQL statements such as GRANT and REVOKE are examples of DCL SQL statements.
    - DML (Data Manipulation Language) - These are SQL statements that alter the data within the tables. SQL statements such as SELECT, INSERT, UPDATE and DELETE are examples of DML SQL statements.

    Hope this has clarified the database user support. Remember that in Oracle Utilities Application Framework V4 we enhanced this by also supporting CLIENT_IDENTIFIER to allow the database to still use the administration user for the main processing but make the database session more traceable.
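    A minimal sketch of the kind of user/role setup described above (the names follow the FW4 example; the password is a placeholder, and the actual object-level grants are generated by oragensec, which is not shown here):

        -- roles used as tags by the security utility (created once per database)
        CREATE ROLE CISUSER;
        CREATE ROLE CISREAD;

        -- product read/write user, tagged with the read/write role
        CREATE USER FW4USER IDENTIFIED BY "changeme";
        GRANT CREATE SESSION TO FW4USER;
        GRANT CISUSER TO FW4USER;

        -- product read-only user, tagged with the read-only role
        CREATE USER FW4READ IDENTIFIED BY "changeme";
        GRANT CREATE SESSION TO FW4READ;
        GRANT CISREAD TO FW4READ;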

    Read the article

  • Wishful Thinking: Why can't HTML fix Script Attacks at the Source?

    - by Rick Strahl
    The Web can be an evil place, especially if you're a Web Developer blissfully unaware of Cross Site Script Attacks (XSS). Even if you are aware of XSS in all of its insidious forms, it's extremely complex to deal with all the issues if you're taking user input and you're actually allowing users to post raw HTML into an application. I'm dealing with this again today in a Web application where legacy data contains raw HTML that has to be displayed and users ask for the ability to use raw HTML as input for listings.

    The first line of defense of course is: Just say no to HTML input from users. If you don't allow HTML input directly and use HTML encoding (HttpUtility.HtmlEncode() in .NET or using standard ASP.NET MVC output @Model.Content) you're fairly safe, at least from the HTML input provided. Both WebForms and Razor support HTML-encoded content, although Razor makes it the default. In Razor the default @ expression syntax:

        @Model.UserContent

    automatically produces HTML-encoded content - you actually have to go out of your way to create raw HTML content (safe by default) using @Html.Raw() or the HtmlString class. In Web Forms (V4) you can use:

        <%: Model.UserContent %>

    or if you're using a version prior to 4.0:

        <%= HttpUtility.HtmlEncode(Model.UserContent) %>

    This works great as a hedge against embedded <script> tags and HTML markup, as any HTML is turned into text that displays as HTML but doesn't render the HTML. But it turns any embedded HTML markup tags into plain text. If you need to display HTML in raw form with the markup tags rendering based on user input, this approach is worthless. If you do accept HTML input and need to echo the rendered HTML input back, cleaning up that HTML is a complex task.

    In the projects I work on, customers ask for the ability to post raw HTML quite frequently. In almost every app that I've built where there's document content from users we start out with text-only input - possibly using something like Markdown - but inevitably users want to just post plain old HTML they created in some other rich editing application. I see this a lot with realtors especially, who often want to reuse their postings easily in multiple places. In my work this is a common problem I need to deal with, and I've tried dozens of different methods from sanitizing and simple rejection of input to custom markup schemes, none of which have ever felt comfortable to me. They work in a half-assed, hacked-together sort of way but I always live in fear of missing something vital which is *really easy to do*.

    My Wishlist Item: A <restricted> tag in HTML

    Let me dream here for a second on how to address this problem. It seems to me the easiest place where this can be fixed is: in the browser. Browsers are actually executing script code so they have a lot of control over the script code that resides in a page. What if there was a way to specify that you want to turn off script code for a block of HTML? The main issue when dealing with raw HTML input isn't that we as developers are unaware of the implications of user input, but the fact that we sometimes have to display raw HTML input the user provides. So the problem markup is usually isolated in only a very specific part of the document. So, what if we had a way to specify that in any given HTML block, no script code could execute, by wrapping it into a tag that disables all script functionality in the browser? This would include <script> tags and any document script attributes like onclick, onfocus etc., and potentially also disallow things like iFrames that can potentially be scripted from within the iFrame's target. I'd like to see something along these lines:

        <article>
            <restricted allowscripts="no" allowiframes="no">
                <div>Some content</div>
                <script>alert('go ahead make my day, punk!');</script>
                <div onfocus="$.getJson('http://evilsite.com/')">more content</div>
            </restricted>
        </article>

    A tag like this would basically disallow all script code from firing from any HTML that's rendered within it. You'd use this only on code that you actually render from your data, and only if you are dealing with custom data. So something like this:

        <article>
            <restricted>
                @Html.Raw(Model.UserContent)
            </restricted>
        </article>

    For browsers this would actually be easy to intercept. They render the DOM and control loading and execution of scripts that are loaded through it. All the browser would have to do is suspend execution of <script> tags and not hook up any event handlers defined via markup in this block. Given all the crazy XSS attacks that exist and the prevalence of this problem, this would go a long way towards preventing at least coded script attacks in the DOM. And it seems like a totally doable solution that wouldn't be very difficult for vendors to implement. There would also need to be some logic in the parser to not allow a </restricted> or <restricted> tag inside the content, which would otherwise short-circuit the restricted section (per James Hart's comment). I'm sure there are other issues to consider as well that I didn't think of in my off-the-back-of-a-napkin concept here, but the idea overall seems worth consideration I think.

    Without code running in a user-supplied HTML block it'd be pretty hard to compromise a local HTML document and pass information like cookies to a server. Or even send data to a server, period. Short of an iFrame that can access the parent frame (which is another restriction that should be available on this <restricted> tag) that could potentially communicate back, there's not a lot a malicious site could do. The HTML could still 'phone home' via image links and href links and basically say this site was accessed, but without the ability to run script code it would be pretty tough to pass along critical information to the server beyond that.

    Ahhhh… one can dream… Not holding my breath of course. The design-by-committee that is the W3C can't agree on anything in timeframes measured in less than decades, but maybe this is one place where browser vendors can actually step up the pressure. This is in their best interest: it would reduce the attack surface for vulnerabilities on their browser platforms significantly. Several people commented on Twitter today that there isn't enough discussion on issues like this that address serious needs in the web browser space. Realistically, security has to be a number one concern with Web applications in general - there isn't a Web app out there that is not vulnerable. And yet nothing has been done to address these security issues even though there might be relatively easy solutions to make this happen.

    It'll take time, and it's probably not going to happen in our lifetime, but maybe this rambling thought sparks some ideas on how this sort of restriction can get into browsers in some way in the future.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in ASP.NET  HTML5  HTML  Security

    Read the article

  • Oracle Database Insider Now on LinkedIn

    - by Troy Kitch
    Our close friends over at the Oracle Database Insider blog have recently started a LinkedIn discussion group. Go behind the scenes of the latest Oracle Database announcements and discussions that include Oracle Database 11g and its options, such as Database Security, and the newest product, Oracle Exadata. Come on over to post a discussion topic, an event, ask questions and stay up-to-date on the latest Oracle Database information. We'll be there to join the discussions and answer questions. Join us on LinkedIn's latest group!

    Read the article

  • How can I audit users and access attempts to SSH on my server?

    - by RadiantHex
    I've had a few security problems with a server of mine; a few SSH users have been starting fires, i.e. causing problems. I would like to:

    - Track user logins and logouts
    - Track the activity of these SSH users, in order to discover any malicious activity
    - Prevent users from deleting logs

    I am not much of a sysadmin and I am quite inexperienced in this matter, so any kind of advice would be very welcome and very helpful. :)
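    A minimal starting point (assuming a Debian/Ubuntu-style layout where sshd logs to /var/log/auth.log; the path differs on other distributions) covers the first two points, and shipping logs off the box is the usual answer to the third:

        # login/logout history (wtmp) and failed attempts (btmp)
        last -F
        sudo lastb -F

        # sshd authentication events as they happen
        sudo tail -f /var/log/auth.log

        # who is logged in right now and what they are running
        w

    For per-command auditing, auditd (optionally with pam_tty_audit) or process accounting (the acct package and its lastcomm command) are the usual tools, and forwarding syslog to a separate host keeps users from quietly deleting the evidence.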

    Read the article

  • No Rest for the Virtuous

    - by Chris Massey
    It has been an impressively brutal month in terms of security breaches, and across a whole range of fronts. The "Cablegate" leaks, courtesy of Wikileaks, appear to be in a league of their own. The "Operation Payback" DDoS attacks against PayPal, MasterCard and Visa (not to mention the less successful attack against Amazon) are equally impressive. Even more recently, the Gawker Media Network was subjected to a relatively sophisticated hack attack by Gnosis, with the hackers gaining access to some...(read more)

    Read the article

  • SQL Azure and Trust Services

    - by BuckWoody
    Microsoft is working on a new Windows Azure service called “Trust Services”. Trust Services takes a certificate you upload and uses it to encrypt and decrypt sensitive data in the cloud. Of course, like any security service, there’s a bit more to it than that. I’ll give you a quick overview of how you can use this product to protect data you send to SQL Azure. The primary issue with storing data in the cloud is that you are in an environment that isn’t under your control – in fact, that’s the benefit of being in a distributed computing environment in the first place. On premises you’re able to encrypt data you don’t want anyone else to see, using various methods such as passwords (not very strong) or certificates (stronger). When you use a certificate, it’s vital that you create (or procure) and protect it yourself. When you store data remotely, regardless of IaaS, PaaS or SaaS, you don’t own the machines where the data lives. That means if you use a certificate from the cloud vendor to encrypt the data, you have to trust that the data won’t be accessed by the vendor. In some cases having a signed agreement with the vendor that they won’t access your data is sufficient, in other cases that doesn’t meet the requirements your system has for security. With the new Trust Services service, the basic process is that you use a Portal to create a Trust Server using policies and other controls. You place a X.509 Certificate you create or procure in that server. Using the Software development Kit (SDK), the developer has access to an Application Layer Encryption Framework to set fields of data they want to encrypt. From there, the data can be stored in SQL Azure as a standard field – only it is encrypted before it ever arrives. The portion of the client software that decrypts the data uses the same service, so the authenticated user sees the data if they are allowed to do so. The data remains encrypted “at rest”.  You can learn more about this product and check it out in the SQL Azure labs at Microsoft Codename "Trust Services"

    Read the article

  • Interesting links week #51 and #52

    - by erwin21
    Below is a list of interesting links that I found this week:

    Frontend:
    - How to Create a Mobile Version of Your Website
    - 10 tricks that will make your jQuery enabled site go faster
    - Tools and Resources to Test Cross Browser Compatibility of Your Websites
    - 9 Websites to Learn the Basics About html 5

    Development:
    - Online web.config security analyzer tool
    - Using 51Degrees.Mobi Foundation for accurate mobile browser detection on ASP.NET MVC 3

    Interested in more interesting links? Follow me on twitter: http://twitter.com/erwingriekspoor

    Read the article

  • Comodo Cleaning Essentials for Windows

    Comodo Cleaning Essentials' main purpose is to clean an infected PC. Comodo emphasizes the fact that cleaning an infected PC and protecting a clean PC from potential attacks are two completely separate items. While Comodo Cleaning Essentials specializes in the former, the company does have a preventative solution in the form of its Comodo Internet Security offering, which employs auto sandbox technology to provide ultimate protection. Comodo Cleaning Essentials is highlighted by its two core technologies: KillSwitch and Malware Scanner. KillSwitch operates off of Comodo's whitelist database...

    Read the article

  • Managing accounts on a private website for a real-life community

    - by Smudge
    I'm looking at setting up a walled-in website for a real-life community of people, and I was wondering if anyone has any experience with managing member accounts for this kind of thing. Some conditions that must be met: This community has a set list of real-life members, each of whom would be eligible for one account on the website. We don't expect or require that they all sign up. It is purely opt-in, but we anticipate that many of them would be interested in the services we are setting up. Some of the community members' emails are known, but some of them have fallen off the grid over the years, so ideally there would be a way for them to get back in touch with us through the public-facing side of the site. (And we'd want to manually verify the identity of anyone who does so.) Their names are known, and for similar projects in the past we have assigned usernames derived from their real-life names. This time, however, we are open to other approaches, such as letting them specify their own username or getting rid of usernames entirely. The specific web technology we will use (e.g. Drupal, Joomla, etc.) is not really our concern right now -- I am more interested in how this can be approached in the abstract. Our database already includes the full member roster, so we can email many of them generated links to a page where they can create an account. (And internally we can require that these accounts be paired with a known member.) Should we have them specify their own usernames, or are we fine letting them use their registered email address to log in? Are there any paradigms for walled-in community portals that help address security issues if, for example, one of their email accounts is compromised? We don't anticipate attempted break-ins being much of a threat, because nothing about this community is high-profile, but we do want to address security concerns. In addition, we want to make the sign-up process as painless for the members as possible, especially given the fact that we can't just make sign-ups open to anyone. I'm interested to hear your thoughts and suggestions! Thanks!

    Read the article

  • Using Dynamic LINQ to get a filter for my Web API

    - by Espo
    We are considering using the Dynamic.cs LINQ sample included in the "Samples" directory of Visual Studio 2008 for our Web API project, to allow clients to query our data. The interface would be something like this (in addition to the normal GET methods): public HttpResponseMessage List(string filter = null); The plan is to use the dynamic library to parse the "filter" variable and then execute the query against the DB. Any thoughts on whether this is a good idea? Is it a security problem?
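    A rough sketch of what that action might look like with the sample's string-based Where overload (the Product entity, the EF context and the result cap are hypothetical; the point is that the filter text is parsed by Dynamic LINQ into an expression tree rather than concatenated into SQL):

        using System.Linq;
        using System.Linq.Dynamic;   // Dynamic.cs from the VS2008 LINQ samples
        using System.Net;
        using System.Net.Http;
        using System.Web.Http;

        public class ProductsController : ApiController
        {
            private readonly MyDbContext db = new MyDbContext();   // hypothetical EF context

            public HttpResponseMessage List(string filter = null)
            {
                IQueryable<Product> query = db.Products;

                if (!string.IsNullOrWhiteSpace(filter))
                {
                    // e.g. filter = "Price > 100 && Category.Name == \"Books\""
                    query = query.Where(filter);
                }

                // cap the result size so a broad filter cannot pull the whole table
                return Request.CreateResponse(HttpStatusCode.OK, query.Take(100).ToList());
            }
        }

    On the security question: because the expression is parsed against the entity model, classic SQL injection isn't the main worry, but clients can still filter on any reachable property and write arbitrarily expensive predicates, so whitelisting filterable fields and capping result size are worth considering.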

    Read the article

  • How can I tell if ZRTP is enabled in a Twinkle SIP call?

    - by komputes
    I recently attended a talk about GNU Telephony. I was informed that Twinkle supports ZRTP for encrypted SIP calls. I went into Edit > User Profile > Security and made sure that ZRTP was enabled and that all boxes were checked. I asked a friend to do the same and then we called each other. There is no immediate indication that I can see that the call is secure. How can I tell if ZRTP is enabled in a Twinkle SIP call?

    Read the article

  • Windows Media Player Vulnerability, PCAnywhere Warning

    Windows Media Player Vulnerability Targeted by Drive-by-download Attack Security firm Trend Micro recently released details on malware that has been targeting the MIDI Remote Code Execution Vulnerability found in Microsoft's Windows Media Player. A post on Trend Micro's Malware Blog offered further insight into the malware that has been exploiting the CVE-2012-0003 vulnerability. The malware's authors have been successful in exploiting the vulnerability by tricking unsuspecting victims into opening a specially engineered MIDI file in Windows Media Player. This Web-based drive-by-download ...

    Read the article

  • What is the Best Practice for creating a secure login in a client - server appllication?

    - by Karamafrooz
    For a while now I have been thinking about what the best scenario could be for creating a secure login in a client-server application running over the internet or any other network. So I decided to ask this question on Programmers, and I hope it will raise awareness of new aspects of threats and security here through some kind of brainstorming. I am really interested in good and new answers. Thanks in advance for your participation.

    Read the article

  • How to configure multiple WCF binding configurations for the same scheme

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet.

        WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser

    The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration:

        <bindings>
          <netTcpBinding>
            <binding>
              <security mode="TransportWithMessageCredential">
                <message clientCredentialType="UserName"/>
              </security>
            </binding>
            <binding name="public">
              <security mode="Transport">
                <message clientCredentialType="Windows"/>
              </security>
            </binding>
          </netTcpBinding>
        </bindings>
        <client>
          <endpoint contract="Server.IService1" binding="netTcpBinding" address="net.tcp://localhost:8081/Service1.svc"/>
          <endpoint contract="Server.IService2" binding="netTcpBinding" address="net.tcp://localhost:8081/Service2.svc"/>
        </client>

    The server configuration is this:

        <bindings>
          <netTcpBinding>
            <binding portSharingEnabled="true">
              <security mode="TransportWithMessageCredential">
                <message clientCredentialType="UserName"/>
              </security>
            </binding>
            <binding name="public">
              <security mode="Transport">
                <message clientCredentialType="Windows"/>
              </security>
            </binding>
          </netTcpBinding>
        </bindings>
        <services>
          <service name="Service1">
            <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/>
          </service>
          <service name="Service2">
            <endpoint contract="Server.IService2, Library" binding="netTcpBinding" address=""/>
          </service>
        </services>
        <serviceHostingEnvironment>
          <serviceActivations>
            <add relativeAddress="Service1.svc" service="Server.Service1"/>
            <add relativeAddress="Service2.svc" service="Server.Service2"/>
          </serviceActivations>
        </serviceHostingEnvironment>

    The thing is that the two bindings don't seem to want to live together in my host. When I remove either of them, all is fine, but together they produce the following exception on the client:

        The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server).

    In the server trace log, I find the following exception:

        Protocol Type application/negotiate was sent to a service that does not support that type of upgrade.

    Am I looking in the right direction or is there a better way to solve this?
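    One thing worth checking (a sketch of the usual pattern, not a verified fix for this exact setup): none of the endpoints above reference the named binding, and endpoints without a bindingConfiguration attribute fall back to the default, unnamed <binding> element. If Service2 is the one meant to use the "public" binding, it would be referenced explicitly on both sides, along these lines:

        <!-- client side -->
        <endpoint contract="Server.IService2"
                  binding="netTcpBinding"
                  bindingConfiguration="public"
                  address="net.tcp://localhost:8081/Service2.svc"/>

        <!-- server side -->
        <service name="Service2">
          <endpoint contract="Server.IService2, Library"
                    binding="netTcpBinding"
                    bindingConfiguration="public"
                    address=""/>
        </service>

    That way the two binding configurations can coexist on the same scheme, with each endpoint pinned to the one it is supposed to use.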

    Read the article
