Search Results

Search found 32953 results on 1319 pages for 'access secret'.


  • login connection problem using SimpleTest

    - by Cedric
    Hi everyone. I am using SimpleBrowser from SimpleTest (http://www.simpletest.org) to log in to a Webmin instance (http://www.webmin.com/). The login uses HTTPS. I've tried two different ways, and both fail.

        $browser = new SimpleBrowser();
        $browser->useCookies();
        $browser->useFrames();
        // echoes the login page, where it should echo the landing page for a logged-in user
        echo $browser->post('https://address/', 'user=User&pass=Secret');

    And also:

        $browser = new SimpleBrowser();
        $browser->useCookies();
        $browser->useFrames();
        $browser->get('https://address/');
        $browser->setField('user', 'User');
        $browser->setField('pass', 'Secret');
        // echoes the login page, where it should echo the landing page for a logged-in user
        echo $browser->clickSubmit('Login');

    Do you have any clue why it doesn't work?

    Read the article

  • The underlying connection was closed: An unexpected error occurred on a receive

    - by Dave
    Using a console app, which runs as a scheduled task on Azure, to call a long(ish)-running webpage on Azure. The problem after a few minutes: The underlying connection was closed: An unexpected error occurred on a receive.

        // Used because WebClient doesn't expose a timeout
        public class ExtendedWebClient : WebClient
        {
            public int Timeout { get; set; }

            protected override WebRequest GetWebRequest(Uri address)
            {
                HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(address);
                if (request != null)
                {
                    request.Timeout = Timeout;
                    request.KeepAlive = false;
                    request.ProtocolVersion = HttpVersion.Version10;
                }
                return request;
            }

            public ExtendedWebClient()
            {
                Timeout = 1000000; // in ms; the default is 100,000
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                var taskUrl = "http://www.secret.com/secret.aspx";
                // Create a WebClient and issue an HTTP GET to our URL
                using (ExtendedWebClient httpRequest = new ExtendedWebClient())
                {
                    var output = httpRequest.DownloadString(taskUrl);
                }
            }
        }

    Read the article

  • Trying to fill the GAE datastore, but the code consumes too much CPU time. How to optimize this?

    - by Neverland
    I am trying to get the list of Amazon EC2 images into the Google datastore. I want to do this with a cron job inside GAE.

        class AmazonEC2uswest(db.Model):
            ami = db.StringProperty(required=True)
            mani = db.StringProperty()
            typ = db.StringProperty()
            arch = db.StringProperty()
            state = db.StringProperty()
            owner = db.StringProperty()

        class CronAMIsAmazonUS_WEST(webapp.RequestHandler):
            def get(self):
                aws_access_key_id_admin = "<secret>"
                aws_secret_access_key_admin = "<secret>"
                conn_us_west = boto.ec2.connect_to_region('us-west-1',
                    aws_access_key_id=aws_access_key_id_admin,
                    aws_secret_access_key=aws_secret_access_key_admin,
                    is_secure=False)
                liste_images_us_west = conn_us_west.get_all_images()
                laenge_liste_images_us_west = len(liste_images_us_west)
                for i in range(laenge_liste_images_us_west):
                    datastore_uswest_AMIs = AmazonEC2uswest(
                        ami=liste_images_us_west[i].id,
                        mani=str(liste_images_us_west[i].location),
                        typ=liste_images_us_west[i].type,
                        arch=liste_images_us_west[i].architecture,
                        state=liste_images_us_west[i].state,
                        owner=liste_images_us_west[i].ownerId)
                    datastore_uswest_AMIs.put()

    The problem: fetching the list with get_all_images() takes only a few seconds, but writing the data to the Google datastore needs far too much CPU time. My IBM T42p (P4M at 2 GHz) needs roughly one minute for that piece of code! Is it possible to optimize my code so that it needs less CPU time?
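
    Since fetching the list takes only seconds, the cost is in the per-entity writes. A minimal sketch of one common fix, assuming the google.appengine.ext.db API used above: build the entities first, then write them with a single batched db.put() call instead of one put() per entity (very large lists may still need to be written in chunks):

        # Sketch: batch the datastore writes (assumes the AmazonEC2uswest model
        # and the conn_us_west connection from the question).
        entities = []
        for image in conn_us_west.get_all_images():
            entities.append(AmazonEC2uswest(
                ami=image.id,
                mani=str(image.location),
                typ=image.type,
                arch=image.architecture,
                state=image.state,
                owner=image.ownerId))
        db.put(entities)  # one batched round trip instead of len(entities) calls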

    Read the article

  • How to disable secret_token in Rails 3?

    - by Damian Nowak
    I have several separate Rails 2 applications which share the same cookie. I have now upgraded one of the applications to Rails 3.2.15. The mandatory secret_token in Rails 3 makes it impossible to share the session with the Rails 2 apps. I am storing the session in Redis; the only thing the visitor gets in the cookie is a session ID, so there's no need to encrypt it. Therefore, how do I disable secret_token in Rails 3?

        A secret is required to generate an integrity hash for cookie session data.
        Use config.secret_token = "some secret phrase of at least 30 characters"
        in config/initializers/secret_token.rb

    Read the article

  • The Definitive Guide To Website Authentication (beta)

    - by Michiel de Mare
    Form Based Authentication For Websites. Please help us create the definitive resource for this topic. We believe that Stack Overflow should not just be a resource for very specific technical questions, but also for general guidelines on how to solve variations on common problems. "Form Based Authentication For Websites" should be a fine topic for such an experiment. It should include topics such as:

    - how to log in
    - how to remain logged in
    - how to store passwords
    - using secret questions
    - forgotten password functionality
    - OpenID
    - "Remember me" checkbox
    - browser autocompletion of usernames and passwords
    - secret URLs (public URLs protected by digest)
    - checking password strength
    - email validation
    - and much more

    It should not include things like:

    - roles and authorization
    - HTTP basic authentication

    Please help us by:

    - suggesting subtopics
    - submitting good articles about this subject
    - editing the official answer (as soon as you have enough karma)

    UPDATE: See the terrific 7-part series by Jens Roland below.

    Read the article

  • Hash Digest / Array Comparison in C#

    - by Erik Karulf
    Hi All, I'm writing an application that needs to verify HMAC-SHA256 checksums. The code I currently have looks something like this:

        static bool VerifyIntegrity(string secret, string checksum, string data)
        {
            // Verify HMAC-SHA256 checksum
            byte[] key = System.Text.Encoding.UTF8.GetBytes(secret);
            byte[] value = System.Text.Encoding.UTF8.GetBytes(data);
            byte[] checksum_bytes = System.Text.Encoding.UTF8.GetBytes(checksum);
            using (var hmac = new HMACSHA256(key))
            {
                byte[] expected_bytes = hmac.ComputeHash(value);
                return checksum_bytes.SequenceEqual(expected_bytes);
            }
        }

    I know that this is susceptible to timing attacks. Is there a message-digest comparison function in the standard library? I realize I could write my own time-hardened comparison method, but I have to believe that this is already implemented elsewhere.
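
    As a reference point, here is a minimal sketch of a constant-time digest comparison, shown in Python, whose standard library provides hmac.compare_digest for exactly this purpose; the underlying idea (examine every byte instead of returning at the first mismatch) is the same one a time-hardened .NET comparison would implement:

        import hashlib
        import hmac

        def verify_integrity(secret: bytes, checksum: bytes, data: bytes) -> bool:
            """Recompute the HMAC-SHA256 tag and compare it in constant time."""
            expected = hmac.new(secret, data, hashlib.sha256).digest()
            # compare_digest's running time does not depend on where the
            # inputs differ, so it does not leak the matching prefix length.
            return hmac.compare_digest(checksum, expected)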

    Read the article

  • How to receive the text which was sent by SendText

    - by thillai-selvan
    In Asterisk I have sent a text message using SendText as follows. I have two registered users in the sip.conf file.

    sip.conf details:

        [thillai]
        username=thillai
        secret=thillai
        host=dynamic
        type=friend
        allow=all
        context=test

        [selvan]
        username=selvan
        secret=selvan
        allow=all
        host=dynamic
        type=friend
        context=test

    Then I have created the necessary extensions in the extensions.conf file:

        [test]
        exten => 677,1,BackGround(thankyou)
        exten => 677,n,Dial(SIP/thillai)
        exten => 677,n,SendText('this is for testing')

    So when a caller tries to call extension 677, this text will be sent. My question is: how can I receive this text on the caller side? Any help will be much appreciated.

    Read the article

  • Rails send mail with GMail

    - by Danny McClelland
    Hi Everyone, I am on Rails 2.3.5 and have the latest Ruby installed, and my application is running well, except for GMail emails. I am trying to set up my GMail SMTP connection, which has worked previously but now doesn't want to know. This is my code:

        # Be sure to restart your server when you modify this file

        # Uncomment below to force Rails into production mode when
        # you don't control web/app server and can't set it the proper way
        # ENV['RAILS_ENV'] ||= 'production'

        # Specifies gem version of Rails to use when vendor/rails is not present
        RAILS_GEM_VERSION = '2.3.5' unless defined? RAILS_GEM_VERSION

        # Bootstrap the Rails environment, frameworks, and default configuration
        require File.join(File.dirname(__FILE__), 'boot')

        Rails::Initializer.run do |config|
          # Gems
          config.gem "capistrano-ext", :lib => "capistrano"
          config.gem "configatron"

          # Make Time.zone default to the specified zone, and make Active Record
          # store time values in the database in UTC, and return them converted
          # to the specified local zone.
          config.time_zone = "London"

          # The internationalization framework can be changed to have another
          # default locale (standard is :en) or more load paths.
          # All files from config/locales/*.rb,yml are added automatically.
          # config.i18n.load_path << Dir[File.join(RAILS_ROOT, 'my', 'locales', '*.{rb,yml}')]
          # config.i18n.default_locale = :de

          # Your secret key for verifying cookie session data integrity.
          # If you change this key, all old sessions will become invalid!
          # Make sure the secret is at least 30 characters and all random,
          # no regular words or you'll be exposed to dictionary attacks.
          config.action_controller.session = {
            :session_key => '_base_session',
            :secret => '7389ea9180b15f1495a5e73a69a893311f859ccff1ffd0fa2d7ea25fdf1fa324f280e6ba06e3e5ba612e71298d8fbe7f15fd7da2929c45a9c87fe226d2f77347'
          }

          config.active_record.observers = :user_observer
        end

        ActiveSupport::CoreExtensions::Date::Conversions::DATE_FORMATS.merge!(:default => '%d/%m/%Y')
        ActiveSupport::CoreExtensions::Time::Conversions::DATE_FORMATS.merge!(:default => '%d/%m/%Y')

        require "will_paginate"

        ActionMailer::Base.delivery_method = :smtp
        ActionMailer::Base.smtp_settings = {
          :enable_starttls_auto => true,
          :address => "smtp.gmail.com",
          :port => 587,
          :domain => "XXXXXXXX.XXX",
          :authentication => :plain,
          :user_name => "XXXXXXXXXX.XXXXXXXXXX.XXX",
          :password => "XXXXX"
        }

    But the above just results in an SMTP auth error in the production log. I have read varied reports of this not working in Rails 2.2.2, but nothing for 2.3.5. Anyone got any ideas? Thanks, Danny

    Read the article

  • Security review of an authenticated Diffie Hellman variant

    - by mtraut
    EDIT: I'm still hoping for some advice on this; I have tried to clarify my intentions... When I came upon device pairing in my mobile communication framework, I studied a lot of papers on this topic and also got some input from previous questions here. But I didn't find a ready-to-implement protocol solution, so I invented a derivate, and as I'm no crypto geek I'm not sure about the security caveats of the final solution. The main questions are:

    - Is SHA256 sufficient as a commit function?
    - Is the addition of the shared secret as authentication info in the commit string safe?
    - What is the overall security of the 1024-bit group DH? I assume at most a 2^-24 probability of a successful MITM attack (because of the 24-bit challenge). Is this plausible?
    - What may be the most promising attack (besides ripping the device out of my numb, cold hands)?

    This is the algorithm sketch. For first-time pairing, a solution proposed in "Key agreement in peer-to-peer wireless networks" (DH-SC) is implemented. I based it on a commitment derived from:

    - a fixed "UUID" for the communicating entity/role (128 bit, sent at protocol start, before commitment)
    - the public DH key (192-bit private key, based on the 1024-bit Oakley group)
    - a 24-bit random challenge

    The commit is computed using SHA256:

        c = sha256( UUID || DHpub || Chall )

    Both parties exchange this commitment, then open it and transfer the plain content of the above values. The 24-bit random challenge is displayed to the user for manual authentication. The DH session key (128 bytes, see above) is computed. When the user opts for persistent pairing, the session key is stored with the remote UUID as a shared secret. The next time the devices connect, the commit is computed by additionally hashing the previous DH session key before the random challenge. For sure, it is not transferred when opening the commitment.

        c = sha256( UUID || DHpub || DHsess || Chall )

    Now the user is not bothered with authenticating when the local party can derive the same commitment using its own stored previous DH session key. After a successful connection, the new DH session key becomes the new shared secret. As this does not exactly fit the protocols I found so far (and as such their security proofs), I'd be very interested to get an opinion from some more crypto-enabled guys here. BTW, I did read about the "EKE" protocol, but I'm not sure what the extra security level is.
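
    To make the commitment computation concrete, here is a minimal sketch, assuming plain byte-string encodings for the UUID, DH public key, previous session key and challenge (the function name and encodings are illustrative, not from any particular library):

        import hashlib

        def commit(uuid: bytes, dh_pub: bytes, chall: bytes,
                   dh_sess: bytes = b"") -> bytes:
            """First-time pairing: c = sha256(UUID || DHpub || Chall).
            Re-pairing: pass the stored session key as dh_sess, giving
            c = sha256(UUID || DHpub || DHsess || Chall)."""
            h = hashlib.sha256()
            h.update(uuid)      # 128-bit entity/role UUID
            h.update(dh_pub)    # public DH key
            h.update(dh_sess)   # previous DH session key (empty on first pairing)
            h.update(chall)     # 24-bit random challenge
            return h.digest()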

    Read the article

  • How to get the stream of a public Facebook fan page in PHP?

    - by Bundy
    Hi, I want to display my public fan page feed on my website via the Facebook API, without requiring a login. I'm doing this:

        require_once('../includes/classes/facebook-platform/php/facebook.php');
        $fb = new Facebook($api_key, $secret);
        $fb->api_client->stream_get('', $app_id, '0', '0', '', '', '', '', '');

    But I get this error:

        Fatal error: Uncaught exception 'FacebookRestClientException' with message
        'user id parameter or session key required' in
        includes/classes/facebook-platform/php/facebookapi_php5_restlib.php:3065
        Stack trace:
        #0 includes/classes/facebook-platform/php/facebookapi_php5_restlib.php(1915):
           FacebookRestClient->call_method('facebook.stream...', Array)
        #1 facebook/api.php(12): FacebookRestClient->stream_get('', 13156929019, '0', '0', 30, '', '', '', '')
        #2 {main}
          thrown in includes/classes/facebook-platform/php/facebookapi_php5_restlib.php on line 3065

    Then I figured, because of 'user id parameter or session key required', to add my user ID to the call:

        require_once('../includes/classes/facebook-platform/php/facebook.php');
        $fb = new Facebook($api_key, $secret);
        $fb->api_client->stream_get(502945616, 13156929019, $app_id, '0', '0', '', '', '', '', '');

    But then I got this error:

        Fatal error: Uncaught exception 'FacebookRestClientException' with message
        'Session key invalid or no longer valid'

    I'm totally clueless :)

    Read the article

  • Scratchable-ticket-style custom View item in Android?

    - by Andhravaala
    Hi All, I need to develop an instant-lottery game app, and I need an idea or procedure for implementing a scratchable custom widget similar to instant lottery tickets in Android. The requirement is that the actual content (a secret number) should be covered by an image indicating the scratch area. When the user touches and scratches the image, the image has to disappear gradually and the background content (the secret number) should appear accordingly. Please let me know the best way to implement this. I am in real need of it. Thanks in advance.

    Read the article

  • REST authentication: S3-like HMAC-SHA1 signature vs. symmetric data encryption

    - by coulix
    Hello stackers, I was arguing for an S3-like approach, using an authorization hash with a secret key as the seed and some data from the request as the message, signed with HMAC-SHA1 (the Amazon S3 way), versus another developer supporting symmetric encryption of the data with a secret key known by the emitter and the server. What are the advantages of using signed data with HMAC-SHA1 over a symmetric key, other than the fact that with the former we do not need to encrypt the username or password? What would be harder to break: symmetric encryption, or SHA1 hashing à la S3? If all the big players are using OAuth and similar schemes without symmetric keys, there must surely be obvious advantages. What are they?
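
    For concreteness, here is a minimal sketch of the S3-style signing scheme under discussion, in Python, assuming the secret key was shared out of band and glossing over the exact string-to-sign canonicalization rules:

        import base64
        import hashlib
        import hmac

        def sign_request(secret_key: bytes, string_to_sign: bytes) -> str:
            """HMAC-SHA1 over canonicalized request data, base64-encoded for an
            Authorization header. The secret itself never travels on the wire;
            the server recomputes the signature and compares."""
            digest = hmac.new(secret_key, string_to_sign, hashlib.sha1).digest()
            return base64.b64encode(digest).decode("ascii")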

    Read the article

  • Storing API keys in Android: is obfuscation enough?

    - by fredley
    I'm using the Dropbox API. In the sample app, it includes these lines:

        // Replace this with your consumer key and secret assigned by Dropbox.
        // Note that this is a really insecure way to do this, and you shouldn't
        // ship code which contains your key & secret in such an obvious way.
        // Obfuscation is good.
        final static private String CONSUMER_KEY = "PUT_YOUR_CONSUMER_KEY_HERE";
        final static private String CONSUMER_SECRET = "PUT_YOUR_CONSUMER_SECRET_HERE";

    I'm well aware of the mantra 'secrecy is not security', and obfuscation really only slightly increases the amount of effort required to extract the keys. I disagree with their statement that 'obfuscation is good'. What should I do to protect the keys, then? Is obfuscation good enough, or should I consider something more elaborate?
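
    One widely suggested alternative, sketched below under stated assumptions (the endpoint name and payload are illustrative, not Dropbox's API), is to keep the consumer secret on a server you control and have the app request only short-lived tokens from it, so the secret itself never ships inside the APK:

        # Client-side sketch in Python for brevity; an Android app would make the
        # same HTTPS call. "requests" is the standard third-party HTTP library.
        import requests

        TOKEN_ENDPOINT = "https://yourserver.example/api/token"  # hypothetical

        def fetch_short_lived_token(session_credentials: dict) -> str:
            """Our server holds the consumer key/secret and performs the OAuth
            exchange; the device only ever sees a revocable, short-lived token."""
            resp = requests.post(TOKEN_ENDPOINT, json=session_credentials, timeout=10)
            resp.raise_for_status()
            return resp.json()["token"]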

    Read the article

  • Asterisk + FreePBX + GoTalk. Inbound route not working.

    - by user289581
    I'm running Asterisk 1.6.2.6 and FreePBX 2.7.0. My trunk is configured as follows:

    Outgoing Settings
    Trunk name: GoTalk
    Peer Details:

        host=sip.gotalk.com
        username=09xxxxxx
        secret=YNxxxxxx
        type=peer
        fromuser=09xxxxxx
        fromdomain=sip.gotalk.com
        canreinvite=no
        insecure=very

    Incoming Settings
    User Context: 09xxxxx
    User Details:

        username=09xxxxx
        fromuser=09xxxxx
        type=peer
        secret=YNxxxxx
        insecure=very
        host=dynamic
        fromdomain=sip.gotalk.com
        context=from-pstn

    Register String: 09xxxxxx:[email protected]/09xxxxxx

    I have an inbound route called Incoming, with DID 09xxxxxx, diverted to local extension 200. When I do a SIP trace and dial my telephone number 0741xxxxx, I just get failure beeps. I never see any SIP traffic from GoTalk to my Asterisk server trying to connect the call. It seems I'm not registering correctly for incoming calls, because GoTalk aren't sending them to me. I am correct in using the GoTalk username 09xxxxxx as the DID, aren't I? I've tried using my phone number but it makes no difference.

    Read the article

  • How to skip the login page and go straight to the allow page in my Facebook app?

    - by user1035140
    I have a problem regarding my app: when I go to it, it shows me a login page instead of the allow page. It always displays the login page first and only then the allow page. I tried other apps: as a first-time user, the allow page appears directly, with no login page shown. My question is: how do I skip the login page and go directly to the allow page? Here is my app link: https://apps.facebook.com/christmas_testing/ and here is my Facebook PHP SDK code:

        <?php
        $fbconfig['appid' ]     = "XXXXXXXXXXXXX";
        $fbconfig['secret']     = "XXXXXXXXXXXXX";
        $fbconfig['baseUrl']    = "myserverlink";
        $fbconfig['appBaseUrl'] = "http://apps.facebook.com/christmas_testing/";

        if (isset($_GET['code'])) {
            header("Location: " . $fbconfig['appBaseUrl']);
            exit;
        }

        if (isset($_GET['request_ids'])) {
            // user comes from invitation; track them if you need
            header("Location: " . $fbconfig['appBaseUrl']);
        }

        $user = null; // Facebook user uid

        try {
            include_once "facebook.php";
        } catch (Exception $o) {
            echo '<pre>';
            print_r($o);
            echo '</pre>';
        }

        // Create our Application instance.
        $facebook = new Facebook(array(
            'appId'  => $fbconfig['appid'],
            'secret' => $fbconfig['secret'],
            'cookie' => true,
        ));

        // Facebook authentication part
        $user = $facebook->getUser();
        $loginUrl = $facebook->getLoginUrl(array(
            'scope' => 'email,publish_stream,user_birthday,user_location,user_work_history,user_about_me,user_hometown'
        ));

        if ($user) {
            try {
                // Proceed knowing you have a logged-in user who's authenticated.
                $user_profile = $facebook->api('/me');
            } catch (FacebookApiException $e) {
                // you should use error_log($e); instead of printing the info in the browser
                d($e); // d is a debug function defined at the end of this file
                $user = null;
            }
        }

        if (!$user) {
            echo "<script type='text/javascript'>top.location.href = '$loginUrl';</script>";
            exit;
        }

        // get user basic description
        $userInfo = $facebook->api("/$user");

        function d($d) {
            echo '<pre>';
            print_r($d);
            echo '</pre>';
        }
        ?>

    Read the article

  • How to use Facebook's old client library

    - by tushar
    I installed the old library from here: http://pearhub.org/get/facebook-0.1.0.tgz, extracted the facebook.php file, and wrote an index.php file in the same folder. The index.php file should just display "hello username", but the problem is that index.php, when run in the browser, does not show anything; it's blank. I have XAMPP installed on my system. Please help. My index.php code is:

        <?php
        require_once 'facebook.php';

        $api_key = 'fda501108b3b955bcf0f87a4008bc786';
        $secret  = '833b35812a3e6b379a5158b2f3f0611f';

        $facebook = new Facebook($api_key, $secret);
        $user_id = $facebook->require_login();

        // Greet the currently logged-in user!
        echo "Hello, !";
        ?>

    Read the article

  • How To Securely Store Data In MySQL Using AES_ENCRYPT

    - by Justin
    We are storing sensitive data in MySQL, and I want to use AES_ENCRYPT(data, 'my-secret-key-here'), which works great. My biggest question is: how do I secure the key? Previously I was just storing the key in a PHP file on the web server, something like:

        define("ENCRYPTION_KEY", 'my-secret-key-here');

    This really doesn't work, though, as our MySQL server and web server are the same physical machine, so if somebody gains access to the server, they can get both the encrypted data stored in MySQL and the key. Any ideas? I am thinking I need to move the key to a separate server and read it in remotely. Or, what about generating the encryption key dynamically for each piece of data? For example, taking the customer_id, running md5 on it, and then using that as the key.
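
    On the per-customer key idea: md5(customer_id) alone is weak, because anyone who can read customer_id can recompute the key. A sketch of a safer variant, assuming a master secret held somewhere other than the MySQL/web host, is to derive each per-row key with a keyed hash:

        import hashlib
        import hmac

        # Assumption: fetched at startup from a separate key server, never
        # stored on the machine that holds the encrypted rows.
        MASTER_SECRET = b"..."

        def per_row_key(customer_id: str) -> bytes:
            """Derive a 128-bit per-row key for AES_ENCRYPT; without
            MASTER_SECRET, knowing customer_id is not enough to recompute it."""
            mac = hmac.new(MASTER_SECRET, customer_id.encode("utf-8"), hashlib.sha256)
            return mac.digest()[:16]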

    Read the article

  • How you can extend Tasklists in Fusion Applications

    - by Elie Wazen
    In this post we describe the process of modifying and extending a Tasklist available in the Regional Area of a Fusion Applications UI Shell. This is particularly useful to Customers who would like to expose Setup Tasks (generally available in the Fusion Setup Manager application) in the various functional pillars workareas. Oracle Composer, the tool used to implement such extensions allows changes to be made at runtime. The example provided in this document is for an Oracle Fusion Financials page. Let us examine the case of a customer role who requires access to both, a workarea and its associated functional tasks, and to an FSM (setup) task.  Both of these tasks represent ADF Taskflows but each is accessible from a different page.  We will show how an FSM task is added to a Functional tasklist and made accessible to a user from within a single workarea, eliminating the need to navigate between the FSM application and the Functional workarea where transactions are conducted. In general, tasks in Fusion Applications are grouped in two ways: Setup tasks are grouped in tasklists available to implementers in the Functional Setup Manager (FSM). These Tasks are accessed by implementation users and in general do not represent daily operational tasks that fit into a functional business process and were consequently included in the FSM application. For these tasks, the primary organizing principle is precedence between tasks. If task "Manage Suppliers" has prerequisites, those tasks must precede it in a tasklist. Task Lists are organized to efficiently implement an offering. Tasks frequently performed as part of business process flows are made available as links in the tasklist of their corresponding menu workarea. The primary organizing principle in the menu and task pane entries is to group tasks that are generally accessed together. Customizing a tasklist thus becomes required for business scenarios where a task packaged under FSM as a setup task, is for a particular customer a regular maintenance task that is accessed for record updates or creation as part of normal operational activities and where the frequency of this access merits the inclusion of that task in the related operational tasklist A user with the role of maintaining Journals in General Ledger is also responsible for maintaining Chart of Accounts Mappings.  In the Fusion Financials Product Family, Manage Journals is a task available from within the Journals Menu whereas Chart of Accounts Mapping is available via FSM under the Define Chart of Accounts tasklist Figure 1. The Manage Chart of Accounts Mapping Task in FSM Figure 2. The Manage Journals Task in the Task Pane of the Journals Workarea Our goal is to simplify cross task navigation and allow the user to access both tasks from a single tasklist on a single page without having to navigate to FSM for the Mapping task and to the Journals workarea for the Manage task. To accomplish that, we use Oracle Composer to customize  the Journals tasklist by adding to it the Mapping task. Identify the Taskflow name and path of the FSM Task The first step in our process is to identify the underlying taskflow for the Manage Chart of Accounts Mappings task. We select to Setup and Maintenance from the Navigator to launch the FSM Application, and we query the task from Manage Tasklists and Tasks Figure 3. 
Task Details including Taskflow path The Manage Chart of Accounts Mapping Task Taskflow is: /WEB-INF/oracle/apps/financials/generalLedger/sharedSetup/coaMappings/ui/flow /CoaMappingsMainAreaFlow.xml#CoaMappingsMainAreaFlow We copy that value and use it later as a parameter to our new task in the customized Journals Tasklist. Customize the Journals Page A user with Administration privileges can start the run time customization directly from the Administration Menu of the Global Area.  This customization is done at the Site level and once implemented becomes available to all users with access to the Journals Workarea. Figure 4.  Customization Menu The Oracle Composer Window is displayed in the same browser and the Hierarchy of the page component is displayed and available for modification. Figure 5.  Oracle Composer In the composer Window select the PanelFormLayout node and click on the Edit Button.  Note that the selected component is simultaneously highlighted in the lower pane in the browser. In the Properties popup window, select the Tasks List and Task Properties Tab, where the user finds the hierarchy of the Tasklist and is able to Edit nodes or create new ones. src="https://blogs.oracle.com/FunctionalArchitecture/resource/TL5.jpg" Figure 6.  The Tasklist in edit mode Add a Child Task to the Tasklist In the Edit Window the user will now create a child node at the desired level in the hierarchy by selecting the immediate parent node and clicking on the insert node button.  This process requires four values to be set as described in Table 1 below. Parameter Value How to Determine the Value Focus View Id /JournalEntryPage This is the Focus View ID of the UI Shell where the Tasklist we want to customize is.  A simple way to determine this value is to copy it from any of the Standard tasks on the Tasklist Label COA Mapping This is the Display name of the Task as it will appear in the Tasklist Task Type dynamicMain If the value is dynamicMain, the page contains a new link in the Regional Area. When you click the link, a new tab with the loaded task opens Taskflowid /WEB-INF/oracle/apps/financials/generalLedger/sharedSetup/ coaMappings/ui/flow/ CoaMappingsMainAreaFlow.xml#CoaMappingsMainAreaFlow This is the Taskflow path we retrieved from the Task Definition in FSM earlier in the process Table 1.  Parameters and Values for the Task to be added to the customized Tasklist Figure 7.   The parameters window of the newly added Task   Access the FSM Task from the Journals Workarea Once the FSM task is added and its parameters defined, the user saves the record, closes the Composer making the new task immediately available to users with access to the Journals workarea (Refer to Figure 8 below). Figure 8.   The COA Mapping Task is now visible and can be invoked from the Journals Workarea   Additional Considerations If a Task Flow is part of a product that is deployed on the same app server as the Tasklist workarea then that task flow can be added to a customized tasklist in that workarea. Otherwise that task flow can be invoked from its parent product’s workarea tasklist by selecting that workarea from the Navigator menu. For Example The following Taskflows  belong respectively to the Subledger Accounting, and to the General Ledger Products.  
/WEB-INF/oracle/apps/financials/subledgerAccounting/accountingMethodSetup/mappingSets/ui/flow/MappingSetFlow.xml#MappingSetFlow /WEB-INF/oracle/apps/financials/generalLedger/sharedSetup/coaMappings/ui/flow/CoaMappingsMainAreaFlow.xml#CoaMappingsMainAreaFlow Since both the Subledger Accounting and General Ledger products are part of the LedgerApp J2EE Applicaton and are both deployed on the General Ledger Cluster Server (Figure 8 below), the user can add both of the above taskflows to the  tasklist in the  /JournalEntryPage FocusVIewID Workarea. Note:  both FSM Taskflows and Functional Taskflows can be added to the Tasklists as described in this document Figure 8.   The Topology of the Fusion Financials Product Family. Note that SubLedger Accounting and General Ledger are both deployed on the Ledger App Conclusion In this document we have shown how an administrative user can edit the Tasklist in the Regional Area of a Fusion Apps page using Oracle Composer. This is useful for cases where tasks packaged in different workareas are frequently accessed by the same user. By making these tasks available from the same page, we minimize the number of steps in the navigation the user has to do to perform their transactions and queries in Fusion Apps.  The example explained above showed that tasks classified as Setup tasks, meaning made accessible to implementation users from the FSM module can be added to the workarea of their respective Fusion application. This eliminates the need to navigate to FSM to access tasks that are both setup and regular maintenance tasks. References Oracle Fusion Applications Extensibility Guide 11g Release 1 (11.1.1.5) Part Number E16691-02 (Section 3.2) Oracle Fusion Applications Developer's Guide 11g Release 1 (11.1.4) Part Number E15524-05

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service

    - by Elton Stoneman
    We're in the process of delivering an enabling project to expose on-premise WCF services securely to Internet consumers. The Azure Service Bus Relay is doing the clever stuff, we register our on-premise service with Azure, consumers call into our .servicebus.windows.net namespace, and their requests are relayed and serviced on-premise. In theory it's all wonderfully simple; by using the relay we get lots of protocol options, free HTTPS and load balancing, and by integrating to ACS we get plenty of security options. Part of our delivery is a suite of sample consumers for the service - .NET, jQuery, PHP - and this set of posts will cover setting up the service and the consumers. Part 1: Exposing the on-premise service In theory, this is ultra-straightforward. In practice, and on a dev laptop it is - but in a corporate network with firewalls and proxies, it isn't, so we'll walkthrough some of the pitfalls. Note that I'm using the "old" Azure portal which will soon be out of date, but the new shiny portal should have the same steps available and be easier to use. We start with a simple WCF service which takes a string as input, reverses the string and returns it. The Part 1 version of the code is on GitHub here: on GitHub here: IPASBR Part 1. Configuring Azure Service Bus Start by logging into the Azure portal and registering a Service Bus namespace which will be our endpoint in the cloud. Give it a globally unique name, set it up somewhere near you (if you’re in Europe, remember Europe (North) is Ireland, and Europe (West) is the Netherlands), and  enable ACS integration by ticking "Access Control" as a service: Authenticating and authorizing to ACS When we try to register our on-premise service as a listener for the Service Bus endpoint, we need to supply credentials, which means only trusted service providers can act as listeners. We can use the default "owner" credentials, but that has admin permissions so a dedicated service account is better (Neil Mackenzie has a good post On Not Using owner with the Azure AppFabric Service Bus with lots of permission details). Click on "Access Control Service" for the namespace, navigate to Service Identities and add a new one. Give the new account a sensible name and description: Let ACS generate a symmetric key for you (this will be the shared secret we use in the on-premise service to authenticate as a listener), but be sure to set the expiration date to something usable. The portal defaults to expiring new identities after 1 year - but when your year is up *your identity will expire without warning* and everything will stop working. In production, you'll need governance to manage identity expiration and a process to make sure you renew identities and roll new keys regularly. The new service identity needs to be authorized to listen on the service bus endpoint. This is done through claim mapping in ACS - we'll set up a rule that says if the nameidentifier in the input claims has the value serviceProvider, in the output we'll have an action claim with the value Listen. In the ACS portal you'll see that there is already a Relying Party Application set up for ServiceBus, which has a Default rule group. 
Edit the rule group and click Add to add this new rule: The values to use are: Issuer: Access Control Service Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier Input claim value: serviceProvider Output claim type: net.windows.servicebus.action Output claim value: Listen When your service namespace and identity are set up, open the Part 1 solution and put your own namespace, service identity name and secret key into the file AzureConnectionDetails.xml in Solution Items, e.g: <azure namespace="sixeyed-ipasbr">    <!-- ACS credentials for the listening service (Part1):-->   <service identityName="serviceProvider"            symmetricKey="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>  </azure> Build the solution, and the T4 template will generate the Web.config for the service project with your Azure details in the transportClientEndpointBehavior:           <behavior name="SharedSecret">             <transportClientEndpointBehavior credentialType="SharedSecret">               <clientCredentials>                 <sharedSecret issuerName="serviceProvider"                               issuerSecret="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>               </clientCredentials>             </transportClientEndpointBehavior>           </behavior> , and your service namespace in the Azure endpoint:         <!-- Azure Service Bus endpoints -->          <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"                   binding="netTcpRelayBinding"                   contract="Sixeyed.Ipasbr.Services.IFormatService"                   behaviorConfiguration="SharedSecret">         </endpoint> The sample project is hosted in IIS, but it won't register with Azure until the service is activated. Typically you'd install AppFabric 1.1 for Widnows Server and set the service to auto-start in IIS, but for dev just navigate to the local REST URL, which will activate the service and register it with Azure. Testing the service locally As well as an Azure endpoint, the service has a WebHttpBinding for local REST access:         <!-- local REST endpoint for internal use -->         <endpoint address="rest"                   binding="webHttpBinding"                   behaviorConfiguration="RESTBehavior"                   contract="Sixeyed.Ipasbr.Services.IFormatService" /> Build the service, then navigate to: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 - and you should see the reversed string response: If your network allows it, you'll get the expected response as before, but in the background your service will also be listening in the cloud. Good stuff! Who needs network security? Onto the next post for consuming the service with the netTcpRelayBinding.  Setting up network access to Azure But, if you get an error, it's because your network is secured and it's doing something to stop the relay working. The Service Bus relay bindings try to use direct TCP connections to Azure, so if ports 9350-9354 are available *outbound*, then the relay will run through them. If not, the binding steps down to standard HTTP, and issues a CONNECT across port 443 or 80 to set up a tunnel for the relay. 
If your network security guys are doing their job, the first option will be blocked by the firewall, and the second option will be blocked by the proxy, so you'll get this error: System.ServiceModel.CommunicationException: Unable to reach sixeyed-ipasbr.servicebus.windows.net via TCP (9351, 9352) or HTTP (80, 443) - and that will probably be the start of lots of discussions. Network guys don't really like giving servers special permissions for the web proxy, and they really don't like opening ports, so they'll need to be convinced about this. The resolution in our case was to put up a dedicated box in a DMZ, tinker with the firewall and the proxy until we got a relay connection working, then run some traffic which the the network guys monitored to do a security assessment afterwards. Along the way we hit a few more issues, diagnosed mainly with Fiddler and Wireshark: System.Net.ProtocolViolationException: Chunked encoding upload is not supported on the HTTP/1.0 protocol - this means the TCP ports are not available, so Azure tries to relay messaging traffic across HTTP. The service can access the endpoint, but the proxy is downgrading traffic to HTTP 1.0, which does not support tunneling, so Azure can’t make its connection. We were using the Squid proxy, version 2.6. The Squid project is incrementally adding HTTP 1.1 support, but there's no definitive list of what's supported in what version (here are some hints). System.ServiceModel.Security.SecurityNegotiationException: The X.509 certificate CN=servicebus.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The evocation function was unable to check revocation because the revocation server was offline. - by this point we'd given up on the HTTP proxy and opened the TCP ports. We got this error when the relay binding does it's authentication hop to ACS. The messaging traffic is TCP, but the control traffic still goes over HTTP, and as part of the ACS authentication the process checks with a revocation server to see if Microsoft’s ACS cert is still valid, so the proxy still needs some clearance. The service account (the IIS app pool identity) needs access to: www.public-trust.com mscrl.microsoft.com We still got this error periodically with different accounts running the app pool. We fixed that by ensuring the machine-wide proxy settings are set up, so every account uses the correct proxy: netsh winhttp set proxy proxy-server="http://proxy.x.y.z" - and you might need to run this to clear out your credential cache: certutil -urlcache * delete If your network guys end up grudgingly opening ports, they can restrict connections to the IP address range for your chosen Azure datacentre, which might make them happier - see Windows Azure Datacenter IP Ranges. After all that you've hopefully got an on-premise service listening in the cloud, which you can consume from pretty much any technology.

    Read the article

  • Guide to MySQL & NoSQL, Webinar Q&A

    - by Mat Keep
    0 0 1 959 5469 Homework 45 12 6416 14.0 Normal 0 false false false EN-US JA X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin:0cm; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:12.0pt; font-family:Cambria; mso-ascii-font-family:Cambria; mso-ascii-theme-font:minor-latin; mso-hansi-font-family:Cambria; mso-hansi-theme-font:minor-latin; mso-ansi-language:EN-US;} Yesterday we ran a webinar discussing the demands of next generation web services and how blending the best of relational and NoSQL technologies enables developers and architects to deliver the agility, performance and availability needed to be successful. Attendees posted a number of great questions to the MySQL developers, serving to provide additional insights into areas like auto-sharding and cross-shard JOINs, replication, performance, client libraries, etc. So I thought it would be useful to post those below, for the benefit of those unable to attend the webinar. Before getting to the Q&A, there are a couple of other resources that maybe useful to those looking at NoSQL capabilities within MySQL: - On-Demand webinar (coming soon!) - Slides used during the webinar - Guide to MySQL and NoSQL whitepaper  - MySQL Cluster demo, including NoSQL interfaces, auto-sharing, high availability, etc.  So here is the Q&A from the event  Q. Where does MySQL Cluster fit in to the CAP theorem? A. MySQL Cluster is flexible. A single Cluster will prefer consistency over availability in the presence of network partitions. A pair of Clusters can be configured to prefer availability over consistency. A full explanation can be found on the MySQL Cluster & CAP Theorem blog post.  Q. Can you configure the number of replicas? (the slide used a replication factor of 1) Yes. A cluster is configured by an .ini file. The option NoOfReplicas sets the number of originals and replicas: 1 = no data redundancy, 2 = one copy etc. Usually there's no benefit in setting it >2. Q. Interestingly most (if not all) of the NoSQL databases recommend having 3 copies of data (the replication factor).    Yes, with configurable quorum based Reads and writes. MySQL Cluster does not need a quorum of replicas online to provide service. Systems that require a quorum need > 2 replicas to be able to tolerate a single failure. Additionally, many NoSQL systems take liberal inspiration from the original GFS paper which described a 3 replica configuration. MySQL Cluster avoids the need for a quorum by using a lightweight arbitrator. You can configure more than 2 replicas, but this is a tradeoff between incrementally improved availability, and linearly increased cost. Q. Can you have cross node group JOINS? Wouldn't that run into the risk of flooding the network? MySQL Cluster 7.2 supports cross nodegroup joins. A full cross-join can require a large amount of data transfer, which may bottleneck on network bandwidth. However, for more selective joins, typically seen with OLTP and light analytic applications, cross node-group joins give a great performance boost and network bandwidth saving over having the MySQL Server perform the join. Q. Are the details of the benchmark available anywhere? According to my calculations it results in approx. 
350k ops/sec per processor which is the largest number I've seen lately The details are linked from Mikael Ronstrom's blog The benchmark uses a benchmarking tool we call flexAsynch which runs parallel asynchronous transactions. It involved 100 byte reads, of 25 columns each. Regarding the per-processor ops/s, MySQL Cluster is particularly efficient in terms of throughput/node. It uses lock-free minimal copy message passing internally, and maximizes ID cache reuse. Note also that these are in-memory tables, there is no need to read anything from disk. Q. Is access control (like table) planned to be supported for NoSQL access mode? Currently we have not seen much need for full SQL-like access control (which has always been overkill for web apps and telco apps). So we have no plans, though especially with memcached it is certainly possible to turn-on connection-level access control. But specifically table level controls are not planned. Q. How is the performance of memcached APi with MySQL against memcached+MySQL or any other Object Cache like Ecache with MySQL DB? With the memcache API we generally see a memcached response in less than 1 ms. and a small cluster with one memcached server can handle tens of thousands of operations per second. Q. Can .NET can access MemcachedAPI? Yes, just use a .Net memcache client such as the enyim or BeIT memcache libraries. Q. Is the row level locking applicable when you update a column through memcached API? An update that comes through memcached uses a row lock and then releases it immediately. Memcached operations like "INCREMENT" are actually pushed down to the data nodes. In most cases the locks are not even held long enough for a network round trip. Q. Has anyone published an example using something like PHP? I am assuming that you just use the PHP memcached extension to hook into the memcached API. Is that correct? Not that I'm aware of but absolutely you can use it with php or any of the other drivers Q. For beginner we need more examples. Take a look here for a fully worked example Q. Can I access MySQL using Cobol (Open Cobol) or C and if so where can I find the coding libraries etc? A. There is a cobol implementation that works well with MySQL, but I do not think it is Open Cobol. Also there is a MySQL C client library that is a standard part of every mysql distribution Q. Is there a place to go to find help when testing and/implementing the NoSQL access? If using Cluster then you can use the [email protected] alias or post on the MySQL Cluster forum Q. Are there any white papers on this?  Yes - there is more detail in the MySQL Guide to NoSQL whitepaper If you have further questions, please don’t hesitate to use the comments below!
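
    To illustrate the memcached API answers above, here is a minimal sketch using a generic memcache client from Python (pymemcache; the host, port and key names are assumptions, and the Cluster's memcached endpoint must be configured separately):

        from pymemcache.client.base import Client

        client = Client(("localhost", 11211))   # assumed memcached endpoint
        client.set("user:42:visits", "0")       # write through the memcache API
        client.incr("user:42:visits", 1)        # INCREMENT is pushed to the data nodes
        print(client.get("user:42:visits"))     # b'1'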

    Read the article

  • Developing Mobile Applications: Web, Native, or Hybrid?

    - by Michelle Kimihira
    Authors: Joe Huang, Senior Principal Product Manager, Oracle Mobile Application Development Framework  and Carlos Chang, Senior Principal Product Director The proliferation of mobile devices and platforms represents a game-changing technology shift on a number of levels. Companies must decide not only the best strategic use of mobile platforms, but also how to most efficiently implement them. Inevitably, this conversation devolves to the developers, who face the task of developing and supporting mobile applications—not a simple task in light of the number of devices and platforms. Essentially, developers can choose from the following three different application approaches, each with its own set of pros and cons. Native Applications: This refers to apps built for and installed on a specific platform, such as iOS or Android, using a platform-specific software development kit (SDK).  For example, apps for Apple’s iPhone and iPad are designed to run specifically on iOS and are written in Xcode/Objective-C. Android has its own variation of Java, Windows uses C#, and so on.  Native apps written for one platform cannot be deployed on another. Native apps offer fast performance and access to native-device services but require additional resources to develop and maintain each platform, which can be expensive and time consuming. Mobile Web Applications: Unlike native apps, mobile web apps are not installed on the device; rather, they are accessed via a Web browser.  These are server-side applications that render HTML, typically adjusting the design depending on the type of device making the request.  There are no program coding constraints for writing server-side apps—they can be written in Java, C, PHP, etc., it doesn’t matter.  Instead, the server detects what type of mobile browser is pinging the server and adjusts accordingly. For example, it can deliver fully JavaScript and CSS-enabled content to smartphone browsers, while downgrading gracefully to basic HTML for feature phone browsers. Mobile apps work across platforms, but are limited to what you can do through a browser and require Internet connectivity. For certain types of applications, these constraints may not be an issue. Oracle supports mobile web applications via ADF Faces (for tablets) and ADF Mobile browser (Trinidad) for smartphone and feature phones. Hybrid Applications: As the name implies, hybrid apps combine technologies from native and mobile Web apps to gain the benefits each. For example, these apps are installed on a device, like their pure native app counterparts, while the user interface (UI) is based on HTML5.  This UI runs locally within the native container, which usually leverages the device’s browser engine.  The advantage of using HTML5 is a consistent, cross-platform UI that works well on most devices.  Combining this with the native container, which is installed on-device, provides mobile users with access to local device services, such as camera, GPS, and local device storage.  Native apps may offer greater flexibility in integrating with device native services.  However, since hybrid applications already provide device integrations that typical enterprise applications need, this is typically less of an issue.  The new Oracle ADF Mobile release is an HTML5 and Java hybrid framework that targets mobile app development to iOS and Android from one code base. So, Which is the Best Approach? The short answer is – the best choice depends on the type of application you are developing.  
For instance, animation-intensive apps such as games would favor native apps, while hybrid applications may be better suited for enterprise mobile apps because they provide multi-platform support. Just for starters, the following issues must be considered when choosing a development path. Application Complexity: How complex is the application? A quick app that accesses a database or Web service for some data to display?  You can keep it simple, and a mobile Web app may suffice. However, for a mobile/field worker type of applications that supports mission critical functionality, hybrid or native applications are typically needed. Richness of User Interactivity: What type of user experience is required for the application?  Mobile browser-based app that’s optimized for mobile UI may suffice for quick lookup or productivity type of applications.  However, hybrid/native application would typically be required to deliver highly interactive user experiences needed for field-worker type of applications.  For example, interactive BI charts/graphs, maps, voice/email integration, etc.  In the most extreme case like gaming applications, native applications may be necessary to deliver the highly animated and graphically intensive user experience. Performance: What type of performance is required by the application functionality?  For instance, for real-time look up of data over the network, mobile app performance depends on network latency and server infrastructure capabilities.  If consistent performance is required, data would typically need to be cached, which is supported on hybrid or native applications only. Connectivity and Availability: What sort of connectivity will your application require? Does the app require Web access all the time in order to always retrieve the latest data from the server? Or do the requirements dictate offline support? While native and hybrid apps can be built to operate offline, Web mobile apps require Web connectivity. Multi-platform Requirements: The terms “consumerization of IT” and BYOD (bring your own device) effectively mean that the line between the consumer and the enterprise devices have become blurred. Employees are bringing their personal mobile devices to work and are often expecting that they work in the corporate network and access back-office applications.  Even if companies restrict access to the big dogs: (iPad, iPhone, Android phones and tablets, possibly Windows Phone and tablets), trying to support each platform natively will require increasing resources and domain expertise with each new language/platform. And let’s not forget the maintenance costs, involved in upgrading new versions of each platform.   Where multi-platform support is needed, Web mobile or hybrid apps probably have the advantage. Going native, and trying to support multiple operating systems may be cost prohibitive with existing resources and developer skills. Device-Services Access:  If your app needs to access local device services, such as the camera, contacts app, accelerometer, etc., then your choices are limited to native or hybrid applications.   Fragmentation: Apple controls Apple iOS and the only concern is what version iOS is running on any given device.   Not so Android, which is open source. There are many, many versions and variants of Android running on different devices, which can be a nightmare for app developers trying to support different devices running different flavors of Android.  (Is it an Amazon Kindle Fire? a Samsung Galaxy?  A Barnes & Noble Nook?) 
This is a nightmare scenario for native apps—on the other hand, a mobile Web or hybrid app, when properly designed, can shield you from these complexities because they are based on common frameworks.  Resources: How many developers can you dedicate to building and supporting mobile application development?  What are their existing skills sets?  If you’re considering native application development due to the complexity of the application under development, factor the costs of becoming proficient on a each platform’s OS and programming language. Add another platform, and that’s another language, another SDK. On the other side of the equation, Web mobile or hybrid applications are simpler to make, and readily support more platforms, but there may be performance trade-offs. Conclusion This only scratches the surface. However, I hope to have suggested some food for thought in choosing your mobile development strategy.  Do your due diligence, search the Web, read up on mobile, talk to peers, attend events. The development team at Oracle is working hard on mobile technologies to help customers extend enterprise applications to mobile faster and effectively.  To learn more on what Oracle has to offer, check out the Oracle ADF Mobile (hybrid) and ADF Faces/ADF Mobile browser (Web Mobile) solutions from Oracle.   Additional Information Blog: ADF Blog Product Information on OTN: ADF Mobile Product Information on Oracle.com: Oracle Fusion Middleware Follow us on Twitter and Facebook Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Securing an ADF Application using OES11g: Part 1

    - by user12587121
    Future releases of the Oracle stack should allow ADF applications to be secured natively with Oracle Entitlements Server (OES). In a sequence of postings here I explore one way to achive this with the current technology, namely OES 11.1.1.5 and ADF 11.1.1.6. ADF Security Basics ADF Bascis The Application Development Framework (ADF) is Oracle’s preferred technology for developing GUI based Java applications.  It can be used to develop a UI for Swing applications or, more typically in the Oracle stack, for Web and J2EE applications.  ADF is based on and extends the Java Server Faces (JSF) technology.  To get an idea, Oracle provides an online demo to showcase ADF components. ADF can be used to develop just the UI part of an application, where, for example, the data access layer is implemented using some custom Java beans or EJBs.  However ADF also has it’s own data access layer, ADF Business Components (ADF BC) that will allow rapid integration of data from data bases and Webservice interfaces to the ADF UI component.   In this way ADF helps implement the MVC  approach to building applications with UI and data components. The canonical tutorial for ADF is to open JDeveloper, define a connection to a database, drag and drop a table from the database view to a UI page, build and deploy.  One has an application up and running very quickly with the ability to quickly integrate changes to, for example, the DB schema. ADF allows web pages to be created graphically and components like tables, forms, text fields, graphs and so on to be easily added to a page.  On top of JSF Oracle have added drag and drop tooling with JDeveloper and declarative binding of the UI to the data layer, be it database, WebService or Java beans.  An important addition is the bounded task flow which is a reusable set of pages and transitions.   ADF adds some steps to the page lifecycle defined in JSF and adds extra widgets including powerful visualizations. It is worth pointing out that the Oracle Web Center product (portal, content management and so on) is based on and extends ADF. ADF Security ADF comes with it’s own security mechanism that is exposed by JDeveloper at development time and in the WLS Console and Enterprise Manager (EM) at run time. The security elements that need to be addressed in an ADF application are: authentication, authorization of access to web pages, task-flows, components within the pages and data being returned from the model layer. One  typically relies on WLS to handle authentication and because of this users and groups will also be handled by WLS.  Typically in a Dev environment, users and groups are stored in the WLS embedded LDAP server. One has a choice when enabling ADF security (Application->Secure->Configure ADF Security) about whether to turn on ADF authorization checking or not: In the case where authorization is enabled for ADF one defines a set of roles in which we place users and then we grant access to these roles to the different ADF elements (pages or task flows or elements in a page). An important notion here is the difference between Enterprise Roles and Application Roles. The idea behind an enterprise role is that is defined in terms of users and LDAP groups from the WLS identity store.  “Enterprise” in the sense that these are things available for use to all applications that use that store.  The other kind of role is an Application Role and the idea is that  a given application will make use of Enterprise roles and users to build up a set of roles for it’s own use.  
At design time in JDeveloper, one assigns these rights, and the assignments are stored in a file called jazn-data.xml. When the ADF application is deployed to WLS, this JAZN security data is pushed in two directions: the policies and application roles go to the system-jazn-data.xml file of the WLS deployment, while the users and enterprise roles go to the WLS backing LDAP. Note the difference: after deploying the application we will see the users and enterprise roles show up in the WLS LDAP server, but the policies and application roles are defined in the system-jazn-data.xml file. To manage users and enterprise roles in the embedded WLS LDAP server, go to the domain console and then Security Realms->myrealm->Users and Groups.

For production environments (or, in the future, to share this data with OES) one would then "reassociate" this security policy and application role data to a DB schema (or an LDAP directory). This is done in the EM console by reassociating the Security Provider. This blog posting has more explanations and references on this reassociation process.

If ADF authentication and authorization are enabled, then the security policies for a deployed application can be managed in EM. Our goal, however, is to manage the application's security policies via OES and its console.

Security Requirements for an ADF Application

With this quick tour of ADF security behind us, we can see that to secure an ADF application we would expect to take care of at least the following items:

- Authentication, including a user and user-group store
- Authorization for page access
- Authorization for bounded task flow access – a bounded task flow has only one point of entry, so if we protect that entry point by calling out to OES then all the pages in the flow are protected (a sketch of such a check appears below)
- Authorization for viewing data coming from the data access layer

In the next posting we will describe a sample ADF application and the required security policies.
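As a sketch of the task flow item above, here is roughly what a programmatic entry check on a bounded task flow might look like. It assumes ADF's TaskFlowPermission class and the SecurityContext.hasPermission API; the class name, action string and task flow id are illustrative assumptions, not a definitive implementation.

import java.security.Permission;
import oracle.adf.controller.security.TaskFlowPermission;
import oracle.adf.share.ADFContext;
import oracle.adf.share.security.SecurityContext;

// Illustrative only: guards entry to a bounded task flow. Because the
// flow has a single entry point, passing this check protects every
// page inside the flow.
public class TaskFlowGuard {

    public static boolean canView(String taskFlowId) {
        SecurityContext sec = ADFContext.getCurrent().getSecurityContext();
        // "view" is the action checked when a user tries to enter the flow.
        Permission p = new TaskFlowPermission(taskFlowId, "view");
        return sec.hasPermission(p);
    }
}

A caller would pass the full task flow id, for example TaskFlowGuard.canView("/WEB-INF/order-flow.xml#order-flow"), before allowing navigation into the flow.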
References

- ADF Dev Guide: Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework – Enabling ADF Security in a Fusion Web Application
- Oracle tutorial on securing a sample ADF application (appears to require ADF 11.1.2)

    Read the article

  • Moving from Tortoise to TFS

    - by MarkPearl
The Past

A few years ago my small software company made the jump from storing code on a shared folder to source code control. At the time we evaluated a few of the options and settled on TortoiseSVN. The main motivation for going the SVN route was that we found a great plugin for Visual Studio that allowed us to avoid the command prompt for uploading changes (like I said, we are Windows programmers… command prompt bad!!) and it was free.

Up to now we have been pretty happy with SVN, as it removed many of the worries I had about how safe my code was on a shared folder and also gave us the opportunity to safely have several developers work on the same project at the same time. The only times we have been unhappy have been our SVN hell days – which pretty much occur when you are doing something out of the norm and suddenly SVN just won't resolve conflicts, or something along those lines. This happens once every 4 or 5 months and is not necessarily a problem caused directly by SVN – but a problem augmented by SVN. When you have SVN hell days you want to curse SVN!

With that in mind, I recently took another look at our source code control. I explored using Git and was very impressed by it, and I also looked at TFS. From a source code control perspective I don't want to get into a heated discussion on which one is better – but I do want to mention that I wear two hats in my organization – software developer and manager – and with the manager hat on I tend to sway towards the TFS route. So when I was given a coupon to test the DiscountASP.NET Team Foundation Server service for a year, I thought it was the perfect opportunity to try TFS in a distributed environment and also take the first step towards having an integrated development management system.

Some of the things that appeal to me about DiscountASP's offering are the following:

- Basic management/planning facilities, like to-do lists, inside Visual Studio
- Daily backup of data on the server – we are developers, not IT managers, and so the more of this I can outsource the better
- A distributed solution – all of us work remotely, so this was a big one as well

Registering and Setting Up with DiscountASP.NET

The whole registration process was simple and intuitive. The web interface is not the most visually impressive, but it is functional, and a few seconds after I clicked the last submit button an email was sitting in my inbox giving me my control panel username and suggesting that I read the "Getting Started" article. The getting started article was easy to read and understand, so no complaints there either.

Next, to set my dev environment to work. With a few references to the getting started article I completed the whole setup process in a matter of minutes. Ten minutes after initiating the whole thing I was logged into VS2010 and creating my first TFS project. With the service I signed up for, I have access for 5 users – which is sufficient for my internal needs. So, from what I can tell, to set the rest of us up on the system I just need to supply them with their user credentials and URL.

My Concerns Resolved

1) Security

So, a few concerns I had about the service. First and foremost – is it secure? I would hate for someone to get access to our code, and the whole idea of putting it up on the internet was a concern for me. Turning to the Knowledge Base on the DiscountASP website, this is one of the first questions I can see answered. According to them, it is secure.
I have extracted their comment on this below:

"Our TFS hosting service is secure. We only accept HTTPS connections, ensuring that any client-server data transmission is encrypted. At the network level, all of our systems are protected by multiple Juniper firewalls, Tipping Point's Intrusion Detection System (see Tipping Point's case study of our use here), and we also employ DDoS mitigation to add extra layers of security. Additionally, physical access to the servers is tightly restricted. Please see the security section of this Knowledge Base article for further details."

2) Web Portal Access

The other big concern I had was web portal access. Ideally I would like to give my end users access to a web portal for reporting bugs and so on. When I initially read through the FAQ on the site, it mentioned that there was web portal access – but from what I can see this is just for "users". Since I am limited to 5 users for the account, it would not be practical to set up external users from whom we could get feedback on bugs. I would be interested to know if this is possible – and if so, if someone could post it in the comments it would be much appreciated. If it isn't possible, it is a slight letdown, as we rely heavily on end-user feedback and it would have been ideal to get this within the service.

Other than those two items, I didn't have any real concerns that went unresolved.

So Where Do I Go from Here?

Time passed after the initial writing of this post, and as work whirred in and out of my inbox I still had not had a proper opportunity to give the service a test run. Recently, though, things began to slow down, and then – surprise, surprise – I had another SVN hell day. With that experience I had a new-found resolve to get our team onto TFS, and so today we are going to start using the service as a team. I am hoping that I do not have TFS hell days – but if I do, I will be sure to write about them.

In short, the verdict is still out on whether this service will be invaluable to my business or whether it will create more headaches than it is worth, BUT I am hoping it will prove invaluable. I will only really be able to determine that in a few months… till then!

    Read the article
