Search Results

Search found 27317 results on 1093 pages for 'website features'.


  • Data Source Security Part 1

    - by Steve Felts
    I’ve written a couple of articles on how to store data source security credentials using the Oracle wallet. I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources. There are more options than you might think!

    There have been several enhancements in this area in WLS 10.3.6. There are a couple more enhancements planned for release WLS 12.1.2 that I will include here for completeness. This isn’t intended as a teaser. If you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6.

    The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics. It also seems that the knowledge of how to apply some of these features isn’t written down. The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.

    Introduction to WebLogic Data Source Security Options

    By default, you define a single database user and password for a data source. You can store it in the data source descriptor or make use of the Oracle wallet. This is a very simple and efficient approach to security. All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out. That is, it’s a homogeneous connection pool and, from a security perspective, any request can get any connection (there are other aspects like affinity). Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS. No additional information is needed when you get a connection because it’s all available from the data source descriptor (or wallet).

        java.sql.Connection conn = mydatasource.getConnection();

    Note: You can enter the password as a name-value pair in the Properties field (this is not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console. The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab.

    The JDBC API can also be used to programmatically specify a database user name and password, as in the following.

        java.sql.Connection conn = mydatasource.getConnection("user", "password");

    According to the JDBC specification, this is supposed to take a database user and associated password, but different vendors implement it differently. WLS, by default, treats this as an application server user and password. The pair is authenticated to see if it’s a valid user, and that user is used for WLS security permission checks. By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification, but database credentials are one step removed from the application code. More details and the rationale are described later.
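    To make the two forms concrete, here is a minimal sketch of both calls against a data source looked up from JNDI. The JNDI name and the credentials are hypothetical, not taken from the article:

        import java.sql.Connection;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class DataSourceAuthSketch {
            public static void main(String[] args) throws Exception {
                // Hypothetical JNDI name for a WLS-managed data source.
                DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDataSource");

                // Default behavior: every pooled connection is owned by the single
                // user/password stored in the descriptor (or the Oracle wallet).
                try (Connection conn = ds.getConnection()) {
                    // ... all work runs as the one pool owner ...
                }

                // Per-user form: WLS treats this pair as a WebLogic user/password,
                // authenticates it, and then maps it to database credentials
                // through the data source credential mapper.
                try (Connection conn = ds.getConnection("wlsUser", "wlsPassword")) {
                    // ... the database identity comes from the credential mapper ...
                }
            }
        }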
    While the default approach is simple, it does mean that only one database user is doing all of the work. You can’t figure out who actually did the update, and you can’t restrict SQL operations by who is running them, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

    WebLogic Data Source Security Options

    The list below briefly describes the features available for configuring database security credentials on WebLogic data sources. It also captures information about the compatibility of these features with one another.

    Feature: User authentication (default)
      Description: Default getConnection(user, password) behavior – validate the input and use the user/password in the descriptor.
      Can be used with: Set client identifier
      Can’t be used with: Proxy session, Identity pooling, Use database credentials

    Feature: Use database credentials
      Description: Instead of using the credential mapper, use the supplied user and password directly.
      Can be used with: Set client identifier, Proxy session, Identity pooling
      Can’t be used with: User authentication, Multi Data Source

    Feature: Set client identifier
      Description: Set a client identifier property associated with the connection (Oracle and DB2 only).
      Can be used with: Everything
      Can’t be used with: (none)

    Feature: Proxy session
      Description: Set a light-weight proxy user associated with the connection (Oracle-only).
      Can be used with: Set client identifier, Use database credentials
      Can’t be used with: Identity pooling, User authentication

    Feature: Identity pooling
      Description: Heterogeneous pool of connections owned by specified users.
      Can be used with: Set client identifier, Use database credentials
      Can’t be used with: Proxy session, User authentication, Labeling, Multi Data Source, Active GridLink

    Note that all of these features are available with both XA and non-XA drivers.

    Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases – oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future). The subsequent articles will describe these features in more detail. Keep referring back to this list to see the big picture.

    Read the article

  • How to get local business nationwide exposure? [closed]

    - by guisasso
    Here's the situation: this company offers local home services (construction, etc.), but also fabricates many custom items that can be shipped nationally, and even internationally. Since I started working on this website, it has ranked pretty well on Alexa both globally and locally, and I have made many SEO improvements that doubled visits to the website in 6 months. The website is listed in many different directories (DMOZ, etc.), maps (Google Maps, etc.), business listing sites (Yelp, etc.), trade-specific websites (Angie's List, Houzz, etc.), state-specific business listings, and so on. There are many links to pictures displayed on the website and links to the website itself, and I have Google Analytics and Webmaster Tools accounts, with sitemaps, newsletters, a Facebook page... the list goes on and on. All of this has been working pretty well locally. We have had some success doing business in other states and even other countries, but it is still a pretty small percentage of the market. I also advertise on Google AdWords locally, and since that would be the obvious answer, my question is: without paid advertisement, how can I improve the visibility of this local business website nationally to attract customers in all US states?

    Read the article

  • Users can benefit from Session Tracking

    I used to work for a large dental plan marketing company a few years ago; they had a large customer-driven website that sold dental plans to consumers. Their website started tracking users as soon as they hit the web servers, and then it logged everything it could about each user. There are a lot of benefits to session tracking for both the user and the website. Users benefit from session tracking because a website can retain pertinent information for the user so that they do not have to re-enter the same information repeatedly. In addition, websites can hold specific items in a cart for each user so that they can pay for all of their items at once when they are ready to complete their purchases. Websites can also benefit from session tracking because they can determine where a specific user came from and which advertising partner brought them a sale. This information is very useful when deciding where to spend an advertising budget. There is only one real disadvantage when it comes to session tracking: users cannot really control what is actually tracked by a website. Yes, they can disable cookies and this will help, but that means that no tracking can be done at all. Most sites require users to have cookies enabled in order to make purchases or log in to their accounts.
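    As a minimal sketch of the idea in a Java servlet (the servlet, attribute names, and request parameters here are hypothetical, not from the site described above): the session keeps the user's cart between requests, and the first-seen referrer can later attribute a sale to an advertising partner.

        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import javax.servlet.http.HttpSession;

        public class CartServlet extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                // Cookie-backed session; created on the user's first request.
                HttpSession session = req.getSession(true);

                // Keep the cart in the session so the user never re-enters items.
                @SuppressWarnings("unchecked")
                List<String> cart = (List<String>) session.getAttribute("cart");
                if (cart == null) {
                    cart = new ArrayList<>();
                    session.setAttribute("cart", cart);
                }
                String item = req.getParameter("item");
                if (item != null) {
                    cart.add(item);
                }

                // Record where the user came from, once, so a later sale can be
                // attributed to the advertising partner that sent them.
                if (session.getAttribute("referrer") == null) {
                    session.setAttribute("referrer", req.getHeader("Referer"));
                }

                resp.getWriter().println("Items in cart: " + cart.size());
            }
        }

    This also illustrates the trade-off above: with cookies disabled, the container cannot tie requests to a session (unless the application encodes session IDs into every URL), so sites that depend on carts or logins typically require cookies.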

    Read the article

  • Core debugger enhancements in VS2010

    Since my team offers "parallel debugging", we refer to the team delivering all the other debugging features as the "core debugger" team. They have published a video of new VS2010 debugger features that I encourage you to watch to find out about enhancements with DataTips, breakpoints, dump debugging (incl. the IL interpreter) and the Threads window. The raw list of features with short descriptions is also here. Comments about this post welcome at the original blog.

    Read the article

  • Upgrading to 9.2 - Info You Can Use (part 2 - Get Hands On Experience)

    - by John Webb
    Our guest blogger, Rebekah Jackson, continues with a series of helpful hints on planning your upgrade to PeopleSoft 9.2.

    Get Hands-On Experience With an Easy-to-Install Demo Image

    Once you have identified the features you believe can add value (see part 1 of this series here), we recommend that you use our Demo Images to quickly create a PeopleSoft environment where you can try out the new features. The Demo Images are virtual machines that run in VirtualBox. They contain a fully functioning PeopleSoft environment, including the database, PeopleTools, and the application. These images can be loaded onto nearly any sized machine that meets the specs outlined in the documentation, including a powerful desktop machine. The image allows you to very quickly deploy a fully functioning PeopleSoft instance that you can use to explore new features and functions or run a conference room pilot.

    To take advantage of these Demo Images:

    1. Go to the Demo Images Home Page (Doc ID 1552548.1) on My Oracle Support.
    2. Use the "Demo Image Quick Start Guide" and additional documentation links on that page to download and install your desired Demo Image. New Demo Images are posted for each product area approximately every 10 weeks, available shortly after the corresponding patching image.
    3. When installed, use the Demo Image to explore the desired features and capabilities.

    Important Notes:
    - It is not required to use the Demo Images to evaluate features and functions – any 9.2 instance will support this. We recommend use of the Demo Images because the setup and configuration is dramatically faster than doing a traditional PeopleSoft install.
    - For those looking to explore new features and capabilities on PeopleSoft 9.1 releases, we have provided virtual machine images using the Oracle Virtual Machine technology. Details and links are available on Oracle’s PeopleSoft Virtualization Products page (Doc ID 1538142.1).

    Read the article

  • SQLite self-join performance

    - by Derk
    What I essentially want is to retrieve all features and values of products which have a particular feature and value. For example: I want to know all available hard drive sizes of products that have an Intel processor. I have three tables:

        product_to_value (product_id, feature_id, value_id)
        features (id, value)  -- for example Processor family, Storage size, etc.
        values (id, value)    -- for example Intel, 60GB, etc.

    The simplified query I have now:

        SELECT feat2.value AS feature, val2.value AS value
        FROM product_to_value ptv, product_to_value ptv2,
             features, features feat2, "values", "values" val2
        WHERE ptv.feature_id = features.id
          AND ptv.value_id = "values".id
          AND ptv2.product_id = ptv.product_id
          AND ptv2.feature_id = feat2.id
          AND ptv2.value_id = val2.id
          AND features.id = ?
          AND feat2.id = ?

    All columns have an index. I am using SQLite. The problem is that it's very slow (70ms per query; without the self-join it's <1ms). Is there a smarter way to fetch data like this? Or is this too much to ask from SQLite? I personally think I am simply overlooking something, as I am quite new to SQLite.

    Read the article

  • Cucumber could not find table; but it's there. What is going on?

    - by JZ
    I'm working with Cucumber and I'm running into difficulties. When I run "cucumber features", I am met with errors: Cucumber is unable to find my requests table. What obvious mistake am I making? Thank you in advance!

    Bash:

        justin-zollarss-mac-pro:conversion justinz$ cucumber features
        Using the default profile...
        /Users/justinz/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        F--

        (::) failed steps (::)

        Could not find table 'requests' (ActiveRecord::StatementInvalid)
        ./features/article_steps.rb:3
        ./features/article_steps.rb:2:in `each'
        ./features/article_steps.rb:2:in `/^I have requests named (.+)$/'
        features/manage_articles.feature:7:in `Given I have requests named Foo, Bar'

        Failing Scenarios:
        cucumber features/manage_articles.feature:6 # Scenario: Conversion

        1 scenario (1 failed)
        3 steps (1 failed, 2 skipped)
        0m0.154s
        justin-zollarss-mac-pro:conversion justinz$

    Manage_articles.feature:

        Feature: Manage Articles
          In order to make sales
          As a customer
          I want to make conversions

          Scenario: Conversion
            Given I have requests named Foo, Bar
            When I go to the list of customers
            Then I should see a new "customer"

    Article_steps.rb:

        Given /^I have requests named (.+)$/ do |firsts|
          firsts.split(', ').each do |first|
            Request.create!(:first => first)
            pending # express the regexp above with the code you wish you had
          end
        end

        Then /^I should see a new "([^"]*)"$/ do |arg1|
          pending # express the regexp above with the code you wish you had
        end

    DB schema:

        ActiveRecord::Schema.define(:version => 20100528011731) do
          create_table "requests", :force => true do |t|
            t.string   "institution"
            t.string   "website"
            t.string   "type"
            t.string   "users"
            t.string   "first"
            t.string   "last"
            t.string   "jobtitle"
            t.string   "phone"
            t.string   "email"
            t.datetime "created_at"
            t.datetime "updated_at"
          end
        end

    Read the article

  • 2 Servers 1 Database - Can I use Redis?

    - by Aust
    OK, I have a couple of questions here. First let me give you some background information.

    I'm starting a project where I have a node.js server running my application and my website running on another normal server. My application will allow multiple users simultaneous connections and updates to the database, so Redis seemed like a good fit there because of its speed and atomic functions. For someone to access my application they have to log in with an account. To get an account, they have to sign up for one through my website. So my website needs a database, but it's not important to have a database like Redis here because it doesn't need it. Which leads me to my first question:

    1. Can Redis even be used without node.js?

    It seems like it would be convenient if both of my servers were using the same database to keep track of information. In some cases, they will keep track of the same information (as in user information) and in other cases, they will be keeping track of separate information. So even if the website wouldn't be taking full advantage of all that Redis has to offer, it seems like it would be more convenient. So assuming Redis could be used in this situation, that leads to my next question:

    2. Since Redis is linked with JavaScript, how would I handle the security from my website users? What would be stopping my website users from opening Firebug or Chrome's inspector and making changes to the database?

    Maybe if I designed my site with a layout like this: apply.php - update.php - home.php, where after they submitted their form it would redirect them to the update page where the JavaScript would run, and then redirect them to the home page after the database updated. I don't really know, I'm just taking shots in the dark at this point. :) Maybe a better alternative would be to have my node.js application access its own Redis database and also have access to another MySQL database that my website also has access to. Or maybe there is another database that would be better suited for this situation other than Redis. Anyways, any direction on this matter would be greatly appreciated. :)
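    For context on question 1: Redis is not tied to node.js or to browser JavaScript. It speaks a plain TCP protocol and has client libraries for most server-side languages, so a PHP website and a node.js application can share the same Redis server. A minimal sketch in Java, assuming the Jedis client library is available (host, port, and key names here are hypothetical):

        import redis.clients.jedis.Jedis;

        public class RedisFromJava {
            public static void main(String[] args) {
                // Connect to a (hypothetical) local Redis server over plain TCP.
                try (Jedis jedis = new Jedis("localhost", 6379)) {
                    // Store and read back a user record created by the signup site;
                    // a node.js application could read the same key.
                    jedis.set("user:1001:email", "someone@example.com");
                    String email = jedis.get("user:1001:email");
                    System.out.println("email = " + email);
                }
            }
        }

    This also bears on question 2: in such a setup the browser never talks to Redis directly; only server-side code holds a connection, so a user with Firebug or Chrome's inspector can only see what the pages expose, not the database itself.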

    Read the article

  • Transfer websites and domains to new server

    - by Albert
    We currently have around 40 websites and 80+ domains/sub-domains in a shared 1&1 hosting package, and we just acquired a managed dedicated server with 1&1 as well. Now it's time to start transferring everything over to the new server. Transferring just the websites and databases wouldn't be a problem; it would take time but it's pretty straightforward. The problem comes when transferring the domains. Let me explain why.

    Many of the websites we have are accessible via sub-domains of a parent domain. Ideally, we would like to transfer the sites one by one, in order to check for each one that everything works fine on the new server. However, since we also need to transfer the domain so it's managed on the new server, that means all the websites using that domain need to already be on the new server before transferring that domain, thus not allowing the "one by one" philosophy. Another issue is the downtime when transferring the domain, from the moment it stops working in the hosting package to when it becomes active on the new server. I believe there's nothing we can do here.

    So my question is whether there's any way we can do the "one by one" transferring of the websites (and their corresponding sub-domains) in the circumstances described above. One idea I had would be:

    1. Let's say we have website A, which is accessible using subdomain.mydomain.com (and there are many other websites accessible via other sub-domains of mydomain.com).
    2. Transfer the files of website A to the new server.
    3. Point a test domain on the new server to website A's folder (the new server comes with a "test" domain).
    4. Test if website A works with that "test" domain.
    5. In the old hosting, somehow point the real sub-domain (subdomain.mydomain.com) to the new location of website A, in a way that users always see the same URL as always.
    6. Repeat 2-5 for every website belonging to the same domain.
    7. Once all are working on the new server, do the actual transfer of the domain to the new server, and then re-create all the sub-domains and point them to their corresponding websites.

    That way, users wouldn't notice that there's been a change (except for a small downtime of the websites when doing the domain transfer). The part I'm not sure about is point 5 above. Is there any way to do that? I mean, do it in a way that users see the original domain all the time in their browser, even for internal pages (so not only for the "home page", which would be sub-domain.mydomain.com, but also for example for the contact page, which would be sub-domain.mydomain.com/contact.php). Is there any way to do this? Or are we SOL and going to have to transfer everything at the same time?

    Read the article

  • DNS with name.com and Amazon S3

    - by aledalgrande
    I have a website in a bucket on Amazon S3, and recently started to get emails from Google: "Googlebot can't access your site". When I go to Webmaster Tools and try to fetch, it in fact doesn't work. Also, people in locations different from mine sometimes reported they could not access the website. Now, out of curiosity, I tried from my terminal:

        $ host xxx
        xxx is an alias for xxx.s3-website-us-west-1.amazonaws.com.
        xxx.s3-website-us-west-1.amazonaws.com is an alias for s3-website-us-west-1.amazonaws.com.
        s3-website-us-west-1.amazonaws.com has address yyy.yyy.yyy.yyy

    And when I try with dig:

        $ dig xxx

        ; <<>> DiG 9.8.3-P1 <<>> xxx
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17860
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;xxx.    IN    A

        ;; ANSWER SECTION:
        xxx.    300    IN    CNAME    xxx.s3-website-us-west-1.amazonaws.com.
        xxx.s3-website-us-west-1.amazonaws.com.    60    IN    CNAME    s3-website-us-west-1.amazonaws.com.
        s3-website-us-west-1.amazonaws.com.    60    IN    A    yyy

        ;; Query time: 1514 msec
        ;; SERVER: 75.75.75.75#53(75.75.75.75)
        ;; WHEN: Fri Aug 22 12:32:13 2014
        ;; MSG SIZE rcvd: 127

    It seems OK to me. Why would Google tell me there is a DNS error?

    UPDATE: Google also cannot fetch robots.txt, but I can fetch it from my browser.

    UPDATE 2: I have a forwarding on the root to the www.* hostname:

        $ dig thenifty.me

        ; <<>> DiG 9.8.3-P1 <<>> thenifty.me
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49286
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;thenifty.me.    IN    A

        ;; AUTHORITY SECTION:
        thenifty.me.    300    IN    SOA    ns1hwy.name.com. support.name.com. 1 10800 3600 604800 300

        ;; Query time: 148 msec
        ;; SERVER: 75.75.75.75#53(75.75.75.75)
        ;; WHEN: Fri Aug 22 13:32:56 2014
        ;; MSG SIZE rcvd: 88

    Read the article

  • Alternative to Google Maps API, so that I can use it on an HTTPS/SSL-encrypted website.

    - by Zeeshan Rang
    I did find a solution for this on the Google Maps API page, and I made the following changes as mentioned in it:

    1. Use Google Maps API for Flash version 1.9a or later.
    2. Add the following to your Flash application before the map is instantiated:

        Security.allowInsecureDomain("maps.googleapis.com");

    Ref: http://code.google.com/apis/maps/faq.html#flash_ssl

    My code looks like this, after the changes:

        <mx:TitleWindow verticalAlign="middle" horizontalAlign="center"
            xmlns:mx="http://www.adobe.com/2006/mxml"
            xmlns:maps="com.google.maps.*"
            width="1000" height="600" layout="absolute"
            backgroundAlpha="0" borderAlpha="0" borderThickness="0"
            showCloseButton="true" close="PopUpManager.removePopUp(this);">
            <mx:VBox width="70%" height="100%">
                <maps:Map id="map"
                    key="ABQIAAAA0L1JEoR6rWjh-BBQnLMtMBSVuZ5VlaqlIqiYPFMK_I5M2UTmHhSq_BJxLHiYcTDW9RxSF6HewNY7uA"
                    mapevent_mapready="onMapReady(event)"
                    width="100%" height="100%" />
            </mx:VBox>
            <mx:Script>
                <![CDATA[
                //import flashx.textLayout.formats.Direction;
                import mx.effects.AddItemAction;
                //import flashx.textLayout.factory.TruncationOptions;
                import mx.controls.Alert;
                import mx.managers.PopUpManager;
                import mx.rpc.events.ResultEvent;
                import com.adobe.serialization.json.JSON;
                import flash.events.Event;
                import com.google.maps.*;
                import com.google.maps.overlays.*;
                import com.google.maps.services.*;
                import com.google.maps.controls.ZoomControl;
                import com.google.maps.controls.PositionControl;
                import com.google.maps.controls.MapTypeControl;
                import com.google.maps.services.ClientGeocoderOptions;
                import com.google.maps.LatLng;
                import com.google.maps.Map;
                import com.google.maps.MapEvent;
                import com.google.maps.MapMouseEvent;
                import com.google.maps.MapType;
                import com.google.maps.services.ClientGeocoder;
                import com.google.maps.services.GeocodingEvent;
                import com.google.maps.overlays.Marker;
                import com.google.maps.overlays.MarkerOptions;
                import com.google.maps.InfoWindowOptions;

                private function onMapReady(event:MapEvent):void {
                    Security.allowInsecureDomain("maps.googleapis.com");
                    map.setCenter(new LatLng(41.651505,-72.094455), 13, MapType.NORMAL_MAP_TYPE);
                    map.addControl(new ZoomControl());
                    map.addControl(new PositionControl());
                    map.addControl(new MapTypeControl());
                    map.enableScrollWheelZoom();
                    map.enableContinuousZoom();
                }
                ]]>
            </mx:Script>
        </mx:TitleWindow>

    But I still get the following error using this:

        The requested URL /mapsapi/publicapi?file=flashapi&url=https%3A%2F%2Fvirtual.c7beta.com%2Findex_cloud.swf&key=ABQIAAAA0L1JEoR6rWjh-BBQnLMtMBTW_Qkp6J0z76Etz3qzo8Hg3HdUQhSnD6lqp53NB0UrBmg5Xm2DlazWqA&v=1.18&flc=xt was not found on this server.

    Any suggestions as to what I am doing wrong here, and what I should do to make this work?

    Regards,
    zee

    Read the article

  • Why this strange behavior of SqlBulkCopy in an ASP.NET website running under IIS?

    - by Pandiya Chendur
    I'm using SqlClient.SqlBulkCopy to try to bulk copy a CSV file into a database. I am getting the following error after calling the WriteToServer method:

        The given value of type String from the data source cannot be converted to type bit of the specified target column.

    Here is my code:

        dt.Columns.Add("IsDeleted", typeof(byte));
        dt.Columns.Add(new DataColumn("CreatedDate", typeof(DateTime)));
        foreach (DataRow dr in dt.Rows)
        {
            if (dr["MobileNo2"] == "" && dr["DriverName2"] == "")
            {
                dr["MobileNo2"] = null;
                dr["DriverName2"] = "";
            }
            dr["IsDeleted"] = Convert.ToByte(0);
            dr["CreatedDate"] = Convert.ToDateTime(System.DateTime.Now.ToString());
        }
        string connectionString = System.Configuration.ConfigurationManager.
            ConnectionStrings["connectionString"].ConnectionString;
        SqlBulkCopy sbc = new SqlBulkCopy(connectionString);
        sbc.DestinationTableName = "DailySchedule";
        sbc.ColumnMappings.Add("WirelessId", "WirelessId");
        sbc.ColumnMappings.Add("RegNo", "RegNo");
        sbc.ColumnMappings.Add("DriverName1", "DriverName1");
        sbc.ColumnMappings.Add("MobileNo1", "MobileNo1");
        sbc.ColumnMappings.Add("DriverName2", "DriverName2");
        sbc.ColumnMappings.Add("MobileNo2", "MobileNo2");
        sbc.ColumnMappings.Add("IsDeleted", "IsDeleted");
        sbc.ColumnMappings.Add("CreatedDate", "CreatedDate");
        sbc.WriteToServer(dt);
        sbc.Close();

    There is no error when running under the Visual Studio development server, but it gives me an error when running under IIS. Here are my SQL Server table details:

        [Id] [int] IDENTITY(1,1) NOT NULL,
        [WirelessId] [int] NULL,
        [RegNo] [nvarchar](50) NULL,
        [DriverName1] [nvarchar](50) NULL,
        [MobileNo1] [numeric](18, 0) NULL,
        [DriverName2] [nvarchar](50) NULL,
        [MobileNo2] [numeric](18, 0) NULL,
        [IsDeleted] [tinyint] NULL,
        [CreatedDate] [datetime] NULL,

    Read the article

  • log4net EventLogAppender does not work for an ASP.NET 2.0 website?

    - by Amitabh
    I have configured the log4net EventLogAppender for ASP.NET 2.0. However, it does not log anything. I have the following in my Web.config:

        <log4net>
          <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender">
            <param name="LogName" value="Test Log" />
            <param name="ApplicationName" value="Test-Web" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" />
            </layout>
          </appender>
          <root>
            <priority value="ERROR"/>
            <appender-ref ref="EventLogAppender"/>
          </root>
          <logger name="NHibernate">
            <level value="ERROR" />
            <appender-ref ref="EventLogAppender" />
          </logger>
        </log4net>

    I already have the Test-Log event log created, and the AspNet user has permission on the event log registry entry. I also have log4net configured in Global.asax Application_Start:

        log4net.Config.XmlConfigurator.Configure();

    Read the article

  • Cucumber failed scenarios not providing details (Ubuntu)

    - by user361646
    When I run cucumber from my Ubuntu server, I don't get details on why the scenarios are failing. For example, here is what I get:

        .....
        cucumber features/messaging.feature:6  # Scenario: Joe can view his inbox
        cucumber features/messaging.feature:14 # Scenario: Joe can send a message
        cucumber features/messaging.feature:26 # Scenario: Joe can view a message in his inbox
        cucumber features/messaging.feature:35 # Scenario: Joe can reply to a message
        .....

    Is there something I need to configure or pass to the cucumber command to see the details of the failed scenarios?

    Read the article
