Search Results

Search found 17826 results on 714 pages for 'oracle news'.


  • ORACLE: 'SELECT ... ORDER BY ... ASC' but 'USA' always first

    - by Robert
    I have to write a drop-down query for countries, but USA should always be first and the rest of the countries should follow in alphabetical order. I tried the following query:

        SELECT countries_id, countries_name FROM get_countries WHERE countries_id = 138
        UNION
        SELECT countries_id, countries_name FROM get_countries WHERE countries_id != 138
        ORDER BY 2 ASC
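    Note that in the UNION version the final ORDER BY applies to the combined result set, so on its own it will not keep USA at the top. A simpler alternative (a sketch, assuming countries_id 138 really is the USA row) is a single query that sorts with a CASE expression:

        SELECT countries_id, countries_name
        FROM   get_countries
        ORDER  BY CASE WHEN countries_id = 138 THEN 0 ELSE 1 END,
                  countries_name ASC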

    Read the article

  • SUSE 12.1 Apache startup after oci8 installation

    - by DKSan
    I have got a virtual server running openSUSE 11.4 with Apache, PHP, Oracle Instant Client, and OCI8 installed through PECL. The steps it took for me to have it up and running on 11.4 were:

        # Install Instant Client
        rpm -Uvh oracle-instantclient11.2-basic-11.2.0.2.0.x86_64.rpm
        rpm -Uvh oracle-instantclient11.2-devel-11.2.0.2.0.x86_64.rpm
        # Install OCI8 through PECL
        pecl install oci8
        # Add oci8 to the PHP modules
        vi /etc/php5/conf.d/oci8.ini
        extension=oci8.so
        # Add LD_LIBRARY_PATH to Apache
        vi /etc/sysconfig/apache2
        # add to bottom of script:
        export LD_LIBRARY_PATH="/usr/lib/oracle/11.2/client64/lib"
        # Restart Apache
        /etc/init.d/apache2 restart

    Repeating the same procedure on a fresh installation of openSUSE 12.1 results in Apache throwing the following message at startup:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php5/extensions/oci8.so' - libnnz11.so: cannot open shared object file: No such file or directory in Unknown on line 0

    I can't explain why it works on 11.4 but stops working on 12.1. Can someone please point me in the right direction?
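    One workaround that often helps here (a sketch, assuming the Instant Client libraries live under /usr/lib/oracle/11.2/client64/lib as above) is to register that directory with the system linker instead of relying on the LD_LIBRARY_PATH export in /etc/sysconfig/apache2, which newer releases may no longer pass through to the httpd process:

        # register the Instant Client library path system-wide
        echo "/usr/lib/oracle/11.2/client64/lib" > /etc/ld.so.conf.d/oracle-instantclient.conf
        ldconfig
        # verify libnnz11.so is now resolvable, then restart Apache
        ldconfig -p | grep libnnz11
        /etc/init.d/apache2 restart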

    Read the article

  • GConf error and gnome does not load properly in RHEL 5.3

    - by Tim
    Hello, I am using Red Hat Enterprise Linux 5.3. I created a user oracle on the system using the following command:

        useradd -g oinstall -G dba,oper -d /home/oracle oracle

    Now, when I try to log in as the user oracle, GNOME does not load properly and I get a popup error message like the following:

        GConf error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. (Details - 1: IOR file '/tmp/gconfd-cheetahman/lock/ior' not opened successfully, no gconfd located: Permission denied 2: IOR file /tmp/gconfd-cheetahman/lock/ior not opened successfully, no gconfd located: Permission denied)

    Any way to fix this? Thank you
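    The "Permission denied" on the IOR file under /tmp is usually a plain filesystem-permission problem rather than a GConf one. A first round of checks (just a sketch of the usual suspects) is to confirm that /tmp is world-writable with the sticky bit, that the new user owns its home directory, and that no stale per-user gconfd directory is left behind:

        ls -ld /tmp                     # should show drwxrwxrwt
        chmod 1777 /tmp                 # restore the expected permissions if they differ
        ls -ld /home/oracle             # should be owned by oracle:oinstall
        chown -R oracle:oinstall /home/oracle
        rm -rf /tmp/gconfd-oracle       # remove any stale gconfd-<username> state directory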

    Read the article

  • What is the uninstall procedure for software installed via "make install" on CentOS 6.2?

    - by gkdsp
    I installed OCILIB on my CentOS 6.2 server some time ago, and now I want to install a newer version. The vendor requires an uninstall, but doesn't provide instructions. I'm guessing that's because it's trivial for people with a Linux background. http://orclib.sourceforge.net/doc/html/group_g_install.html If I installed this software using:

        step 1: # ./configure --with-oracle-headers-path=/usr/include/oracle/11.2/client64 --with-oracle-lib-path=/usr/lib/oracle/11.2/client64/lib
        step 2: # make
        step 3: # su root
        step 4: # make install
        step 5: # gcc -g -DOCI_IMPORT_LINKAGE -DOCI_CHARSET_ANSI -L/usr/lib/oracle/11.2/client64/lib -lclntsh -L/usr/local/lib -locilib conn.c -o conn

    How would I go about uninstalling this? I tried following this http://www.cyberciti.biz/faq/delete-uninstall-software-linux-commands/ but nothing was found on my disk using rpm -qa *oci* or yum list *oci*. Maybe since it wasn't installed with yum or rpm I shouldn't expect either of these to find it. Are there generic instructions for uninstalling software on Linux that I could use, or do the instructions really depend on the specific software? Any help much appreciated.
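    Because OCILIB is built with an autotools-style configure/make, the usual route (a sketch, assuming the original build directory is still on disk and that the generated Makefile provides an uninstall target, as automake-based builds normally do) is to let the Makefile remove exactly what it installed:

        cd /path/to/ocilib-source   # the directory where ./configure and make were originally run
        su root
        make uninstall              # removes the files that "make install" copied, typically under /usr/local

    If the build tree is gone, re-extracting the same version, re-running ./configure with the same options, and then running make uninstall has the same effect; the installed files can also be located manually, e.g. with ls /usr/local/lib/libocilib*.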

    Read the article

  • Virtual Machine with Bridged Adapter to Centos not accepting ssh from host machine [migrated]

    - by javadba
    I have a bridged connection on VirtualBox from an OS X 10.8.5 host to a CentOS 5.8 guest, but I suspect this is more of a general issue than specific to the host and precise version of Linux. Shown below is the networking info from VirtualBox and from within the guest. sshd is running on port 22:

        [root@oracle-linux ~]# ps -ef | grep sshd | grep -v grep
        root 3103 1 0 20:22 ? 00:00:00 /usr/sbin/sshd
        root 14994 3103 0 21:23 ? 00:00:00 sshd: root@pts/1

    Port 22 is listening:

        [root@oracle-linux ~]# netstat -an | grep 22 | grep tcp | grep LIST
        tcp 0 0 0.0.0.0:22   0.0.0.0:* LISTEN
        tcp 0 0 127.0.0.1:2207 0.0.0.0:* LISTEN
        tcp 0 0 127.0.0.1:2208 0.0.0.0:* LISTEN
        tcp 0 0 :::22        :::*      LISTEN

    Here are the IP addresses, still on the guest OS:

        [root@oracle-linux ~]# ip addr
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
           link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
           inet 127.0.0.1/8 scope host lo
           inet6 ::1/128 scope host valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
           link/ether 08:00:27:b9:e5:79 brd ff:ff:ff:ff:ff:ff
           inet 10.0.15.100/24 brd 10.0.15.255 scope global eth0
           inet6 fe80::a00:27ff:feb9:e579/64 scope link valid_lft forever preferred_lft forever
        3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
           link/ether 08:00:27:b4:86:8a brd ff:ff:ff:ff:ff:ff
           inet 10.0.3.15/24 brd 10.0.3.255 scope global eth1
           inet6 fe80::a00:27ff:feb4:868a/64 scope link valid_lft forever preferred_lft forever

    I can ssh to the guest from the guest:

        [root@oracle-linux ~]# ssh 10.0.3.15
        The authenticity of host '10.0.3.15 (10.0.3.15)' can't be established.
        RSA key fingerprint is ef:08:19:72:95:4d:e5:28:af:f3:6f:54:07:84:ba:04.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '10.0.3.15' (RSA) to the list of known hosts.
        [email protected]'s password:
        Last login: Mon Oct 21 21:24:12 2013 from 10.0.15.100

    But I can NOT ssh from the host to the guest:

        18:27:04/shared:11 $ ssh [email protected]
        ssh: connect to host 10.0.15.100 port 22: Operation timed out
        lost connection

    Here is the bridged connection info. BTW, I looked into other answers, and one of them mentioned doing "service iptables stop"; that did not help. Adapter 2 is a NAT, shown below. In case NAT is causing any issues, I shut it down and restarted networking:

        [root@oracle-linux ~]# /etc/init.d/network restart
        Shutting down interface eth0: [ OK ]
        Shutting down interface eth1:

    Still no joy:

        18:27:04/shared:11 $ ssh [email protected]
        ssh: connect to host 10.0.15.100 port 22: Operation timed out
        lost connection
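    A couple of host-side checks often narrow this down (a sketch; the VM name and host interface are assumptions to replace with your own): confirm which physical host interface the bridged adapter is attached to, and whether the guest's statically assigned 10.0.15.100 is even in the same subnet as that interface.

        # On the OS X host: list the VM's network adapters and what they are attached to
        VBoxManage showvminfo "oracle-linux" | grep -i "NIC"
        # Check the host's own address on the interface the bridge is bound to (e.g. en0)
        ifconfig en0 | grep "inet "
        # If the host LAN is, say, 192.168.1.0/24, a guest fixed at 10.0.15.100 will never be reachable;
        # give eth0 an address in the host's subnet (or use DHCP) or re-bind the bridge, then re-test:
        ping -c 3 10.0.15.100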

    Read the article

  • What tool can record multiple parallel streams to files of defined size?

    - by Hauke
    I would like to record multiple audio web streams like this one in parallel to an mp3 or wma file for a duration of several days. I would like to be able to limit the file size or the duration stored in each file. The tool can be for any operating system. I do not need anything fancy like song recognition, metadata or silence detection. I haven't been able to find such a piece of software so far. Example: tapping channel "News" would result in: News-090902-0000-0100.mp3, News-090902-0100-0200.mp3, etc. Who knows what tool can do this? It can be commercial software. Link in full text: 88.84.145.116:8000/listen.pls
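    If no dedicated recorder turns up, a plain shell loop with standard tools can do the splitting (a sketch; the stream URL is an assumption - it would be whatever address is listed inside the listen.pls playlist above - and the one-hour chunk length is arbitrary):

        #!/bin/sh
        # Record a web stream in one-hour chunks, one timestamped file per chunk.
        STREAM="http://88.84.145.116:8000/"   # assumed: the actual stream URL from inside listen.pls
        while true; do
            timeout 3600 wget -q -O "News-$(date +%y%m%d-%H%M).mp3" "$STREAM"
        done

    Dedicated stream rippers (e.g. Streamripper) can do the same job with extra features, but the loop above works on any system with wget and GNU coreutils.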

    Read the article

  • Solaris Administration Web GUI?

    - by Robert C
    I recently installed Solaris 11 x86 text install (http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html?ssSourceSiteId=ocomen) to be used as a file server running ZFS. I noticed that I'm given the bare minimum in terms of packages. Is there an official oracle web GUI for managing ZFS? I ran a netstat and it doesn't appear to have installed any webserver thats listening. I saw something from a couple years ago, but apparently it's not packaged or maintained anymore (https://blogs.oracle.com/talley/entry/manage_zfs_from_your_browser). I tried pkg install network-console, but it says that the package isn't available for my platform. Any ideas? I'd like to stick with Oracle Solaris instead of the open source alternatives, if possible.

    Read the article

  • Will a database server perform better running on 2 CPUs with 16 cores or 4 CPUs with 8 cores?

    - by AlexOdin
    What I have:
    - an online financial application (ASP.NET, C#); at peak we have 5K+ simultaneous users
    - the backend is running on Oracle 11g (active server + stand-by using Active Data Guard); at peak, 4K-5K database sessions
    - Oracle is installed on Linux 5.8 (Oracle's Unbreakable version)
    - database size: 7TB
    - disk storage: NetApp (connected with a 10GB network)
    I would like to replace the old servers (IT will purchase HP BL685C blades). Servers will have 256GB of RAM. I need your help to figure out what to do with CPUs and cores. Options:
    - 2 CPUs (2.3 GHz) with 16 cores each
    - 4 CPUs (3.0 GHz) with 8 cores each
    Question: which one should I pick? P.S. Next year we will migrate from Oracle to SQL Server. I hope whatever option you recommend will work for both platforms.

    Read the article

  • How to block access to websites on a linux box [closed]

    - by user364952
    I'm just curious how many ways people can come up with to block access to a website. I simply intended to block (my own) access to news.ycombinator.com to stop my productivity drain. The first two methods I thought of were editing the hosts file to resolve news.ycombinator.com to 0.0.0.0, or adding a rule to iptables:

        iptables -A OUTPUT -d news.ycombinator.com -j REJECT

    Disclaimer: I do realise that the above two methods are easily bypassable, which doesn't help much since this is my own machine. What other ways does the internet know?
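    For reference, the hosts-file variant mentioned above is a single line (a sketch; add any alternative hostnames the site answers to as well):

        # /etc/hosts
        0.0.0.0   news.ycombinator.com

    Note that the iptables rule resolves the hostname to its IP addresses only once, when the rule is added, so it can silently stop matching if the site's addresses change.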

    Read the article

  • LDAP: Failed to find add in mandatory or optional attribute list

    - by Manju Prabhu
    I am trying to import an LDIF file which has the following content:

        DN: cn=myUser,cn=Users,dc=us,dc=oracle,dc=com
        objectclass: top
        objectclass: person
        objectclass: organizationalPerson
        objectclass: inetorgperson
        objectclass: orcluser
        objectclass: orcluserV2
        cn: myUser
        givenname: myUser
        mail: myUser
        orclsamaccountname: myUser
        sn: myUser
        uid: myUser
        userpassword:: somepassword

        dn: cn=Administrator,cn=Groups,dc=us,dc=oracle,dc=com
        objectclass: person
        changetype: modify
        add: uniquemember
        uniquemember: cn=myUser,cn=Users,dc=us,dc=oracle,dc=com

    When I do this, LDAP throws the following error:

        javax.naming.directory.SchemaViolationException: [LDAP: error code 65 - Failed to find add in mandatory or optional attribute list.]; remaining name 'cn=Administrator,cn=Groups,dc=us,dc=oracle,dc=com'

    The user gets imported, but it is not added to the group (the group exists). What am I missing?
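    The error usually points at the stray "objectclass: person" line in the second record: once a record carries "changetype: modify" it should contain only the modify instructions, otherwise the server ends up treating "add" as an attribute of the person objectclass and rejects it. A corrected second record (a sketch based on the entry above, to be applied with an ldapmodify-style operation rather than a plain add/import) would be:

        dn: cn=Administrator,cn=Groups,dc=us,dc=oracle,dc=com
        changetype: modify
        add: uniquemember
        uniquemember: cn=myUser,cn=Users,dc=us,dc=oracle,dc=com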

    Read the article

  • Setfacl configuration issue in Linux

    - by Balualways
    I am configuring a Linux server with ACLs (Access Control Lists). It is not allowing me to perform the setfacl operation on one of the directories, /xfiles. I am able to perform setfacl on other directories such as /tmp and /op/applocal/. I am getting this error:

        root@asifdl01devv # setfacl -m user:eqtrd:rw-,user:feedmgr:r--,user::---,group::r--,mask:rw-,other:--- /xfiles/change1/testfile
        setfacl: /xfiles/change1/testfile: Operation not supported

    I have defined my /etc/fstab as:

        /dev/ROOTVG/rootlv    /            ext3    defaults         1 1
        /dev/ROOTVG/varlv     /var         ext3    defaults         1 2
        /dev/ROOTVG/optlv     /opt         ext3    defaults         1 2
        /dev/ROOTVG/crashlv   /var/crash   ext3    defaults         1 2
        /dev/ROOTVG/tmplv     /tmp         ext3    defaults         1 2
        LABEL=/boot           /boot        ext3    defaults         1 2
        tmpfs                 /dev/shm     tmpfs   defaults         0 0
        devpts                /dev/pts     devpts  gid=5,mode=620   0 0
        sysfs                 /sys         sysfs   defaults         0 0
        proc                  /proc        proc    defaults         0 0
        /dev/ROOTVG/swaplv    swap         swap    defaults         0 0
        /dev/APPVG/home       /home        ext3    defaults         1 2
        /dev/APPVG/archives   /archives    ext3    defaults         1 2
        /dev/APPVG/test       /test        ext3    defaults         1 2
        /dev/APPVG/oracle     /opt/oracle  ext3    defaults         1 2
        /dev/APPVG/ifeeds     /xfiles      ext3    defaults         1 2

    I have a Solaris server where the vfstab is defined as:

        #device                     device                       mount        FS     fsck  mount    mount
        #to mount                   to fsck                      point        type   pass  at boot  options
        fd                          -                            /dev/fd      fd     -     no       -
        /proc                       -                            /proc        proc   -     no       -
        /dev/vx/dsk/bootdg/swapvol  -                            -            swap   -     no       -
        swap                        -                            /tmp         tmpfs  -     yes      size=1024m
        /dev/vx/dsk/bootdg/rootvol  /dev/vx/rdsk/bootdg/rootvol  /            ufs    1     no       logging
        /dev/vx/dsk/bootdg/var      /dev/vx/rdsk/bootdg/var      /var         ufs    1     no       logging
        /dev/vx/dsk/bootdg/home     /dev/vx/rdsk/bootdg/home     /home        ufs    2     yes      logging
        /dev/vx/dsk/APP/test        /dev/vx/rdsk/APP/test        /test        vxfs   3     yes      -
        /dev/vx/dsk/APP/archives    /dev/vx/rdsk/APP/archives    /archives    vxfs   3     yes      -
        /dev/vx/dsk/APP/oracle      /dev/vx/rdsk/APP/oracle      /opt/oracle  vxfs   3     yes      -
        /dev/vx/dsk/APP/xfiles      /dev/vx/rdsk/APP/xfiles      /xfiles      vxfs   3     yes      -

    I am not able to find out the issue. Any help would be appreciated.
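    On ext3, "Operation not supported" from setfacl almost always means the filesystem was mounted without ACL support. A quick check and fix could look like this (a sketch; the device name is taken from the /dev/APPVG/ifeeds line in the fstab above):

        # check whether the acl option is active and whether it is set as a default mount option
        mount | grep /xfiles
        tune2fs -l /dev/APPVG/ifeeds | grep "Default mount options"
        # enable ACLs immediately, without unmounting
        mount -o remount,acl /xfiles
        # and make it permanent by changing the /xfiles line in /etc/fstab to:
        #   /dev/APPVG/ifeeds  /xfiles  ext3  defaults,acl  1 2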

    Read the article

  • ODSI + weblogic = JDBC problem

    - by Giuseppe Di Federico
    I'm currently developing a web service using ODSI through Oracle Workshop for WebLogic (formerly AquaLogic). I created a data source on WebLogic using the driver "Oracle thin driver 10g", and the test succeeds on WebLogic. (My database is Oracle 10g, 10.2.0.1.0.) The problem occurs when I try to create the Physical Data Service in Oracle Workshop. I choose the following options:
    Data source type = Relational
    Data source = [THE CORRECT NAME OF THE SOURCE SET ON WEBLOGIC]
    Database type = ???
    AquaLogic doesn't allow me to select the database type. I guess it is a problem related to the driver set on WebLogic, but I'm not sure. Does someone know the nature of my problem? Thanks

    Read the article

  • Ubuntu: move logs from /dev/tty8 to different terminal /dev/tty12 or get rid of it.

    - by Casual Coder
    I want to know how to move or get rid of the /dev/tty8 log output in Ubuntu 9.10. /dev/tty7 is my regular X session. When I switch user to a test account where I can try and test setups and configs, I end up at the next available console, i.e. /dev/tty9, because /dev/tty8 is taken by log output. Where can I configure this? All I've found related to /dev/tty8 is commented lines in /etc/rsyslog.d/50-default.conf. I changed it like this:

        daemon,mail.*;\
            news.=crit;news.=err;news.=notice;\
            *.=debug;*.=info;\
            *.=notice;*.=warn    /dev/tty12

    And I've got nice log output on /dev/tty12, but where is the configuration for the log output on /dev/tty8? How can I change it?

    Read the article

  • Mac Mail.app & Smart Mailbox conditions

    - by Michael Birchall
    Hey guys, I'm having some trouble with my Smart Mailbox setup. I've got a Smart Mailbox named "Unread" that contains messages matching any of the following conditions:
    - Message is Unread
    - Message is not in Mailbox News (AT) lovejungle.com
    - Message is not in Mailbox Info (AT) lovejungle.com
    For some reason, it is still displaying messages from either News@ or Info@. I've removed either "Not in News@" or "Info@" and it still shows messages from each inbox. Any ideas on what's set up wrong? Screenshot: http://i.imgur.com/IHquL.jpg

    Read the article

  • added shell script to sudoers still getting permission denied

    - by Bill S
    I don't understand this - other uses of sudo work fine.

        [oracle@o plugins]$ su
        Password:
        [root@ plugins]# su nrpe
        bash-3.2$ /home/oracle/obiee/instances/instance1/bifoundation/OracleBIApplication/coreapplication/setup/bi-init.sh
        bash: /home/oracle/obiee/instances/instance1/bifoundation/OracleBIApplication/coreapplication/setup/bi-init.sh: Permission denied
        bash-3.2$ sudo -l
        Matching Defaults entries for nrpe on this host:
            env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"
        Runas and Command-specific defaults for nrpe:
        User nrpe may run the following commands on this host:
            (ALL) NOPASSWD: /home/oracle/obiee/instances/instance1/bifoundation/OracleBIApplication/coreapplication/setup/bi-init.sh
        bash-3.2$
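    The sudoers entry only takes effect when the command is actually run through sudo; executing the script directly as nrpe still uses nrpe's own (insufficient) file permissions, hence the Permission denied. A minimal check is simply:

        # run the script via sudo so the (ALL) NOPASSWD rule applies
        sudo /home/oracle/obiee/instances/instance1/bifoundation/OracleBIApplication/coreapplication/setup/bi-init.sh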

    Read the article

  • jQuery removeClass Not Working

    - by Wade D Ouellet
    Hey, My site is here: http://treethink.com What I have going is a news ticker on the right that is inside a jquery function. The function starts right away and extracts the news ticker and then retracts it as it should. When a navigation it adds a class to the div, which the function then checks for to see if it should stop extracting/retracting. This all works great. The problem I am having is that after the close button is clicked (in the content window), the removeClass won't work. This means that it keeps thinking the window is open and therefore the conditional statement inside the function won't let it extract and retract again. With a simple firebug check, it's showing that the class isn't being removed period. It's not the cboxClose id that it's not finding because I tried changing that to just "a" tags in general and it still wouldn't work so it's something to do with the jQuery for sure. Someone also suggested a quick alert() to check if the callback is working but I'm not sure what this is. Here is the code: /* News Ticker */ /* Initially hide all news items */ $('#ticker1').hide(); $('#ticker2').hide(); $('#ticker3').hide(); var randomNum = Math.floor(Math.random()*3); /* Pick random number */ newsTicker(); function newsTicker() { if (!$("#ticker").hasClass("noTicker")) { $("#ticker").oneTime(2000,function(i) { /* Do the first pull out once */ $('div#ticker div:eq(' + randomNum + ')').show(); /* Select div with random number */ $("#ticker").animate({right: "0"}, {duration: 800 }); /* Pull out ticker with random div */ }); $("#ticker").oneTime(15000,function(i) { /* Do the first retract once */ $("#ticker").animate({right: "-450"}, {duration: 800}); /* Retract ticker */ $("#ticker").oneTime(1000,function(i) { /* Afterwards */ $('div#ticker div:eq(' + (randomNum) + ')').hide(); /* Hide that div */ }); }); $("#ticker").everyTime(16500,function(i) { /* Everytime timer gets to certain point */ /* Show next div */ randomNum = (randomNum+1)%3; $('div#ticker div:eq(' + (randomNum) + ')').show(); $("#ticker").animate({right: "0"}, {duration: 800}); /* Pull out right away */ $("#ticker").oneTime(15000,function(i) { /* Afterwards */ $("#ticker").animate({right: "-450"}, {duration: 800});/* Retract ticker */ }); $("#ticker").oneTime(16000,function(i) { /* Afterwards */ /* Hide all divs */ $('#ticker1').hide(); $('#ticker2').hide(); $('#ticker3').hide(); }); }); } else { $("#ticker").animate({right: "-450"}, {duration: 800}); /* Retract ticker */ $("#ticker").oneTime(1000,function(i) { /* Afterwards */ $('div#ticker div:eq(' + (randomNum) + ')').hide(); /* Hide that div */ }); $("#ticker").stopTime(); } } /* when nav item is clicked re-run news ticker function but give it new class to prevent activity */ $("#nav li").click(function() { $("#ticker").addClass("noTicker"); newsTicker(); }); /* when close button is clicked re-run news ticker function but take away new class so activity can start again */ $("#cboxClose").click(function() { $("#ticker").removeClass("noTicker"); newsTicker(); }); Thanks, Wade
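    If the #cboxClose button is injected into the DOM by the lightbox only when the content window is opened, a handler bound at page load with $("#cboxClose").click(...) never attaches to it, so the noTicker class is never removed. One way around this (a sketch; the selector and jQuery version are assumptions) is a delegated handler bound to the document, which also catches elements created later:

        /* jQuery 1.7+: delegate the click so dynamically created close buttons are caught */
        $(document).on("click", "#cboxClose", function () {
            $("#ticker").removeClass("noTicker");
            newsTicker();
        });

        /* On the older jQuery versions of that era the equivalent was:
           $("#cboxClose").live("click", function () { ... }); */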

    Read the article

  • Quotas - Using quotas on ZFSSA shares and projects and users

    - by Steve Tunstall
    So you don't want your users to fill up your entire storage pool with their MP3 files, right? Good idea to make some quotas. There's some good tips and tricks here, including a helpful workflow (a script) that will allow you to set a default quota on all of the users of a share at once. Let's start with some basics. I mad a project called "small" and inside it I made a share called "Share1". You can set quotas on the project level, which will affect all of the shares in it, or you can do it on the share level like I am here. Go the the share's General property page. First, I'm using a Windows client, so I need to make sure I have my SMB mountpoint. Do you know this trick yet? Go to the Protocol page of the share. See the SMB section? It needs a resource name to make the UNC path for the SMB (Windows) users. You do NOT have to type this name in for every share you make! Do this at the Project level. Before you make any shares, go to the Protocol properties of the Project, and set the SMB Resource name to "On". This special code will automatically make the SMB resource name of every share in the project the same as the share name. Note the UNC path name I got below. Since I did this at the Project level, I didn't have to lift a finger for it to work on every share I make in this project. Simple. So I have now mapped my Windows "Z:" drive to this Share1. I logged in as the user "Joe". Note that my computer shows my Z: drive as 34GB, which is the entire size of my Pool that this share is in. Right now, Joe could fill this drive up and it would fill up my pool.  Now, go back to the General properties of Share1. In the "Space Usage" area, over on the right, click on the "Show All" text under the Users & Groups section. Sure enough, Joe and some other users are in here and have some data. Note this is also a handy window to use just to see how much space your users are using in any given share.  Ok, Joe owes us money from lunch last week, so we want to give him a quota of 100MB. Type his name in the Users box. Notice how it now shows you how much data he's currently using. Go ahead and give him a 100M quota and hit the Apply button. If I go back to "Show All", I can see that Joe now has a quota, and no one else does. Sure enough, as soon as I refresh my screen back on Joe's client, he sees that his Z: drive is now only 100MB, and he's more than half way full.  That was easy enough, but what if you wanted to make the whole share have a quota, so that the share itself, no matter who uses it, can only grow to a certain size? That's even easier. Just use the Quota box on the left hand side. Here, I use a Quota on the share of 300MB.  So now I log off as Joe, and log in as Steve. Even though Steve does NOT have a quota, it is showing my Z: drive as 300MB. This would effect anyone, INCLUDING the ROOT user, becuase you specified the Quota to be on the SHARE, not on a person.  Note that back in the Share, if you click the "Show All" text, the window does NOT show Steve, or anyone else, to have a quota of 300MB. Yet we do, because it's on the share itself, not on any user, so this panel does not see that. Ok, here is where it gets FUN.... Let's say you do NOT want a quota on the SHARE, because you want SOME people, like Root and yourself, to have FULL access to it and you want the ability to fill the whole thing up if you darn well feel like it. HOWEVER, you want to give the other users a quota. 
HOWEVER, you have, say, 200 users, and you do NOT feel like typing in each of their names and giving them each a quota, and they are not all members of an AD global group you could use or anything like that. Hmmmmmm.... No worries, mate. We have a handy-dandy script that can do this for us. Now, this script was written a few years back by Tim Graves, one of our ZFSSA engineers out of the UK. This is not my script. It is NOT supported by Oracle support in any way. It does work fine with the 2011.1.4 code as best as I can tell, but Oracle, and I, are NOT responsible for ANYTHING that you do with this script. Furthermore, I will NOT give you this script, so do not ask me for it. You need to get this from your local Oracle storage SC. I will give it to them. I want this only going to my fellow SCs, who can then work with you to have it and show you how it works. Here's what it does... Once you add this workflow to the Maintenance-->Workflows section, you click it once to run it. Nothing seems to happen at this point, but something did. Go back to any share or project. You will see that you now have four new, custom properties on the bottom. Do NOT touch the bottom two properties, EVER. Only touch the top two. Here, I'm going to give my users a default quota of about 40MB each. The beauty of this script is that it will only affect users that do NOT already have any kind of personal quota. It will only change people who have no quota at all. It does not affect the Root user. After I hit Apply on the Share screen, nothing will happen until I go back and run the script again. The first time you run it, it creates the custom properties. The second and all subsequent times you run it, it checks the shares for any users, and applies your quota number to each one of them, UNLESS they already have one set. Notice in the readout below how it did NOT apply to my Joe user, since Joe had a quota set. Sure enough, when I go back to the "Show All" in the share properties, all of the users who did not have a quota now have one of 39.1MB. Hmmm... I did my math wrong, didn't I? That's OK, I'll just change the number of the Custom Default quota again. Here, I am adding a zero on the end. After I click Apply, and then run the script again, all of my users, except Joe, now have a quota of 391MB. You can customize a person at any time. Here, I took the Steve user, and specifically gave him a Quota of zero. Now when I run the script again, he is different from the rest, so he is no longer affected by the script. Under Show All, I see that Joe is at 100, and Steve has no Quota at all. I can do this all day long. Yes, you will have to re-run the script every time new users get added. The script only applies the default quota to users that are present at the time the script is run. However, it would be a simple thing to schedule the script to run each night, or to make an alert to run the script when certain events occur. For you power users, if you ever want to delete these custom properties and remove the script completely, you will find these properties under the "Schema" section under the Shares section. You can remove them here. There's no need to, however; they don't hurt a thing if you just don't use them. I hope these tips have helped you out there. Quotas can be fun.

    Read the article

  • Facebook: Hide Your Status Updates From Your Boss/Ex or Any Specific Friend

    - by Gopinath
    Sometimes we want to hide our status updates from specific people who are already accepted as friends on Facebook. Do you wonder why we would need to accept someone as a friend and then hide status updates from them? Well, maybe you have to accept a friend request from your boss but would certainly love to hide your status updates, as well as other Facebook activities, from him. Something similar goes for a few annoying friends whom you can't de-friend but would like to hide your updates from. Thankfully Facebook provides fine-grained privacy options for controlling what we want to share and with whom we want to share it. It's very easy to block one or more specific friends from seeing your status updates. Here are the step-by-step instructions:
    1. Log in to Facebook and go to the Privacy Settings page. It shows a page similar to the one in the image below.
    2. Click on the "Customize settings" link.
    3. Expand the privacy options available in the section Things I Share -> Posts by me. Choose Customise from the list of available options.
    4. Type the list of unwanted friends' names into the input box of the section "Hide this from". Here is a screen grab of a couple of my friends whom I added for writing this post.
    5. Click the Save Settings button.
    That's all. Facebook will ensure that these people will not see your status updates in their news feed. Enjoyed the Facebook tip? Join Tech Dreams on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • What's up with LDoms: Part 1 - Introduction & Basic Concepts

    - by Stefan Hinker
    LDoms - the correct name is Oracle VM Server for SPARC - have been around for quite a while now.  But to my surprise, I get more and more requests to explain how they work or to give advise on how to make good use of them.  This made me think that writing up a few articles discussing the different features would be a good idea.  Now - I don't intend to rewrite the LDoms Admin Guide or to copy and reformat the (hopefully) well known "Beginners Guide to LDoms" by Tony Shoumack from 2007.  Those documents are very recommendable - especially the Beginners Guide, although based on LDoms 1.0, is still a good place to begin with.  However, LDoms have come a long way since then, and I hope to contribute to their adoption by discussing how they work and what features there are today.  In this and the following posts, I will use the term "LDoms" as a common abbreviation for Oracle VM Server for SPARC, just because it's a lot shorter and easier to type (and presumably, read). So, just to get everyone on the same baseline, lets briefly discuss the basic concepts of virtualization with LDoms.  LDoms make use of a hypervisor as a layer of abstraction between real, physical hardware and virtual hardware.  This virtual hardware is then used to create a number of guest systems which each behave very similar to a system running on bare metal:  Each has its own OBP, each will install its own copy of the Solaris OS and each will see a certain amount of CPU, memory, disk and network resources available to it.  Unlike some other type 1 hypervisors running on x86 hardware, the SPARC hypervisor is embedded in the system firmware and makes use both of supporting functions in the sun4v SPARC instruction set as well as the overall CPU architecture to fulfill its function. The CMT architecture of the supporting CPUs (T1 through T4) provide a large number of cores and threads to the OS.  For example, the current T4 CPU has eight cores, each running 8 threads, for a total of 64 threads per socket.  To the OS, this looks like 64 CPUs.  The SPARC hypervisor, when creating guest systems, simply assigns a certain number of these threads exclusively to one guest, thus avoiding the overhead of having to schedule OS threads to CPUs, as do typical x86 hypervisors.  The hypervisor only assigns CPUs and then steps aside.  It is not involved in the actual work being dispatched from the OS to the CPU, all it does is maintain isolation between different guests. Likewise, memory is assigned exclusively to individual guests.  Here,  the hypervisor provides generic mappings between the physical hardware addresses and the guest's views on memory.  Again, the hypervisor is not involved in the actual memory access, it only maintains isolation between guests. During the inital setup of a system with LDoms, you start with one special domain, called the Control Domain.  Initially, this domain owns all the hardware available in the system, including all CPUs, all RAM and all IO resources.  If you'd be running the system un-virtualized, this would be what you'd be working with.  To allow for guests, you first resize this initial domain (also called a primary domain in LDoms speak), assigning it a small amount of CPU and memory.  This frees up most of the available CPU and memory resources for guest domains.  IO is a little more complex, but very straightforward.  When LDoms 1.0 first came out, the only way to provide IO to guest systems was to create virtual disk and network services and attach guests to these services.  
In the meantime, several different ways to connect guest domains to IO have been developed, the most recent one being SR-IOV support for network devices released in version 2.2 of Oracle VM Server for SPARC. I will cover these more advanced features in detail later.  For now, lets have a short look at the initial way IO was virtualized in LDoms: For virtualized IO, you create two services, one "Virtual Disk Service" or vds, and one "Virtual Switch" or vswitch.  You can, of course, also create more of these, but that's more advanced than I want to cover in this introduction.  These IO services now connect real, physical IO resources like a disk LUN or a networt port to the virtual devices that are assigned to guest domains.  For disk IO, the normal case would be to connect a physical LUN (or some other storage option that I'll discuss later) to one specific guest.  That guest would be assigned a virtual disk, which would appear to be just like a real LUN to the guest, while the IO is actually routed through the virtual disk service down to the physical device.  For network, the vswitch acts very much like a real, physical ethernet switch - you connect one physical port to it for outside connectivity and define one or more connections per guest, just like you would plug cables between a real switch and a real system. For completeness, there is another service that provides console access to guest domains which mimics the behavior of serial terminal servers. The connections between the virtual devices on the guest's side and the virtual IO services in the primary domain are created by the hypervisor.  It uses so called "Logical Domain Channels" or LDCs to create point-to-point connections between all of these devices and services.  These LDCs work very similar to high speed serial connections and are configured automatically whenever the Control Domain adds or removes virtual IO. To see all this in action, now lets look at a first example.  I will start with a newly installed machine and configure the control domain so that it's ready to create guest systems. In a first step, after we've installed the software, let's start the virtual console service and downsize the primary domain.  root@sun # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-c-- UART 512 261632M 0.3% 2d 13h 58m root@sun # ldm add-vconscon port-range=5000-5100 \ primary-console primary root@sun # svcadm enable vntsd root@sun # svcs vntsd STATE STIME FMRI online 9:53:21 svc:/ldoms/vntsd:default root@sun # ldm set-vcpu 16 primary root@sun # ldm set-mau 1 primary root@sun # ldm start-reconf primary root@sun # ldm set-memory 7680m primary root@sun # ldm add-config initial root@sun # shutdown -y -g0 -i6 So what have I done: I've defined a range of ports (5000-5100) for the virtual network terminal service and then started that service.  The vnts will later provide console connections to guest systems, very much like serial NTS's do in the physical world. Next, I assigned 16 vCPUs (on this platform, a T3-4, that's two cores) to the primary domain, freeing the rest up for future guest systems.  I also assigned one MAU to this domain.  A MAU is a crypto unit in the T3 CPU.  These need to be explicitly assigned to domains, just like CPU or memory.  (This is no longer the case with T4 systems, where crypto is always available everywhere.) Before I reassigned the memory, I started what's called a "delayed reconfiguration" session.  
That avoids actually doing the change right away, which would take a considerable amount of time in this case.  Instead, I'll need to reboot once I'm all done.  I've assigned 7680MB of RAM to the primary.  That's 8GB less the 512MB which the hypervisor uses for it's own private purposes.  You can, depending on your needs, work with less.  I'll spend a dedicated article on sizing, discussing the pros and cons in detail. Finally, just before the reboot, I saved my work on the ILOM, to make this configuration available after a powercycle of the box.  (It'll always be available after a simple reboot, but the ILOM needs to know the configuration of the hypervisor after a power-cycle, before the primary domain is booted.) Now, lets create a first disk service and a first virtual switch which is connected to the physical network device igb2. We will later use these to connect virtual disks and virtual network ports of our guest systems to real world storage and network. root@sun # ldm add-vds primary-vds root@sun # ldm add-vswitch net-dev=igb2 switch-primary primary You are free to choose whatever names you like for the virtual disk service and the virtual switch.  I strongly recommend that you choose names that make sense to you and describe the function of each service in the context of your implementation.  For the vswitch, for example, you could choose names like "admin-vswitch" or "production-network" etc. This already concludes the configuration of the control domain.  We've freed up considerable amounts of CPU and RAM for guest systems and created the necessary infrastructure - console, vts and vswitch - so that guests systems can actually interact with the outside world.  The system is now ready to create guests, which I'll describe in the next section. For further reading, here are some recommendable links: The LDoms 2.2 Admin Guide The "Beginners Guide to LDoms" The LDoms Information Center on MOS LDoms on OTN

    Read the article

  • The UIManager Pattern

    - by Duncan Mills
    One of the most common mistakes that I see when reviewing ADF application code is the sin of storing UI component references, most commonly things like table or tree components, in Session or PageFlow scope. The reasons why this is bad are simple: firstly, these UI object references are not serializable, so they would not survive a session migration between servers; and secondly, there is no guarantee that the framework will re-use the same component tree from request to request, although in practice it generally does do so. So the danger here is that, at best, you end up with an NPE after your session has migrated, and at worst, you end up pinning old generations of the component tree, happily eating up your precious memory. So that's clear: we should never, ever, be storing references to components anywhere other than request scope (or maybe backing bean scope). So double-check the scope of those binding attributes that map component references into a managed bean in your applications.

    Why is it Such a Common Mistake?
    At this point I want to examine why there is this urge to hold onto these references anyway. After all, JSF will obligingly populate your backing beans with the fresh and correct reference when needed. In most cases, it seems that the rationale is down to a lack of distinction within the application between what is data and what is presentation. I think perhaps a cause of this is the logical separation between business data behind the ADF data binding (#{bindings}) façade and the UI components themselves. Developers tend to think, OK, this is my data layer behind the bindings object and everything else is just UI. Of course that's not the case. The UI layer itself will have state which is intrinsically linked to the UI presentation rather than the business model, but at the same time should not be tightly bound to a specific instance of any single UI component. So here's the problem: I think developers try and use the UI components as state-holders for this kind of data, rather than using them to represent that state. An example of this might be something like the selection state of a tabset (panelTabbed); you might be interested in knowing what the currently disclosed tab is. The temptation that leads to the component reference sin is to go and ask the tabset what the selection is. That of course is fine in context - e.g. a handler within the same request scoped bean that's got the binding to the tabset. However, it leads to problems when you subsequently want the same information outside of the immediate scope. The simple solution seems to be to chuck that component reference into session scope and then you can simply re-check in the same way, leading of course to this mistake.

    Turn it on its Head
    So the correct solution to this is to turn the problem on its head. If you are going to be interested in the value or state of some component outside of the immediate request context, then it becomes persistent state (persistent in the sense that it extends beyond the lifespan of a single request). So you need to externalize that state outside of the component and have the component reference and manipulate that state as needed, rather than owning it. This is what I call the UIManager pattern.

    Defining the Pattern
    The UIManager pattern really is very simple. The premise is that every application should define a session scoped managed bean, appropriately named UIManager, which is specifically responsible for holding this persistent UI component related state.
    The actual makeup of the UIManager class varies depending on the needs of the application and the amount of state that needs to be stored. Generally I'll start off with a Map in which individual flags can be created as required, although you could opt for a more formal set of typed member variables with getters and setters, or indeed a mix. This UIManager class is defined as a session scoped managed bean (#{uiManager}) in the faces-config.xml. The pattern is to then inject this instance of the class into any other managed bean (usually request scope) that needs it, using a managed property. So typically you'll have something like this:

        <managed-bean>
          <managed-bean-name>uiManager</managed-bean-name>
          <managed-bean-class>oracle.demo.view.state.UIManager</managed-bean-class>
          <managed-bean-scope>session</managed-bean-scope>
        </managed-bean>

    This is then injected into any backing bean that needs it:

        <managed-bean>
          <managed-bean-name>mainPageBB</managed-bean-name>
          <managed-bean-class>oracle.demo.view.MainBacking</managed-bean-class>
          <managed-bean-scope>request</managed-bean-scope>
          <managed-property>
            <property-name>uiManager</property-name>
            <property-class>oracle.demo.view.state.UIManager</property-class>
            <value>#{uiManager}</value>
          </managed-property>
        </managed-bean>

    In this case the backing bean in question needs a member variable to hold and reference the UIManager:

        private UIManager _uiManager;

    This should be exposed via a getter and setter pair with names that match the managed property name (e.g. setUiManager(UIManager _uiManager), getUiManager()). This will then give your code within the backing bean full access to the UI state. UI components in the page can, of course, directly reference the uiManager bean in their properties; for example, going back to the tab-set example, you might have something like this:

        <af:panelTabbed>
          <af:showDetailItem text="First"
                             disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['FIRST']}">
            ...
          </af:showDetailItem>
          <af:showDetailItem text="Second"
                             disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['SECOND']}">
            ...
          </af:showDetailItem>
          ...
        </af:panelTabbed>

    Here the settings member within the UIManager is a Map which contains a Map of Booleans for each tab under the MAIN_TABSET_STATE key. (Just an example; you could choose to store just an identifier for the selected tab or whatever - how you choose to store the state within the UIManager is up to you.)
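    For illustration, here is a minimal sketch of what such a UIManager bean might look like, assuming the simple Map-based store described above (the class and package names follow the faces-config entry; the structure is illustrative, not the article's actual implementation):

        package oracle.demo.view.state;

        import java.io.Serializable;
        import java.util.HashMap;
        import java.util.Map;

        // Session-scoped holder for UI state that must outlive a single request.
        // Registered in faces-config.xml as the managed bean "uiManager".
        public class UIManager implements Serializable {

            // e.g. settings.get("MAIN_TABSET_STATE") could itself hold a Map<String, Boolean> of tab flags
            private Map<String, Object> settings = new HashMap<String, Object>();

            public Map<String, Object> getSettings() {
                return settings;
            }

            public void setSettings(Map<String, Object> settings) {
                this.settings = settings;
            }
        }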

    Read the article

  • About Entitlement Grants in ADF Security of JDeveloper 11.1.1.4

    - by frank.nimphius
    Oracle JDeveloper 11.1.1.4 comes with a new ADF Security feature called "entitlement grants". This has nothing to do with Oracle Entitlement Server (OES) but is the ability to group resources into permission sets so they can be granted with a single grant statement. For example, as a good practice when organizing your projects, you may have grouped your bounded task flows by functionality and responsibility in sub folders under the WEB-INF directory. If one of the folders holds bounded task flows that are accessible to all authenticated users, you may create an entitlement grant allAuthUserBTF and select all bounded task flows that are accessible for authenticated users as resources. You can then grant allAuthUserBTF to the authenticated-role so that with only a single grant statement all selected bounded task flows are protected.

        <permission-sets>
          <permission-set>
            <name>PublicBoundedTaskFlows</name>
            <member-resources>
              <member-resource>
                <resource-name>
                  /WEB-INF/public/home-btf.xml#home-btf
                </resource-name>
                <type-name-ref>TaskFlowResourceType</type-name-ref>
                <display-name> ... 
</display-name>                 <actions>view</actions>               </member-resource>               <member-resource>                 <resource-name>                         /WEB-INF/public/preferences-btf.xml#preferences-btf                </resource-name>                 <type-name-ref>TaskFlowResourceType</type-name-ref>                 <display-name>...</display-name>                 <actions>view</actions>               </member-resource>             </member-resources>           </permission-set>   </permission-sets> The grant statement for this permission set is added as shown below <grant>   <grantee>     <principals>        <principal>             <name>authenticated-role</name>             <class>oracle.security.jps.internal.core.principals.JpsAuthenticatedRoleImpl</class>         </principal>       </principals>     </grantee>     <permission-set-refs>         <permission-set-ref>            <name>PublicBoundedTaskFlows</name>         </permission-set-ref>      </permission-set-refs> </grant>

    Read the article

  • New database profiling support in ANTS Performance Profiler

    - by Ben Emmett
    In May last year, the ANTS Performance Profiler team added the ability to profile database requests your application makes to SQL Server or Oracle. The really cool thing is that you’re shown those requests in the application’s call tree, so you can see what .NET code caused those queries to run. It’s particularly helpful if you’re using an ORM which automagically generates and runs queries for you, but which doesn’t necessarily do it in the most efficient way possible. Now by popular demand, we’ve added support for profiling MySQL (or MariaDB) and PostgreSQL, so you can see queries run against those databases too. Some of you have also said that you’re using the Devart dotConnect data providers instead of the native .NET ones, so we’ve added support for those drivers too. Hope it helps! For the record, here’s a list of supported connectors (ones in bold are new): SQL Server .NET Framework Data Provider Devart dotConnect for SQL Server Oracle .NET Framework Data Provider Oracle Data Provider for .NET Devart dotConnect for Oracle MySQL / MariaDB MySQL Connector/Net Devart dotConnect for MySQL PostgreSQL Npgsql .NET Data Provider for PostgreSQL Devart dotConnect for PostgreSQL SQL Server Compact Edition .NET Framework Data Provider for SQL Server Compact Edition Devart dotConnect for SQL Server Pro Have we missed a connector or database which you’d find useful? Tell us about it in the comments or by emailing [email protected]. Ben

    Read the article

  • Using the OAM Mobile & Social SDK to secure native mobile apps - Part 2 : OAM Mobile & Social Server configuration

    - by kanishkmahajan
    Objective  In the second part of this blog post I'll now cover configuration of OAM to secure our sample native apps developed using the iOS SDK. First, here are some key server side concepts: Application Profiles: An application profile is a logical representation of your application within OAM server. It could be a web (html/javascript) or native (iOS or Android) application. Applications may have different requirements for AuthN/AuthZ, and therefore each application that interacts with OAM Mobile & Social REST services must be uniquely defined. Service Providers: Service providers represent the back end services that are accessed by applications. With OAM Mobile & Social these services are in the areas of authentication, authorization and user profile access. A Service Provider then defines a type or class of service for authentication, authorization or user profiles. For example, the JWTAuthentication provider performs authentication and returns JWT (JSON Web Tokens) to the application. In contrast, the OAMAuthentication also provides authentication but uses OAM SSO tokens Service Profiles:  A Service Profile is a logical envelope that defines a service endpoint URL for a service provider for the OAM Mobile & Social Service. You can create multiple service profiles for a service provider to define token capabilities and service endpoints. Each service provider instance requires atleast one corresponding service profile.The  OAM Mobile & Social Service includes a pre-configured service profile for each pre-configured service provider. Service Domains: Service domains bind together application profiles and service profiles with an optional security handler. So now let's configure the OAM server. Additional details are in the OAM Documentation and this post simply provides an outline of configuration tasks required to configure OAM for securing native apps.  Configuration  Create The Application Profile Log on to the Oracle Access Management console and from System Configuration -> Mobile and Social -> Mobile Services, select "Create" under Application Profiles. You would do this  step twice - once for each of the native apps - AvitekInventory and AvitekScheduler. Enter the parameters for the new Application profile: Name:  The application name. In this example we use 'InventoryApp' for the AvitekInventory app and 'SchedulerApp' for the AvitekScheduler app. The application name configured here must match the application name in the settings for the deployed iOS application. BaseSecret: Enter a password here. This does not need to match any existing password. It is used as an encryption key between the client and the OAM server.  Mobile Configuration: Enable this checkbox for any mobile applications. This enables the SDK to collect and send Mobile specific attributes to the OAM server.  Webview: Controls the type of browser that the iOS application will use. The embedded browser (default) will render the browser within the application. External will use the system standalone browser. External can sometimes be preferable for debugging URLScheme: The URL scheme associated with the iOS apps that is also used as a custom URL scheme to register O/S handlers that will take control when OAM transfers control to device. For the AvitekInventory and the AvitekScheduler apps I used osa:// and client:// respectively. You set this scheme in Xcode while developing your iOS Apps under Info->URL Types.  Bundle Identifier : The fully qualified name of your iOS application. 
You typically set this when you create a new Xcode project or under General->Identity in Xcode. For the AvitekInventory and AvitekScheduler apps these were com.us.oracle.AvitekInventory and com.us.oracle.AvitekScheduler respectively.  Create The Service Domain Select create under Service domains. Create a name for your domain (AvitekDomain is what I've used). The name configured must match the service domain set in the iOS application settings. Under "Application Profile Selection" click the browse button. Choose the application profiles that you created in the previous step one by one. Set the InventoryApp as the SSO agent (with an automatic priority of 1) and the SchedulerApp as the SSO client. This associates these applications with this service domain and configures them in a 'circle of trust'.  Advance to the next page of the wizard to configure the services for this domain. For this example we will use the following services:  Authentication:   This will use the JWT (JSON Web Token) format authentication provider. The iOS application upon successful authentication will receive a signed JWT token from OAM Mobile & Social service. This token will be used in subsequent calls to OAM. Use 'MobileOAMAuthentication' here. Authorization:  The authorization provider. The SDK makes calls to this provider endpoint to obtain authorization decisions on resource requests. Use 'OAMAuthorization' here. User Profile Service:  This is the service that provides user profile services (attribute lookup, attribute modification). It can be any directory configured as a data source in OAM.  And that's it! We're done configuring our native apps. In the next section, let's look at some additional features that were mentioned in the earlier post that are automated by the SDK for the app developer i.e. these are areas that require no additional coding by the app developer when developing with the SDK as they only require server side configuration: Additional Configuration  Offline Authentication Select this option in the service domain configuration to allow users to log in and authenticate to the application locally. Clear the box to block users from authenticating locally. Strong Authentication By simply selecting the OAAMSecurityHandlerPlugin while configuring mobile related Service Domains, the OAM Mobile&Social service allows sophisticated device and client application registration logic as well as the advanced risk and fraud analysis logic found in OAAM to be applied to mobile authentication. Let's look at some scenarios where the OAAMSecurityHandlerPlugin gets used. First, when we configure OAM and OAAM to integrate together using the TAP scheme, then that integration kicks off by selecting the OAAMSecurityHandlerPlugin in the mobile service domain. This is how the mobile device is now prompted for KBA,OTP etc depending on the TAP scheme integration and the OAM users registered in the OAAM database. Second, when we configured the service domain, there were claim attributes there that are already pre-configured in OAM Mobile&Social service and we simply accepted the default values- these are the set of attributes that will be fetched from the device and passed to the server during registration/authentication as device profile attributes. When a mobile application requests a token through the Mobile Client SDK, the SDK logic will send the Device Profile attributes as a part of an HTTP request. 
In the next section, let's look at some additional features that were mentioned in the earlier post and are automated by the SDK for the app developer, i.e. areas that require no additional coding when developing with the SDK, as they only require server-side configuration.

Additional Configuration

Offline Authentication

Select this option in the service domain configuration to allow users to log in and authenticate to the application locally. Clear the box to block users from authenticating locally.

Strong Authentication

By simply selecting the OAAMSecurityHandlerPlugin while configuring mobile-related service domains, the OAM Mobile & Social service allows sophisticated device and client application registration logic, as well as the advanced risk and fraud analysis logic found in OAAM, to be applied to mobile authentication. Let's look at some scenarios where the OAAMSecurityHandlerPlugin gets used.

First, when we configure OAM and OAAM to integrate using the TAP scheme, that integration kicks off by selecting the OAAMSecurityHandlerPlugin in the mobile service domain. This is how the mobile device is prompted for KBA, OTP etc., depending on the TAP scheme integration and the OAM users registered in the OAAM database.

Second, when we configured the service domain, there were claim attributes already pre-configured in the OAM Mobile & Social service for which we simply accepted the default values - these are the attributes that will be fetched from the device and passed to the server during registration/authentication as device profile attributes. When a mobile application requests a token through the Mobile Client SDK, the SDK sends the device profile attributes as part of the HTTP request. This set of device profile attributes enhances security by creating an audit trail for devices that assists device identification. When the OAAM Security Plug-in is used, a particular combination of device profile attribute values is treated as a device fingerprint, known as the Digital Finger Print in the OAAM Administration Console. Each fingerprint is assigned a unique fingerprint number, and each OAAM session is associated with a fingerprint, which makes it possible to log (and audit) the devices that perform authentication and token acquisition.

Finally, if the jailbroken option is selected while configuring an application profile, the SDK detects whether a device is jailbroken based on the configured policy. If the OAAM handler is configured, the plug-in can allow or block access to the client device depending on OAAM policy, as well as detect blacklisted, lost or stolen devices and send a wipeout command that deletes all the Mobile & Social relevant data and blocks the device from future access.
Social Logins

Finally, let's complete this post by adding configuration for social logins for mobile applications. Although the Avitek sample apps do not demonstrate social logins, this would be an ideal exercise for you based on the sample code provided in the earlier post. I'll cover the server-side configuration here (with Facebook as an example) and you can retrofit the code to accommodate social logins by following the steps outlined in "Invoking Authentication Services", adding code in LoginViewController and perhaps creating a new delegate - AvitekRPDelegate - based on the description in the previous post. So, here all you will need to do is configure an application profile for social login, configure a new service domain that uses the social login application profile, register the app on Facebook and finally configure the Facebook OAuth provider in OAM with those settings.

Navigate to Mobile and Social, click on "Internet Identity Services" and create a new application profile. Here are the relevant parameters for the new application profile (note that we are not registering the social user in OAM with the configuration below, although that is a key feature as well):

Name: The application name. This must match the name of the mobile application profile created for your application under Mobile Services. We used InventoryApp for this example.

SharedSecret: Enter a password here. This does not need to match any existing password. It is used as an encryption key between the client and the OAM Mobile and Social service.

Mobile Application Return URL: After the Relying Party (social) login, the OAM Mobile & Social service will redirect to the iOS application using this URI (see the sketch at the end of this entry). This is defined under Info -> URL Types; we used 'osa', so we define this here as 'osa://'.

Login Type: Choose to allow only internet identity authentication for this exercise.

Authentication Service Endpoint: Make sure that /internetidentityauthentication is selected.

Log in to http://developers.facebook.com using your Facebook account, click on Apps and register the app as InventoryApp. Note that the consumer key and API secret are generated automatically by the Facebook OAuth server. Navigate back to OAM and, under Mobile and Social, click on "Internet Identity Services" and edit the Facebook OAuth Provider. Add the consumer key and API secret from the Facebook developers site to the Facebook OAuth Provider.

Navigate to Mobile Services and click on New to create a new service domain. In this example we call the domain "AvitekDomainRP". The type should be 'Mobile Application' and the application credential type 'User Token'. Add the application "InventoryApp" to the domain. Advance to the next page of the wizard. Select the default service profiles but ensure that the Authentication Service is set to 'InternetIdentityAuthentication'. Finish the creation of the service domain.
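If you do retrofit the social login flow, control returns to the app on the Mobile Application Return URL (osa:// in this example). Below is a minimal sketch of pulling a value out of that callback URL. The "access_token" query parameter name is an assumption for illustration only; the actual parameter names are defined by the OAM Mobile & Social response, and the SDK delegate callbacks described in the earlier post are the supported way to consume them.

  // Sketch only: the "access_token" parameter name is an assumption.
  import Foundation

  func handleReturnURL(_ url: URL) -> String? {
      // e.g. osa://login?access_token=... handed back to the app after the Facebook login completes.
      guard url.scheme == "osa",
            let components = URLComponents(url: url, resolvingAgainstBaseURL: false) else {
          return nil
      }
      return components.queryItems?.first(where: { $0.name == "access_token" })?.value
  }

This would typically be called from the same app delegate URL handler shown earlier in this entry.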

    Read the article

  • Sitemap structure for network of subdomains

    - by HaCos
I am working on a project that is a network of subdomains across two domains (domain.gr & domain.com.cy), similar to HubPages [each user gets a profile under a different subdomain and there is a personal blog for that user as well], and I think there is something wrong with our sitemap submission. Sometimes it takes weeks for new profiles to get indexed. We use one Webmaster Tools account to manage the whole network and we don't want to create a separate account for each subdomain, since there are already more than 1000. Following this post http://goo.gl/KjMCjN, I ended up with a structure of 5 sitemaps: the 1st sitemap indexes the others; the 2nd sitemap lists all user profiles under domain.gr; the 3rd sitemap lists all user profiles under domain.com.cy; the 4th sitemap lists all posts under *.domain.gr as a news sitemap http://goo.gl/8A8S9f; and the 5th sitemap lists all posts under *.domain.com.cy, again as a news sitemap. Now my questions: Should we create news sitemaps, or just list all posts in the 2nd & 3rd sitemaps? Does link ordering matter? E.g. should the most recently created user be first in the sitemap, or does it make no difference as long as the lastmod date is correct? Does anyone know how HubPages submits their sitemaps in Webmaster Tools, so that maybe we could follow their approach? Is there any alternative or better way to index this kind of schema? PS: Our network is multi-language - Greek & English are available. We use hreflang tags in the head of each page to separate the country target of each version.

    Read the article
