Search Results

Search found 18096 results on 724 pages for 'let me be'.

  • Unable to use NTFS partition for Dropbox in Linux

    - by Cristian
    Dropbox won't let me choose the sync folder inside an NTFS partition. My first thought was the mount and its permissions (the Dropbox installer does let me choose my Linux home as the Dropbox home). After searching and trying several other lines, the partition is mounted via fstab with these settings:

        /dev/sda5 /mnt/documents ntfs-3g uid=1000,gid=100,dmask=027,fmask=137 0 0

    I can read and write in the partition; here is an ls output:

        24 drwxr-x--- 1 tuxcayc users 24576 Sep 2 06:42 documents

    I'm using an Arch-based distro (Manjaro), with Dropbox installed via yaourt. I guess it's still some issue with mount permissions. Any help is appreciated. Thanks.
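
    One thing worth trying - a sketch, not a verified fix: Dropbox must be able to create files and set permissions inside the sync folder, so a more permissive mount for the owning user may help. Assuming uid 1000 is the account running Dropbox, an fstab line such as:

        /dev/sda5 /mnt/documents ntfs-3g uid=1000,gid=100,umask=022 0 0

    followed by sudo mount -o remount /mnt/documents gives the owner full access while leaving the partition world-readable.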

  • Windows server's HDD spin-down daily/nightly - Does it make sense?

    - by Riccardo
    A Windows Server 2003 R2 box has the following hard disk configuration:

    - 3 internal hard disks attached to a 3Ware unit, configured as RAID 1 + a spare unit
    - 3 external USB backup disks: 2 Verbatim 1TB (Samsung HD103SI) + 1 Western Digital 1TB (WD10EADS)

    The server runs 365 days per year, 24 hours a day; however:

    - in the daytime, server/user usage is limited to the internal hard disks
    - at nighttime there is no user usage apart from scheduled maintenance tasks; basically, the server is idle from 7PM to 8AM, apart from the nightly backups (a few hours)

    I was wondering whether it makes more sense to (a) let Windows manage power savings, allowing the disks to spin down accordingly, OR keep the disks always on, to avoid premature wear due to continuous spin up/down; or (b) leave the internal disks always on and force the external disks to power down while idle (this requires third-party tools, such as Verbatim's Green Button utility). Your thoughts?

  • Parameters in an SEO URL

    - by Marius
    This should be a very simple question for SEO experts. Let's say we have the following URLs:

        http://www.test.com/some-sort-of-page
        http://www.test.com/some-sort-of-page?pgid=1189
        http://www.test.com/some-sort-of-page/page/1189
        http://www.test.com/page/1189/some-sort-of-page

    The first one is the ideal solution. What I need to do is somehow pass a resource identifier in the URL, to know exactly what the URL is pointing to, since it can point to a lot of different things. In the second URL, "pgid" specifies that the resource is a "page". URLs 3 and 4 specify the same thing differently. I do not care whether the URL is friendly to people because, let's face it, 99.9% of people will never bother to remember such a URL no matter how "friendly" it is. So the question is: which of the last 3 URLs would be the best solution for search engines? My guess is the 2nd, with the query string, but I might be wrong. Thanks for your thoughts.

    P.S. Please don't suggest using the first URL. There's no problem using it, but the question is not about that.
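
    (Whichever form wins, the server still has to recover the identifier. A hypothetical Apache rewrite for URL 3 - the handler path and parameter names are illustrative only:)

        RewriteEngine On
        # map /some-sort-of-page/page/1189 to an internal handler that receives the id
        RewriteRule ^([a-z0-9-]+)/page/([0-9]+)$ /index.php?slug=$1&pgid=$2 [L,QSA]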

  • Record demo and save as AVI for upload to YouTube?

    - by OverTheRainbow
    Hello, I need to record the demo of a program in Windows and save it as an AVI file so that I can upload it to YouTube. I tried Wink for this, but unless I overlooked something, it only saves files as Flash (FLV), which YouTube refused. Is there an open-source alternative? I don't need anything hardcore, just a tool that will let me record a demo and insert a couple of slides where the demo stops, to let the user read something and click a button to resume watching. Thank you.
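
    (Aside: if it is only the container that YouTube objects to, an FLV screencast can usually be transcoded to AVI with ffmpeg before upload - a sketch, assuming demo.flv is the recording:)

        # MPEG-4 video at a reasonable quality scale; audio is converted with defaults
        ffmpeg -i demo.flv -c:v mpeg4 -q:v 4 demo.avi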

  • Clusterware 11gR2 – Setting up an Active/Passive failover configuration

    - by Gilles Haro
    Oracle provides many interesting ways to ensure High Availability. Dataguard configurations, RAC configurations, or even both (as recommended for a Maximum Availability Architecture - MAA) are the most frequently found. But when it comes to protecting a system with an Active/Passive architecture with failover capabilities, one often thinks of expensive third-party cluster systems. Oracle Clusterware technology, which comes free with Oracle Database, is - in the minds of most people - linked to Oracle RAC and therefore rarely used to implement failover solutions. 11gR2 Clusterware - which is part of Oracle Grid Infrastructure - provides a comprehensive framework to set up automatic failover configurations. It is actually possible to make "failover-able", and therefore protect, almost every kind of application (from xclock to the more complex Application Server).

    In the next couple of lines, I will try to present the different steps to achieve this goal: have a fully operational 11gR2 database protected by automatic failover capabilities. I assume you are fluent in installing Oracle Database 11gR2 and Oracle Grid Infrastructure 11gR2 on a Linux system, and that ASM is not a problem for you (as I am using it as the shared storage). If not, please have a look at the Oracle documentation. As often, I made my tests using an Oracle VirtualBox environment. The scripts are tested and functional; unfortunately, there can always be a typo or a mistake. This blog entry is not a course on the Clusterware framework. I just hope it will let you see how powerful it is and give you the wish to go further with it…

    Prerequisites

    - 2 Linux boxes (OELCluster01 and OELCluster02) at the same OS level. I used OEL 5 Update 5 with the Enterprise Kernel.
    - Shared storage (SAN). On my VirtualBox system, I used Openfiler to simulate the SAN.
    - Oracle 11gR2 Database (11.2.0.1)
    - Oracle 11gR2 Grid Infrastructure (11.2.0.1)

    Step 1 - Install the software

    - Using asmlib, create 3 ASM disks (ASM_CRS, ASM_DTA and ASM_FRA).
    - Install Grid Infrastructure for a cluster (OELCluster01 and OELCluster02 are the 2 nodes of the cluster). Use ASM_CRS to store the Voting Disk and OCR. Use SCAN.
    - Install the Oracle Database standalone binaries on both nodes.
    - Use asmca to check/mount the disk groups on the 2 nodes.
    - Use dbca to create and configure a database on the primary node. Let's name it DB11G.
    - Copy the pfile and password file to the second node. Create the adump directory on the second node.

    Step 2 - Set up the resource to be protected

    After its creation with dbca, the database is automatically protected by the Oracle Restart technology available with Grid Infrastructure. Consequently, it restarts automatically (if possible) after a crash (e.g. kill -9 smon). A database resource has been created for that in the Cluster Registry. We can observe this with the command crsctl status resource, which shows an ora.db11g.db entry. Let's save the definition of this resource for future use:

        mkdir -p /crs/11.2.0/HA_scripts
        chown oracle:oinstall /crs/11.2.0/HA_scripts
        crsctl status resource ora.db11g.db -p > /crs/11.2.0/HA_scripts/myResource.txt

    Although very interesting, Oracle Restart is not cluster-aware and cannot restart the database on any other node of the cluster. So let's remove it from the OCR definitions; we don't need it:

        srvctl stop database -d DB11G
        srvctl remove database -d DB11G

    Instead, we need to create a new resource of a more general type: cluster_resource.
    Here are the steps to achieve this.

    Create an action script, /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh:

        #!/bin/bash
        export ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
        export ORACLE_SID=DB11G
        case $1 in
        'start')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          startup
        EOF
          RET=0
          ;;
        'stop')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          shutdown immediate
        EOF
          RET=0
          ;;
        'check')
          ok=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`
          if [ $ok = 0 ]; then
            RET=1
          else
            RET=0
          fi
          ;;
        *)
          RET=0
          ;;
        esac
        if [ $RET -eq 0 ]; then
          exit 0
        else
          exit 1
        fi

    This script must provide, at least, methods to start, stop and check the database. It is self-explanatory and contains nothing special. Just be aware that it is run as the oracle user (because of the ACL property - see later) and needs to know about the environment. It also needs to be present on every node of the cluster:

        chmod +x /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
        scp /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh oracle@OELCluster02:/crs/11.2.0/HA_scripts

    Create a new resource file, based on the information we got from the previous myResource.txt. Name it myNewResource.txt. myResource.txt is shown below. As we can see, it defines an ora.database.type resource named ora.db11g.db. A lot of properties are related to this type of resource and do not need to be used for a cluster_resource.

        NAME=ora.db11g.db
        TYPE=ora.database.type
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_FAILURE_TEMPLATE=
        ACTION_SCRIPT=
        ACTIVE_PLACEMENT=1
        AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
        AUTO_START=restore
        CARDINALITY=1
        CHECK_INTERVAL=1
        CHECK_TIMEOUT=600
        CLUSTER_DATABASE=false
        DB_UNIQUE_NAME=DB11G
        DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%)
        DEGREE=1
        DESCRIPTION=Oracle Database resource
        ENABLED=1
        FAILOVER_DELAY=0
        FAILURE_INTERVAL=60
        FAILURE_THRESHOLD=1
        GEN_AUDIT_FILE_DEST=/oracle/admin/DB11G/adump
        GEN_USR_ORA_INST_NAME=
        GEN_USR_ORA_INST_NAME@SERVERNAME(oelcluster01)=DB11G
        HOSTING_MEMBERS=
        INSTANCE_FAILOVER=0
        LOAD=1
        LOGGING_LEVEL=1
        MANAGEMENT_POLICY=AUTOMATIC
        NLS_LANG=
        NOT_RESTARTING_TEMPLATE=
        OFFLINE_CHECK_INTERVAL=0
        ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
        PLACEMENT=restricted
        PROFILE_CHANGE_TEMPLATE=
        RESTART_ATTEMPTS=2
        ROLE=PRIMARY
        SCRIPT_TIMEOUT=60
        SERVER_POOLS=ora.DB11G
        SPFILE=+DTA/DB11G/spfileDB11G.ora
        START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
        START_TIMEOUT=600
        STATE_CHANGE_TEMPLATE=
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
        STOP_TIMEOUT=600
        UPTIME_THRESHOLD=1h
        USR_ORA_DB_NAME=DB11G
        USR_ORA_DOMAIN=haroland
        USR_ORA_ENV=
        USR_ORA_FLAGS=
        USR_ORA_INST_NAME=DB11G
        USR_ORA_OPEN_MODE=open
        USR_ORA_OPI=false
        USR_ORA_STOP_MODE=immediate
        VERSION=11.2.0.1.0

    I removed the database-type-related entries from myResource.txt and modified some others to produce the following myNewResource.txt.

    - Notice the NAME property, which should not have the ora. prefix.
    - Notice the TYPE property, which is not ora.database.type but cluster_resource.
    - Notice the definition of ACTION_SCRIPT.
    - Notice HOSTING_MEMBERS, which enumerates the members of the cluster (as returned by the olsnodes command).
        NAME=DB11G.db
        TYPE=cluster_resource
        DESCRIPTION=Oracle Database resource
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_SCRIPT=/crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
        PLACEMENT=restricted
        ACTIVE_PLACEMENT=0
        AUTO_START=restore
        CARDINALITY=1
        CHECK_INTERVAL=10
        DEGREE=1
        ENABLED=1
        HOSTING_MEMBERS=oelcluster01 oelcluster02
        LOGGING_LEVEL=1
        RESTART_ATTEMPTS=1
        START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
        START_TIMEOUT=600
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
        STOP_TIMEOUT=600
        UPTIME_THRESHOLD=1h

    Register the resource. Take care of the resource type: it needs to be a cluster_resource and not an ora.database.type resource (Oracle recommendation).

        crsctl add resource DB11G.db -type cluster_resource -file /crs/11.2.0/HA_scripts/myNewResource.txt

    Step 3 - Start the resource

        crsctl start resource DB11G.db

    This command launches the ACTION_SCRIPT with a start and a check parameter on the primary node of the cluster.

    Step 4 - Test this

    We will test the setup using 2 methods.

        crsctl relocate resource DB11G.db

    This command calls the ACTION_SCRIPT (on the two nodes) to stop the database on the active node and start it on the other node. Once done, we can revert back to the original node but, this time, using a more "MS$-like" method: turn off the server on which the database is running. After a short delay, you should observe that the database is relocated to node 1.

    Conclusion

    Once the software is installed and the standalone database created (which is a rather common and usual task), the steps to reach the objective are quite easy:

    - Create an executable action script on every node of the cluster.
    - Create a resource file.
    - Create/register the resource with the OCR using the resource file.
    - Start the resource.

    This solution is a very interesting alternative to licensable third-party solutions.

    References

    - Clusterware 11gR2 documentation
    - Oracle Clusterware Resource Reference

    Gilles Haro
    Technical Expert - Core Technology, Oracle Consulting

  • Home router: Access internal server using external IP [migrated]

    - by user15863
    If I've got a typical home router - say a Netgear - which has certain ports forwarded to an internal server, is there a way to tweak the router to let me access that internal server using the external IP address from within the same network? Is there a non-enterprise-grade router that can handle this type of thing? In case that was strangely worded, let me rephrase with an example: my external IP is 1.2.3.4, my internal server is 10.4.3.100, and port 1178 is being forwarded from the router to 10.4.3.100. I'd like to be able to hit 10.4.3.100 from an internal IP of 10.4.3.10 by using the external IP of 1.2.3.4. Possible?
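
    (For reference, the feature described here is usually called NAT loopback or hairpin NAT, and many stock routers simply don't support it. On routers that expose iptables - DD-WRT, OpenWrt and the like - a hand-rolled sketch using the question's example addresses might look like this:)

        # send LAN clients that hit the external IP back to the internal server
        iptables -t nat -A PREROUTING -s 10.4.3.0/24 -d 1.2.3.4 -p tcp --dport 1178 -j DNAT --to-destination 10.4.3.100
        # masquerade so the server's replies return through the router
        iptables -t nat -A POSTROUTING -s 10.4.3.0/24 -d 10.4.3.100 -p tcp --dport 1178 -j MASQUERADE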

  • Odd behavior on Shift-{Esc,Fx}

    - by ??????? ???????????
    Sometimes, when changing between modes in Vim, I forget to take my finger off the Shift key. This innocent mistake is probably part of the luggage carried over from other terminals, but I have never seen my input treated this way. After changing from command mode to insert mode, if I hit the Esc key while the Shift key is down, Vim will display <9b> (Control Sequence Introducer) instead of switching to command mode. At least two workarounds for this intended behavior are available on the mintty site (faq, issue):

        " Avoiding escape timeout issues in vim
        :let &t_ti.="\e[?7727h"
        :let &t_te.="\e[?7727l"
        :noremap <Esc>O[ <Esc>
        :noremap! <Esc>O[ <Esc>

        " Remap escape
        :imap <special> <CSI> <Esc>

    My question is about the syntax and the meaning of the first solution. From the looks of it, it seems like t_ti is being assigned a literal value, but I'm not sure why the "&" (which looks like C's address-of operator) is required. I'm also not sure why there are two noremap statements.
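
    (A generic note for readers hitting the same syntax, not specific to mintty: in Vim script the & sigil lets :let treat an option as a variable, and .= appends to its current value instead of overwriting it, e.g.:)

        " append to the 'path' option rather than replacing it
        :let &path .= ",/usr/include"

    So the first solution appends the escape sequences to the terminal-init and terminal-exit options t_ti and t_te. The two noremap statements cover the two map families: noremap applies in normal, visual and operator-pending modes, while noremap! applies in insert and command-line modes.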

  • My (C:) drive changed from Basic to Dynamic, is this bad?

    - by bbman225
    I'm really worried here. My computer still runs, so I take this as a good sign. However, let me explain my situation: I am trying to install Ubuntu Linux, and the installer was having problems, so I went back into the partitioning tool in Windows 7 (after having successfully shrunk my C drive and created 55 GB of unallocated space) and attempted to create a new partition out of the 55 GB and make it a simple NTFS drive, so that I could let the installer wipe it clean again and format it in whatever file system it prefers. Now, after googling it and running through the process, I noticed that all of my drives, including the C drive and the one I just made, changed from type "Basic" to type "Dynamic". What is a dynamic drive, and should I be worried?

  • Distinct operator in LINQ

    - by Jalpesh P. Vadgama
    LINQ operators provide great flexibility and an easy way of coding. Let's again take one more example of an operator: Distinct. As the name suggests, it finds the distinct elements in an IEnumerable. Let's take an example of an array in a console application, and then print the result to see whether it is working or not. In this application I have an integer array that contains duplicate elements; I apply the Distinct operator to it and then print the result of the operator to actually see whether it's working. Below is the code for that.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace Experiment
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int[] intArray = { 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5 };
                    var uniqueIntegers = intArray.Distinct();
                    foreach (var uInteger in uniqueIntegers)
                    {
                        Console.WriteLine(uInteger);
                    }
                    Console.ReadKey();
                }
            }
        }

    The output is, as expected, each value printed once: 1, 2, 3, 4, 5. That's cool. Stay tuned for more. Happy programming.

    Technorati Tags: Linq, Distinct

  • Camunda BPM 7.0 on WebLogic 12c

    - by JuergenKress
    If we go on tour together with Oracle, I think we have to have camunda BPM running on the Oracle WebLogic application server 12c (WLS for short). And one of our enterprise customers asked, so I invested a Sunday and got it running (okay, to be honest, I needed quite some help from our Java EE server guru Christian). In this blog post I give a step-by-step description of how to run camunda BPM on WLS. Please note that this is not an official distribution (which would include sophisticated QA, comprehensive documentation and a proper distribution) - it was my personal hobby. And I did not fire the whole test suite against WLS, so there might be some issues. We will do the real productization as soon as we have a customer for it (let us know if this is interesting for you).

    Necessary steps

    After installing and starting up WLS (I used the zip distribution of WLS 12c, by the way) you have to:

    - Add a datasource
    - Add shared libraries
    - Add a resource adapter (for the Job Executor, using a proper WorkManager from WLS)
    - Add an EAR starting up one process engine
    - Add a WAR file containing the REST API
    - Add other WAR files (e.g. cockpit) and your own process applications

    Actually that sounds like more work than it is ;-) So let's get started.

    Add a datasource

    Add a datasource via the Administration Console (or any other convenient way on WLS - I should admit that personally I am not the WLS expert). Make sure that you target it on your server - this is not done by default. Read the full article here.

    For regular information become a member of the WebLogic Partner Community: please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: Camunda, BPM, JavaEE7, WebLogic Community, Oracle, OPN, Jürgen Kress

  • SSDT gotcha – Moving a file erases code analysis suppressions

    - by jamiet
    I discovered a little wrinkle in SSDT today that is worth knowing about if you are managing your database schemas using SSDT. In short, if a file is moved to a different folder in the project, then any code analysis suppressions that reference that file disappear from the suppression file. This makes sense if you think about it, because the paths stored in the suppression file are no longer valid, but you probably won't be aware of it until it happens to you. If you don't know what code analysis is, or what the suppression file is, then you can probably stop reading now; otherwise read on for a simple short demo.

    Let's create a new project and add a stored procedure to it called sp_dummy. Naming stored procedures with an sp_ prefix is generally frowned upon, and hence SSDT static code analysis will look for occurrences of this and flag them. So, the next thing we need to do is turn on static code analysis in the project properties. A subsequent build causes a code analysis warning, as we might expect. Let's suppose we actually don't mind stored procedures with sp_ prefixes; we can just right-click on the message to suppress it and get rid of it. That causes a suppression file to get created in our project. Notice that the suppression file contains a relative path to the file that has had the suppression placed upon it. Now, if we simply move the file within our project to a new folder, the suppression that we just created gets removed from the suppression file.

    As I alluded to above, this behaviour is intuitive, because the path originally stored in the suppression file is no longer relevant, but you're probably not going to be aware of it until it happens to you and messages that you thought you had suppressed start appearing again. Definitely one to be aware of. @Jamiet
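
    (For anyone recreating the demo, a minimal procedure like the following is enough to trigger the sp_ prefix warning; the body is arbitrary:)

        CREATE PROCEDURE [dbo].[sp_dummy]
        AS
        BEGIN
            SELECT 1 AS dummy;
        END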

  • How to keep groups when pulling with git

    - by mimrock
    I have a staging site that is a working directory of a git repository. How do I set up git to let a developer pull a branch or release without changing the group of the modified files? An example: let's say I have two developers, robin and david. They are both in the git-users group, so initially they can both have write permissions on site.php:

        -rw-rw-r-- 1 robin git-users 46068 Nov 16 12:12 site.php
        drwxrwxr-x 8 robin git-users  4096 Nov 16 14:11 .git

    After robin-server1$ git pull origin master:

        -rw-rw-r-- 1 robin robin     46068 Nov 16 12:35 site.php
        drwxrwxr-x 8 robin git-users  4096 Nov 16 14:11 .git

    Now david does not have write permissions on site.php, because the group changed from git-users to robin. From now on, david will get a permission denied when he tries to pull in this repository.
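
    (Not a confirmed answer, but the usual recipe for this situation combines git's shared-repository mode with setgid directories, so that files recreated by a pull inherit the shared group - a sketch, with /var/www/site standing in for the real path:)

        # keep repository files group-writable
        git config core.sharedRepository group
        # make the group sticky on every directory so new files inherit git-users
        chgrp -R git-users /var/www/site
        find /var/www/site -type d -exec chmod g+s {} +

    Developers additionally need a umask that preserves group write (e.g. umask 002) for this to hold across pulls.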

  • Using the OAM Mobile & Social SDK to secure native mobile apps - Part 2: OAM Mobile & Social Server configuration

    - by kanishkmahajan
    Objective

    In the second part of this blog post I'll now cover the configuration of OAM to secure our sample native apps developed using the iOS SDK. First, here are some key server-side concepts:

    - Application Profiles: an application profile is a logical representation of your application within the OAM server. It could be a web (HTML/JavaScript) or native (iOS or Android) application. Applications may have different requirements for AuthN/AuthZ, and therefore each application that interacts with the OAM Mobile & Social REST services must be uniquely defined.
    - Service Providers: service providers represent the back-end services that are accessed by applications. With OAM Mobile & Social these services are in the areas of authentication, authorization and user profile access. A service provider then defines a type or class of service for authentication, authorization or user profiles. For example, the JWTAuthentication provider performs authentication and returns JWTs (JSON Web Tokens) to the application. In contrast, OAMAuthentication also provides authentication but uses OAM SSO tokens.
    - Service Profiles: a service profile is a logical envelope that defines a service endpoint URL for a service provider for the OAM Mobile & Social Service. You can create multiple service profiles for a service provider to define token capabilities and service endpoints. Each service provider instance requires at least one corresponding service profile. The OAM Mobile & Social Service includes a pre-configured service profile for each pre-configured service provider.
    - Service Domains: service domains bind together application profiles and service profiles with an optional security handler.

    So now let's configure the OAM server. Additional details are in the OAM documentation; this post simply provides an outline of the configuration tasks required to configure OAM for securing native apps.

    Configuration

    Create the application profile. Log on to the Oracle Access Management console and, from System Configuration -> Mobile and Social -> Mobile Services, select "Create" under Application Profiles. You would do this step twice - once for each of the native apps, AvitekInventory and AvitekScheduler. Enter the parameters for the new application profile:

    - Name: the application name. In this example we use 'InventoryApp' for the AvitekInventory app and 'SchedulerApp' for the AvitekScheduler app. The application name configured here must match the application name in the settings of the deployed iOS application.
    - BaseSecret: enter a password here. This does not need to match any existing password; it is used as an encryption key between the client and the OAM server.
    - Mobile Configuration: enable this checkbox for any mobile application. This enables the SDK to collect and send mobile-specific attributes to the OAM server.
    - Webview: controls the type of browser that the iOS application will use. The embedded browser (the default) renders the browser within the application; External uses the system's standalone browser. External can sometimes be preferable for debugging.
    - URLScheme: the URL scheme associated with the iOS apps, also used as a custom URL scheme to register OS handlers that will take control when OAM transfers control to the device. For the AvitekInventory and AvitekScheduler apps I used osa:// and client:// respectively. You set this scheme in Xcode, while developing your iOS apps, under Info -> URL Types.
    - Bundle Identifier: the fully qualified name of your iOS application. You typically set this when you create a new Xcode project, or under General -> Identity in Xcode. For the AvitekInventory and AvitekScheduler apps these were com.us.oracle.AvitekInventory and com.us.oracle.AvitekScheduler respectively.

    Create the service domain. Select "Create" under Service Domains. Create a name for your domain (AvitekDomain is what I've used). The name configured must match the service domain set in the iOS application settings. Under "Application Profile Selection" click the browse button and choose the application profiles that you created in the previous step, one by one. Set InventoryApp as the SSO agent (with an automatic priority of 1) and SchedulerApp as the SSO client. This associates these applications with this service domain and configures them in a 'circle of trust'.

    Advance to the next page of the wizard to configure the services for this domain. For this example we will use the following services:

    - Authentication: this will use the JWT (JSON Web Token) format authentication provider. The iOS application, upon successful authentication, will receive a signed JWT token from the OAM Mobile & Social service. This token will be used in subsequent calls to OAM. Use 'MobileOAMAuthentication' here.
    - Authorization: the authorization provider. The SDK makes calls to this provider endpoint to obtain authorization decisions on resource requests. Use 'OAMAuthorization' here.
    - User Profile Service: this is the service that provides user profile services (attribute lookup, attribute modification). It can be any directory configured as a data source in OAM.

    And that's it! We're done configuring our native apps. In the next section, let's look at some additional features, mentioned in the earlier post, that are automated by the SDK for the app developer - i.e., areas that require no additional coding when developing with the SDK, as they only require server-side configuration.

    Additional Configuration

    Offline Authentication: select this option in the service domain configuration to allow users to log in and authenticate to the application locally. Clear the box to block users from authenticating locally.

    Strong Authentication: by simply selecting the OAAMSecurityHandlerPlugin while configuring mobile-related service domains, the OAM Mobile & Social service allows sophisticated device and client application registration logic, as well as the advanced risk and fraud analysis logic found in OAAM, to be applied to mobile authentication. Let's look at some scenarios where the OAAMSecurityHandlerPlugin gets used. First, when we configure OAM and OAAM to integrate using the TAP scheme, that integration kicks off by selecting the OAAMSecurityHandlerPlugin in the mobile service domain. This is how the mobile device is prompted for KBA, OTP, etc., depending on the TAP scheme integration and the OAM users registered in the OAAM database. Second, when we configured the service domain, there were claim attributes already pre-configured in the OAM Mobile & Social service, and we simply accepted the default values - these are the attributes that will be fetched from the device and passed to the server during registration/authentication as device profile attributes. When a mobile application requests a token through the Mobile Client SDK, the SDK logic sends the device profile attributes as part of an HTTP request. This set of device profile attributes enhances security by creating an audit trail for devices that assists device identification. When the OAAM Security Plug-in is used, a particular combination of device profile attribute values is treated as a device fingerprint, known as the Digital Fingerprint in the OAAM Administration Console. Each fingerprint is assigned a unique fingerprint number, and each OAAM session is associated with a fingerprint, which makes it possible to log (and audit) the devices that are performing authentication and token acquisition. Finally, if the jailbroken option is selected while configuring an application profile, the SDK detects that a device is jailbroken based on the configured policy; if the OAAM handler is configured, the plug-in can allow or block access for the client device depending on the OAAM policy, as well as detect blacklisted, lost or stolen devices and send a wipeout command that deletes all the Mobile & Social relevant data and blocks the device from future access.

    Social Logins

    Finally, let's complete this post by adding configuration for social logins for mobile applications. Although the Avitek sample apps do not demonstrate social logins, this would be an ideal exercise for you, based on the sample code provided in the earlier post. I'll cover the server-side configuration here (with Facebook as an example), and you can retrofit the code to accommodate social logins by following the steps outlined in "Invoking Authentication Services", adding code in LoginViewController, and maybe creating a new delegate - AvitekRPDelegate, based on the description in the previous post. So, here all you will need to do is configure an application profile for social login, configure a new service domain that uses the social login application profile, register the app on Facebook, and finally configure the Facebook OAuth provider in OAM with those settings.

    Navigate to Mobile and Social, click on "Internet Identity Services" and create a new application profile. Here are the relevant parameters for the new application profile (note that we're not registering the social user in OAM with the configuration below, although that is a key feature as well):

    - Name: the application name. This must match the name of the mobile application profile created for your application under Mobile Services. We used InventoryApp for this example.
    - SharedSecret: enter a password here. This does not need to match any existing password; it is used as an encryption key between the client and the OAM Mobile and Social service.
    - Mobile Application Return URL: after the Relying Party (social) login, the OAM Mobile & Social service will redirect to the iOS application using this URI. This is defined under Info -> URL Types; we used 'osa', so we define this here as 'osa://'.
    - Login Type: choose to allow only internet identity authentication for this exercise.
    - Authentication Service Endpoint: make sure that /internetidentityauthentication is selected.

    Log in to http://developers.facebook.com using your Facebook account, click on Apps, and register the app as InventoryApp. Note that the consumer key and API secret get generated automatically by the Facebook OAuth server. Navigate back to OAM and, under Mobile and Social, click on "Internet Identity Services" and edit the Facebook OAuth provider. Add the consumer key and API secret from the Facebook developers site to the Facebook OAuth provider.

    Navigate to Mobile Services. Click on New to create a new service domain. In this example we call the domain "AvitekDomainRP". The type should be 'Mobile Application' and the application credential type 'User Token'. Add the application "InventoryApp" to the domain. Advance to the next page of the wizard. Select the default service profiles, but ensure that the Authentication Service is set to 'InternetIdentityAuthentication'. Finish the creation of the service domain.
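
    (One client-side detail implied above: the custom URL scheme - 'osa' for the AvitekInventory app - must be declared in the app's Info.plist so iOS can hand control back to the app after login. A representative fragment; only the scheme and name values are specific to this sample:)

        <key>CFBundleURLTypes</key>
        <array>
            <dict>
                <key>CFBundleURLName</key>
                <string>com.us.oracle.AvitekInventory</string>
                <key>CFBundleURLSchemes</key>
                <array>
                    <string>osa</string>
                </array>
            </dict>
        </array>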

  • Creating a USB stick for installing CentOS 6.x using DVD1 and DVD2 ISO files

    - by user250563
    First, we create 2 partitions on the USB stick; let's say it is 16GB. The first partition is, let's say, only 1GB, and the second partition is the rest of what is available. After we "w" (write) the changes, the USB stick has 2 partitions: one of 1GB and one of more than 14GB, so we now have sdb1 and sdb2. Next we need to turn these partitions into filesystems. Some say I should run these commands:

        mkfs.vfat -F 32 /dev/sdb1
        mkfs.ext3 /dev/sdb2

    but some web pages recommend using:

        mkfs.vfat -n BOOT /dev/sdb1
        mkfs.ext2 -m 0 -b 4096 -L DATA /dev/sdb2

    Which is it? So let's say the DVDs are called CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso. We make a directory and mount the first one:

        mkdir -p /mnt/dvd1
        mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt/dvd1

    and I suppose we don't make a directory for DVD2 and we don't have to mount it? At this point I do not know what should be done, but I think this step might be next: we make the USB stick bootable by finding the file named mbr.bin and dd-ing it over, via these commands:

        dd conv=notrunc bs=440 count=1 if=/usr/lib/syslinux/mbr.bin of=/dev/sdb
        parted /dev/sdb set 1 boot on

    In other words, we are dd-ing it to sdb, not sdb1 or sdb2, and then we use parted to set the boot flag on sdb. So far everything looks good? Here is the confusing part: how exactly do I move these ISO files to the USB drive?

    EVERYTHING BELOW IS A GUESS. At this point I should copy the folder /mnt/dvd1/isolinux to the USB's sdb1 or sdb2, and rename it to syslinux? And then inside this syslinux folder there will be a file called isolinux.cfg, which should be renamed to syslinux.cfg? And then copy the contents of /mnt/dvd1/images/* to the USB's sdb2? But I think I am also supposed to copy both CentOS-6.4-x86_64-bin-DVD1.iso and CentOS-6.4-x86_64-bin-DVD2.iso somewhere onto this USB's sdb2 partition, correct? Almost like a drag-and-drop kind of thing? Or do they go into any folders? CentOS's own web site has some instructions, but those instructions do not work: http://wiki.centos.org/HowTos/InstallFromUSBkey I once got this working, but things got ruined and I have to do it again; this time I'll take notes.
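
    (For what it's worth, here is a sketch of the copy steps being guessed at above - unverified, and assuming sdb1 is mounted at /mnt/usb-boot and sdb2 at /mnt/usb-data:)

        mkdir -p /mnt/usb-boot/syslinux /mnt/usb-data/images
        cp /mnt/dvd1/isolinux/* /mnt/usb-boot/syslinux/
        mv /mnt/usb-boot/syslinux/isolinux.cfg /mnt/usb-boot/syslinux/syslinux.cfg
        cp /mnt/dvd1/images/install.img /mnt/usb-data/images/
        cp CentOS-6.4-x86_64-bin-DVD1.iso CentOS-6.4-x86_64-bin-DVD2.iso /mnt/usb-data/
        # the bootloader itself still has to be installed into the vfat partition (unmount it first)
        syslinux /dev/sdb1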

  • Why do old (301) links stay on Google when breaking a site down into multiple domains?

    - by Sampo Sarrala
    Some background: we used to have a single site on a single domain (let's call it mainsite.com) with product information; however, things have changed since then and the product database has grown fast. So we decided to move some major products/manufacturers under their own domains (let's call one of them subsite.com) while still using our main database/codebase.

    What we've done:

    - Added the subsite.com domain for product 1 by Great Products Co.
    - Some new nice-looking front pages, info pages, etc.
    - Detail pages that use information from the original database.
    - Redirected product/group links from mainsite.com using 301 redirects.
    - Verified that the redirects work as expected.
    - Waited some time for Google reindexing (over 30 days; I've heard that should be more than enough).

    Results: if I search for our moved products on Google, it finds and lists them, but with old links to our main site, like mainsite.com/group/product1, when it should show the link to the new site, subsite.com/product1. Links from Google redirect as they should; as said, the redirects are verified [301].

    Main question: are there any reasons why Google would not follow the 301 redirects and update the links so that they point to our new mfg/product site subsite.com?

  • What is causing my newly built PC with Windows 7 to BSOD?

    - by Ben S
    I recently built my own PC from parts and installed Windows 7, and I have been getting BSODs with various different stop codes. The latest was 0x24, but I've also had 0xd1 and 0x1e. However, Windows does not tell me where the fault occurred, so I have no idea how to go about resolving this. I know the cause is likely a hardware driver, but I don't know which one, and I cannot find information about how to use the minidumps to troubleshoot my problem. I've uploaded the last three minidumps in case someone can make sense of them and let me know what could be causing my BSODs. Thanks, Ben
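
    (For anyone landing here with the same question: the standard way to read a minidump is WinDbg from the Debugging Tools for Windows - a sketch, with the dump filename as a placeholder:)

        windbg -y "srv*C:\symbols*https://msdl.microsoft.com/download/symbols" -z C:\Windows\Minidump\<dumpfile>.dmp

    Then, at the debugger prompt, !analyze -v usually names the probable faulting driver.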

  • If You Include the Groovy Editor...

    - by Geertjan
    ...in a NetBeans RCP application, what additional JARs will you need to include for the Groovy Editor to work? Leaving aside the debate on the current state and quality of the NetBeans Groovy Editor: assuming you need the Groovy support it provides, you would check the Groovy Editor checkbox in the Project Properties dialog of your application. In that dialog, however, you can see that the Groovy Editor depends on other modules, some of which, in turn, depend on yet other modules, and so on. So, I clicked the "Resolve" button and then created a ZIP distribution, to see which additional JARs had been included. Until that point, I had only been using the "platform" cluster, which means that absolutely everything found in the ZIP's "ide" cluster and "java" cluster was included only so that the Groovy Editor could be included, i.e., all thanks to clicking the "Resolve" button.

    What that means for the "java" cluster is not so bad, and is kind of a side effect of Groovy being Java, i.e., a lot of Java functionality is needed. The "ide" cluster is another story. So, in answer to the original question: if all you want in your NetBeans Platform application, in terms of editor functionality, is the Groovy Editor, then you have a pretty high price to pay. At the very least, I would have assumed that the project support JARs and the debugger support JARs would not be so tightly coupled with the Groovy Editor. That would be a cool thing to separate out from the editor support.

  • Cannot set "Language for Non-Unicode Programs" in Regional and Language Settings

    - by cornjuliox
    I'm trying to set the language for non-Unicode programs from English to Japanese (I'm using Windows XP SP3), but it won't let me. It looks like I've got the East Asian language packs installed, but when I select "Japanese" from the drop-down box and hit "Apply", I get an error that says "Setup was unable to install the chosen locale. Please contact your system administrator". I'm already logged in as the administrator, and I've restarted several times, but it still won't let me. Can anyone give me an idea as to how to solve this problem? Reinstalling Windows is absolutely out of the question.

  • How can I mitigate DNS Server outages?

    - by Eric Belair
    Let's say I have a root domain of mysite.com. That domain and its sub-domains have DNS served by an external service - let's call them Setwork Nolutions. If this external company is hit with a DDoS attack, my internally-hosted websites under this domain are no longer accessible at mysite.com or *.mysite.com, even though the websites themselves are fully up and operational. How can I mitigate such a problem so as to keep end users happy? The only solution others at my company have come up with is to create a second domain - i.e., mysite2.com - host its DNS at another company, and then tell all end users that this is the website they should use. I think this is ridiculous and just leads to a bunch of other problems. I'd like to find a solution where we can point to the same website with the same URL without the original DNS host being operational. Any thoughts?
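
    (One common answer, sketched rather than asserted: DNS natively supports redundancy through multiple NS records, so the zone can be slaved to a second, independent provider - the hostnames below are purely illustrative:)

        ; delegation spread across two unrelated DNS operators
        mysite.com.    86400  IN  NS  ns1.primary-dns-provider.example.
        mysite.com.    86400  IN  NS  ns2.primary-dns-provider.example.
        mysite.com.    86400  IN  NS  ns1.backup-dns-provider.example.

    Resolvers that cannot reach one provider's servers retry the others, so a DDoS against a single operator no longer takes the name offline.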

  • Inheritance, commands and event sourcing

    - by Arthis
    In order not to redo things several times, I wanted to factor out common stuff. For instance, let's say we have a cow and a horse. The cow produces milk, the horse runs fast, but both eat grass.

        public class Herbivorous
        {
            public void EatGrass(int quantity)
            {
                var evt = Build.GrassEaten
                               .WithQuantity(quantity);
                RaiseEvent(evt);
            }
        }

        public class Horse : Herbivorous
        {
            public void RunFast()
            {
                var evt = Build.FastRun;
                RaiseEvent(evt);
            }
        }

        public class Cow : Herbivorous
        {
            public void ProduceMilk()
            {
                var evt = Build.MilkProduced;
                RaiseEvent(evt);
            }
        }

    To eat grass, the command handler should be:

        public class EatGrassHandler : CommandHandler<EatGrass>
        {
            public override CommandValidation Execute(EatGrass cmd)
            {
                Contract.Requires<ArgumentNullException>(cmd != null);

                var herbivorous = EventRepository.GetById<Herbivorous>(cmd.Id);
                if (herbivorous.IsNull())
                    throw new AggregateRootInstanceNotFoundException();

                herbivorous.EatGrass(cmd.Quantity);
                EventRepository.Save(herbivorous, cmd.CommitId);
            }
        }

    So far so good: I get a Herbivorous object and have access to its EatGrass function; whether it is a horse or a cow doesn't really matter. The only problem is here:

        EventRepository.GetById<Herbivorous>(cmd.Id)

    Indeed, let's imagine we have a cow that has produced milk during the morning and now wants to eat grass. The event repository contains a MilkProduced event, and then comes the EatGrass command. In the command handler, we are no longer in the presence of a cow, and the Herbivorous doesn't know anything about producing milk. What should it do? Ignore the event and continue, thus allowing the inheritance and "general" commands? Or throw an exception to forbid the execution, which would mean only CowEatGrass and HorseEatGrass might exist as commands? Thanks for your help; I am just beginning with these kinds of problems and would be glad to hear from someone more experienced.

  • How to use BT or eMule across 2 or more hard drives?

    - by the searcher
    One difficulty with BT or eMule is that when the hard drive is full, we constantly need to move older files to a new hard drive so that we can download newer files. We can change BT's or eMule's settings so that the download folder points to the new hard drive, but then, what if eMule hasn't finished downloading some files that are hard to find, and one is 92% done? In that case, we would like to keep the old setting so that when the last 8% arrives, it can go into the correct file (and the same thing for BT, if we haven't finished some file, or if we want to seed something later). So is there a good way to let BT or eMule point to 2 hard drives, or somehow let the new hard drive "merge" into the existing hard drive/folder?
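
    (One generic approach on Windows Vista or later with NTFS - a sketch, not a client-specific recommendation: a directory junction makes a folder on the new drive appear inside the old download tree, so the client's configured paths never change:)

        rem graft a folder on the new drive into the existing download tree (names are illustrative)
        mklink /J "C:\Downloads\archive" "D:\incoming-archive"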

  • Leveraging Code in Ever Bigger Games

    - by ashes999
    Summary: in the same way that I continually build complex engines and libraries within a single platform and technology, allowing me to build increasingly bigger and better games, how do I continue this when development crosses into different platforms? If I switch platforms, how do I leverage past code and experience?

    Games are hard to build. Big games are even harder. I've decided that to be able to make big games, I need to start by building smaller games, and building up an asset base of code, assets (graphics, sounds), tools and, most importantly, game engines, so that I can eventually get there - one game at a time. Let me give an analogy. To build an MMO 3D RPG, I would approach this by building and releasing small games with increasingly more features. This could entail, for example:

    - A simple 2D game
    - A tile-based game
    - A game with RPG elements (items, equipment, monsters, battle)
    - A full-fledged RPG
    - A 3D RPG

    The problem now is that if I have to change platforms or tools, I don't know how to leverage past code bases (and experience) to start with a mature product. Right now, I'm writing Silverlight (FlatRedBall) games. Let's say I stick with this for ten years, and then suddenly decide to write a PS6 game, which is in a different programming language entirely. Granted, I have ten years of game-development experience (and correspondingly ten years of professional software development experience from my day job) to back me up. But I would still like some way to transplant that 2D RPG engine into the new programming language, or else leverage it somehow. Is this even possible? What are my options?

  • Similar domains using my business' content, and stealing SEO results

    - by Murciano
    I've been hired to create a website for a restaurant in my city; let's call it the "Flying Dragon" Chinese restaurant. The restaurant has never had a website, though the business itself is about ten years old. However, if you Google the restaurant's name, the first site that comes up seems to be affiliated with the restaurant itself, even though it is not. This site - let's say, flyingdragonchinese.com - is also the one that Google has apparently selected, in its results, to be the official website of the restaurant; in essence, the first Google result is flyingdragonchinese.com, and directly beneath it, within the same entry, are the Google reviews and contact information. Upon visiting flyingdragonchinese.com (again, not the actual name), I see that the website has taken the menu content from the restaurant, in the same manner that Yelp does, but it also seems (to the untrained eye) to be the restaurant's official site. Basically, someone has created a fake website for the business (I am not sure why), using its actual menu and contact information, and is hogging the search results. The concept is similar to a "scraping site", except that the information seems to have been stolen manually. The main problem is that visitors to this site will get an inaccurate impression of the restaurant. I feel like the obvious solution is to register a new domain for my site and simply beat out this competitor (or whatever it is) with smarter SEO and business verification with Google. However, the Conan-the-Barbarian web designer part of me wants to somehow bash this other site (deservedly?) into oblivion. But I don't know what I can really do, besides maybe issuing a cease-and-desist letter, or trying to contact the web host for the site, although there is no contact information available on this "fake" site for the site owner. Has anyone ever experienced something like this? Is there any solution?

  • Activist shared printing material gallery

    - by Dave
    What would you say is the best way to do this? We would like to create a section on our activist community's Facebook page and website in order to share with everyone images and files ready for printing pamphlets, brochures, t-shirts, stickers, etc. Let's say we have some cool slogans for t-shirts: we would like to show them in a gallery and offer for download the original design files needed for a print shop to create the t-shirts - and the same thing for all other kinds of media. We want to enable anyone to download the files for free and easily create printed materials with them. But besides offering this hybrid between a picture gallery and a downloads manager, we would also like to make it very easy for anyone to upload and share their own files with the community, to make it a true collaboration initiative - be it that they get posted automatically, or that we first review and approve all uploads. CafePress and Spreadshirt let you upload your design and sell your own merchandise. We need something similar, but where people can then download working files for making quality printed materials. What apps, tools, services or methods do you think this could best be done with? We have some ideas, but we would like to hear some more!
