Search Results

Search found 20640 results on 826 pages for 'key combination'.


  • Is Financial Inclusion an Obligation or an Opportunity for Banks?

    - by tushar.chitra
    Why should banks care about financial inclusion? First, the statistics; I think these will set the tone for this blog post. There are close to 2.5 billion people who are excluded from the banking stream, and of these, 2.2 billion are from Africa, Latin America and Asia (McKinsey on Society: Global Financial Inclusion). However, this is not just a third-world phenomenon. According to the Federal Deposit Insurance Corp (FDIC), in the US after the 2008 financial crisis, one family out of five has either opted out of the banking system or has been moved out (American Banker). Moving this huge unbanked population into mainstream banking is both an opportunity and a challenge for banks. An obvious opportunity is the significant untapped customer base that banks can target, as is the positive brand equity a bank can build by fulfilling its social responsibilities. Also, as banks target the cost-conscious unbanked customer, they will be forced to look at ways to offer cost-effective products and services, necessitating technology upgrades and innovations. However, cost is not the only hurdle to increasing the adoption of banking services. The potential users need to be convinced of the benefits of banking, and banks will also face stiff competition from unorganized players. Finally, banks will have to believe in the viability of this business opportunity, and not treat financial inclusion as an obligation.

    In what ways can banks target the unbanked?
    For financial inclusion to be a success, banks should adopt innovative business models to develop products that address the stated and unstated needs of the unbanked population, and also design delivery channels that are cost-effective and viable in the long run.

    Through business correspondents and facilitators
    In rural and remote areas, one of the major hurdles to increasing banking penetration is connectivity and accessibility to banking services, which makes last-mile inclusion a daunting challenge. To address this, banks can use the services of business correspondents or facilitators. This model allows banks to establish greater connectivity through a trusted and reliable intermediary. In India, for instance, banks can leverage the local Kirana stores (the mom & pop stores) to service rural and remote areas. With a supportive nudge from the central bank, commercial banks can enlist these shop owners as business correspondents to increase their reach. Since these neighborhood stores are acquainted with the local population, they can help banks manage the KYC norms, besides serving as a conduit for remittance. Banks also have an opportunity, over a period of time, to cross-sell other financial products such as micro insurance, mutual funds and pension products through these correspondents. To exercise greater operational control over the business correspondents, banks can also adopt a combination of branch and business correspondent models to deliver financial inclusion.

    Through mobile devices
    According to a 2012 World Bank report on financial inclusion, out of a world population of 7 billion, over 5 billion (70%) have mobile phones, while only 2 billion (30%) have a bank account. What this means for banks is that there is scope to leverage this phenomenal growth in mobile usage to serve the unbanked population. Banks can use mobile technology to service the basic banking requirements of their customers with no-frills accounts, effectively bringing down the cost per transaction. As I had discussed in my earlier post on mobile payments, though non-traditional players have taken the lead in P2P mobile payments, banks still hold an edge in terms of infrastructure and reliability.

    Through crowd-funding
    According to the Crowdfunding Industry Report by Massolution, the global crowdfunding industry raised $2.7 billion in 2012, and is projected to grow to $5.1 billion in 2013. With credit policies becoming tighter and banks becoming more circumspect about loan disbursals, crowdfunding has emerged as an alternative channel for lending. Typically, these initiatives target the unbanked population by offering small loans that are unviable for larger banks. Though a significant proportion of crowdfunding initiatives globally are run by non-banking institutions, banks are also venturing into this space.

    The next step towards inclusive finance
    Banks by themselves cannot make financial inclusion a success. There is a need for a whole ecosystem that is supportive of this mission. The policy makers, including the regulators and government bodies, must be in sync; the IT solution providers must put on their thinking caps to come up with innovative products and solutions; communication channels such as internet and mobile need to expand their reach; and the media and the public need to play an active part. The other challenge for financial inclusion comes from the banks themselves. While it is true that financial inclusion will unleash a hitherto hugely untapped market, the normal banking model may be found wanting on issues such as flexibility, convenience and reliability. The business will be viable only when there is a focus on increasing the usage of existing infrastructure, and that is possible when banks can offer the entire range of products and services to the large number of users of essential banking services. Apart from these challenges, banks will also have to quickly master and replicate the business model to extend their reach to the remotest regions in their respective geographies. They will need to ensure that the transactions deliver a viable business benefit to the bank. For tapping cross-sell opportunities, banks will have to quickly roll out customized and segment-specific products. The bank staff should be brought in sync with the business plan by convincing them of the viability of the business model and the need for a business correspondent delivery model. Banks, in collaboration with the government and NGOs, will have to run extensive financial literacy programs to educate the unbanked about the benefits of banking. Finally, with the growing importance of retail banking, and with many unconventional players eyeing the opportunity in payments and other lucrative areas of banking, banks need to understand the importance of micro and small branches. These micro and small branches can help banks increase their presence without a huge cost burden, provide bankers an opportunity to cross-sell micro products, and offer a window of opportunity for the large non-banked population to transact without any interference from intermediaries. These branches can also help diminish the role of the unorganized financial sector, such as local moneylenders and unregistered credit societies. This will also help banks build brand awareness and loyalty among the users, which by itself has a cascading effect on business operations, especially in rural and unbanked centers.
    In conclusion, with the increasingly competitive banking sector facing frequent slowdowns and downturns, the unbanked population presents a huge opportunity for banks to enhance their customer base and fulfill their social responsibility.

    Read the article

  • Setting AcceptButton to None still closes the form on ENTER.

    - by Vilx-
    I have an MDI Windows Forms application and my child forms mostly have "OK" and "Cancel" buttons. However, I do not want them to be activated with the ENTER/ESC keys, to prevent accidental saves/aborts. So the form has both AcceptButton and CancelButton set to None. ESC indeed does nothing, but ENTER still closes the form by "clicking" the first button found, sorted by TabOrder. Why is this so? Must I really start doing workarounds and catching the ENTER key?
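
    For what it's worth, one workaround often suggested is to intercept the key before the form's default-button logic sees it. A minimal sketch (assuming a plain Form subclass):

        protected override bool ProcessDialogKey(Keys keyData)
        {
            // Swallow ENTER so WinForms never "clicks" the first button by TabOrder.
            if (keyData == Keys.Enter)
                return true; // handled; stop further dialog-key processing
            return base.ProcessDialogKey(keyData);
        }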

    Read the article

  • How to read a CLOB column in Oracle using OleDb?

    - by T.Falise
    Hi, I have created a table on an Oracle 10g database with this structure:

        create table myTable (
            id number(32,0) primary key,
            myData clob
        )

    I can insert rows into the table without any problem, but when I try to read data from the table using an OleDb connection, I get an exception. Here is the code I use:

        using (OleDbConnection dbConnection = new OleDbConnection("ConnectionString"))
        {
            dbConnection.Open();
            OleDbCommand dbCommand = dbConnection.CreateCommand();
            dbCommand.CommandText = "SELECT * FROM myTable WHERE id=?";
            dbCommand.Parameters.AddWithValue("ID", id);
            OleDbDataReader dbReader = dbCommand.ExecuteReader();
        }

    The exception details seem to point to an unsupported data type: System.Data.OleDb.OleDbException: Unspecified error Oracle error occurred, but error message could not be retrieved from Oracle. Data type is not supported. Does anyone know how I can read this data using the OleDb connection? PS: The driver used in this case is the Microsoft one.
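
    The Microsoft OLE DB provider for Oracle has well-known limitations with LOB columns, so the usual first suggestion is to try Oracle's own provider (OraOLEDB.Oracle) or ODP.NET instead. If you stay on OleDb, a sketch of a commonly suggested workaround (assuming the provider can stream the column at all) is to select the CLOB column explicitly and read it in chunks with GetChars under SequentialAccess:

        // usings assumed: System.Data, System.Data.OleDb, System.Text
        dbCommand.CommandText = "SELECT myData FROM myTable WHERE id=?";
        dbCommand.Parameters.AddWithValue("ID", id);
        using (OleDbDataReader dbReader =
            dbCommand.ExecuteReader(CommandBehavior.SequentialAccess))
        {
            if (dbReader.Read())
            {
                StringBuilder sb = new StringBuilder();
                char[] buffer = new char[4096];
                long offset = 0, read;
                // Stream the CLOB in chunks instead of materializing it in one go.
                while ((read = dbReader.GetChars(0, offset, buffer, 0, buffer.Length)) > 0)
                {
                    sb.Append(buffer, 0, (int)read);
                    offset += read;
                }
                string myData = sb.ToString();
            }
        }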

    Read the article

  • Why doesn't Visual Studio show an exception message when my exception occurs in a static constructor?

    - by Tim Goodman
    I'm running this C# code in Visual Studio in debug mode:

        public class MyHandlerFactory : IHttpHandlerFactory
        {
            private static Dictionary<string, bool> myDictionary = new Dictionary<string, bool>();

            static MyHandlerFactory()
            {
                myDictionary.Add("someKey", true);
                myDictionary.Add("someKey", true); // fails due to duplicate key
            }
        }

    Outside of a static constructor, when I get to the line with the error, Visual Studio highlights it and pops up a message about the exception. But in the static constructor I get no such message. I am stepping through line by line, so I know that I'm getting to that line and no further. Why is this? (I have no idea whether the fact that my class implements IHttpHandlerFactory matters, but I included it just in case.) This is VS2005, .NET 2.0.
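
    For what it's worth, the usual explanation is that an exception thrown inside a static constructor is wrapped in a TypeInitializationException, which only surfaces at the point where the type is first used. A minimal sketch of observing the original error (class and member names hypothetical; usings assumed: System):

        public static class Faulty
        {
            static Faulty()
            {
                throw new InvalidOperationException("boom"); // stand-in for the duplicate-key error
            }
            public static void Touch() { }
        }

        public static class Program
        {
            public static void Main()
            {
                try
                {
                    Faulty.Touch(); // triggers type initialization
                }
                catch (TypeInitializationException ex)
                {
                    // The real failure is preserved as InnerException.
                    Console.WriteLine(ex.InnerException);
                }
            }
        }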

    Read the article

  • Upgrade URL for SEO from mysite.com/dbtable_id/ to mysite.com/dbtable_id/article-title

    - by John
    I have an existing journal website with the following URL structure: http://mysite.com/dbtable_id/ (e.g. http://mysite.com/89348/), where 89348 is the primary key id of the journal article. I want to add the title of the article to the URL for SEO purposes, like http://mysite.com/dbtable_id/article-title (e.g. http://mysite.com/89348/hello-world). I like this approach because I don't need to change the PHP code, since it will still look up the article by dbtable_id. All I have to do is append URL-friendly titles to relevant links in template files and add one more rule to a .htaccess file. Is there anything I should be concerned about? Am I following best practices? Will the possibility of a mismatch between "dbtable_id" and "article-title" affect SEO?
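
    As a sketch, a rewrite rule along these lines keeps the numeric id authoritative and simply tolerates an optional title slug (the front controller name article.php is hypothetical; substitute the site's actual script):

        RewriteEngine On
        # Match /89348 or /89348/hello-world; only the numeric id is passed through.
        RewriteRule ^([0-9]+)(/[a-zA-Z0-9-]*)?/?$ article.php?id=$1 [L,QSA]

    On the mismatch question, the usual guard is to 301-redirect any request whose slug doesn't match the stored title to the canonical URL (or to emit a rel="canonical" link), so search engines never index two URLs for the same article.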

    Read the article

  • Piano Keys using GridLayout (or Something Else)

    - by yar
    I am creating a container of JComponents which will look like a piano keyboard. The black keys look like this (Groovy):

        def setBlackNotes(buttons) {
            def octaves = (int)(buttons.size() / 5)
            def gridLayout = new GridLayout(1, octaves*7);
            def blackNotePanel = new JPanel(gridLayout)
            this.add blackNotePanel
            def i = 0
            octaves.times {
                2.times { blackNotePanel.add buttons[i++] }
                blackNotePanel.add Box.createHorizontalBox()
                3.times { blackNotePanel.add buttons[i++] }
                blackNotePanel.add Box.createHorizontalBox()
            }
        }

    Which is just what I need, and looks like this: but then I'd like to move this over to the right by half a key width. All of my attempts to move the blackNotePanel over by an arbitrary width -- wrapping it in a BorderLayout, a MigLayout, etc. -- have failed or changed the spacing of the GridLayout radically. Any suggestions on how to move this over to the right by an arbitrary amount in pixels?

    Read the article

  • JKS, CER, SignCode issue

    - by Serge
    Currently, I've got an interesting problem about signing an EXE file using the SignCode tool from Microsoft and a certificate (from GlobalSign)... So, we've bought a new certificate, as the current one will expire in a short time. The original format is JKS. I exported this certificate from .JKS to .CER so I can install it on the local machine in the Root Trusted Certificates section. I've installed it, and if I open certmgr.msc I can see it, but if I open Control Panel - Internet Options - Content - Certificates - Root Trusted Certificates etc. then I can't see it... I thought it should be there as well. When I run the signcode.exe tool I get the "unable to open a csp provider with the correct private key" error message. Note! The signcode.exe command is correct, because it works if I test with the old certificate. Please advise. Thank you in advance! Serge
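
    A likely culprit worth checking: a .CER export carries only the public certificate — the private key stays behind in the JKS keystore, and signcode needs that private key, which is what the CSP error usually points at. A sketch of converting the JKS keystore into a PKCS#12/PFX file that Windows signing tools can consume (requires a Java 6+ keytool; file and alias names hypothetical):

        keytool -importkeystore -srckeystore codesign.jks -srcalias mykey -destkeystore codesign.pfx -deststoretype PKCS12

    The resulting .pfx can then be imported into the personal certificate store (so a CSP can find the private key) or used directly with the newer signtool (signtool sign /f codesign.pfx ...).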

    Read the article

  • Help converting some classic ASP code to C# (.NET 3.5)

    - by xoail
    I have this block of code in a .asp file that I am struggling to convert to C#... can anyone help me?

        Function EncodeCPT(ByVal sPinCode, ByVal iOfferCode, ByVal sShortKey, ByVal sLongKey)
            Dim vob(2), encodeModulo(256), decodeX, ocode
            decodeX = " abcdefghijklmnopqrstuvwxyz0123456789!$%()*+,-.@;<=>?[]^_{|}~"
            if len(iOfferCode) = 5 then
                ocode = iOfferCode Mod 10000
            else
                ocode = iOfferCode
            end if
            vob(1) = ocode Mod 100
            vob(0) = Int((ocode-vob(1)) / 100)
            For i = 1 To 256
                encodeModulo(i) = 0
            Next
            For i = 0 To 60
                encodeModulo(asc(mid(decodeX, i + 1, 1))) = i
            Next
            'append offer code to key
            sPinCode = lcase(sPinCode) & iOfferCode
            If Len(sPinCode) < 20 Then
                sPinCode = Left(sPinCode & " couponsincproduction", 20)
            End If
            'encode
            Dim i, q, j, k, sCPT, s1, s2, s3
            i = 0
            q = 0
            j = Len(sPinCode)
            k = Len(sShortKey)
            sCPT = ""
            For i = 1 To j
                s1 = encodeModulo(asc( mid(sPinCode, i, 1)) )
                s2 = 2 * encodeModulo( asc( mid(sShortKey, 1 + ((i - 1) Mod k), 1) ) )
                s3 = vob(i Mod 2)
                q = (q + s1 + s2 + s3) Mod 61
                sCPT = sCPT & mid(sLongKey, q + 1, 1)
            Next
            EncodeCPT = sCPT
        End Function
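
    Here is a hedged, line-for-line translation sketch (untested against production data; VBScript's 1-based Mid/Asc semantics are mapped onto 0-based C# indexing, and iOfferCode is treated as a digit string):

        // using System.Text; assumed for StringBuilder
        static string EncodeCPT(string sPinCode, string iOfferCode, string sShortKey, string sLongKey)
        {
            const string decodeX = " abcdefghijklmnopqrstuvwxyz0123456789!$%()*+,-.@;<=>?[]^_{|}~";
            int ocode = (iOfferCode.Length == 5)
                ? int.Parse(iOfferCode) % 10000
                : int.Parse(iOfferCode);
            int[] vob = new int[2];
            vob[1] = ocode % 100;
            vob[0] = (ocode - vob[1]) / 100;

            // VBScript's encodeModulo(256) array, zero-initialized by default in C#.
            int[] encodeModulo = new int[257];
            for (int i = 0; i <= 60; i++)
                encodeModulo[(int)decodeX[i]] = i;

            // Append the offer code to the key and pad to 20 characters.
            string pin = sPinCode.ToLower() + iOfferCode;
            if (pin.Length < 20)
                pin = (pin + " couponsincproduction").Substring(0, 20);

            int q = 0;
            StringBuilder sCPT = new StringBuilder();
            for (int i = 1; i <= pin.Length; i++)   // 1-based to mirror the ASP loop
            {
                int s1 = encodeModulo[(int)pin[i - 1]];
                int s2 = 2 * encodeModulo[(int)sShortKey[(i - 1) % sShortKey.Length]];
                int s3 = vob[i % 2];
                q = (q + s1 + s2 + s3) % 61;
                sCPT.Append(sLongKey[q]);           // mid(sLongKey, q + 1, 1)
            }
            return sCPT.ToString();
        }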

    Read the article

  • Drupal: Create custom search

    - by Dr. Hfuhruhurr
    I'm trying to create a custom search but getting stuck. What I want is to have a drop-down box so the user can choose where to search. These options can map to one or more content types. So if he chooses option A, then the search will look in node types P, Q, R. But it may not give those results, only the uid's, which will then be themed to gather specific data for that user. To make it a little bit clearer: suppose I want to look for people. What I'm searching in is 2 content profile types. But of course you don't want to display those as a result, but a nice picture of the user and some data. I started with creating a form with a textfield and the drop-down box. Then, in the submit handler, I created the keys and redirected to another page with those keys as a tail. This page has been defined in the menu hook, just like search does it. After that I want to call hook_view to do the actual search by calling node_search, and give back the results. Unfortunately, it goes wrong. When I click the Search button, it gives me a 404. But am I on the right track? Is this the way to create a custom search? Thx for your help. Here's the code for some clarity:

        <?php
        // $Id$

        /*
         * @file
         * Searches on Project, Person, Portfolio or Group.
         */

        /**
         * returns an array of menu items
         * @return array of menu items
         */
        function vm_search_menu() {
          $subjects = _vm_search_get_subjects();
          foreach ($subjects as $name => $description) {
            $items['zoek/'. $name .'/%menu_tail'] = array(
              'page callback' => 'vm_search_view',
              'page arguments' => array($name),
              'type' => MENU_LOCAL_TASK,
            );
          }
          return $items;
        }

        /**
         * create a block to put the form into.
         * @param $op
         * @param $delta
         * @param $edit
         * @return mixed
         */
        function vm_search_block($op = 'list', $delta = 0, $edit = array()) {
          switch ($op) {
            case 'list':
              $blocks[0]['info'] = t('Algemene zoek');
              return $blocks;
            case 'view':
              if (0 == $delta) {
                $block['subject'] = t('');
                $block['content'] = drupal_get_form('vm_search_general_form');
              }
              return $block;
          }
        }

        /**
         * Define the form.
         */
        function vm_search_general_form() {
          $subjects = _vm_search_get_subjects();
          foreach ($subjects as $key => $subject) {
            $options[$key] = $subject['desc'];
          }
          $form['subjects'] = array(
            '#type' => 'select',
            '#options' => $options,
            '#required' => TRUE,
          );
          $form['keys'] = array(
            '#type' => 'textfield',
            '#required' => TRUE,
          );
          $form['submit'] = array(
            '#type' => 'submit',
            '#value' => t('Zoek'),
          );
          return $form;
        }

        function vm_search_general_form_submit($form, &$form_state) {
          $subjects = _vm_search_get_subjects();
          $keys = $form_state['values']['keys']; // the search keys
          // the content types to search in
          $keys .= ' type:' . implode(',', $subjects[$form_state['values']['subjects']]['types']);
          // redirect to the page, where vm_search_view will handle the actual search
          $form_state['redirect'] = 'zoek/'. $form_state['values']['subjects'] .'/'. $keys;
        }

        /**
         * Menu callback; presents the search results.
         */
        function vm_search_view($type = 'node') {
          // Search form submits with POST but redirects to GET. This way we can keep
          // the search query URL clean as a whistle:
          // search/type/keyword+keyword
          if (!isset($_POST['form_id'])) {
            if ($type == '') {
              // Note: search/node can not be a default tab because it would take on the
              // path of its parent (search). It would prevent remembering keywords when
              // switching tabs. This is why we drupal_goto to it from the parent instead.
              drupal_goto($front_page);
            }
            $keys = search_get_keys();
            // Only perform search if there is non-whitespace search term:
            $results = '';
            if (trim($keys)) {
              // Log the search keys:
              watchdog('vm_search', '%keys (@type).', array('%keys' => $keys, '@type' => $type));
              // Collect the search results:
              $results = node_search('search', $type);
              if ($results) {
                $results = theme('box', t('Zoek resultaten'), $results);
              }
              else {
                $results = theme('box', t('Je zoek heeft geen resultaten opgeleverd.'));
              }
            }
          }
          return $results;
        }

        /**
         * returns array where to look for
         * @return array
         */
        function _vm_search_get_subjects() {
          $subjects['opdracht'] = array('desc' => t('Opdracht'), 'types' => array('project'));
          $subjects['persoon'] = array('desc' => t('Persoon'), 'types' => array('types_specialisatie', 'smaak_en_interesses'));
          $subjects['groep'] = array('desc' => t('Groep'), 'types' => array('Villamedia_groep'));
          $subjects['portfolio'] = array('desc' => t('Portfolio'), 'types' => array('artikel'));
          return $subjects;
        }

    Read the article

  • NVIDIA X server - "sudo nvidia-xconfig" does not generate a working 'xorg.conf'

    - by Mike
    I am over 18 hours deep on this challenge. I got to this point and am stuck. Very stuck. Maybe you can figure it out? Ubuntu version 12.04 LTS with all the updates installed.

    Problem: The default settings in "/etc/X11/xorg.conf" that are generated by the "nvidia-xconfig" tool do not allow the NVIDIA X server to connect to the driver shown in my "System Settings > Additional Drivers" window (that's how I understand it; lots of information below).

    Symptoms of the problem:
    1. The "System Settings > Additional Drivers" window has drivers, but the NVIDIA X server cannot connect to/utilize any of the 4 drivers. The drivers are activated, but not in use.
    2. When I go to "System Tools > Administration > NVIDIA X Server Settings" I get an error that basically tells me to create a default file to initialize the NVIDIA X server (screen shot below).
    3. These are the messages the terminal gives after running a "sudo nvidia-xconfig" command for the first time. It seems that the file generated by the tool I just ran is bad/unusable:
    4. If I run the "sudo nvidia-xconfig" command again, I won't get an error the second time. However, when I reboot, the default file that is generated (/etc/X11/xorg.conf) simply puts the screen resolution at 800 x 600 (or something big like that). When I try to go to NVIDIA X Server Settings I am greeted with the same screen as the screen shot in symptom 2 (no option to change the resolution). If I try to go to "System Settings > Displays" there are no other resolutions to choose from. At this point I must delete the newly minted "xorg.conf" and reinstate the original in its place.

    Here are the contents of the "xorg.conf" that is generated first (the one missing required information):

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 304.88 (buildmeister@swio-display-x86-rhel47-06) Wed Mar 27 15:32:58 PDT 2013

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0"
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "Unknown"
            HorizSync 28.0 - 33.0
            VertRefresh 43.0 - 72.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    Hardware: I ran "lspci | grep VGA". The results are:

        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [Quadro 1000M] (rev a1)

    More hardware info: RAM: 16GB. CPU: Intel Core i7-2720QM @ 2.2GHz x 8. Other: 64 bit. This is a triple-boot computer and not a VM.

    Attempts with no success on my end:
    1) Tried to append the "xorg.conf" with what I perceive is missing information, and obviously it didn't fly.
    2) All the other stuff I tried got me to this point.
    3) See if this link is helpful to you (I barely get it, but I get enough to know that a smarter person might find this useful): http://manpages.ubuntu.com/manpages/lucid/man1/nvidia-xconfig.1.html
    4) I am completely new to Linux (40 hours over the past week), but not to programming. However, I am very serious about changing over to Linux. When you respond (I hope someone responds...) please respond in a way that a person new to Linux can understand.
    5) By the way, the reason I am in this mess is because I MUST have a second monitor running from my laptop, and "System Settings > Displays" doesn't recognize my second display. I know it is possible to make the second display work on my system, because when I boot from the install CD, I perform work on the native laptop monitor, but the second monitor shows a purple screen with Ubuntu in the middle, so I know the VGA port is sending a signal out. If this is too much for you to tackle, please suggest an alternative method to get a second display. I don't want to go back to Windows, but I cannot have a single display. I am really fudged here. I hope some smart person can help. Thanks in advance. Mike.

    ********************** EDIT #1 **********************
    More details about the graphics card. I was asked "which brand of nvidia-card do you have exactly?" Here is what I did to provide more info (maybe relevant, maybe not, but here is everything):
    1) Took my Lenovo W520 right apart to see if there is an identifier on the actual card. However, I realized that if I get deep enough to take a look, the laptop "won't like it", so I put it back together. Figuring out the card this way is not an option for me right now.
    2) (My computer is triple boot.) I logged into Win7 and ran the 'dxdiag' command. Here is the screen shot:
    3) I tried to look on the Lenovo website for more details... but no luck. I took a look at my receipts, and here is the info from the receipt: System Unit: W520 NVIDIA Quadro 1000M 2GB
    4) In Win7 I went to the NVIDIA website and used the option to have my card 'scanned' by a Java applet to determine the latest update for my card. I tried the same with Ubuntu but I can't get the applet to run. Here is the recommended driver from the NVIDIA applet for my card for Win7 (I hope this shines some light on the specifics of the card): Quadro/NVS/Tesla/GRID Desktop Driver Release R319, Version: 320.00 WHQL, Release Date: 3.5.2013
    5) Also, I went to the NVIDIA driver search and looked through every possible combination of product type + product series + product to find all the combinations that yield a 1000M card. My card is: Product Type: Quadro, Product Series: Quadro Series (Notebooks), Product: 1000M

    *********************** EDIT #2 *******************
    Additional symptoms. Another question that generated more symptoms I previously didn't mention was: "After generating xorg.conf by nvidia-xconfig, go to additional drivers, do you see nvidia-304?"
    1) I took a screen shot of "Additional Drivers" right after generating xorg.conf by nvidia-xconfig. Here it is:
    2) Then I did a reboot. Now Ubuntu is at 600 x 800 resolution. When I logged in after the computer came up, I got an error (which I always get after generating xorg.conf by nvidia-xconfig and rebooting).
    3) To finally answer the question - No. There is no "nvidia-304" driver. Screen shot of Additional Drivers after generating xorg.conf by nvidia-xconfig and rebooting:
    At this point I revert to the original xorg.conf and delete the xorg.conf generated by nvidia-xconfig.
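
    For what it's worth, on dual-GPU (Optimus) laptops like the W520 — the lspci output above shows the Quadro at 01:00.0 next to the Intel integrated GPU — one commonly suggested tweak is to pin the NVIDIA device to its PCI bus ID in the generated Device section, so the X server doesn't probe the wrong GPU (a sketch, not a guaranteed fix):

        Section "Device"
            Identifier "Device0"
            Driver     "nvidia"
            VendorName "NVIDIA Corporation"
            BusID      "PCI:1:0:0"    # from lspci: 01:00.0 ... [Quadro 1000M]
        EndSection

    The W520's BIOS also has a Graphics Device setting (Integrated / Discrete / NVIDIA Optimus); setting it to Discrete is often suggested so the VGA output and the NVIDIA driver are actually in play.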

    Read the article

  • How to upload video on YouTube with Ruby

    - by viatropos
    I am trying to upload a YouTube video using the GData gem (I have seen the youtube_g gem but would like to make it work with pure GData if possible), but I keep getting this error:

        GData::Client::BadRequestError in 'MyProject::Google::YouTube should upload the actual video to youtube (once it does, mock this test out)'
        request error 400: No file found in upload request.

    I am using this code:

        def metadata
          data = <<-EOF
            <?xml version="1.0"?>
            <entry xmlns="http://www.w3.org/2005/Atom"
              xmlns:media="http://search.yahoo.com/mrss/"
              xmlns:yt="http://gdata.youtube.com/schemas/2007">
              <media:group>
                <media:title type="plain">Bad Wedding Toast</media:title>
                <media:description type="plain">
                  I gave a bad toast at my friend's wedding.
                </media:description>
                <media:category scheme="http://gdata.youtube.com/schemas/2007/categories.cat">People</media:category>
                <media:keywords>toast, wedding</media:keywords>
              </media:group>
            </entry>
          EOF
        end

        @yt = GData::Client::YouTube.new
        @yt.clientlogin("name", "pass")
        @yt.developer_key = "myKey"
        url = "http://uploads.gdata.youtube.com/feeds/api/users/name/uploads"
        mime_type = "multipart/related"
        file_path = "sample_upload.mp4"
        @yt.post_file(url, file_path, mime_type, metadata)

    What is the recommended/standard way for uploading videos to YouTube with Ruby? What is your method?

    Update: After applying the changes to wrapped_entry, the string it produces looks like this:

        --END_OF_PART_59003
        Content-Type: application/atom+xml; charset=UTF-8

        <?xml version="1.0"?>
        <entry xmlns="http://www.w3.org/2005/Atom"
          xmlns:media="http://search.yahoo.com/mrss/"
          xmlns:yt="http://gdata.youtube.com/schemas/2007">
          <media:group>
            <media:title type="plain">Bad Wedding Toast</media:title>
            <media:description type="plain">
              I gave a bad toast at my friend's wedding.
            </media:description>
            <media:category scheme="http://gdata.youtube.com/schemas/2007/categories.cat">People</media:category>
            <media:keywords>toast, wedding</media:keywords>
          </media:group>
        </entry>

        --END_OF_PART_59003
        Content-Type: multipart/related
        Content-Transfer-Encoding: binary

        ...
and inspecting the request and response looks like this: Request: <GData::HTTP::Request:0x1b8bb44 @method=:post @url="http://uploads.gdata.youtube.com/feeds/api/users/lancejpollard/uploads" @body=#<GData::HTTP::MimeBody:0x1b8c738 @parts=[#<GData::HTTP::MimeBodyString:0x1b8c058 @bytes_read=0 @string="--END_OF_PART_30909\r\nContent-Type: application/atom+xml; charset=UTF-8\r\n\r\n <?xml version=\"1.0\"?>\n<entry xmlns=\"http://www.w3.org/2005/Atom\"\n xmlns:media=\"http://search.yahoo.com/mrss/\"\n xmlns:yt=\"http://gdata.youtube.com/schemas/2007\">\n <media:group>\n <media:title type=\"plain\">Bad Wedding Toast</media:title>\n <media:description type=\"plain\">\n I gave a bad toast at my friend's wedding.\n </media:description>\n <media:category scheme=\"http://gdata.youtube.com/schemas/2007/categories.cat\">People</media:category>\n <media:keywords>toast wedding</media:keywords>\n </media:group>\n</entry> \n\r\n--END_OF_PART_30909\r\nContent-Type: multipart/related\r\nContent-Transfer-Encoding: binary\r\n\r\n"> #<File:/Users/Lance/Documents/Development/git/thing/spec/fixtures/sample_upload.mp4> #<GData::HTTP::MimeBodyString:0x1b8c044 @bytes_read=0 @string="\r\n--END_OF_PART_30909--"] @current_part=0 @boundary="END_OF_PART_30909" @headers={"Slug"="sample_upload.mp4" "User-Agent"="GoogleDataRubyUtil-AnonymousApp" "GData-Version"="2" "X-GData-Key"="key=AI39si7jkhs_ECjF4unOQz8gpWGSKXgq0KJpm8wywkvBSw4s8oJd5p5vkpvURHBNh-hiYJtoKwQqSfot7KoCkeCE32rNcZqMxA" "Content-Type"="multipart/related; boundary=\"END_OF_PART_30909\"" "MIME-Version"="1.0"} Response: #<GData::HTTP::Response:0x1b897e0 @body="No file found in upload request." @headers={"cache-control"=>"no-cache no-store must-revalidate" "connection"=>"close" "expires"=>"Fri 01 Jan 1990 00:00:00 GMT" "content-type"=>"text/plain; charset=utf-8" "date"=>"Fri 11 Dec 2009 02:10:25 GMT" "server"=>"Upload Server Built on Nov 30 2009 13:21:18 (1259616078)" "x-xss-protection"=>"0" "content-length"=>"32" "pragma"=>"no-cache"} @status_code=400> Still not working, I'll have to check it out more with those changes.

    Read the article

  • Calling one DAO from another DAO?

    - by es11
    Can this ever make sense? Say I need to fetch an object from the DB which has a relation to another object (represented by a foreign key in the DB, and by composition in my domain object). In my first DAO I fetch the data for object 1, then call the DAO for object 2, and finally (from within the first DAO) call the setter on object 1 and give it the previously fetched object 2. I know I could do a join instead, but it just seems more logical to me to decouple the functionality (which is why I am skeptical about calling one DAO from another). Or should I move some of the logic to the service layer? Thanks.

    Update: I think I solved the problem with help from the answers: all I needed to do was add the following to my mapping of Object 1:

        <one-to-one name="Object2" fetch="join" class="com...Object2"></one-to-one>

    I didn't have to change anything else. Thanks for the help!
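
    As a sketch of the service-layer alternative discussed here (all names hypothetical), each DAO stays single-purpose and the service wires the object graph together:

        // Hypothetical single-purpose DAOs
        interface Object1Dao { Object1 findById(long id); }
        interface Object2Dao { Object2 findById(long id); }

        class Object1Service {
            private final Object1Dao dao1;
            private final Object2Dao dao2;

            Object1Service(Object1Dao dao1, Object2Dao dao2) {
                this.dao1 = dao1;
                this.dao2 = dao2;
            }

            Object1 load(long id) {
                // Composition happens here, not inside either DAO.
                Object1 first = dao1.findById(id);
                first.setObject2(dao2.findById(first.getObject2Id()));
                return first;
            }
        }

    Keeping the DAOs ignorant of each other avoids circular dependencies between them; the trade-off is an extra round trip compared with the fetch="join" mapping the update settles on.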

    Read the article

  • MySQL index cardinality - performance vs storage efficiency

    - by Sean
    Say you have a MySQL 5.0 MyISAM table with 100 million rows, with one index (other than the primary key) on two integer columns. From my admittedly poor understanding of B-tree structure, I believe that a lower cardinality means the storage efficiency of the index is better, because there are fewer parent nodes. Whereas a higher cardinality means less efficient storage, but faster read performance, because it has to navigate through fewer branches to get to whatever data it is looking for to narrow down the rows for the query. (Note - by "low" vs "high", I don't mean e.g. 1 million vs 99 million for a 100 million row table. I mean more like 90 million vs 95 million.) Is my understanding correct? Related question - how does cardinality affect write performance?

    Read the article

  • MySQL fast insert dependent on a flag from a separate table

    - by Stuart P
    Hi all. For work I'm dealing with a large database (160 million+ rows a year, 10 years of data) and have a quandary: a large percentage of the data we upload is null data and I'd like to stop it from being uploaded. The data in question is spatial in nature, so I have one table like so:

        idLocations (auto-increment int, PK)
        X (float)
        Y (float)
        Alwaysignore (bool)

    which is used as a reference in a second table like so:

        idLocations (int, PK, "FK")
        idDates (int, PK, "FK")
        DATA1 (float)
        DATA2 (float)
        ...
        DATA7 (float)

    So, ideally I'd like to find a method where I can do something like:

        INSERT INTO tblData (idLocations, idDates, DATA1, ..., DATA7)
        VALUES (...), ..., (...)
        WHERE VALUES(idLocations) NOT LIKE (SELECT FROM tblLocation WHERE alwaysignore=TRUE)
        ON DUPLICATE KEY UPDATE DATA1=VALUES(DATA1)

    So, for my large batch of input data (250 values in a block), ignore the inserts where the idLocations matches an idLocations value flagged with alwaysignore. Anyone have any suggestions? Cheers. -Stuart. Other details: Running MySQL on a semi-dedicated machine, MyISAM engine for the tables.
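
    Since INSERT ... VALUES cannot carry a WHERE clause in MySQL, a sketch of one route (the staging table tmpUpload is hypothetical): load each 250-row block into a staging table, then insert with a join that filters out the flagged locations:

        INSERT INTO tblData (idLocations, idDates, DATA1, ..., DATA7)
        SELECT t.idLocations, t.idDates, t.DATA1, ..., t.DATA7
        FROM tmpUpload t
        JOIN tblLocations l ON l.idLocations = t.idLocations
        WHERE l.Alwaysignore = FALSE
        ON DUPLICATE KEY UPDATE DATA1 = VALUES(DATA1);

    The "..." placeholders stand for the remaining DATA columns, as in the original sketch.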

    Read the article

  • Can I pass data into a HashMap<String,Object> from JSP to a JavaBean?

    - by Parris
    Hi Everyone, I am just starting out with JSP, Java, and web development in general... I would love to use some sort of framework, but for this project I can't do that. In any case, I want to potentially pass essentially limitless data (for flexibility) to my JavaBeans. My idea was that if I can have key-value pairs, that would be really easy. The values will always be strings or integers. HashMap seems ideal in this case. Is this possible? Any ideas? Can I do this with JSP bean tags or should I write scriptlets? Thanks!!!
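
    A sketch of the scriptlet route (package and property names hypothetical): expose a Map from the bean, then put entries into it from the page. The jsp:setProperty tag can't address individual map keys, so a scriptlet (or EL on newer containers) does the writes:

        // com/example/SettingsBean.java
        package com.example;
        import java.util.HashMap;
        import java.util.Map;

        public class SettingsBean {
            private final Map<String, Object> data = new HashMap<String, Object>();
            public Map<String, Object> getData() { return data; }
        }

        <%-- in the JSP --%>
        <jsp:useBean id="settings" class="com.example.SettingsBean" scope="request"/>
        <%
            settings.getData().put("title", "Hello");
            settings.getData().put("count", 42); // autoboxed Integer
        %>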

    Read the article

  • Facebook Connect - Mobile

    - by Jayrox
    I am currently in the process of creating a mobile version of my web app. The app is being developed with Facebook's PHP client library. The issue: I am using the following mobile URL to allow users to log in from their mobile devices: http://m.facebook.com/tos.php?api_key=APIKEY&v=1.0&next=http%3A%2F%2Ftweelay.net%2Fm.php&cancel=http%3A%2F%2Ftweelay.net%2Fm.php APIKEY being my app's actual Facebook API key. In the URL I am telling Facebook to redirect the user back to http://tweelay.net/m.php when the user signs in or clicks cancel on the log-in screen. I am pulling my hair out trying to figure out why it keeps sending the user to http://m.tweelay.net/m.php, which is currently an invalid endpoint. I have gone through all of my app's settings on Facebook and I can't find any that reference http://m.tweelay.net, and going through all of my source code I can't find any that reference the m. sub-domain either.

    Read the article

  • How does MTOM work + sample code

    - by zengr
    I am trying to make a very simple web service which does the following: The client hits the web service requesting a file. The web service's service class queries a hashtable which has the key (search query) and the value as the base64-encoded value of a file (say a PDF). Now, I need to use MTOM to return the base64-encoded value stored in the hashtable to the client. It's up to the client to decode it and convert it to PDF. So, here are my questions: I understand we encode files to base64 for transmission via web service, but where and how does MTOM come into the picture there? Can someone provide me a simple method which uses MTOM and sends the data back? Do we need to specify something in the WSDL too, or would a simple String return type suffice? Why/why not? Thanks. I have seen this code. It uses a lot of annotations; I just need simple Java code using MTOM. New to J2EE here :)
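
    For what it's worth: MTOM is a wire-level optimization — the schema still declares the data as base64Binary, but an MTOM-enabled runtime ships the bytes as a raw MIME attachment instead of inflating them by roughly a third as inline base64 text. A minimal JAX-WS sketch (class and method names hypothetical):

        import javax.jws.WebMethod;
        import javax.jws.WebService;
        import javax.xml.ws.soap.MTOM;

        @MTOM          // enables MTOM on this endpoint
        @WebService
        public class FileService {
            @WebMethod
            public byte[] fetchFile(String key) {
                // Return the raw bytes; with MTOM on, the runtime sends them as a
                // MIME attachment (declared as base64Binary in the generated WSDL).
                return lookupBytes(key); // e.g. decoded from the hashtable above
            }

            private byte[] lookupBytes(String key) {
                return new byte[0]; // placeholder for the real lookup
            }
        }

    So the return type would be byte[] rather than String; a plain String return leaves the base64 text inline, and MTOM has nothing to optimize.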

    Read the article

  • How to have ListView items with a Label show up at the bottom of the item, not to the right, in WPF?

    - by Joan Venge
    I am using a WrapPanel with an Image and Label, but the Label shows up to the right of the item. How can I make it show up at the bottom of the Image/Item? <Window.Resources> <DataTemplate x:Key="ItemTemplate"> <WrapPanel Orientation="Horizontal"> <Image Width="50" Height="50" Stretch="Fill" Source="{Binding Cover}"/> <Label Content="{Binding Title}" /> </WrapPanel> </DataTemplate> </Window.Resources>
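
    For what it's worth, a sketch of one fix: stack the two elements vertically inside the template, e.g. with a StackPanel:

        <DataTemplate x:Key="ItemTemplate">
            <StackPanel Orientation="Vertical">
                <Image Width="50" Height="50" Stretch="Fill" Source="{Binding Cover}"/>
                <Label Content="{Binding Title}" HorizontalAlignment="Center"/>
            </StackPanel>
        </DataTemplate>

    A panel's Orientation controls the direction its children are laid out in, which is why Orientation="Horizontal" puts the Label beside the Image rather than under it.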

    Read the article

  • Best practices for managing limited client licenses/login

    - by MicSim
    I have a multi-user software solution (containing different applications, i.e. EXEs) that should allow only a limited number of concurrent users. It's designed to run on an intranet. I don't have a really good, satisfactory solution to the problem of counting the client licenses yet. The key requirements are:
    - Multiple instances (starts) of the same application (= process) should count as only one client license
    - Starting different applications of the software solution should also count as only one (the same) client license
    - An application crash should not lead to orphaned used licenses
    - The above should work also for Terminal Server environments (all clients same IP, but different install folders)
    I'm looking for established patterns, solutions, and tips for managing used client licenses. Specific hints for the above situation are also welcome.
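
    One established pattern for the crash requirement is a lease (heartbeat): each client renews its seat periodically, and a crashed client simply stops renewing, so its seat frees itself after a timeout. A sketch of a central seat registry (all names and the timeout hypothetical):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class SeatLeaseRegistry
        {
            private readonly int _maxSeats;
            private readonly TimeSpan _leaseTimeout = TimeSpan.FromMinutes(2);
            // One lease per user+machine, so multiple instances and different
            // applications started by the same user share a single seat.
            private readonly Dictionary<string, DateTime> _leases = new Dictionary<string, DateTime>();

            public SeatLeaseRegistry(int maxSeats) { _maxSeats = maxSeats; }

            public bool TryAcquireOrRenew(string userAtMachine)
            {
                lock (_leases)
                {
                    // Drop expired leases (covers crashed clients).
                    foreach (var dead in _leases.Where(l => DateTime.UtcNow - l.Value > _leaseTimeout)
                                                .Select(l => l.Key).ToList())
                        _leases.Remove(dead);

                    if (_leases.ContainsKey(userAtMachine) || _leases.Count < _maxSeats)
                    {
                        _leases[userAtMachine] = DateTime.UtcNow; // acquire or renew
                        return true;
                    }
                    return false; // all seats in use
                }
            }
        }

    Keying the lease on user+machine rather than on IP address also covers the Terminal Server case, where all clients share one IP but run as different users.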

    Read the article

  • Windows Azure Service Bus Scatter-Gather Implementation

    - by Alan Smith
    One of the more challenging enterprise integration patterns that developers may wish to implement is the Scatter-Gather pattern. In this article I will show the basic implementation of a scatter-gather pattern using the topic-subscription model of the Windows Azure Service Bus. I’ll be using the implementation in demos, and also as a lab in my training courses, and the pattern will also be included in the next release of my free e-book the “Windows Azure Service Bus Developer Guide”. The Scatter-Gather pattern answers the following scenario: How do you maintain the overall message flow when a message needs to be sent to multiple recipients, each of which may send a reply? Use a Scatter-Gather that broadcasts a message to multiple recipients and re-aggregates the responses back into a single message. The Enterprise Integration Patterns website provides a description of the Scatter-Gather pattern here. The scatter-gather pattern uses a composite of the publish-subscribe channel pattern and the aggregator pattern. The publish-subscribe channel is used to broadcast messages to a number of receivers, and the aggregator is used to gather the response messages and aggregate them together to form a single message.

    Scatter-Gather Scenario
    The scenario for this scatter-gather implementation is an application that allows users to answer questions in a poll-based voting scenario. A poll manager application will be used to broadcast questions to users; the users will use a voting application that will receive and display the questions and send the votes back to the poll manager. The poll manager application will receive the users’ votes and aggregate them together to display the results. The scenario should be able to scale to support a large number of users.

    Scatter-Gather Implementation
    The diagram below shows the overall architecture for the scatter-gather implementation.

    Messaging Entities
    Looking at the scatter-gather pattern diagram it can be seen that the topic-subscription architecture is well suited for broadcasting a message to a number of subscribers. The poll manager application can send the question messages to a topic, and each voting application can receive the question message on its own subscription. The static limit of 2,000 subscriptions per topic in the current release means that 2,000 voting applications can receive question messages and take part in voting. The vote messages can then be sent to the poll manager application using a queue. The voting applications will send their vote messages to the queue, and the poll manager will receive and process the vote messages. The questions topic and answer queue are created using the Windows Azure Developer Portal. Each instance of the voting application will create its own subscription in the questions topic when it starts, allowing the question messages to be broadcast to all subscribing voting applications.

    Data Contracts
    Two simple data contracts will be used to serialize the questions and votes as brokered messages. The code for these is shown below.

        [DataContract]
        public class Question
        {
            [DataMember]
            public string QuestionText { get; set; }
        }

    To keep the implementation of the voting functionality simple and focus on the pattern implementation, the users can only vote yes or no to the questions.

        [DataContract]
        public class Vote
        {
            [DataMember]
            public string QuestionText { get; set; }

            [DataMember]
            public bool IsYes { get; set; }
        }

    Poll Manager Application
    The poll manager application has been implemented as a simple WPF application; the user interface is shown below. A question can be entered in the text box, and sent to the topic by clicking the Add button. The topic and subscriptions used for broadcasting the messages are shown in a TreeView control. The questions that have been broadcast and the resulting votes are shown in a ListView control. When the application is started any existing subscriptions are cleared from the topic; clients are then created for the questions topic and votes queue, along with background workers for receiving and processing the vote messages, and updating the display of subscriptions.

        public MainWindow()
        {
            InitializeComponent();

            // Create a new results list and data bind it.
            Results = new ObservableCollection<Result>();
            lsvResults.ItemsSource = Results;

            // Create a token provider with the relevant credentials.
            TokenProvider credentials =
                TokenProvider.CreateSharedSecretTokenProvider
                (AccountDetails.Name, AccountDetails.Key);

            // Create a URI for the service bus.
            Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
                ("sb", AccountDetails.Namespace, string.Empty);

            // Clear out any old subscriptions.
            NamespaceManager = new NamespaceManager(serviceBusUri, credentials);
            IEnumerable<SubscriptionDescription> subs =
                NamespaceManager.GetSubscriptions(AccountDetails.ScatterGatherTopic);
            foreach (SubscriptionDescription sub in subs)
            {
                NamespaceManager.DeleteSubscription(sub.TopicPath, sub.Name);
            }

            // Create the MessagingFactory
            MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

            // Create the topic and queue clients.
            ScatterGatherTopicClient =
                factory.CreateTopicClient(AccountDetails.ScatterGatherTopic);
            ScatterGatherQueueClient =
                factory.CreateQueueClient(AccountDetails.ScatterGatherQueue);

            // Start the background worker threads.
            VotesBackgroundWorker = new BackgroundWorker();
            VotesBackgroundWorker.DoWork += new DoWorkEventHandler(ReceiveMessages);
            VotesBackgroundWorker.RunWorkerAsync();

            SubscriptionsBackgroundWorker = new BackgroundWorker();
            SubscriptionsBackgroundWorker.DoWork += new DoWorkEventHandler(UpdateSubscriptions);
            SubscriptionsBackgroundWorker.RunWorkerAsync();
        }

    When the poll manager user enters a question in the text box and clicks the Add button a question message is created and sent to the topic. This message will be broadcast to all the subscribing voting applications. An instance of the Result class is also created to keep track of the votes cast; this is then added to an observable collection named Results, which is data-bound to the ListView control.

        private void btnAddQuestion_Click(object sender, RoutedEventArgs e)
        {
            // Create a new result for recording votes.
            Result result = new Result()
            {
                Question = txtQuestion.Text
            };
            Results.Add(result);

            // Send the question to the topic
            Question question = new Question()
            {
                QuestionText = result.Question
            };
            BrokeredMessage msg = new BrokeredMessage(question);
            ScatterGatherTopicClient.Send(msg);

            txtQuestion.Text = "";
        }

    The Result class is implemented as follows.

        public class Result : INotifyPropertyChanged
        {
            public string Question { get; set; }

            private int m_YesVotes;
            private int m_NoVotes;

            public event PropertyChangedEventHandler PropertyChanged;

            public int YesVotes
            {
                get { return m_YesVotes; }
                set
                {
                    m_YesVotes = value;
                    NotifyPropertyChanged("YesVotes");
                }
            }

            public int NoVotes
            {
                get { return m_NoVotes; }
                set
                {
                    m_NoVotes = value;
                    NotifyPropertyChanged("NoVotes");
                }
            }

            private void NotifyPropertyChanged(string prop)
            {
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(prop));
                }
            }
        }

    The INotifyPropertyChanged interface is implemented so that changes to the number of yes and no votes will be updated in the ListView control. Receiving the vote messages from the voting applications is done asynchronously, using a background worker thread.

        // This runs on a background worker.
        private void ReceiveMessages(object sender, DoWorkEventArgs e)
        {
            while (true)
            {
                // Receive a vote message from the queue
                BrokeredMessage msg = ScatterGatherQueueClient.Receive();
                if (msg != null)
                {
                    // Deserialize the message.
                    Vote vote = msg.GetBody<Vote>();

                    // Update the results.
                    foreach (Result result in Results)
                    {
                        if (result.Question.Equals(vote.QuestionText))
                        {
                            if (vote.IsYes)
                            {
                                result.YesVotes++;
                            }
                            else
                            {
                                result.NoVotes++;
                            }
                            break;
                        }
                    }

                    // Mark the message as complete.
                    msg.Complete();
                }
            }
        }

    When a vote message is received, the result that matches the vote question is updated with the vote from the user. The message is then marked as complete. A second background thread is used to update the display of subscriptions in the TreeView, with a dispatcher used to update the user interface.

        // This runs on a background worker.
        private void UpdateSubscriptions(object sender, DoWorkEventArgs e)
        {
            while (true)
            {
                // Get a list of subscriptions.
                IEnumerable<SubscriptionDescription> subscriptions =
                    NamespaceManager.GetSubscriptions(AccountDetails.ScatterGatherTopic);

                // Update the user interface.
                SimpleDelegate setQuestion = delegate()
                {
                    trvSubscriptions.Items.Clear();
                    TreeViewItem topicItem = new TreeViewItem()
                    {
                        Header = AccountDetails.ScatterGatherTopic
                    };

                    foreach (SubscriptionDescription subscription in subscriptions)
                    {
                        TreeViewItem subscriptionItem = new TreeViewItem()
                        {
                            Header = subscription.Name
                        };
                        topicItem.Items.Add(subscriptionItem);
                    }
                    trvSubscriptions.Items.Add(topicItem);

                    topicItem.ExpandSubtree();
                };
                this.Dispatcher.BeginInvoke(DispatcherPriority.Send, setQuestion);

                Thread.Sleep(3000);
            }
        }

    Voting Application
    The voting application is implemented as another WPF application. This one is more basic, and allows the user to vote “Yes” or “No” for the questions sent by the poll manager application. The user interface for that application is shown below. When an instance of the voting application is created it will create a subscription in the questions topic using a GUID as the subscription name. The application can then receive copies of every question message that is sent to the topic. Clients for the new subscription and the votes queue are created, along with a background worker to receive the question messages. The voting application is set to receiving mode, meaning it is ready to receive a question message from the subscription.

        public MainWindow()
        {
            InitializeComponent();

            // Set the mode to receiving.
            IsReceiving = true;

            // Create a token provider with the relevant credentials.
            TokenProvider credentials =
                TokenProvider.CreateSharedSecretTokenProvider
                (AccountDetails.Name, AccountDetails.Key);

            // Create a URI for the service bus.
            Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
                ("sb", AccountDetails.Namespace, string.Empty);

            // Create the MessagingFactory
            MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

            // Create a subscription for this instance
            NamespaceManager mgr = new NamespaceManager(serviceBusUri, credentials);
            string subscriptionName = Guid.NewGuid().ToString();
            mgr.CreateSubscription(AccountDetails.ScatterGatherTopic, subscriptionName);

            // Create the subscription and queue clients.
            ScatterGatherSubscriptionClient = factory.CreateSubscriptionClient
                (AccountDetails.ScatterGatherTopic, subscriptionName);
            ScatterGatherQueueClient =
                factory.CreateQueueClient(AccountDetails.ScatterGatherQueue);

            // Start the background worker thread.
            BackgroundWorker = new BackgroundWorker();
            BackgroundWorker.DoWork += new DoWorkEventHandler(ReceiveMessages);
            BackgroundWorker.RunWorkerAsync();
        }

    I took the inspiration for creating the subscriptions in the voting application from the chat application that uses topics and subscriptions blogged by Ovais Akhter here. The method that receives the question messages runs on a background thread. If the application is in receive mode, a question message will be received from the subscription, the question will be displayed in the user interface, the voting buttons enabled, and IsReceiving set to false to prevent more questions from being received before the current one is answered.

        // This runs on a background worker.
        private void ReceiveMessages(object sender, DoWorkEventArgs e)
        {
            while (true)
            {
                if (IsReceiving)
                {
                    // Receive a question message from the topic.
                    BrokeredMessage msg = ScatterGatherSubscriptionClient.Receive();
                    if (msg != null)
                    {
                        // Deserialize the message.
                        Question question = msg.GetBody<Question>();

                        // Update the user interface.
                        SimpleDelegate setQuestion = delegate()
                        {
                            lblQuestion.Content = question.QuestionText;
                            btnYes.IsEnabled = true;
                            btnNo.IsEnabled = true;
                        };
                        this.Dispatcher.BeginInvoke(DispatcherPriority.Send, setQuestion);
                        IsReceiving = false;

                        // Mark the message as complete.
                        msg.Complete();
                    }
                }
                else
                {
                    Thread.Sleep(1000);
                }
            }
        }

    When the user clicks on the Yes or No button, the btnVote_Click method is called. This will create a new Vote data contract with the appropriate question and answer and send the message to the poll manager application using the votes queue. The user voting buttons are then disabled, the question text cleared, and the IsReceiving flag set to true to allow a new message to be received.

        private void btnVote_Click(object sender, RoutedEventArgs e)
        {
            // Create a new vote.
            Vote vote = new Vote()
            {
                QuestionText = (string)lblQuestion.Content,
                IsYes = ((sender as Button).Content as string).Equals("Yes")
            };

            // Send the vote message.
            BrokeredMessage msg = new BrokeredMessage(vote);
            ScatterGatherQueueClient.Send(msg);

            // Update the user interface.
            lblQuestion.Content = "";
            btnYes.IsEnabled = false;
            btnNo.IsEnabled = false;
            IsReceiving = true;
        }

    Testing the Application
    In order to test the application, an instance of the poll manager application is started; the user interface is shown below. As no instances of the voting application have been created there are no subscriptions present in the topic. When an instance of the voting application is created the subscription will be displayed in the poll manager. Now that a voting application is subscribing, a question can be sent from the poll manager application. When the message is sent to the topic, the voting application will receive the message and display the question. The voter can then answer the question by clicking on the appropriate button. The results of the vote are updated in the poll manager application. When two more instances of the voting application are created, the poll manager will display the new subscriptions. More questions can then be broadcast to the voting applications. As the question messages are queued up in the subscription for each voting application, the users can answer the questions in their own time. The vote messages will be received by the poll manager application and aggregated to display the results. The screenshots of the applications part way through voting are shown below. The messages for each voting application are queued up in sequence on the voting application subscriptions, allowing the questions to be answered at different speeds by the voters.

    Read the article

  • How to make TWebBrowser ignore accelerator chars of other controls?

    - by douglaslise
    I have a TWebBrowser placed on a form with designMode enabled. Below the browser I have a close button with its Caption set to 'Clos&e'. When I am editing the contents of a document inside the WebBrowser and I press the key E, the Close button is clicked. It appears that the form is treating TWebBrowser like other controls that don't handle keys (e.g. TButton). How can I solve this? Thanks in advance.

    Read the article

  • RegEx to extract all HTML tag attributes including inline JavaScript

    - by Mike
    I found this useful regex code here while looking to parse HTML tag attributes: (\S+)=["']?((?:.(?!["']?\s+(?:\S+)=|[>"']))+.)["']? It works great, but it's missing one key element that I need. Some attributes are event triggers that have inline Javascript code in them like this: onclick="doSomething(this, 'foo', 'bar');return false;" Or: onclick='doSomething(this, "foo", "bar");return false;' I can't figure out how to get the original expression to not count the quotes from the JS (single or double) while it's nested inside the set of quotes that contain the attribute's value.
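
    A sketch of the usual fix: anchor each value to the quote character that opened it with a backreference, so the other quote type can appear freely inside (with the usual caveat that regex-based HTML parsing is fragile):

        using System;
        using System.Text.RegularExpressions;

        class AttributeDemo
        {
            static void Main()
            {
                string tag = "<a href='x.html' onclick=\"doSomething(this, 'foo', 'bar');return false;\">";
                // (\S+?)  attribute name
                // (["'])  opening quote, captured...
                // (.*?)   ...lazy value...
                // \2      ...closed by the SAME quote character
                var re = new Regex(@"(\S+?)\s*=\s*([""'])(.*?)\2");
                foreach (Match m in re.Matches(tag))
                    Console.WriteLine("{0} = {1}", m.Groups[1].Value, m.Groups[3].Value);
            }
        }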

    Read the article

  • Get directory contents in date modified order

    - by nevan
    Is there a method to get the contents of a folder in a particular order? I'd like an array of file attribute dictionaries (or just file names) ordered by date modified. Right now, I'm doing it this way:
    - get an array with the file names
    - get the attributes of each file
    - store the file's path and modified date in a dictionary with the date as a key
    Next I have to output the dictionary in date order, but I wondered if there's an easier way? If not, is there a code snippet somewhere which will do this for me? Thanks.
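
    A sketch using modern Foundation (written in Swift, which postdates the original question, but the same API shape exists in Objective-C): fetch each entry's modification date up front, then sort by it.

        import Foundation

        func contentsByModificationDate(of folder: URL) throws -> [URL] {
            let urls = try FileManager.default.contentsOfDirectory(
                at: folder,
                includingPropertiesForKeys: [.contentModificationDateKey],
                options: [])
            // Read the prefetched modification date, defaulting when unavailable.
            func modDate(_ url: URL) -> Date {
                let values = try? url.resourceValues(forKeys: [.contentModificationDateKey])
                return values?.contentModificationDate ?? .distantPast
            }
            return urls.sorted { modDate($0) < modDate($1) }
        }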

    Read the article

  • New Features and Changes in OIM11gR2

    - by Abhishek Tripathi
    WEB CONSOLES in OIM 11gR2
    ** In 11gR1 there were 3 admin web consoles accessible:
    · Self Service Console
    · Administration Console and
    · Advanced Administration Console
    Whereas in OIM 11gR2, the Self Service and Administration Consoles have now been combined and are called the Identity Self Service Console (http://host:port/identity). This console has 3 features in it: managing your own profile (My Profile), managing requests such as requesting app instances and approving requests (Requests), and general administration tasks of creating/managing users, roles, organizations, attestation etc. (Administration).
    ** In OIM 11gR2 a new console, sysadmin, has been added for Administrators, which includes some of the Design Console functions apart from general administration features (http://host:port/sysadmin).

    Application Instances
    An application instance is the object that is to be provisioned to a user. Application instances are checked out in the catalog, and users can request application instances via the catalog.
    · In OIM 11gR2 resources and entitlements are bundled in an application instance, which users can select and request from the catalog.
    · An application instance is a combination of an IT Resource and an RO. So, you cannot create another app instance with the same RO & IT Resource if one already exists for some other app instance. One of these (RO or IT Resource) must have a different name.
    · If you want users of a particular Organization to be able to request application instances through the catalog, then the app instances must be attached to that particular Organization.
    · An application instance can be associated with multiple organizations.
    · An application instance can also have entitlements associated with it. An entitlement can include Roles/Groups or Responsibility.
    · Application instances are published to the catalog by a scheduled task, "Catalog Synchronization Job".
    · An application instance can have a child/parent application instance, where the child application instance inherits all attributes of the parent application instance.

    Important point to remember with application instances: If you delete an application instance in OIM 11gR2 and create a new one with the same name, OIM will not allow you to do so. It throws an error saying an application instance already exists with the same Resource Object and IT Resource. This is because there is still some reference that is not removed in OIM for the deleted application instance. So to completely delete your application instance from OIM, you must:
    1. Delete the app instance from the sysadmin console.
    2. Run the App Instance Post Delete Processing Job in Revoke/Delete mode.
    3. Run the Catalog Synchronization Job.
    Once done, you should be able to create a new app instance with the previous RO & IT Resource name.

    Catalog
    The catalog allows users to request Roles, Application Instances, and Entitlements in an application.
    Catalog items – Roles, Application Instances and Entitlements that can be requested via the catalog are called catalog items.
    Detailed information (attributes of a catalog item):
    · Category – each catalog item is associated with one and only one category. Catalog administrators can provide a value for a catalog item.
    · Tags – search keywords helpful in searching the catalog. When users search the catalog, the search is performed against the tags. To define a tag, go to Catalog -> search the resource -> select the resource -> update the tag field with a custom search keyword.
    Tags are of three types:
    a) Auto-generated tags: the catalog synchronization process auto-tags the catalog item using the Item Type, Item Name and Item Display Name.
    b) User-defined tags: additional keywords entered by the catalog administrator.
    c) Arbitrary tags: if, while defining a metadata attribute, the user has marked that metadata as searchable, it will also be part of the tags.

    Sandbox
    A sandbox is a new feature introduced in OIM 11gR2. It serves as a temporary development environment for UI customizations, so that they don’t affect other users before they are published and linked to the existing OIM UI. All UI customizations should be done inside a sandbox; this ensures that your changes/modifications don’t affect other users until you have finalized the changes and the customization is complete. Once UI customization is completed, the sandbox must be published for the customizations to be merged into the existing UI and made available to other users. Creating and activating a sandbox is mandatory for customizing the UI. Without an active sandbox, OIM does not allow you to customize any page.
    a) Before you perform any activity in OIM (like create/modify forms, custom attributes, creating application instances, adding roles/attributes to the catalog) you must create a sandbox and activate it.
    b) One can create multiple sandboxes in OIM but only one sandbox can be active at any given time.
    c) You can export/import a sandbox to move the changes from one environment to the other.

    Creating a sandbox: To create a sandbox, log in to Identity Manager Self Service (/identity) or System Administration (/sysadmin), click the "Sandboxes" link at the top right, and then click Create Sandbox.

    Publishing a sandbox: Before you publish a sandbox, it is recommended to back up MDS. Use /em to back up MDS by following the steps below.

    Creating an MDS backup:
    1. Login to Oracle Enterprise Manager as the administrator.
    2. On the landing page, click oracle.iam.console.identity.self-service.ear(V2.0).
    3. From the Application Deployment menu at the top, select MDS configuration.
    4. Under Export, select the "Export metadata documents to an archive on the machine where this web browser is running" option, and then click Export. All the metadata is exported in a ZIP file.

    Creating Password Policy through Admin Console: In 11gR1 and previous versions, password policies could be created & applied via the OIM Design Console only. From OIM 11gR2 onwards, password policies can be created and assigned using the Admin Console as well.

    Read the article

  • Floating point arithmetic is too reliable.

    - by mcoolbeth
    I understand that floating point arithmetic as performed in modern computer systems is not always consistent with real arithmetic. I am trying to contrive a small C# program to demonstrate this. For example:

        static void Main(string[] args)
        {
            double x = 0, y = 0;
            x += 20013.8;
            x += 20012.7;
            y += 10016.4;
            y += 30010.1;
            Console.WriteLine("Result: " + x + " " + y + " " + (x == y));
            Console.Write("Press any key to continue . . . ");
            Console.ReadKey(true);
        }

    However, in this case, x and y are equal in the end. Is it possible for me to demonstrate the inconsistency of floating point arithmetic using a program of similar complexity, and without using any really crazy numbers? I would like, if possible, to avoid mathematically correct values that go more than a few places beyond the decimal point.
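
    A sketch of two small demonstrations that reliably misbehave without exotic values: 0.1 + 0.2 is not 0.3, and addition is not associative.

        using System;

        class FloatDemo
        {
            static void Main()
            {
                // 0.1, 0.2 and 0.3 have no exact binary representation.
                double a = 0.1 + 0.2;
                Console.WriteLine("{0:R} == 0.3 ? {1}", a, a == 0.3);    // False

                // Floating-point addition is not associative.
                double x = (0.1 + 0.2) + 0.3;
                double y = 0.1 + (0.2 + 0.3);
                Console.WriteLine("{0:R} vs {1:R} : {2}", x, y, x == y); // False
            }
        }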

    Read the article
