Search Results

Search found 2187 results on 88 pages for 'combined fragment'.

Page 75 of 88

  • NSXMLDocument objectByApplyingXSLT with XSL Include

    - by Kristof
    I'm having some trouble with XSL-processing when there are stylesheets that include other stylesheets relatively. (the XML-files may be irrelevant but are included for completeness - code is at the bottom). Given the XML-file: <?xml version="1.0" ?> <famous-persons> <persons category="medicine"> <person> <firstname> Edward </firstname> <name> Jenner </name> </person> <person> <firstname> Gertrude </firstname> <name> Elion </name> </person> </persons> </famous-persons> and the XSL-file: <?xml version="1.0" ?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xi="http://www.w3.org/2001/XInclude" version="1.0"> <xsl:template match="/"> <html><head><title>Sorting example</title></head><body> <xsl:apply-templates select="famous-persons/persons"> <xsl:sort select="@category" /> </xsl:apply-templates> </body></html> </xsl:template> <xsl:include href="included.xsl" /> </xsl:stylesheet> referencing this stylesheet in included.xsl: <?xml version="1.0"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xi="http://www.w3.org/2001/XInclude" version="1.0"> <xsl:template match="persons"> <h2><xsl:value-of select="@category" /></h2> <ul>Irrelevant</ul> </xsl:template> </xsl:stylesheet> how can I make it that the following code fragment: NSError *lError = nil; NSXMLDocument *lDocument = [ [ NSXMLDocument alloc ] initWithContentsOfURL: [ NSURL URLWithString: @"file:///pathto/data.xml" ] options: 0 error: &lError ]; NSXMLDocument *lResult = [ lDocument objectByApplyingXSLTAtURL: [ NSURL URLWithString: @"file:///pathto/style.xsl" ] arguments: nil error: nil ]; does not give me the error: I/O warning : failed to load external entity "included.xsl" compilation error: element include xsl:include : unable to load included.xsl I have been trying all sorts of options. Also loading XML documents with NSXMLDocumentXInclude beforehand does not seem to help. Is there any way to make the XSL processing so that a stylesheet can include another stylesheet in its local path?

    Read the article

  • set the focus on the scroller when button is clicked

    - by liz
    Hi, I have a script: <script type="text/javascript"> window.addEvent('domready', function(){ var totIncrement = 0; var increment = 560; var maxRightIncrement = increment*(-6); var fx = new Fx.Style('slider-list', 'margin-left', { duration: 1000, transition: Fx.Transitions.Back.easeInOut, wait: true }); //------------------------------------- // EVENTS for the button "previous" $('previous').addEvents({ 'click' : function(event){ if(totIncrement<0){ totIncrement = totIncrement+increment; fx.stop() fx.start(totIncrement); } } }); //------------------------------------- // EVENTS for the button "next" $('next').addEvents({ 'click' : function(event){ if(totIncrement>maxRightIncrement){ totIncrement = totIncrement-increment; fx.stop() fx.start(totIncrement); } } }) }); </script> In MooTools v1.1 it makes a scroller function at the bottom of my HTML page, but when I click the next button the page's focus moves to the top of the page. How do I keep it on the scroller? This is the HTML fragment: <h3>Our Pastas</h3> <div id="slider-buttons"> <a href="#" id="previous">Previous</a> | <a href="#" id="next">Next</a> </div> <div id="slider-stage"> <ul id="slider-list"> <li class="list_item"> <div id="thumbnail"><a href="xxx/product-catalog/pasta/long-pasta-in-brown-bags/bucatini"><img src="xxx/images/stories/products/_thumb1/bucatini.gif"></a></div><h4><a href="xxx/product-catalog/pasta/long-pasta-in-brown-bags/bucatini">Rustichella d'Abruzzo Bucatini</a></h4> </li> <li class="list_item"> <div id="thumbnail"><a href="xxx/product-catalog/pasta/pasta-in-trays/calamarata"><img src="xxx/images/stories/products/_thumb1/calamarata.jpg"></a></div><h4><a href="xxx/product-catalog/pasta/pasta-in-trays/calamarata">Rustichella d'Abruzzo Calamarata</a></h4> </li> <li class="list_item"> <div id="thumbnail"><a href="xxx/product-catalog/pasta/pasta-in-trays/cannolicchi"><img src="xxx/images/stories/products/_thumb1/cannolicchi.jpg"></a></div><h4><a href="xxx/product-catalog/pasta/pasta-in-trays/cannolicchi">Rustichella d'Abruzzo Cannolicchi</a></h4> </li> </ul></div>

    Read the article

  • Get Ajax4JSF (a4j component) running on Glassfish

    - by yournamehere
    I'm trying to build a JEE6 application on Glassfish V3, using JSF 2.0, Weld, JPA2 and Maven. Now I'm having trouble getting a simple <a4j:support> running. This is the fragment of my little example. When typing something into the inputText, the outputText should automatically be updated. But nothing happens (not in Firefox, not in IE8). <ui:composition xmlns:a4j="https://ajax4jsf.dev.java.net/ajax" (...)> <h:inputText value="#{personHome.message}"> <a4j:support event="onkeyup" reRender="repeater"/> </h:inputText> <h:outputText id="repeater" value="#{personHome.message}"/> Besides the fact that my example doesn't work, my problem is also that I don't really understand whether or not I need a JSF implementation (MyFaces, RichFaces, PrimeFaces etc.) to use a4j elements. Is it "built-in" in Glassfish? Until now, I only have the following dependencies I think I need for JSF: <dependency> <groupId>com.sun.faces</groupId> <artifactId>jsf-api</artifactId> <version>2.0.2</version> </dependency> <dependency> <groupId>com.sun.faces</groupId> <artifactId>jsf-impl</artifactId> <version>2.0.2</version> </dependency> <dependency> <groupId>javax</groupId> <artifactId>javaee-api</artifactId> <version>6.0</version> <scope>provided</scope> </dependency> So... what do I have to do to get Ajax4JSF running on a simple JEE app on Glassfish? Any help is highly appreciated!
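
    For reference, the EL expression #{personHome.message} in the page fragment above assumes a managed bean named personHome that exposes a message property. The bean itself is not shown in the question, so the following is only a minimal sketch of what such a bean could look like (JSF 2.0 annotations; a CDI @Named bean would work equally well with Weld):

        import java.io.Serializable;
        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.ViewScoped;

        // Hypothetical backing bean for #{personHome.message}; only the bean
        // name and the "message" property are taken from the question.
        @ManagedBean(name = "personHome")
        @ViewScoped
        public class PersonHome implements Serializable {

            private String message;

            public String getMessage() {
                return message;
            }

            public void setMessage(String message) {
                this.message = message;
            }
        }

    As a side note, JSF 2.0 itself ships an <f:ajax> tag (with event and render attributes) that covers this kind of partial re-render without any additional component library.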

    Read the article

  • Why does this JSON.parse code not work?

    - by SuZi
    I am trying to pass json encoded values from a php script to a, GnuBookTest.js, javascript file that initiates a Bookreader object and use the values i have passed in via the variable i named "result". The php script is sending the values like: <div id="bookreader"> <div id="BookReader" style="left:10px; right:10px; top:30px; bottom:30px;">x</div> <script type="text/javascript">var result = {"istack":"zi94sm65\/BUCY\/BUCY200707170530PM","leafCount":"14","wArr":"[893,893,893,893,893,893,893,893,893,893,893,893,893,893]","hArr":"[1155,1155,1155,1155,1155,1155,1155,1155,1155,1155,1155,1155,1155,1155]","leafArr":"[0,1,2,3,4,5,6,7,8,9,10,11,12,13]","sd":"[\"RIGHT\",\"LEFT\",\"RIGHT\",\"LEFT\",\"RIGHT\",\"LEFT\",\"RIGHT\",\"LEFT\",\"RIGHT\",\"LEFT\",\"RIGHT\",\"LEFT\",\"RIGHT\",\"LEFT\"]"}</script> <script type="text/javascript" src="http://localhost:8080/application/js/GnuBookTest.js"></script> </div> </div> and in the GnuBookTest.js file i am trying to use the values like: br = new BookReader(); // Return the width of a given page. br.getPageWidth = function(index) { return this.pageW[index]; } // Return the height of a given page. br.getPageHeight = function(index) { return this.pageH[index]; } br.pageW = JSON.parse(result.wArr); br.pageH = JSON.parse(result.hArr); br.leafMap = JSON.parse(result.leafArr); //istack is an url fragment for location of image files var istack = result.istack; . . . Using JSON.parse as i have written it above loads the Bookreader and uses my values correctly in a few web-browsers: Firefox, IE8, and desktop-Safari; but does not work at all in mac-Chrome, mobile-Safari, plus older versions of IE. Mobile safari keeps giving me a reference error msg: can't find variable: JSON. The other browsers just do not load the Bookreader and show the "x" instead, like they did not get the values from the php script. Where is the problem?

    Read the article

  • Android spinner inside a custom control - OnItemSelectedListener does not trigger

    - by Idan
    I am writing a custom control that extends LinearLayout. Inside that control I am using a spinner to let the user select an item from a list. The problem I have is that the OnItemSelectedListener event does not fire. When moving the same code to an Activity/Fragment all is working just fine. I have followed some answers that were given to others asking about the same issue, and nothing helped; the event still does not fire. This is my code after I followed the answers that suggested putting the spinner inside my layout XML instead of creating it in code. I am getting the same result when I try to just "new Spinner(ctx)"... layout XML: <Spinner android:id="@+id/accSpinner" android:layout_width="0dip" android:layout_height="0dip" /> Initialization function of the control (called from the control constructor): private void init() { LayoutInflater layoutInflater = LayoutInflater.from(mContext); mAccountBoxView = layoutInflater.inflate(R.layout.control_accountselector, null); mTxtAccount = (TextView)mAccountBoxView.findViewById(R.id.txtAccount); mSpinner = (Spinner)mAccountBoxView.findViewById(R.id.accSpinner); mAccountBoxView.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { mSpinner.performClick(); } }); setSpinner(); addView(mAccountBoxView); } private void setSpinner() { ArrayAdapter<String> dataAdapter = new ArrayAdapter<String>(mContext, android.R.layout.simple_spinner_item, mItems); dataAdapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); mSpinner.setAdapter(dataAdapter); mSpinner.setOnItemSelectedListener(new OnItemSelectedListener() { @Override public void onItemSelected(AdapterView<?> parent, View view, int position, long id) { String selectedItem = mItems.get(position); handleSelectedItem(selectedItem); } @Override public void onNothingSelected(AdapterView<?> parent) { } }); } The spinner opens just fine when I touch my control and the list of items is there as it should be. When I click an item the spinner closes, but I never get to onItemSelected nor onNothingSelected. Any ideas?
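
    For comparison, the plain Activity variant that the question says does work would, under the usual pattern, look roughly like the sketch below (resource names such as R.layout.main are placeholders, not taken from the question):

        import java.util.Arrays;
        import java.util.List;

        import android.app.Activity;
        import android.os.Bundle;
        import android.view.View;
        import android.widget.AdapterView;
        import android.widget.ArrayAdapter;
        import android.widget.Spinner;

        public class SpinnerDemoActivity extends Activity {

            private final List<String> items = Arrays.asList("Account A", "Account B");

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);   // placeholder layout containing the Spinner

                Spinner spinner = (Spinner) findViewById(R.id.accSpinner);
                ArrayAdapter<String> adapter = new ArrayAdapter<String>(
                        this, android.R.layout.simple_spinner_item, items);
                adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
                spinner.setAdapter(adapter);

                // Same listener wiring as in the custom control above.
                spinner.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {
                    @Override
                    public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
                        String selected = items.get(position);
                        // handle the selection here
                    }

                    @Override
                    public void onNothingSelected(AdapterView<?> parent) {
                    }
                });
            }
        }

    The listener wiring itself is identical in both cases, which suggests the difference lies in how the custom view is inflated and attached rather than in the listener code.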

    Read the article

  • XPath query returning 'false' in SimpleXML

    - by Drew
    Hi all, I have an xml fragment as such: <meta_tree type="root"> <meta_data> <meta_cat>Content Provider</meta_cat> <data>Mammoth</data> </meta_data> <meta_data> <meta_cat>Genre</meta_cat> <data>Games</data> </meta_data> <meta_data> <meta_cat>Channel Name</meta_cat> <data>Games Trailers</data> </meta_data> <meta_data> <meta_cat>Collection</meta_cat> <data>Strategy</data> </meta_data> <meta_data> <meta_cat>Custom 1</meta_cat> <data>PC</data> </meta_data> <meta_data> <meta_cat>DRM Protected</meta_cat> <data>N</data> </meta_data> <meta_data> <meta_cat>Aspect Ratio</meta_cat> <data>16:9</data> </meta_data> <meta_data> <meta_cat>Streaming Type</meta_cat> <data>VOD</data> </meta_data> </meta_tree> which I garnered from the snippet of $meta_tree->asXML(). So given that, I need to have an xpath query for each element, so I'm using: $meta_tree->xpath("/meta_data[meta_cat='Content Provider']"); but this returns false. I have tried: "/meta_tree/meta_data[meta_cat='Content Provider']" "//meta_data[meta_cat='Content Provider']" I've been using AquaPath, which validates my query, so I'm not sure what I'm doing wrong. Anyone got any ideas? DJS.
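
    As an aside on the XPath expressions themselves (illustrated here with Java's javax.xml.xpath rather than PHP's SimpleXML): an absolute path such as /meta_data[...] can only match if meta_data is the document element, and in the fragment above the document element is meta_tree, so the rooted form /meta_tree/meta_data[...] is the one that selects a node. This does not explain why the //meta_data form also failed in SimpleXML, but it shows the path semantics:

        import java.io.ByteArrayInputStream;

        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathFactory;

        import org.w3c.dom.Document;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;

        public class XPathDemo {
            public static void main(String[] args) throws Exception {
                // Shortened copy of the fragment from the question.
                String xml = "<meta_tree type=\"root\">"
                        + "<meta_data><meta_cat>Content Provider</meta_cat><data>Mammoth</data></meta_data>"
                        + "<meta_data><meta_cat>Genre</meta_cat><data>Games</data></meta_data>"
                        + "</meta_tree>";

                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                        .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
                XPath xpath = XPathFactory.newInstance().newXPath();

                // Matches nothing: the document element is meta_tree, not meta_data.
                NodeList none = (NodeList) xpath.evaluate(
                        "/meta_data[meta_cat='Content Provider']", doc, XPathConstants.NODESET);

                // Matches the first meta_data element.
                Node hit = (Node) xpath.evaluate(
                        "/meta_tree/meta_data[meta_cat='Content Provider']", doc, XPathConstants.NODE);

                System.out.println("/meta_data matches: " + none.getLength());              // 0
                System.out.println("/meta_tree/meta_data found a node: " + (hit != null));  // true
            }
        }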

    Read the article

  • How to obtain a pointer out of a C++ vtable?

    - by Josh Haberman
    Say you have a C++ class like: class Foo { public: virtual ~Foo() {} virtual void DoSomething() = 0; }; The C++ compiler translates a call into a vtable lookup: Foo* foo; // Translated by C++ to: // foo->vtable->DoSomething(foo); foo->DoSomething(); Suppose I was writing a JIT compiler and I wanted to obtain the address of the DoSomething() function for a particular instance of class Foo, so I can generate code that jumps to it directly instead of doing a table lookup and an indirect branch. My questions are: Is there any standard C++ way to do this (I'm almost sure the answer is no, but wanted to ask for the sake of completeness). Is there any remotely compiler-independent way of doing this, like a library someone has implemented that provides an API for accessing a vtable? I'm open to complete hacks, if they will work. For example, if I created my own derived class and could determine the address of its DoSomething method, I could assume that the vtable is the first (hidden) member of Foo and search through its vtable until I find my pointer value. However, I don't know a way of getting this address: if I write &DerivedFoo::DoSomething I get a pointer-to-member, which is something totally different. Maybe I could turn the pointer-to-member into the vtable offset. When I compile the following: class Foo { public: virtual ~Foo() {} virtual void DoSomething() = 0; }; void foo(Foo *f, void (Foo::*member)()) { (f->*member)(); } On GCC/x86-64, I get this assembly output: Disassembly of section .text: 0000000000000000 <_Z3fooP3FooMS_FvvE>: 0: 40 f6 c6 01 test sil,0x1 4: 48 89 74 24 e8 mov QWORD PTR [rsp-0x18],rsi 9: 48 89 54 24 f0 mov QWORD PTR [rsp-0x10],rdx e: 74 10 je 20 <_Z3fooP3FooMS_FvvE+0x20> 10: 48 01 d7 add rdi,rdx 13: 48 8b 07 mov rax,QWORD PTR [rdi] 16: 48 8b 74 30 ff mov rsi,QWORD PTR [rax+rsi*1-0x1] 1b: ff e6 jmp rsi 1d: 0f 1f 00 nop DWORD PTR [rax] 20: 48 01 d7 add rdi,rdx 23: ff e6 jmp rsi I don't fully understand what's going on here, but if I could reverse-engineer this or use an ABI spec I could generate a fragment like the above for each separate platform, as a way of obtaining a pointer out of a vtable.

    Read the article

  • How can I stop Flash from changing indent when user Clicks on hyperlink in TextField?

    - by Paul Chernoch
    I have a TextField which I initialize by setting htmlText. The text has anchor tags (hyperlinks). When a user clicks on the hyperlink, the indentation of the second and subsequent lines in the paragraph changes. Why? How do I stop it? My HTML has an image at the beginning of the line, followed by the tag, followed by more text. To style the hyperlinks to look blue always and underlined when the mouse is over them, I do this: var css:StyleSheet = new StyleSheet(); css.parseCSS("a {color: #0000FF;} a:hover {text-decoration: underline;}"); stepText.styleSheet = css; stepText.htmlText = textToUse; stepText.visible = true; Here is a fragment of the HTML text (with newlines and extra whitespace added to improve readability - originally it was one long line): <textformat indent="-37" blockindent="37" > <img src="media/interface/level-1-bullets/solid-circle.png" align="left" hspace="8" vspace="1"/> American Dental Association. (n.d.). <i>Cleaning your teeth and gums (oral hygiene)</i>. Retrieved 11/24/08, from <a href="http://www.ada.org/public/topics/cleaning_faq.asp" target="_blank">http://www.ada.org/public/topics/cleaning_faq.asp </a> </textformat> <br/> As it turns out, the text field is of a width such that it wraps and the second line starts with "Retrieved 11/24/08". Clicking on the hyperlink causes this particular line to be indented. Subsequent paragraphs are not affected. ASIDE: The image is a list bullet about 37 pixels wide. (I used images instead of li tags because Flash does not allow nested lists, so I faked it using a series of images with varying amounts of whitespace to simulate three levels of indentation.) IDEA: I was thinking of changing all hyperlinks to use "event:" as the URL protocol, which causes a TextEvent.LINK event to be triggered instead of following the link. Then I would have to open the browser in a second call. I could use this event handler to set the HTML text to itself, which might clear the problem. (When I switch pages in my application and then come back to the page, everything is OKAY again.) PROBLEM: If I use the "event:" protocol and the user tries a right-mouse-button click, they will get an error, or so I am told. (See http://www.blog.lessrain.com/as3-texteventlink-and-contextmenu-incompatibilities/ ) I do not like this trade-off.

    Read the article

  • asp:Button is not calling server-side function

    - by Richard Neil Ilagan
    Hi guys, I know that there has been much discussion here about this topic, but none of the threads I got across helped me solve this problem. I'm hoping that mine is somewhat unique, and may actually merit a different solution. I'm instantiating an asp:Button inside a data-bound asp:GridView through template fields. Some of the buttons are supposed to call a server-side function, but for some weird reason, it doesn't. All the buttons do when you click them is fire a postback to the current page, doing nothing, effectively just reloading the page. Below is a fragment of the code: <asp:GridView ID="gv" runat="server" AutoGenerateColumns="false" CssClass="l2 submissions" ShowHeader="false"> <Columns> <asp:TemplateField> <ItemTemplate><asp:Panel ID="swatchpanel" CssClass='<%# Bind("status") %>' runat="server"></asp:Panel></ItemTemplate> <ItemStyle Width="50px" CssClass="sw" /> </asp:TemplateField> <asp:BoundField DataField="description" ReadOnly="true"> </asp:BoundField> <asp:BoundField DataField="owner" ReadOnly="true"> <ItemStyle Font-Italic="true" /> </asp:BoundField> <asp:BoundField DataField="last-modified" ReadOnly="true"> <ItemStyle Width="100px" /> </asp:BoundField> <asp:TemplateField> <ItemTemplate> <asp:Button ID="viewBtn" cssclass='<%# Bind("sid") %>' runat="server" Text="View" OnClick="viewBtnClick" /> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> The viewBtn above should call the viewBtnClick() function on server-side. I do have that function defined, along with a proper signature (object,EventArgs). One thing that may be of note is that this code is actually inside an ASCX, which is loaded in another ASCX, finally loaded into an ASPX. Any help or insight into the matter will be SO appreciated. Thanks! (oh, and please don't mind my trashy HTML/CSS semantics - this is still in a very,very early stage :p)

    Read the article

  • stxxl Assertion `it != root_node_.end()' failed

    - by Fabrizio Silvestri
    I am receiving this assertion failed error when trying to insert an element in a stxxl map. The entire assertion error is the following: resCache: /usr/include/stxxl/bits/containers/btree/btree.h:470: std::pair , bool stxxl::btree::btree::insert(const value_type&) [with KeyType = e_my_key, DataType = unsigned int, CompareType = comp_type, unsigned int RawNodeSize = 16384u, unsigned int RawLeafSize = 131072u, PDAllocStrategy = stxxl::SR, stxxl::btree::btree::value_type = std::pair]: Assertion `it != root_node_.end()' failed. Aborted Any idea? Edit: Here's the code fragment void request_handler::handle_request(my_key& query, reply& rep) { c_++; strip(query.content); std::cout << "Received query " << query.content << " by thread " << boost::this_thread::get_id() << ". It is number " << c_ << "\n"; strcpy(element.first.content, query.content); element.second = c_; testcache_.insert(element); STXXL_MSG("Records in map: " << testcache_.size()); } Edit2 here's more details (I omit constants, e.g. MAX_QUERY_LEN) struct comp_type : std::binary_function<my_key, my_key, bool> { bool operator () (const my_key & a, const my_key & b) const { return strncmp(a.content, b.content, MAX_QUERY_LEN) < 0; } static my_key max_value() { return max_key; } static my_key min_value() { return min_key; } }; typedef stxxl::map<my_key, my_data, comp_type> cacheType; cacheType testcache_; request_handler::request_handler() :testcache_(NODE_CACHE_SIZE, LEAF_CACHE_SIZE) { c_ = 0; memset(max_key.content, (std::numeric_limits<unsigned char>::max)(), MAX_QUERY_LEN); memset(min_key.content, (std::numeric_limits<unsigned char>::min)(), MAX_QUERY_LEN); testcache_.enable_prefetching(); STXXL_MSG("Records in map: " << testcache_.size()); }

    Read the article

  • How to reconcile my support of open-source software and need to feed and house myself?

    - by Guzba
    I have a bit of a dilemma and wanted to get some other developers' opinions on it and maybe some guidance. So I have created a 2D game for Android from the ground up, learning and refactoring as I went along. I did it for the experience, and have been proud of the results. I released it for free as ad-supported with AdMob, not really expecting much out of it, but curious to see what would happen. It's been a few months now since release, and it has become very popular (250k downloads!). Additionally, the ad revenue is great and is driving me to make more good games and even allowing me to work less so that I can focus on my own works. When I originally began working on the game, I was pretty new to concurrency and completely new to Android (had Java experience though). The standard advice I got for starting an Android game was to look at the sample games from Google (Snake, Lunar Lander, ...) so I did. In my opinion, these Android sample games from Google are decent to see in general what your code should look like, but not actually all that great to follow. This is because some of their features don't work (saving game state), and the concurrency is both unexplained and cumbersome (there is no real separation between the game thread and the UI thread since they sync lock each other out all the time and the UI thread runs game thread code). This made it difficult for me as a newbie to concurrency to understand how it was organized and what was really running what code. Here is my dilemma: After spending these past few months slowly improving my code, I feel that it could be very beneficial to developers who are in the same position that I was in when I started. (Since it is not a complex game, but clearly coded in my opinion.) I want to open up the source so that others can learn from it, but I don't want to lose my ad revenue stream, which, if I did open the source, I fear I would lose when people released versions with the ads stripped, or minor tweaks that would fragment my audience, etc. I am a CS undergrad major in college and this money is giving me the freedom to work less at my summer job, thus giving me the time and will to work on more of my own projects and improve my own skills while still paying the bills. So what do I do? Open the source at personal sacrifice for the greater good, or keep it closed and be a sort of hypocritical supporter of open source?

    Read the article

  • Webservice for uploading data: security considerations

    - by Philip Daubmeier
    Hi everyone! I'm not sure about what authentication method I should use for my webservice. I've searched on SO, and found nothing that helped me. Preliminary: I'm building an application that uploads data from a local database to a server (running my webservice), where all records are merged and stored in a central database. I am currently binary-serializing a DataTable that holds a small fragment of the local database, where all uninteresting stuff is already filtered out. The byte[] (serialized DataTable), together with the user ID and a hash of the user's password, is then uploaded to the webservice via SOAP. The application together with the webservice already works exactly as intended. The Problem: The issue I am thinking about is: what if someone just sniffs the network traffic and 'steals' the user's ID and password hash to send his own SOAP message with modified data that corrupts my database? Options: The approaches to solving that problem I have already thought of are: Using SSL + certificates for establishing the connection: I don't really want to use SSL; I would prefer a simpler solution. After all, every piece of information that is transferred to the webservice can be seen on the website later on. What I want to say is: there is no secret/financial/business-critical information that has to be hidden. I think SSL would be sort of overkill for that task. Encrypting the byte[]: I think that would be a performance killer, considering that the goal of the exercise was simply to authenticate the user. Hashing the user's password together with the data: I kind of like the idea: creating a checksum from the data, concatenating that checksum with the password hash and hashing this whole thing again. That would assure the data was sent from this specific user and the data wasn't modified. The actual question: So, what do you think is the best approach in terms of meeting the following requirements? Rather simple solution (as it doesn't have to be super secure; no secret/business-critical information transferred). Easily implementable retrospectively (don't want to write it all again :) ). Doesn't impact too much on performance. What do you think of my preferred solution, the last one in the list above? Is there any alternative solution I didn't mention that would fit better? You don't have to answer every question in detail. Just push me in the right direction. I very much appreciate every well-grounded opinion. Thanks in advance!
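
    The third option described above (a checksum of the data, combined with the password hash and hashed again) is essentially a keyed hash, i.e. a message authentication code. As a rough illustration of the idea only (the poster's stack appears to be .NET, and the key handling here is an assumption, not a recommendation for the actual design), a minimal Java sketch using the standard HMAC-SHA256 primitive could look like this:

        import java.security.MessageDigest;

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;

        public class PayloadSigner {

            // secret: a value shared between client and service (e.g. derived from the
            // stored password hash); payload: the serialized data being uploaded.
            public static byte[] sign(byte[] secret, byte[] payload) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(secret, "HmacSHA256"));
                return mac.doFinal(payload);
            }

            // The service recomputes the MAC over the received payload with the same
            // secret; a mismatch means the payload was tampered with or the sender
            // did not know the secret.
            public static boolean verify(byte[] secret, byte[] payload, byte[] receivedMac) throws Exception {
                byte[] expected = sign(secret, payload);
                return MessageDigest.isEqual(expected, receivedMac);
            }
        }

    Note that a scheme like this only protects the integrity and origin of a single message; it does not by itself prevent an attacker from replaying a previously captured request.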

    Read the article

  • Unable to get Stencil Buffer to work in iOS 4+ (5.0 works fine). [OpenGL ES 2.0]

    - by MurderDev
    So I am trying to use a stencil buffer in iOS for masking/clipping purposes. Do you guys have any idea why this code may not work? This is everything I have associated with Stencils. On iOS 4 I get a black screen. On iOS 5 I get exactly what I expect. The transparent areas of the image I drew in the stencil are the only areas being drawn later. Code is below. This is where I setup the frameBuffer, depth and stencil. In iOS the depth and stencil are combined. -(void)setupDepthBuffer { glGenRenderbuffers(1, &depthRenderBuffer); glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBuffer); glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, self.frame.size.width * [[UIScreen mainScreen] scale], self.frame.size.height * [[UIScreen mainScreen] scale]); } -(void)setupFrameBuffer { glGenFramebuffers(1, &frameBuffer); glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderBuffer); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer); // Check the FBO. if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) { NSLog(@"Failure with framebuffer generation: %d", glCheckFramebufferStatus(GL_FRAMEBUFFER)); } } This is how I am setting up and drawing the stencil. (Shader code below.) glEnable(GL_STENCIL_TEST); glDisable(GL_DEPTH_TEST); glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); glDepthMask(GL_FALSE); glStencilFunc(GL_ALWAYS, 1, -1); glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); glColorMask(0, 0, 0, 0); glClear(GL_STENCIL_BUFFER_BIT); machineForeground.shader = [StencilEffect sharedInstance]; [machineForeground draw]; machineForeground.shader = [BasicEffect sharedInstance]; glDisable(GL_STENCIL_TEST); glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); glDepthMask(GL_TRUE); Here is where I am using the stencil. glEnable(GL_STENCIL_TEST); glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); glStencilFunc(GL_EQUAL, 1, -1); ...Draw Stuff here glDisable(GL_STENCIL_TEST); Finally here is my fragment shader. varying lowp vec2 TexCoordOut; uniform sampler2D Texture; void main(void) { lowp vec4 color = texture2D(Texture, TexCoordOut); if(color.a < 0.1) gl_FragColor = color; else discard; }

    Read the article

  • Display BLOB (image) through JSP

    - by jMarcel
    I have code to show a chart of employees. The data (name, phone, photo etc.) are stored in SQL Server and displayed through JSP. Showing the data is OK, except for the .jpg image (stored in an IMAGE (BLOB) column). By the way, I've already got the image displayed (see code below), but I don't know how to put it in the area defined in a .css (see code below, too), since the image retrieved through the ResultSet is loaded as the whole page in the browser. Does anyone know how I can 'frame' the image? <% Connection con = FactoryConnection_SQL_SERVER.getConnection("empCHART"); Statement stSuper = con.createStatement(); Statement stSetor = con.createStatement(); Blob image = null; byte[] imgData = null; ResultSet rsSuper = stSuper.executeQuery("SELECT * FROM funChart WHERE dept = 'myDept'"); if (rsSuper.next()) { image = rsSuper.getBlob(12); imgData = image.getBytes(1, (int) image.length()); response.setContentType("image/gif"); OutputStream o = response.getOutputStream(); //o.write(imgData); // even here we got the same as below. //o.flush(); //o.close(); --[...] <table style="margin: 0px; margin-top: 15px;"> <tr> <td id="photo"> <img title="<%=rsSuper.getString("empName").trim()%>" src="<%= o.write(imgData); o.flush(); o.close(); %>" /> </td> </td> <td id="empData"> <h3><%=rsSuper.getString("empName")%></h3> <p><%=rsSuper.getString("Position")%></p> <p>Id:<br/><%=rsSuper.getString("id")%></p> <p>Phone:<br/><%=rsSuper.getString("Phone")%></p> <p>E-Mail:<br/><%=rsSuper.getString("Email")%></p> </td> </table> And here is the CSS fragment that is supposed to frame the image: #photo { padding: 0px; vertical-align: middle; text-align: center; width: 170px; height: 220px; } Thanks in advance!
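
    A common way to handle this is to serve the image bytes from their own URL (for example a small servlet, or a second JSP that outputs nothing but the image) and then point the img src at that URL from inside the styled #photo cell, instead of writing the bytes into the same response that renders the HTML. The following is only a sketch of such a servlet; the "photo" column name, the servlet mapping and the "id" request parameter are assumptions, while FactoryConnection_SQL_SERVER, funChart and the id field are taken from the question:

        import java.io.IOException;
        import java.io.OutputStream;
        import java.sql.Blob;
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class PhotoServlet extends HttpServlet {

            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                try {
                    Connection con = FactoryConnection_SQL_SERVER.getConnection("empCHART");
                    PreparedStatement ps = con.prepareStatement(
                            "SELECT photo FROM funChart WHERE id = ?"); // column name assumed
                    ps.setString(1, request.getParameter("id"));
                    ResultSet rs = ps.executeQuery();
                    if (rs.next()) {
                        Blob image = rs.getBlob(1);
                        byte[] imgData = image.getBytes(1, (int) image.length());
                        response.setContentType("image/jpeg");
                        OutputStream out = response.getOutputStream();
                        out.write(imgData);
                        out.flush();
                    } else {
                        response.sendError(HttpServletResponse.SC_NOT_FOUND);
                    }
                    con.close();
                } catch (SQLException e) {
                    throw new ServletException(e);
                }
            }
        }

    The chart JSP then keeps its table layout and simply references the servlet, e.g. <img src="photo?id=<%=rsSuper.getString("id")%>" /> inside the td with id "photo" (assuming the servlet is mapped to /photo), so the #photo CSS rule applies as intended.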

    Read the article

  • Oracle ADF Coverage at OOW

    - by Frank Nimphius
    Below is the schedule for all ADF related sessions at a glance. Note the Meet and greet session added for Wednesday Octiber 3rd from 4.30 pm to 5:30. Oracle ADF and Fusion Development General Session Mon 1 Oct, 2012 Time Title Location 10:45 AM - 11:45 AM General Session: The Future of Development for Oracle Fusion—From Desktop to Mobile to Cloud Marriott Marquis - Salon 8 12:15 PM - 1:15 PM General Session: Extend Oracle Fusion Apps to Tablets/Smartphones with Oracle Mobile Technology Moscone West - 3014 1:45 PM - 2:45 PM General Session: Extend Oracle Applications to Mobile Devices with Oracle’s Mobile Technologies Moscone West - 3002/3004 4:45 PM - 5:45 PM General Session: Building Mobile Applications with Oracle Cloud Moscone West - 2002/2004 Conference Session Mon 1 Oct, 2012 Time Title Location 12:15 PM - 1:15 PM Understanding Oracle ADF and Its Role in Oracle Fusion Moscone South - 306 1:45 PM - 2:45 PM Building Performant Oracle ADF Business Components to Meet Tomorrow’s Needs Marriott Marquis - Golden Gate C3 3:15 PM - 4:15 PM End-to-End Oracle ADF Development in Eclipse Marriott Marquis - Golden Gate C3 4:45 PM - 5:45 PM Classic Mistakes with Oracle Application Development Framework Marriott Marquis - Salon 7 Tues 2 Oct, 2012 Time Title Location 10:15 AM - 11:15 AM One Size Doesn’t Fit All: Oracle ADF Architecture Fundamentals Marriott Marquis - Golden Gate C2 10:15 AM - 11:15 AM Oracle Business Process Management/Oracle ADF Integration Best Practices Marriott Marquis - Golden Gate C3 11:45 AM - 12:45 PM Mobile-Enable Oracle Fusion Middleware and Enterprise Applications with Oracle ADF Moscone South - 306 11:45 AM - 12:45 PM Secrets of Successful Projects with Oracle Application Development Framework Marriott Marquis - Golden Gate C2 1:15 PM - 2:15 PM Develop On-Device iPhone and iPad Apps Without Writing Any Objective-C Code Marriott Marquis - Golden Gate C2 1:15 PM - 2:15 PM BPM, SOA, and Oracle ADF Combined: Patterns Learned from Oracle Fusion Applications Moscone West - 3003 1:15 PM - 2:15 PM The Future of Forms Is … Oracle Forms (and Friends) Moscone South - 306 5:00 PM - 6:00 PM Best Practices for Integrating SOAP and REST Service into Oracle ADF Marriott Marquis - Golden Gate C2 Wed 3 Oct, 2012 Time Title Location 10:15 AM - 11:15 AM Mobile Apps for Oracle E-Business Suite with Oracle ADF Mobile and Oracle SOA Suite Moscone West - 3001 10:15 AM - 11:15 AM Visualize This! 
Best Practices for Data Visualization in Desktop and Mobile Apps Marriott Marquis - Golden Gate C3 10:15 AM - 11:15 AM Set Up Your Oracle ADF Project and Development Team for Productivity: Seven Essential Tips Marriott Marquis - Golden Gate C2 11:45 AM - 12:45 PM How to Migrate an Oracle Forms Application to Oracle ADF Marriott Marquis - Golden Gate C2 1:15 PM - 2:15 PM Oracle ADF: Lessons Learned in Real-World Implementations Moscone South - 309 3:30 PM - 4:30 PM Oracle ADF Implementations Around the Globe: Best Practices Marriott Marquis - Golden Gate C2 3:30 PM - 4:30 PM Oracle Developer Cloud Services Marriott Marquis - Salon 7 4:30 PM - 5:30 PM Oracle JDeveloper and Oracle ADF: What’s New Hilton San Francisco - Continental Ballroom 5 5:00 PM - 6:00 PM Mobile Solutions for Oracle E-Business Suite Applications: Technical Insight Moscone West - 2020 5:00 PM - 6:00 PM Extending Social into Enterprise Applications and Business Processes Marriott Marquis - Golden Gate C3 5:00 PM - 6:00 PM The Tie That Binds: An Introduction to Oracle ADF Bindings Marriott Marquis - Golden Gate C2 Thur 4 Oct, 2012 Time Title Location 11:15 AM - 12:15 PM Using Oracle ADF with Oracle E-Business Suite: The Full Integration View Moscone West - 3003 11:15 AM - 12:15 PM Deep Dive into Oracle ADF: Advanced Techniques Marriott Marquis - Golden Gate C2 12:45 PM - 1:45 PM Monitor, Analyze, and Troubleshoot Your Oracle ADF Application Marriott Marquis - Golden Gate C2 2:15 PM - 3:15 PM Oracle WebCenter Portal: Creating and Using Content Presenter Templates Marriott Marquis - Golden Gate C2 HOL (Hands-on Lab) Mon 1 Oct, 2012 Time Title Location 10:45 AM - 11:45 AM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 1:45 PM - 2:45 PM Build Mobile Applications for Oracle E-Business Suite Marriott Marquis - Salon 10A 3:15 PM - 4:15 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 3:15 PM - 4:15 PM Introduction to Oracle ADF: Hands-on Lab Marriott Marquis - Salon 3/4 4:45 PM - 5:45 PM Application Lifecycle Management with Oracle JDeveloper: Hands-on Lab Marriott Marquis - Salon 3/4 Tues 2 Oct, 2012 Time Title Location 10:15 AM - 11:15 AM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 5:00 PM - 6:00 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A Wed 3 Oct, 2012 Time Title Location 10:15 AM - 11:15 AM Introduction to Oracle ADF: Hands-on Lab Marriott Marquis - Salon 3/4 11:45 AM - 12:45 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 1:15 PM - 2:15 PM Build Mobile Applications for Oracle E-Business Suite Marriott Marquis - Salon 10A 3:30 PM - 4:30 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 5:00 PM - 6:00 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A Thur 4 Oct, 2012 Time Title Location 11:15 AM - 12:15 PM Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab Marriott Marquis - Salon 10A 11:15 AM - 12:15 PM Introduction to Oracle ADF: Hands-on Lab Marriott Marquis - Salon 3/4 12:45 PM - 1:45 PM Oracle ADF for Java EE Developers with 
Oracle Enterprise Pack for Eclipse Marriott Marquis - Salon 3/4 BOF (Birds-of-a-Feather) Mon 1 Oct, 2012 Time Title Location 6:15 PM - 7:00 PM How to Get Started with Oracle ADF Marriott Marquis - Club Room 7:15 PM - 8:00 PM Building Next-Generation Applications with Oracle ADF and Oracle BPM Marriott Marquis - Golden Gate C3 7:15 PM - 8:00 PM The Future of Oracle Forms: Upgrade, Modernize, or Migrate? Marriott Marquis - Golden Gate C2 7:15 PM - 8:00 PM Oracle ADF Faces: One Site for Many Devices Marriott Marquis - Golden Gate C1 - User Group Forum (Sunday Only) Sun 30 Sept, 2012 Time Title Location 9:00 AM - 10:00 AM Oracle ADF Immersion: How an Oracle Forms Developer Immersed Himself in the Oracle ADF World Moscone South - 305 10:15 AM - 11:15 AM Deploy with Joy: Using Hudson to Build and Deploy Your Oracle ADF Applications Moscone South - 305 11:30 AM - 12:30 PM ADF EMG User Group: A Peek into the Oracle ADF Architecture of Oracle Fusion Applications Moscone South - 305 12:45 PM - 3:45 PM ADF EMG User Group: Oracle Fusion Middleware Live Application Development Demo Moscone South - 305 3:15 PM - 4:15 PM Mobile Development with Oracle JDeveloper and Oracle ADF Moscone West - 2010 Demos Demo Location Developer Moscone North, Upper Lobby - N-002 Oracle ADF Mobile Development Moscone North, Upper Lobby - N-001 Oracle Eclipse Projects Hilton San Francisco, Grand Ballroom - HHJ-008 Oracle Enterprise Pack for Eclipse Moscone South, Right - S-208 Oracle JDeveloper and Oracle ADF Moscone South, Right - S-207 Exhibits 0 Exhibitor Location Accenture Moscone South - 1813 Moscone South - 2221 Infosys Moscone South - 1701 Moscone South - SMR-005 Innowave Technology Moscone South - 2309 ODTUG Moscone West, Level 2 Lobby - Kiosk in the User Groups Pavilion Oracle ADF Developers Meet Up Wednesday, Oct 03 Time Activity Location 4:30 PM - 5:30 PM Stop by the OTN Lounge and meet other Oracle ADF & Fusion developers as well as product managers and engineers who work on Oracle ADF, ADF Mobile and ADF Essentials. Feedback and questions welcome, or simply stop by and say ‘hi!’ and enjoy free beer. OTN Lounge

    Read the article

  • Linux-Containers — Part 1: Overview

    - by Lenz Grimmer
    "Containers" by Jean-Pierre Martineau (CC BY-NC-SA 2.0). Linux Containers (LXC) provide a means to isolate individual services or applications as well as of a complete Linux operating system from other services running on the same host. To accomplish this, each container gets its own directory structure, network devices, IP addresses and process table. The processes running in other containers or the host system are not visible from inside a container. Additionally, Linux Containers allow for fine granular control of resources like RAM, CPU or disk I/O. Generally speaking, Linux Containers use a completely different approach than "classicial" virtualization technologies like KVM or Xen (on which Oracle VM Server for x86 is based on). An application running inside a container will be executed directly on the operating system kernel of the host system, shielded from all other running processes in a sandbox-like environment. This allows a very direct and fair distribution of CPU and I/O-resources. Linux containers can offer the best possible performance and several possibilities for managing and sharing the resources available. Similar to Containers (or Zones) on Oracle Solaris or FreeBSD jails, the same kernel version runs on the host as well as in the containers; it is not possible to run different Linux kernel versions or other operating systems like Microsoft Windows or Oracle Solaris for x86 inside a container. However, it is possible to run different Linux distribution versions (e.g. Fedora Linux in a container on top of an Oracle Linux host), provided it supports the version of the Linux kernel that runs on the host. This approach has one caveat, though - if any of the containers causes a kernel crash, it will bring down all other containers (and the host system) as well. For example, Oracle's Unbreakable Enterprise Kernel Release 2 (2.6.39) is supported for both Oracle Linux 5 and 6. This makes it possible to run Oracle Linux 5 and 6 container instances on top of an Oracle Linux 6 system. Since Linux Containers are fully implemented on the OS level (the Linux kernel), they can be easily combined with other virtualization technologies. It's certainly possible to set up Linux containers within a virtualized Linux instance that runs inside Oracle VM Server for Oracle VM Virtualbox. Some use cases for Linux Containers include: Consolidation of multiple separate Linux systems on one server: instances of Linux systems that are not performance-critical or only see sporadic use (e.g. a fax or print server or intranet services) do not necessarily need a dedicated server for their operations. These can easily be consolidated to run inside containers on a single server, to preserve energy and rack space. Running multiple instances of an application in parallel, e.g. for different users or customers. Each user receives his "own" application instance, with a defined level of service/performance. This prevents that one user's application could hog the entire system and ensures, that each user only has access to his own data set. It also helps to save main memory — if multiple instances of a same process are running, the Linux kernel can share memory pages that are identical and unchanged across all application instances. This also applies to shared libraries that applications may use, they are generally held in memory once and mapped to multiple processes. 
Quickly creating sandbox environments for development and testing purposes: containers that have been created and configured once can be archived as templates and can be duplicated (cloned) instantly on demand. After finishing the activity, the clone can safely be discarded. This allows to provide repeatable software builds and test environments, because the system will always be reset to its initial state for each run. Linux Containers also boot significantly faster than "classic" virtual machines, which can save a lot of time when running frequent build or test runs on applications. Safe execution of an individual application: if an application running inside a container has been compromised because of a security vulnerability, the host system and other containers remain unaffected. The potential damage can be minimized, analyzed and resolved directly from the host system. Note: Linux Containers on Oracle Linux 6 with the Unbreakable Enterprise Kernel Release 2 (2.6.39) are still marked as Technology Preview - their use is only recommended for testing and evaluation purposes. The Open-Source project "Linux Containers" (LXC) is driving the development of the technology behind this, which is based on the "Control Groups" (CGroups) and "Name Spaces" functionality of the Linux kernel. Oracle is actively involved in the Linux Containers development and contributes patches to the upstream LXC code base. Control Groups provide means to manage and monitor the allocation of resources for individual processes or process groups. Among other things, you can restrict the maximum amount of memory, CPU cycles as well as the disk and network throughput (in MB/s or IOP/s) that are available for an application. Name Spaces help to isolate process groups from each other, e.g. the visibility of other running processes or the exclusive access to a network device. It's also possible to restrict a process group's access and visibility of the entire file system hierarchy (similar to a classic "chroot" environment). CGroups and Name Spaces provide the foundation on which Linux containers are based on, but they can actually be used independently as well. A more detailed description of how Linux Containers can be created and managed on Oracle Linux will be explained in the second part of this article. Additional links related to Linux Containers: OTN Article: The Role of Oracle Solaris Zones and Linux Containers in a Virtualization Strategy Linux Containers on Wikipedia - Lenz Grimmer Follow me on: Personal Blog | Facebook | Twitter | Linux Blog |

    Read the article

  • Logical Domain Modeling Made Simple

    - by Knut Vatsendvik
    How can logical domain modeling be made simple and collaborative? Many non-technical end-users, managers and business domain experts find it difficult to understand the visual models offered by many UML tools. This creates trouble in capturing and verifying the information that goes into a logical domain model. The tools are also too advanced and complex for a non-technical user to learn and use. We have therefore, in our current project, ended up with using Confluence as tool for designing the logical domain model with the help of a few very useful plugins. Big thanks to Ole Nymoen and Per Spilling for their expertise in this field that made this posting possible. Confluence Plugins Here is a list of Confluence plugins used in this solution. Install these before trying out the macros used below. Plugin Description Copy Space Allows a space administrator to copy a space, including the pages within the space Metadata Supports adding metadata to Wiki pages Label Manages labeling of pages Linking Contains macros for linking to templates, the dashboard and other Table Enhances the table capability in Confluence Creating a Confluence Space First we need to create a new confluence space for the domain model. Click the link Create a Space located below the list of spaces on the Dashboard. Please contact your Confluence administrator is you do not have permissions to do this.   For illustrative purpose all attributes and entities in this posting are based on my imaginary project manager domain model. When a logical domain model is good enough for being implemented, do a copy of the Confluence Space (see Copy Space plugin). In this way you create a stable version of the logical domain model while further design can continue with the new copied space. Typical will the implementation phase result in a database design and/or a XSD schema design. Add Space Templates Go to the Home page of your Confluence Space. Navigate to the Browse drop-down menu and click on Advanced. Then click the Templates option in the left navigation panel. Click Add New Space Template to add the following three templates. Name: attribute {metadata-list} || Name | | || Type | | || Format | | || Description | | {metadata-list} {add-label:attribute} Name: primary-type {metadata-list} || Name | || || Type | || || Format | || || Description | || {metadata-list} {add-label:primary-type} Name: complex-type {metadata-list} || Name | || || Description |  || {metadata-list} h3. Attributes || Name || Type || Format || Description || | [name] | {metadata-from:name|Type} | {metadata-from:name|Format} | {metadata-from:name|Description} | {add-label:complex-type,entity} The metadata-list macro (see Metadata plugin) will save a list of metadata values to the page. The add-label macro (see Label plugin) will automatically label the page. Primary Types Page Our first page to add will act as container for our primary types. Switch to Wiki markup when adding the following content to the page. | (+) {add-page:template=primary-type|parent=@self}Add new primary type{add-page} | {metadata-report:Name,Type,Format,Description|sort=Name|root=@self|pages=@descendents} Once the page is created, click the Add new primary type (create-page macro) to start creating a new pages. Here is an example of input to the LocalDate page. Embrace the LocalDate with square brackets [] to make the page linkable. Again switch to Wiki markup before editing. 
{metadata-list} || Name | [LocalDate] || || Type | Date || || Format | YYYY-MM-DD || || Description | Date in local time zone. YYYY = year, MM = month and DD = day || {metadata-list} {add-label:primary-type} The metadata-report macro will show a tabular report of all child pages.   Attributes Page The next page will act as container for all of our attributes. | (+) {add-page:template=attribute|parent=@self|title=attribute}Add new attribute{add-page} | {metadata-report:Name,Type,Format,Description|sort=Name|pages=@descendants} Here is an example of input to the startDate page. {metadata-list} || Name | [startDate] || || Type | [LocalDate] || || Format | {metadata-from:LocalDate|Format} || || Description | The projects start date || {metadata-list} {add-label:attribute} Using the metadata-from macro we fetch the text from the previously created LocalDate page. Complex Types Page The last page in this example shows how attributes can be combined together to form more complex types.   h3. Intro Overview of complex types in the domain model. | (+) {add-page:template=complex-type|parent=@self}Add a new complex type{add-page}\\ | {metadata-report:Name,Description|sort=Name|root=@self|pages=@descendents} Here is an example of input to the ProjectType page. {metadata-list} || Name | [ProjectType] || || Description | Represents a project || {metadata-list} h3. Attributes || Name || Type || Format || Description || | [projectId] | {metadata-from:projectId|Type} | {metadata-from:projectId|Format} | {metadata-from:projectId|Description} | | [name] | {metadata-from:name|Type} | {metadata-from:name|Format} | {metadata-from:name|Description} | | [description] | {metadata-from:description|Type} | {metadata-from:description|Format} | {metadata-from:description|Description} | | [startDate] | {metadata-from:startDate|Type} | {metadata-from:startDate|Format} | {metadata-from:startDate|Description} | {add-label:complex-type,entity} Gives us this Conclusion Using a web-based corporate Wiki like Confluence to create a logical domain model increases the collaboration between people with different roles in the enterprise. It’s my believe that this helps the domain model to be more accurate, and better documented. In our real project we have more pages than illustrated here to complete the documentation. We do also still use UML tools to create different types of diagrams that Confluence do not support. As a last tip, an ImageMap plugin can make those diagrams clickable when used in pages. Enjoy!

    Read the article

  • Using LogParser - part 2

    - by fatherjack
    PersonAddress.csv SalesOrderDetail.tsv In part 1 of this series we downloaded and installed LogParser and used it to list data from a csv file. That was a good start, and in this article we are going to see the different ways we can stream data and choose whether a whole file is selected. We are also going to take a brief look at what file types we can interrogate. If we take the query from part 1 and add a value for the output parameter as -o:datagrid so that the query becomes LOGPARSER "SELECT top 15 * FROM C:\LP\person_address.csv" -o:datagrid and run that, we get a different result: a pop-up dialog that lets us view the results in a resizable grid. Notice that because we didn't specify the columns we wanted returned by LogParser (we used SELECT *), it has added two columns to the recordset - filename and rownumber. This behaviour can be very useful, as we will see in future parts of this series. You can click Next 10 rows or All rows, or close the datagrid once you are finished reviewing the data. You may have noticed that the files that I am working with are different file types - one is a csv (comma separated values) and the other is a tsv (tab separated values). If you want to convert a file from one to another then LogParser makes it incredibly simple. Rather than using 'datagrid' as the value for the output parameter, use 'csv': logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\Sales_SalesOrderDetail.csv FROM C:\Sales_SalesOrderDetail.tsv" -i:tsv -o:csv Those familiar with SQL will not have to make a very big leap of faith to make adjustments to the above query to filter in/out records from the source file. Let's get all the records from the same file where the Order Quantity (OrderQty) is more than 25: logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailOver25.csv FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty > 25" -i:tsv -o:csv Or we could find all those records where the Order Quantity is equal to 25 and output it to an XML file: logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailEq25.xml FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty = 25" -i:tsv -o:xml All the standard comparison operators are to be found in LogParser: >, <, =, LIKE, BETWEEN, OR, NOT, AND. Input and Output file formats. LogParser has a pretty impressive list of file formats that it can parse and a good selection of output formats that will let you generate output in a format that is usable for whatever process or application you may be using. From any of these To any of these IISW3C: parses IIS log files in the W3C Extended Log File Format. NAT: formats output records as readable tabulated columns. IIS: parses IIS log files in the Microsoft IIS Log File Format. CSV: formats output records as comma-separated values text. BIN: parses IIS log files in the Centralized Binary Log File Format. TSV: formats output records as tab-separated or space-separated values text. IISODBC: returns database records from the tables logged to by IIS when configured to log in the ODBC Log Format. XML: formats output records as XML documents.
HTTPERR: parses HTTP error log files generated by Http.sys. W3C: formats output records in the W3C Extended Log File Format. URLSCAN: parses log files generated by the URLScan IIS filter. TPL: formats output records following user-defined templates. CSV: parses comma-separated values text files. IIS: formats output records in the Microsoft IIS Log File Format. TSV: parses tab-separated and space-separated values text files. SQL: uploads output records to a table in a SQL database. XML: parses XML text files. SYSLOG: sends output records to a Syslog server. W3C: parses text files in the W3C Extended Log File Format. DATAGRID: displays output records in a graphical user interface. NCSA: parses web server log files in the NCSA Common, Combined, and Extended Log File Formats. CHART: creates image files containing charts. TEXTLINE: returns lines from generic text files. TEXTWORD: returns words from generic text files. EVT: returns events from the Windows Event Log and from Event Log backup files (.evt files). FS: returns information on files and directories. REG: returns information on registry values. ADS: returns information on Active Directory objects. NETMON: parses network capture files created by NetMon. ETW: parses Enterprise Tracing for Windows trace log files and live sessions. COM: provides an interface to Custom Input Format COM Plugins. So, you can query data from any of the types on the left and really easily get it into a format where it is ready for analysis by other tools. To a DBA or network Administrator with an enquiring mind this is a treasure trove. In part 3 we will look at working with multiple sources and specifically outputting to SQL format. See you there!

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle’s acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration! As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, PeopleSoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box. Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack – a ready set of standard processes with standard interfaces, to provide integrated: Address verification and cleansing, Individual matching, Organization matching. The services are suitable for either Batch or Real-Time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total across all locales, CDS includes well over a million dictionary entries. Excerpt from EDQ’s CDS Individual Name Standardization Dictionary CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a ‘best of both worlds’ situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate ‘Identity Resolution’ and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products. Here is a brief guide to the main services provided in the pack: Address Verification and Standardization EDQ’s CDS Address Cleaning Process The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch.
The Address Verification processor is wrapped in an EDQ process – this adds significant capabilities over calling the underlying Address Verification API directly, specifically:
- Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
- Optimization of address verification by pre-standardizing data where required
- Formatting of output addresses into the input address fields normally used by applications
- Adding descriptions of the address verification and geocoding return codes

The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.

Duplicate Prevention

Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention to avoid issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records ('candidates') that may match in the application data (which has been pre-seeded with keys using the same service). The 'driving record' (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits (a minimal sketch of this flow appears below, after the discussion of batch matching). The process is explained below:

EDQ Duplicate Prevention Architecture

Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).

Batch Matching

In order to allow customers to use different match rules in batch than in real time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by Data Stewards. In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel's Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel's 'Full Match' (match all records against each other) and 'Incremental Match' (match a subset of records against all of their selected candidates) modes.
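To make the stateless duplicate-prevention flow concrete, here is a minimal orchestration sketch. The endpoint paths, payload fields and helper names are assumptions for illustration only; the actual EDQ Customer Data Services interfaces should be taken from the product documentation and your own EDQ configuration.

```python
import requests

# Hypothetical EDQ host and service paths -- placeholders only, not the real
# Customer Data Services contract.
EDQ_BASE = "http://edq-server:8080/cds"

def cluster_keys(record):
    """Ask the Cluster Key Generation service for key values for one record."""
    resp = requests.post(f"{EDQ_BASE}/clusterKeyGeneration", json=record, timeout=10)
    resp.raise_for_status()
    return resp.json()["keys"]

def find_candidates(keys):
    """Application-side step: select records whose pre-seeded keys overlap.
    In a real integration this is a query against the application's own tables."""
    return []  # placeholder candidate selection

def score_matches(driving, candidates):
    """Present the driving record and its candidates to the EDQ Matching Service."""
    payload = {"driving": driving, "candidates": candidates}
    resp = requests.post(f"{EDQ_BASE}/match", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["matches"]  # candidates with match decisions and scores

def on_record_saved(record):
    """Called when a record is added or updated in the application."""
    keys = cluster_keys(record)
    candidates = find_candidates(keys)
    return score_matches(record, candidates) if candidates else []
```

The application stays in charge of commits: it decides what to do with the scored matches (block, merge, or flag for review), which mirrors the division of responsibility described above.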
The Future

The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:
- An out-of-the-box Data Quality Dashboard
- Even more comprehensive international data handling
- Address search (suggesting multiple results)
- Integrated address matching

The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, key-value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading it to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.

The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes. Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for online DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster.

Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster.

Memcached Implementation for InnoDB

The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API.

Figure 1: Memcached API Implementation for InnoDB

With the Memcached daemon running in the same process space, users get very low latency access to their data, while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB, including support for foreign keys, XA transactions and complex JOIN operations. Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 transactions per second.

Figure 2: Over 9x Faster INSERT Operations

The delivered performance demonstrates that MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts with the added assurance of transactional guarantees.
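Because the interface is the plain Memcached wire protocol, existing Memcached clients work unchanged. Below is a minimal sketch using the Python pymemcache library; the host name, key and value are illustrative, and the plugin is assumed to be enabled and listening on the default Memcached port (11211).

```python
from pymemcache.client.base import Client

# Reuse a standard Memcached client against the InnoDB memcached plugin.
client = Client(("mysql-host", 11211))

# The write bypasses the SQL layer and lands in the mapped InnoDB table,
# with the usual transactional guarantees.
client.set("session:42", b"alice|logged_in")

# Reads use the same protocol, so existing Memcached-based code works unchanged.
print(client.get("session:42"))
```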
You can check out the latest Memcached / InnoDB developments and benchmarks here. You can learn how to configure the Memcached API for InnoDB here.

Memcached Implementation for MySQL Cluster

Memcached API support for MySQL Cluster was introduced with the General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster. Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking, to ensure complete synchronization between the database and cache.

Figure 3: Memcached API Implementation with MySQL Cluster

Implementation is simple:
1. The application sends reads and writes to the Memcached process (using the standard Memcached API).
2. This invokes the Memcached Driver for NDB (which is part of the same process).
3. The NDB API is called, providing very quick access to the data held in MySQL Cluster's data nodes.

The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn't have to care – it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized.

Using Memcached for Schema-less Data

By default, every key/value is written to the same table, with each key/value pair stored in a single row – thus allowing schema-less data storage. Alternatively, the developer can define a key prefix so that each value is linked to a pre-defined column in a specific table. Of course, if the application needs to access the same data through SQL, then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster (see the sketch after the conclusion below).

Conclusion

- Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database.
- See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar, or take a look at the docs.
- Don't hesitate to use the comments section below for any questions you may have.
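To illustrate the key-prefix mapping described above, here is a small sketch of the dual-access pattern: a value written through the Memcached API is read back with SQL. The prefix "cust:", the schema, table and column names are assumptions for illustration only; the real prefix-to-table mapping is configured on the server side, and the standard pymemcache and mysql-connector-python libraries are assumed to be installed.

```python
import mysql.connector
from pymemcache.client.base import Client

# Write through the Memcached API; "cust:" is assumed to be a configured prefix
# mapped on the server side to a customer table.
mc = Client(("mysql-host", 11211))
mc.set("cust:1001", b"Jane Doe;Premium")

# Read the same row back with SQL -- table/column names here are hypothetical.
conn = mysql.connector.connect(
    host="mysql-host", user="app", password="secret", database="kv_demo"
)
cur = conn.cursor()
cur.execute("SELECT kv_value FROM customer_cache WHERE kv_key = %s", ("1001",))
print(cur.fetchone())  # same data, now available to joins, reports and analytics
cur.close()
conn.close()
```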

    Read the article

  • World Record Batch Rate on Oracle JD Edwards Consolidated Workload with SPARC T4-2

    - by Brian
Oracle produced a World Record batch throughput for single-system results on Oracle's JD Edwards EnterpriseOne Day-in-the-Life benchmark, using Oracle's SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2. The workload includes both online and batch components. The SPARC T4-2 server delivered a result of 8,000 online users while concurrently executing a mix of JD Edwards EnterpriseOne long and short batch processes at 95.5 UBEs/min (Universal Batch Engines per minute).

In order to obtain this record benchmark result, the JD Edwards EnterpriseOne, Oracle WebLogic and Oracle Database 11g Release 2 servers were each executed in separate Oracle Solaris Containers, which enabled optimal distribution of system resources and performance, together with scalable and manageable virtualization. One SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2 utilized only 55% of the available CPU power. The Oracle DB server in a Shared Server configuration allows for optimized CPU resource utilization and significant memory savings on the SPARC T4-2 server without sacrificing performance. This configuration with the SPARC T4-2 server achieved 33% more Users/core, 47% more UBEs/min and 78% more Users/rack unit than the IBM Power 770 server.

The SPARC T4-2 server with 2 processors ran the JD Edwards "Day-in-the-Life" benchmark and supported 8,000 concurrent online users while concurrently executing mixed batch workloads at 95.5 UBEs per minute. The IBM Power 770 server with twice as many processors supported 12,000 concurrent online users but executed mixed batch workloads at only 65 UBEs per minute. This benchmark demonstrates more than 2x cost savings by consolidating the complete solution in a single SPARC T4-2 server, compared to earlier published results of 10,000 users and 67 UBEs per minute on two servers (a SPARC T4-2 and a SPARC T4-1). The Oracle DB server used mirrored (RAID 1) volumes for the database, providing high availability for the data without impacting performance.

Performance Landscape

JD Edwards EnterpriseOne Day in the Life (DIL) Benchmark – Consolidated Online with Batch Workload

System | Rack Units | Batch Rate (UBEs/m) | Online Users | Users/Unit | Users/Core | Version
SPARC T4-2 (2 x SPARC T4, 2.85 GHz) | 3 | 95.5 | 8,000 | 2,667 | 500 | 9.0.2
IBM Power 770 (4 x POWER7, 3.3 GHz, 32 cores) | 8 | 65 | 12,000 | 1,500 | 375 | 9.0.2

Batch Rate (UBEs/m) – batch transaction rate in UBEs per minute

Configuration Summary

Hardware Configuration:
- 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz
- 256 GB memory
- 4 x 300 GB 10K RPM SAS internal disks
- 2 x 300 GB internal SSDs
- 2 x Sun Storage F5100 Flash Arrays

Software Configuration:
- Oracle Solaris 10
- Oracle Solaris Containers
- JD Edwards EnterpriseOne 9.0.2
- JD Edwards EnterpriseOne Tools (8.98.4.2)
- Oracle WebLogic Server 11g (10.3.4)
- Oracle HTTP Server 11g
- Oracle Database 11g Release 2 (11.2.0.1)

Benchmark Description

JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations.
Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises the most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order and work order, and manufacturing processes such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and the UBE (Universal Batch Engine) workload of 61 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the users' transaction response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE workload runs from the JD Edwards Enterprise Application server.

Oracle's UBE processes come in three flavors:
- Short UBEs (< 1 minute): business reports and summary analysis
- Mid UBEs (> 1 minute): create a large report of Account, Balance, and Full Address
- Long UBEs (> 2 minutes): simulate Payroll, Sales Order and night-only jobs

The UBE workload generates large numbers of PDF reports and log files. The UBE queues are categorized as QBATCHD, a single-threaded queue for large and medium UBEs, and QPROCESS, a queue for short UBEs run concurrently. Oracle's UBE process performance metric is the maximum number of concurrent UBE processes at a given transaction rate, in UBEs/minute.

Key Points and Best Practices

Two JD Edwards EnterpriseOne Application Servers, two Oracle WebLogic Server 11g Release 1 instances coupled with two Oracle Web Tier HTTP server instances, and one Oracle Database 11g Release 2 database were hosted in separate Oracle Solaris Containers on a single SPARC T4-2 server, bound to four processor sets, to demonstrate consolidation of multiple applications, web servers and the database with the best resource utilization. Interrupt fencing was configured on all Oracle Solaris Containers to channel interrupts to processors other than those in the processor sets used for the JD Edwards Application server, Oracle WebLogic servers and the database server. An Oracle WebLogic vertical cluster was configured in each WebServer Container with twelve managed instances each, to load-balance users' requests and to provide the infrastructure that enables scaling to a high number of users with ease of deployment and high availability. The database log writer was run in the real-time (RT) scheduling class and bound to a processor set. The database redo logs were configured on raw disk partitions. The Oracle Solaris Container running the Enterprise Application server completed 61 short UBEs and 4 long UBEs concurrently as the mixed-size batch workload. The mixed-size UBEs ran concurrently from the Enterprise Application server alongside the 8,000 online users driven by LoadRunner.

See Also
- SPARC T4-2 Server: oracle.com, OTN
- JD Edwards EnterpriseOne: oracle.com, OTN
- Oracle Solaris: oracle.com, OTN
- Oracle Database 11g Release 2 Enterprise Edition: oracle.com, OTN
- Oracle Fusion Middleware: oracle.com, OTN

Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/30/2012.

    Read the article

  • Geocaching - World wide treasure hunt

I'm not quite sure how I came across this topic, but I find it absolutely interesting, challenging and, most of all, great fun for family and friends. The interesting part is that you can follow other people's treasures and the locations where a cache might be hidden. Of course, it won't always be easy to find a cache. Sometimes there are even 'mystery caches' which have riddles, further instructions or little brain games for you to solve in order to find the actual cache - that's the challenge. And last but not least, those caches are hidden outdoors. A great experience to explore nature, either on your own, with your family (especially with children), or as a treasure-hunting pack with a couple of friends.

What is geocaching?

It's a high-tech outdoor treasure hunting game that's a great way to explore the world with friends, family or on your own. Participants use GPS-enabled devices to locate hidden containers called geocaches. There are over one million geocaches hidden around the world today, waiting for you to find them. Visit Geocaching.com to search for geocaches near you. (Source: referral email from geocaching.com)

Check out the Geocaching 101 for further details and information. They also provide a video channel on YouTube.

Which equipment do I need?

Any GPS-enabled device is sufficient to go on the hunt. I'm going to start our geocaching experience equipped with my Samsung Galaxy Tab. Additionally, I installed a geocaching.com client called c:geo that will hopefully assist me soon. Combined with a map app like Google Maps and a nice compass app, you should be fully equipped and ready to go. I guess that even a car navigation system is perfect for that task. Later on, with more experience and a demand for more technology (or precision), it might be interesting to opt for a dedicated GPS device, like a Garmin or any other brand on the market.

What is a geocache and what does it contain?

In its simplest form, a cache always contains a logbook or logsheet for you to log your find. Larger caches may contain a logbook and any number of items. These items turn the adventure into a true treasure hunt. You never know what the cache owner or visitors to the cache may have left for you to enjoy. Remember, if you take something, leave something of equal or greater value in return. It is recommended that items in a cache be individually packaged in a clear, zipped plastic bag to protect them from the elements.

Finding your first geocache

Well, first you have to be interested enough to pick up the challenge. Then you have to check out the geocache directory on geocaching.com. They have recommendations for beginners' caches, but you are free to choose any. Actually, we have a mystery cache very close to our base, and I guess that we are going for that one on our first trip. Anyway, there is a very informative guide on the website which should answer all your questions about starting your new outdoor adventure. For sure, it's going to be rewarding.

Team up with friends and family

Especially as a beginner, there might be misunderstandings in handling the GPS coordinates, the compass, or the map, and even finding the container at the documented position isn't easy in the first place. Luckily, there are logbook reports online from other hunters, and most of the time there are even 'spoiler' images available. But also bear in mind that a geocache might have been removed or lost due to careless people or other reasons.
Don't be disappointed if you can't find anything... there may be nothing there anymore. A general recommendation in this case is to replace the missing container with a new one, and give feedback to the original owner about the state of that particular location. After all, it's about fun and active participation in a worldwide community.

Geocaches in Mauritius?

Yes, there are currently about 45 geocaches spread all over the island, and even a single one in Rodriguez - that's gonna be a tough one. Hopefully, we will see those numbers increase, as Geocaching.com allows - no, better, encourages - you to hide new containers at your locations of choice. I think this is going to be real fun for us during the upcoming weeks and months, especially when we are travelling to other countries and transferring so-called trackable items between geocaches. My first impression is that Geocaching.com is very mature, open and community-oriented. There are literally hundreds of thousands of geocache 'hunters' all over the world. And usually, finding a container far from your home is very rewarding. I'll keep you updated on these matters during the months to come...

    Read the article

  • 2012 Oracle Fusion Innovation Awards - Part 2

    - by Michelle Kimihira
Author: Moazzam Chaudry

Continuing from Friday's blog on the 2012 Oracle Fusion Innovation Awards, this blog (Part 2) will provide more details around the customers. It was a tremendous honor to be in a single room of winners. We only wish we could have had more time to share stories from all the winners. We received great insight from all the innovative solutions that our customers deploy and would like to share them broadly, so that others can benefit from best practices.

There was a customer panel session joined by Ingersoll Rand, Nike and Motability, and here is what was discussed:
- Barry Bonar, Enterprise Architect from Ingersoll Rand, shared details around their solution, comprised of Oracle Exalogic, Oracle WebLogic Server and Oracle SOA Suite. This combined solution enabled their business transformation to increase decision-making speed and efficiency, resulting in 40% reduced IT spend, 41X faster response time and huge cost savings.
- Ashok Balakrishnan, Architect from Nike, shared how they leveraged Oracle Coherence to analyze their digital "footprint" of activities. This helps them compete, collaborate and compare athletic data over time.
- Lastly, Ashley Doodly, Head of IT from Motability, shared details around their solution comprised of Oracle SOA Suite, Service Bus, ADF, Coherence, BO and E-Business Suite. This solution helped Motability achieve 100% ROI within the first few months, performance in seconds vs. tens of minutes, and throughput improvements of up to 50%.

This year's winners by category are:

Oracle Exalogic Customer Results using Fusion Middleware
- Netshoes: ATG on Exalogic: 6X reduced hardware footprint, 6.2X increased throughput and 3 weeks time to market
- Claro: Part of America Movil, running a mission-critical Java application on Exalogic with 35X faster Java response time and 5X throughput
- Underwriters Laboratories: Exalogic as an apps consolidation platform to power tremendous growth
- Ingersoll Rand: EBS on Exalogic: up to 40% reduction in overall IT budget, 3X reduced footprint

Oracle Cloud Application Foundation Customer Results using Fusion Middleware
- Mazda Motor Corporation: Tuxedo ART batch runtime environment to migrate their batch apps to a new open environment and reduce mainframe cost
- HOTELBEDS Technology: Open source to WebLogic transformation
- Globalia Corporation: Introduced Oracle Coherence to fully reengineer the DTH system and provide multiple business and technical benefits
- Nike: Nike+, the digital sports platform, has 8M users and is expecting a 5X increase in users, many of whom will carry multiple devices that frequently sync data with the Digital Sport platform
- Comcast Corporation: The solution is expected to increase availability, continuity and performance, and to simplify and make the code at the application layer more flexible

Oracle SOA and Oracle BPM Customer Results using Fusion Middleware
- NTT Docomo: Network traffic solution based on Oracle event processing and Coherence; massive in scale: 12M users (50M in future), 800,000 events/sec
- Schneider National, Inc.: SOA/B2B/ADF/Data Integration to orchestrate key order processes across Siebel, OTM & EBS. The platform runs 60M transactions/day and 50 million composite SOA instances per day across 10g and 11g
- Amadeus: Oracle BPM solution: business rules and processes vary across local (80), regional (~10) and corporate approval processes, with up to 10 levels of approval. Plans to deploy across 20+ markets
- Navitar: SOA solution integrates a fully non-Oracle legacy application/ERP environment using Oracle's SOA Suite and Oracle AIA Foundation Pack
- Motability: Uses SOA Suite to synchronize data across the systems and to manage the vehicle remarketing process

Oracle WebCenter Customer Results using Fusion Middleware
- News Limited: Single platform running websites for 50% of Australia's newspapers
- University of Louisville: "Facebook for Medicine": Oracle WebCenter platform and Oracle BIEE to analyze patient test data and uncover potential health issues; expecting annualized ROI of 277%
- China Mobile Jiangsu: Company portal (25k users) to drive collaboration & productivity
- Life Technologies: Portal for remotely monitoring & repairing biotech instruments
- LA Dept. of Water & Power: Oracle WebCenter Portal to power ladwp.com on desktop and mobile for 1.6 million users

Oracle Identity Management Customer Results using Fusion Middleware
- Education Testing Service: Identity Management platform for provisioning & SSO of 6 million GRE, GMAT and TOEFL customers
- Avea: Oracle Identity Manager allowing call center personnel to quickly change Identity Profiles to handle varying call loads based on a user self-service interface; decreased admin cost by 30%

Oracle Data Integration Customer Results using Fusion Middleware
- Raymond James: Near real-time integration for improved systems (throughput & performance) and enhanced operational flexibility in a 24x7 environment
- Wm Morrison Supermarkets: Electronic point-of-sale integration handling over 80 million transactions a day in near real time (15-minute intervals)

Oracle Application Development Framework and Oracle Fusion Development Customer Results using Fusion Middleware
- Qualcomm Incorporated: Solution providing immediate business value, enabling a self-service model necessary for growing the new customer base, an increase in customer satisfaction and reduced time-to-deliver
- Micros Systems, Inc.: ADF, SOA Suite and WebCenter enable services that include managing distribution of hotel room availability and rates to channels such as the hotel website, Expedia, etc.
- Marfin Egnatia Bank: A new Web 2.0 UI provides a much richer experience through the ADF solution, with the end result being one of boosting end-user productivity

Business Analytics (Oracle BI, Oracle EPM, Oracle Exalytics) Customer Results using Fusion Middleware
- INC Research: Self-service customer portal delivering 5–10% of the overall revenue, expected to grow fast with the BI solution
- Experian: Reduction in time to complete the financial close process
- Hologic Inc: Solution saving months of decision-making uncertainty!

We look forward to seeing many more innovative nominations. The nomination process for 2013 begins in April 2013.

Additional Information:
- Blog: Oracle WebCenter Award Winners
- Blog: Oracle Identity Management Winners
- Blog: Oracle Exalogic Winners
- Blog: SOA, BPM and Data Integration will feature award winners in its respective areas this week
- Subscribe to our regular Fusion Middleware Newsletter
- Follow us on Twitter and Facebook

    Read the article

  • Welcome to Jackstown

    - by fatherjack
I live in a small town. The population count isn't that great, but let me introduce you to some of the population.

We'll start with Martin the Doc; he fixes up anything that gets poorly, so much so that he could be classed as the doctor, the vet and even the garage mechanic. He's got a reputation that he can fix anything, and that hasn't been proved wrong yet. He's great friends with Brian (who gets called "Brains") the teacher, who seems to have a sound understanding of any topic you care to pass his way. If he isn't sure he tells you, then goes to find out and comes back with a full answer real quick. It's good to have that sort of research capability close at hand. Brains is also great at encouraging anyone who needs a bit of support to get them up to speed and working on their jobs.

Steve sees Brains regularly; that's because he is the librarian. He keeps all sorts of reading material, and nowadays there's even video to watch about any topic you like. Steve keeps scouring all sorts of places to get the content that's needed, and he keeps it in good order so that whatever is needed can be found quickly. He also has to make sure that old stuff gets marked as probably out of date so that anyone reading it won't get misled.

Over the road from him is Greg, he's the town crier. We don't have a newspaper here, so Greg keeps us all informed of what's going on "out of town" - what new stuff we might make use of and what won't work in a small place like this. If we are interested, he goes ahead and gets people in to demonstrate their products and tell us about the details. Greg is pretty good at getting us discounts too. Now Greg's brother Ian works for the mayor's office in the "waste management department". Nowadays it's all about recycling, but he still has to make sure that the stuff that can't be used any more gets disposed of properly. The type of waste he's dealing with decides how it needs to be treated, and he has to know a lot about the different methods and when to use which ones.

There are two people that keep the peace in town: Brent is the detective, investigating wrongdoings and applying justice where necessary, and Bart is the diplomat who smooths things over when any people have a dispute or disagreement. Brent is meticulous in his investigations and fair in the way he handles any situation he finds. Discretion is his byword. There's a rumour that Bart used to work for the United Nations, but whatever his history, there is no denying his ability to get apparently irreconcilable parties working together to their combined benefit.

Someone who works closely with Bart is Brad, he is the translator in town. He has several languages that he can converse in, but he can also explain things from someone's point of view and make it understandable to someone else. To keep things on the straight and narrow from a legal perspective is Ben the solicitor, making sure we all abide by the rules.

Two people who make for an interesting evening's conversation if you get them together are Aaron and Grant. Aaron is the local planning inspector and Grant is an inventor of some reputation. Anything being constructed around here needs Aaron's agreement. He's quite flexible in his rules, though, if you can justify what you want to do with solid logic, but he won't stand for any development going on without his inclusion. That gets a demolition notice and there's no argument. Grant, as I mentioned, is the inventor in town; if something can be improved or created then Grant is your man. He mainly works on his own but isn't averse to getting specific advice and assistance from specialists from out of town if they can help him finish his creations.

There aren't too many people left for you to meet in the town. There's Rob, he's an ex-professional sportsman. He played hockey, football, cricket, you name it. He was in his element as goalkeeper / wicket keeper, and that shows in his personal life. He just goes about his business, and people often don't even know that he's helped them. Really low profile, doesn't get any glory, but saves people from lots of problems, even disasters on occasion.

There goes Neil, he's a bit of an odd person. Some people say he's gifted with special clairvoyant powers; personally I think he's got his ear to the ground and knows where to find out the important news as soon as it's made public. Anyone getting a visit from Neil is best off following his advice though; he's usually spot on, and you won't be caught by surprise if you follow his recommendations - wherever they come from.

Poor old Andrew is the last person to introduce you to. Andrew doesn't show himself too often, but when he does it seems that people find a reason to blame him for their problems, whether he had anything to do with their predicament or not. In all honesty, without fail, and to his great credit, he takes it in good grace and never retaliates or gets annoyed when he's out and about. It pays off too, as it's very often the case that those who were blaming him recently suddenly find they need his help, and they readily forget the issues pretty rapidly.

And then there's me. What do I do in town? Well, I'm just a DBA with a lot of hats. (Jackstown Pop. 1)

    Read the article

  • Wednesday at OpenWorld: Identity Management

    - by Tanu Sood
Divide and conquer! Yes, divide and conquer today at Oracle OpenWorld with your colleagues to make the most of all things Identity Management, since there's a lot going on. Here's the line-up for today:

Wednesday, October 3, 2012

CON9458: End End-User-Managed Passwords and Increase Security with Oracle Enterprise Single Sign-On Plus
10:15 a.m. – 11:15 a.m., Moscone West 3008
Most customers have a broad variety of applications (internal, external, web, client server, host, etc.) and single sign-on systems that extend to some, but not all, systems. This session will focus on how enterprise single sign-on can help extend single sign-on to virtually any application, without costly application modification, while laying a foundation that will enable integration with a broader identity management platform.

CON9494: Sun2Oracle: Identity Management Platform Transformation
11:45 a.m. – 12:45 p.m., Moscone West 3008
Sun customers are actively defining strategies for how they will modernize their identity deployments. Learn how customers like Avea and SuperValu are leveraging their Sun investment, evaluating areas of expansion/improvement and building momentum.

CON9631: Entitlement-centric Access to SOA and Cloud Services
11:45 a.m. – 12:45 p.m., Marriott Marquis, Salon 7
How do you enforce that a junior trader can submit 10 trades/day, with a total value of $5M, if market volatility is low? How can you hide sensitive patient information from clerical workers but make it visible to specialists, as long as consent has been given or there is an emergency? In this session, Uberether and HerbaLife take the stage with Oracle to demonstrate how you can enforce such entitlements on a service, not just within your intranet but also right at the perimeter.

CON3957: Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012
11:45 a.m. – 12:45 p.m., Moscone West 3003
In this session, Virgin Media, the U.K.'s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server, leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services such as video on demand.

CON9493: Identity Management and the Cloud
1:15 p.m. – 2:15 p.m., Moscone West 3008
Security is the number one barrier to cloud service adoption. Not so for industry-leading companies like SaskTel, ConAgra Foods and UPMC. This session will explore how these organizations are using Oracle Identity with cloud services and how some are offering identity management as a cloud service.

CON9624: Real-Time External Authorization for Middleware, Applications, and Databases
3:30 p.m. – 4:30 p.m., Moscone West 3008
As organizations seek to grant access to broader and more diverse user populations, the importance of centrally defined and applied authorization policies becomes critical, both to identify who has access to what and to improve the end-user experience. This session will explore how customers are using attribute- and role-based access to achieve these goals.

CON9625: Taking Control of WebCenter Security
5:00 p.m. – 6:00 p.m., Moscone West 3008
Many organizations are extending WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. Leveraging LADWP's use case, this session will focus on how customers are leveraging, securing and providing access control to Oracle WebCenter portal and mobile solutions.

EVENTS:

Identity Management Customer Advisory Board
2:30 p.m. – 3:30 p.m., Four Seasons – Yerba Buena Room
This invitation-only event is designed exclusively for Customer Advisory Board (CAB) members to provide product strategy and roadmap updates.

Identity Management Meet & Greet Networking Event
3:30 p.m. – 4:30 p.m., Meeting Session
4:30 p.m. – 5:30 p.m., Cocktail Reception
Yerba Buena Room, Four Seasons Hotel, 757 Market Street, San Francisco
The CAB meeting will be immediately followed by an open Meet & Greet event hosted by Oracle Identity Management executives and the product management team. Do take this opportunity to network with your peers and connect with the Identity Management customers.

For a complete listing, refer to the Focus on Identity Management document. And as always, you can find us on @oracleidm on Twitter and Facebook. Use #oow and #idm to join in the conversation.

    Read the article

< Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >