Search Results

Search found 5414 results on 217 pages for 'regular'.

  • How can I keep directories in sync

    - by Guillaume Boudreau
    I have a directory, dirA, that users can work in: they can create, modify, rename and delete files & sub-directories in dirA. I want to keep another directory, dirB, in sync with dirA. What I'd like is a discussion on finding a working algorithm that would achieve the above, with the limitations listed below.

    Requirements:

    1. Something asynchronous - I don't want to stop file operations in dirA while I work in dirB.
    2. I can't assume that I can just blindly rsync dirA to dirB on a regular interval - dirA could contain millions of files & directories, and terabytes of data. Completely walking the dirA tree could take hours.

    Those two requirements make this really difficult. Having it asynchronous means that when I start working on a specific file from dirA, it might have moved a lot since it appeared. And the second limitation means that I really need to watch dirA, and work on the atomic file operations that I notice.

    Current (broken) implementation:

    1. Log all file & directory operations in dirA.
    2. Using a separate process, read that log, and 'repeat' all the logged operations in dirB.

    Why is it broken:

        echo 1 > dirA/file1
        # Allow the 'log reader' process to create dirB/file1:
        #   log = "write dirA/file1"; action = cp dirA/file1 dirB/file1; result = OK
        echo 1 > dirA/file2
        mv dirA/file1 dirA/file3
        mv dirA/file2 dirA/file1
        rm dirA/file3
        # End result in dirA: file1 contains '1'

        # The 'log reader' process starts working on the 4 above file operations:
        #   log = "write file2";        action = cp dirA/file2 dirB/file2; result = failed: there is no dirA/file2
        #   log = "rename file1 file3"; action = mv dirB/file1 dirB/file3; result = OK
        #   log = "rename file2 file1"; action = mv dirB/file2 dirB/file1; result = failed: there is no dirB/file2
        #   log = "delete file3";       action = rm dirB/file3;            result = OK
        # End result in dirB: no more files!

    Another broken example:

        echo 1 > dirA/dir1/file1
        mv dirA/dir1 dirA/dir2

        # The 'log reader' process starts working on the 2 above file operations:
        #   log = "write file1";      action = cp dirA/dir1/file1 dirB/dir1/file1; result = failed: there is no dirA/dir1/file1
        #   log = "rename dir1 dir2"; action = mv dirB/dir1 dirB/dir2;             result = failed: there is no dirB/dir1
        # End result in dirB: nothing!
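
    One pattern worth considering (a minimal sketch in Java, not the poster's code; the journal format and the payload capture are assumptions) is to make the replayer never read dirA at all: capture a write's content at the moment it is logged, and apply renames and deletes purely on dirB paths, strictly in journal order. Under those rules both broken traces above replay correctly; the open cost is payload capture for very large files, which would need snapshots or copy-on-write instead.

        // Hedged sketch: order-safe journal replay that only ever touches dirB.
        import java.io.IOException;
        import java.nio.file.*;

        final class JournalEntry {
            enum Op { WRITE, RENAME, DELETE }
            final Op op;
            final String path;      // relative to dirA/dirB
            final String newPath;   // RENAME only
            final byte[] payload;   // WRITE only, captured when the event was logged
            JournalEntry(Op op, String path, String newPath, byte[] payload) {
                this.op = op; this.path = path; this.newPath = newPath; this.payload = payload;
            }
        }

        final class Replayer {
            private final Path dirB;
            Replayer(Path dirB) { this.dirB = dirB; }

            // Entries must be applied in the exact order they were logged.
            void apply(JournalEntry e) throws IOException {
                switch (e.op) {
                    case WRITE:
                        Path target = dirB.resolve(e.path);
                        Files.createDirectories(target.getParent());
                        Files.write(target, e.payload);  // never reads dirA
                        break;
                    case RENAME:
                        Files.move(dirB.resolve(e.path), dirB.resolve(e.newPath),
                                   StandardCopyOption.REPLACE_EXISTING);
                        break;
                    case DELETE:
                        Files.deleteIfExists(dirB.resolve(e.path));
                        break;
                }
            }
        }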

  • What if you could work on anything you wanted?

    - by Nick Harrison
    What if you could work on anything you wanted? Redgate is doing an experiment of sorts this week, called Down Tools Week. The idea is that they stop working on their regular projects for a week and strike out on something that catches their attention and drives their passion. Evidently, in many cases these projects have turned out to be new features in their existing products that individuals were interested in; some were internal initiatives and some were evidently off-the-wall new ideas. Today is show and tell, where they will share with each other what they have been working on. There may well be some interesting announcements coming out of this. The prospects are exciting.

    I understand that Google does something similar, allowing their employees a specified amount of time to work on projects of their own choosing. This has been the breeding ground for some of my favorite services. It is a shame that more companies do not follow such practices.

    Now I know that most companies cannot afford to shut down everything for a week, and sometimes you can't really explore an interesting idea in 8 hours a week or however much time Google allocates, but it may still be worthwhile. What would happen if your company gave you, as an individual, 1 week each quarter to work on a project of your own design, just to see what happens? I would be happy even if you still had to get approval before your week-long adventure.

    Personally, I think that this could be a very effective use of training budgets. Give me a week to research something on my own and you would be amazed at what I can find out. Maybe this should be the prerequisite before starting a new project: stagger the team onboarding, but have everyone spend a week-long sabbatical studying BizTalk before starting a project that will hinge on BizTalk.

    The show and tell afterwards is a great way to keep everyone honest, or at least reassure management that everyone is honest. If your goal was to spend a week researching and exploring a new technology, and you had to do a show and tell afterwards to show off what you had learned, then everyone can learn a bit of what you just learned. Sounds like a promising win-win to me.

    Maybe it is a pipe dream, but what if...? What would you work on if given the opportunity to work on anything you wanted?

  • Stuff I learned at Innovate 2011

    - by David Dorf
    After returning from the NRF Innovate 2011 conference, I picked up a few nuggets I thought I'd share here. These thoughts are a bit random, but I hope they're useful nonetheless.

    Kevin Kelly opened the conference with six verbs that represent the future: Screening, Interacting, Sharing, Accessing, Flowing, and Generating. It struck me that these are all ways in which we merge the digital and physical worlds. The internet of things continues to gain momentum.

    Some buzzwords: deal economy, subscription commerce, discovery (instead of search), curation. That last one, curation, came up over and over. Retailers, especially those in fashion, are finding value in helping their customers organize and present their own collections. Social media has made sharing such collections easy, and mobile lets them take those ideas into the stores. Mannequins are becoming less relevant.

    I heard from both HauteLook and Gilt Groupe (flash sale retailers) that a large percentage of their visits come from mobile devices, and most of those are iOS devices. I find it interesting that even though Android has passed iPhone in units shipped (and will eventually pass iOS as a whole), it's still the Apple crowd that leads the way.

    RadioShack mentioned their Holiday Heroes campaign was very successful. They asked their Foursquare users to check in at a gym, coffee shop, and transportation hub as part of being a hero. For this feat, customers were awarded a special badge that was worth 20% off at their next store visit. They claim a 3.5x increase in ticket size vs. regular check-in customers, and a 5x increase vs. those that don't check in at all.

    I also learned of RadioShack's #28 campaign, which is apparently one of the largest Twitter trends ever. Their partnership with LIVESTRONG has gotten them followers, impressions, and credit for supporting the fight against cancer.

    The guys at Invodo showed the importance of video to e-commerce. They gave compelling examples of how video can show customers the value of products better than just words.

    The highlight of the show was Guy Kawasaki's talk on innovation, which was not only informative but also peppered with humor and personality. Back in the early days of the internet boom, Guy turned down the CEO position at Yahoo! because the commute was too long. By his calculation, that was a $2B mistake.

    There are other good accounts of the conference at the NRF Blog.

  • Sometimes keyboard & touchpad work... sometimes not

    - by Voyagerfan5761
    When I first ran Ubuntu from CD on this Dell Inspiron 2650, it worked for about ten to fifteen minutes, then it hung (I was probably trying to do too much at once from a Live CD). The next time, my mouse and keyboard didn't work. I rebooted three times and finally got them working. I then installed Ubuntu alongside Windows XP. After installing, selecting the OS in GRUB worked, but my touchpad and keyboard were again not working. I rebooted, and they worked. (I fortunately had a USB mouse with which to reboot.)

    Booting Ubuntu and then rebooting to enable my keyboard and touchpad has become a routine ever since. Often several reboots are required; at one point I had to reboot over a dozen times in a row before getting a session where everything worked properly. (My installation has been in place for about a week now.)

    I've looked around for a device manager equivalent to no avail. Sometimes the hardware is properly detected, and sometimes it's not. Once or twice I've had the keyboard detected properly but the touchpad not. Plugging in my wireless card also sometimes requires a plug, unplug, and plug again to get it working. So is there some solution? I'm without an Internet connection at home, and this "laptop" is really a wall wart on my desk, so suggestions for packages may take a while to test.

    Xorg logs

    I captured four sample Xorg logs: one from a startup where the devices worked; one from when they didn't; one from a session where Ubuntu thought my touchpad was a normal mouse; and one from a session where my keyboard worked but the touchpad didn't. See this gist.

    Updated 2010-12-15 01:50 UTC with Xorg.0.log.keyboardonly file illustrating the case where the keyboard worked but not the touchpad.

    Updated 2011-01-11 04:10 UTC with Xorg.0.log.touchpadregmouse to illustrate a case where the touchpad was detected as a regular mouse (no "Touchpad" tab in mouse prefs).

  • ConsumeStructuredBuffer, what am I doing wrong?

    - by John
    I'm trying to implement the 3rd exercise in chapter 12 of Introduction to 3D Game Programming with DirectX 11, that is: implement a compute shader to calculate the lengths of 64 vectors. Previous exercises ask you to do the same with typed buffers and regular structured buffers, and I had no problems with them. From what I've read, [Consume|Append]StructuredBuffers are bound to the pipeline using UnorderedAccessViews (as long as they use the D3D11_BUFFER_UAV_FLAG_APPEND flag, and the buffers have both the D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_UNORDERED_ACCESS bind flags).

    Problem is: my AppendStructuredBuffer works, since I can append data to it and retrieve it from the application to write to a results file, but the ConsumeStructuredBuffer always returns zeroed data. Data is in the buffer, since if I change the UAV to a ShaderResourceView and the buffer to a StructuredBuffer on the HLSL side, it works. I don't know what I am missing:

    - Should I initialize the ConsumeStructuredBuffer on the GPU, or can I do it when I create the buffer (as I am currently doing)?
    - Is it OK to bind the buffer with a UAV as described above? Do I need to bind it as a ShaderResourceView somehow?
    - Maybe I am missing some step?

    This is the declaration of the buffers in the compute shader:

        struct Data
        {
            float3 v;
        };

        struct Result
        {
            float l;
        };

        ConsumeStructuredBuffer<Data> gInput;
        AppendStructuredBuffer<Result> gOutput;

    And here is the creation of the buffer and UAV for the input data:

        D3D11_BUFFER_DESC inputDesc;
        inputDesc.Usage = D3D11_USAGE_DEFAULT;
        inputDesc.ByteWidth = sizeof(Data) * mNumElements;
        inputDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
        inputDesc.CPUAccessFlags = 0;
        inputDesc.StructureByteStride = sizeof(Data);
        inputDesc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;

        D3D11_SUBRESOURCE_DATA vinitData;
        vinitData.pSysMem = &data[0];
        HR(md3dDevice->CreateBuffer(&inputDesc, &vinitData, &mInputBuffer));

        D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
        uavDesc.Format = DXGI_FORMAT_UNKNOWN;
        uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
        uavDesc.Buffer.FirstElement = 0;
        uavDesc.Buffer.Flags = D3D11_BUFFER_UAV_FLAG_APPEND;
        uavDesc.Buffer.NumElements = mNumElements;
        md3dDevice->CreateUnorderedAccessView(mInputBuffer, &uavDesc, &mInputUAV);

    Initial data is an array of Data structs, which contain an XMFLOAT3 with random data. I bind the UAV to the shader using the Effects framework:

        ID3DX11EffectUnorderedAccessViewVariable* Input =
            mFX->GetVariableByName("gInput")->AsUnorderedAccessView();
        Input->SetUnorderedAccessView(uav); // uav is mInputUAV

    Any ideas? Thank you.
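
    One detail worth checking, offered as a hedged guess rather than a confirmed fix: append/consume buffers carry a hidden atomic counter, and that counter is only set when the UAV is bound with an explicit initial count. If the binding path (here, the Effects framework's SetUnorderedAccessView) never supplies one, the consume buffer's counter can sit at 0 and every Consume() returns zeroed data. Binding the UAVs directly with ID3D11DeviceContext::CSSetUnorderedAccessViews lets you pass the counts; mOutputUAV and the device-context variable below are assumptions based on the book's conventions.

        // Hedged sketch: bind both UAVs directly so their hidden counters
        // can be initialized. The consume buffer starts "full" (mNumElements),
        // the append buffer starts empty (0).
        ID3D11UnorderedAccessView* uavs[2] = { mInputUAV, mOutputUAV };
        UINT initialCounts[2] = { (UINT)mNumElements, 0 };
        md3dImmediateContext->CSSetUnorderedAccessViews(0, 2, uavs, initialCounts);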

  • SOA, Cloud & Service Technology Symposium 2012 London

    - by JuergenKress
    Registration is now open, with special pricing for Oracle partners. For the exclusive Oracle discount, enter promo code DJMXZ370.

    OVERVIEW

    The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF).

    KEYNOTES & SPEAKERS

    More than 80 international subject matter experts will be speaking at the Symposium. Below are the confirmed keynotes and speakers so far; over 50% of the agenda has not yet been finalized, and many more speakers are to come. View the partial program calendars on the Conference Agenda page.

    - Thomas Erl, Arcitura Education: "SOA, Cloud Computing & Semantic Web Technology: The Sequel - The Era of Intelligent Service Technology"
    - Markus Zirn, Oracle: "Big Data with CEP and SOA"
    - Clemens Utschig, Boehringer Ingelheim Pharma, and Manas Deb, Oracle: "The Successful Execution of the SOA and BPM Vision"
    - Tim E. Hall, Oracle: "Community Management: The Next Wave of SOA Governance and API Management"

    SPONSORSHIP OPPORTUNITIES

    The Symposium provides an excellent opportunity to promote your organization in the lead-up to the event, to delegates during the Symposium, and afterwards when the proceedings are made available on the Symposium web site. There are a limited number of premier sponsorship packages available, and a package can be tailored to your needs and budget. Download the Symposium Sponsorship Guide.

    SOA & BPM Partner Community

    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: SOA Symposium, SOA Cloud Service Technology Symposium, Thomas Erl, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

  • WebLogic Server 11gR1 Interactive Quick Reference

    - by JuergenKress
    The WebLogic Server 11gR1 Administration interactive quick reference is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available for administrators for online or offline use. It is built as a multimedia web page which provides descriptions of WebLogic Server architectural components and references to relevant documentation.

    This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner. Each interactive type presents data that may be available in the documentation (in the case of Oracle products), but presents it in a way that is more intuitive and useful, because it displays data the way it is used in a real-world, best-practice scenario. For example, the architectural diagram interactive type provides an image of an architectural diagram that is typically larger than a single slide or paper. The image is scrollable and provides zoom capabilities to easily and clearly view any part of the image. The image itself contains a hotspot map that you can click to get more information about a feature, including reference links to the documentation in question. Linking the visual image of the component, and where it fits in the overall architecture of the product or technology in use, to the technical explanation and how-to materials related to that component is something not offered by the documentation. In a future release, the poster will also enable you to drill down even further into the individual subsystems in nested diagrams to look at the details of that subsystem.

    In short, the interactive posters are good at showing you the big picture, then quickly and easily getting you to the detailed information you need. In an instant, you can see where a technical component fits into an overall architecture, and zero in on the nitty-gritty details that show you how to do it yourself.

    Note: This is a first initial release, with more features in development. Currently known limitations: only Firefox 8.0 and higher is known to work with this product. It may work with the Chrome and Safari browsers, but is known to have issues in Internet Explorer at this time. Mobile devices, such as iPads and iPhones, are partially supported.

    WebLogic Partner Community

    For regular information become a member of the WebLogic Partner Community. Please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: WebLogic server quick reference, weblogic overview, weblogic 12c, WebLogic Community, Oracle, OPN, Jürgen Kress

  • Partner Webcast – Oracle SOA Suite 12c: Connect 4 Cloud, Mobile, IoT with On-premise - August 28th 2014

    - by JuergenKress
    Thursday, August 28th 2014 - SOA Suite 12c Webcast

    The pace of new business projects continues to grow, from increasing customer self-service to seamlessly connecting all your back office and in-the-field applications. At the same time, increased integration complexity may seem inevitable, as organizations are suddenly faced with the requirement to support three new integration challenges:

    » Cloud Integration - rapidly integrate a growing list of cloud applications with existing applications
    » Mobile Integration - the urgency to mobile-enable existing applications
    » IoT Integration - begin development on the latest trend of connecting Internet of Things (IoT) devices to your existing infrastructure

    Join this webcast to get an overview of Oracle SOA Suite 12c and how it positions you to meet these challenges on an open, standards-based platform.

    Oracle SOA Suite 12c

    Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution, aims to simplify, accelerate and optimize integrations. Oracle SOA Suite 12c and its associated products - Oracle Managed File Transfer, Oracle Cloud and Application Adapters, B2B and healthcare integration - offer the industry's most highly integrated platform for solving the increased integration challenges. Oracle SOA Suite 12c is a complete, integrated and best-of-breed platform. It enables next-generation integration capabilities through:

    - A unified toolset for the development of services and composite applications.
    - A standards-based platform that is service enabled and easily consumable by modern web applications, allowing enterprises to quickly and easily adapt to changes in their business and IT environments.
    - Greater visibility, controls and analytics to govern how services and processes are deployed, reused and changed across their entire lifecycle.

    Join us to find out more about the new features of Oracle SOA Suite 12c and how it enables you to reduce time to market for new project integration and to reduce integration cost and complexity. Its key strength is the ability to simplify, by integrating the disparate requirements of cloud, mobile, and IoT devices with existing on-premise applications.

    Agenda:

    - Oracle SOA Suite 12c new features
    - Cloud Integration
    - Mobile Enablement
    - Internet of Things (IoT)
    - Summary - Q&A

    For details please visit our registration page here. Thursday, Aug 28th 2014, 10am CET (9am GMT / 11am EEST).

    SOA & BPM Partner Community

    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: SOA Suite 12c, Community, Oracle SOA, Oracle BPM, OPN, Jürgen Kress, SOA

  • A brief note for customers running SOA Suite on AIX platforms

    - by christian
    When running Oracle SOA Suite with IBM JVMs on the AIX platform, we have seen performance slowdowns and/or memory leaks. On occasion, we have even encountered some OutOfMemoryError conditions and the concomitant Java coredump. If you are experiencing this issue, the resolution may be to configure -Dsun.reflect.inflationThreshold=0 in your JVM startup parameters.

    https://www.ibm.com/developerworks/java/library/j-nativememory-aix/ contains a detailed discussion of the IBM AIX JVM memory model, but I will summarize my interpretation and understanding of it in the context of SOA Suite, below.

    Java ClassLoaders on IBM JVMs are allocated a native memory area into which they are anticipated to map such things as jars loaded from the filesystem. This is an excellent memory optimization, as the file can be loaded into memory once and then shared amongst many JVMs on the same host, allowing for excellent horizontal scalability on AIX hosts.

    However, Java ClassLoaders are not used exclusively for loading files from disk. A performance optimization by the Oracle Java language developers enables reflectively accessed data to be optimized from a JNI call into Java bytecodes, which are then amenable to HotSpot optimizations, amongst other things. This performance optimization is called inflation, and it is executed by dynamically generating a sun.reflect.DelegatingClassLoader instance to inject the Java bytecode into the virtual machine. It is generally considered an excellent optimization. However, it interacts very negatively with the native memory area allocated by the IBM JVM, effectively locking out memory that could otherwise be used by the Java process.

    SOA Suite and WebLogic are both very large users of reflection. They reflectively use many code paths in their operation, generating lots of DelegatingClassLoaders in normal operation. The IBM JVM slowdown and subsequent OutOfMemoryError are a direct result of the Java memory consumed by the DelegatingClassLoader instances generated by SOA Suite and WebLogic. Java garbage collection runs more frequently to try and keep memory available, until it can no longer do so and throws OutOfMemoryError.

    The setting sun.reflect.inflationThreshold=0 disables this optimization entirely, never allowing the JVM to generate the optimized reflection code.

    IBM JVMs are susceptible to this issue primarily because all Java ClassLoaders have this native memory allocation, which is shared with the regular Java heap. Oracle JVMs don't automatically give all ClassLoaders a native memory area, and my understanding is that jar files are never mapped completely from shared memory in the same way as IBM does it. This results in different behaviour characteristics on IBM vs. Oracle JVMs.
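
    As a small illustration of the mechanism (not part of the original note; the class below is mine), the following program invokes a method reflectively in a loop. After enough calls, the JVM inflates the call site by generating a bytecode accessor class loaded by its own sun.reflect.DelegatingClassLoader; running with -verbose:class makes those generated classes visible, and on IBM JVMs -Dsun.reflect.inflationThreshold=0 suppresses them.

        // Minimal demo of reflection inflation. Run with -verbose:class to
        // watch sun.reflect.GeneratedMethodAccessor* classes being loaded.
        import java.lang.reflect.Method;

        public class InflationDemo {
            public static String greet() { return "hello"; }

            public static void main(String[] args) throws Exception {
                Method m = InflationDemo.class.getMethod("greet");
                for (int i = 0; i < 1000; i++) {
                    m.invoke(null); // repeated reflective calls trigger inflation
                }
                System.out.println("done");
            }
        }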

  • The Birth of SSAS Compare

    - by Red Gate Software BI Tools Team
    Noemi Moreno, Red Gate Business Intelligence Specialist

    Software vendors - even Microsoft - tend to forget about the needs of business intelligence developers. We are a rare and rather invisible species. For example, BIDS remained in VS 2008 until SQL Server 2012. It took until this release before we got something as simple as an "undo" function.

    Before I joined Red Gate as a BI specialist, I worked on SQL Development. I'll never forget the time I discovered Red Gate's SQL Compare tool and how it reduced the task of preparing a database release from a couple of days to ten minutes. When I moved to SSAS, MDX and cubes, I became frustrated with the deployment process because I couldn't find a tool that made cube releases as easy as they are with SQL Compare. This became my quest.

    I pitched the idea to a few people in Red Gate's regular Down Tools Week, when everyone puts down their day-to-day tasks and works on their own projects. My task was to reason with a roomful of cynical developers, hardened to the blandishments of project managers, for help to develop a tool that would compare two different SSAS databases and create the script to process only the objects that needed processing, thereby reducing release time to only a few minutes.

    I walked to the podium and gave them the full story of the distressed BI specialists, doomed to spend tedious hours preparing deployment scripts. A few developers recovered from their torpor to cast a languid eye at my presentation. It wasn't enough. In a sudden impulse, I blurted out a promise to perform a flamenco dance just for the team if the tool was able to successfully compare two SSAS databases and generate a script by the end of the week.

    I was lucky enough that some of them believed me and jumped in: David Pond (Dev), Matt Burton (Dev), Tilman Bregler (Dev), Shobana Sekar (Test), Ruchija Raj (Test), Nick Sutherland (Product Manager) and Irma Tanovic (BI). They didn't know that Irma and I would be away at a conference in Amsterdam and would leave them without our support. But to my surprise, they had a working tool by the time we came back - basic, and with a few bugs, but a working tool nonetheless! Seeing it compare a very basic SSAS database, detect the changes and generate the scripts was amazing! Something that normally takes half a day was done in under a minute.

    Since then, a few months have passed and a BI Tools team has been created at Red Gate to work full time on BI tools for BI developers, starting with SSAS Compare. How cool is that? So download the free beta and give us your feedback.

    And the flamenco? I still need to deliver that. Tilman reminds me every day! I need to get the full flamenco costume.

  • 2D game - Missile shooting problem on Android

    - by Niksa
    Hello, I have to make a tank that sits still but moves its turret and shoots missiles. As this is my first Android application ever, and I haven't done any game development either, I've come across a few problems... Now, I did the tank and the moving turret once I read the Android tutorial for the LunarLander sample code, so this code is based on the LunarLander code. But I'm having trouble making the missile fire when the SPACE button is pressed.

        private void doDraw(Canvas canvas) {
            canvas.drawBitmap(backgroundImage, 0, 0, null);

            // draws the tank
            canvas.drawBitmap(tank, x_tank, y_tank, new Paint());

            // draws and rotates the tank turret
            canvas.rotate((float) mHeading, (float) x_turret + mTurretWidth, y_turret);
            canvas.drawBitmap(turret, x_turret, y_turret, new Paint());

            // draws the grenade, which is a regular circle from the ShapeDrawable class
            bullet.setBounds(x_bullet, y_bullet, x_bullet + width, y_bullet + height);
            bullet.draw(canvas);
        }

    The updateGame method:

        private void updateGame() throws InterruptedException {
            long now = System.currentTimeMillis();
            if (mLastTime > now)
                return;
            double elapsed = (now - mLastTime) / 1000.0;
            mLastTime = now;

            // dUp and dDown rotate the turret from 0 to 75 degrees.
            if (dUp)
                mHeading += 1 * (PHYS_SLEW_SEC * elapsed);
            if (mHeading >= 75)
                mHeading = 75;
            if (dDown)
                mHeading += (-1) * (PHYS_SLEW_SEC * elapsed);
            if (mHeading < 0)
                mHeading = 0;

            if (dSpace) {
                // missile logic, a straight trajectory for now
                x_bullet -= 1;
                y_bullet -= 1;
                // doesn't work, has to be updated every pixel or what?
            }
        }

    The key handler:

        boolean doKeyDown(int keyCode, KeyEvent msg) {
            boolean handled = false;
            synchronized (mSurfaceHolder) {
                if (keyCode == KeyEvent.KEYCODE_SPACE) {
                    dSpace = true;
                    handled = true;
                }
                if (keyCode == KeyEvent.KEYCODE_DPAD_UP) {
                    dUp = true;
                    handled = true;
                }
                if (keyCode == KeyEvent.KEYCODE_DPAD_DOWN) {
                    dDown = true;
                    handled = true;
                }
                return handled;
            }
        }

    And the run method that runs the game:

        public void run() {
            while (mRun) {
                Canvas c = null;
                try {
                    c = mSurfaceHolder.lockCanvas(null);
                    synchronized (mSurfaceHolder) {
                        if (mMode == STATE_RUNNING)
                            updateGame();
                        doDraw(c);
                    }
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    // do this in a finally so that if an exception is thrown
                    // during the above, we don't leave the Surface in an
                    // inconsistent state
                    if (c != null) {
                        mSurfaceHolder.unlockCanvasAndPost(c);
                    }
                }
            }
        }

    So the question would be: how do I make the bullet fire on a single SPACE key press and travel from the turret to the end of the screen? Could you help me out here? I seem to be in the dark... Thanks, Niksa
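
    A hedged sketch of one common approach (reusing the question's fields where possible; bulletActive, the velocity fields, BULLET_SPEED and mCanvasWidth are assumptions, and fx_bullet/fy_bullet are hypothetical double shadows of the int coordinates): fire once on the key press by giving the bullet a velocity, then advance it by elapsed time each frame instead of only while the key is held. In doDraw(), the bullet would then be drawn only while bulletActive is true.

        // In doKeyDown(): spawn one bullet per SPACE press instead of setting
        // a flag that moves it only while the key is down.
        if (keyCode == KeyEvent.KEYCODE_SPACE && !bulletActive) {
            bulletActive = true;
            fx_bullet = x_turret;   // start at the turret
            fy_bullet = y_turret;
            vx_bullet = BULLET_SPEED * Math.cos(Math.toRadians(mHeading));
            vy_bullet = -BULLET_SPEED * Math.sin(Math.toRadians(mHeading));
            handled = true;
        }

        // In updateGame(): move by velocity * elapsed so the speed is
        // frame-rate independent; retire the bullet once it leaves the screen.
        if (bulletActive) {
            fx_bullet += vx_bullet * elapsed;
            fy_bullet += vy_bullet * elapsed;
            x_bullet = (int) fx_bullet;   // setBounds() needs ints
            y_bullet = (int) fy_bullet;
            if (x_bullet < 0 || x_bullet > mCanvasWidth || y_bullet < 0) {
                bulletActive = false;     // off-screen: allow the next shot
            }
        }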

  • Survey: Your Plans for Adopting New Firefox Releases?

    - by Steven Chan (Oracle Development)
    Mozilla is committing to releasing new Firefox versions every six weeks. Mozilla released Firefox 5 this week. With this release, Mozilla states that Firefox 4 is End-of-Life and will not receive any additional security updates.

    In a comment thread posted to Mike Kaply's blog article discussing these new Firefox policies, Asa Dotzler from Mozilla stated:

        ... Enterprise has never been (and I'll argue, shouldn't be) a focus of ours. Until we run out of people who don't have sysadmins and enterprise deployment teams looking out for them, I can't imagine why we'd focus at all on the kinds of environments you care so much about.

    In a later comment, he added:

        ... A minute spent making a corporate user happy can better be spent making many regular users happy. I'd much rather Mozilla spending its limited resources looking out for the billions of users that don't have enterprise support systems already taking care of them.

    Asa then confirmed that every new Firefox release will put the previous one into End-of-Life:

        As for John's concern, "By the time I validate Firefox 5, what guarantee would I have that Firefox 5 won't go EOL when Firefox 6 is released?" He has the opposite of guarantees that won't happen. He has my promise that it will happen. Firefox 6 will be the EOL of Firefox 5. And Firefox 7 will be the EOL for Firefox 6.

    He added:

        "You're basically saying you don't care about corporations." Yes, I'm basically saying that I don't care about making Firefox enterprise friendly.

    Kev Needham, Channel Manager at Mozilla, later stated to PC Mag:

        The Web and Web browsers continue to evolve rapidly. Mozilla's focus is on providing users with the best Web experience possible, and Firefox needs to evolve at the pace the Web's users and developers expect. By releasing small, focused updates more often, we are able to deliver improved security and stability even as we introduce new features, which is better for our users, and for the Web.

        We recognize that this shift may not be compatible with a large organization's IT policy and understand that it is challenging to organizations that have effort-intensive certification policies. However, our development process is geared toward delivering products that support the Web as it is today, while innovating and building future Web capabilities. Tying Firefox product development to an organizational process we do not control would make it difficult for us to continue to innovate for our users and the betterment of the Web.

    Your feedback needed for E-Business Suite certifications

    Mozilla's new support policy has significant implications for enterprise users of Firefox with Oracle E-Business Suite. We are reviewing the implications for our certification and support policies for Firefox now. It would be very helpful if you could let me know about your organisation's plans for Firefox in light of this new information. Please feel free to drop me a private email, or post a comment here if that's appropriate.

  • Why would I learn C++11, having known C and C++?

    - by Shahbaz
    I am a programmer in C and C++, although I don't stick to either language and write a mixture of the two. Sometimes having code in classes, possibly with operator overloading, or templates and the oh-so-great STL is obviously a better way. Sometimes the use of a simple C function pointer is much more readable and clear. So I find beauty and practicality in both languages. I don't want to get into the discussion of "If you mix them and compile with a C++ compiler, it's not a mix anymore, it's all C++" - I think we all understand what I mean by mixing them. Also, I don't want to talk about C vs. C++; this question is all about C++11.

    C++11 introduces what I think are significant changes to how C++ works, but it has also introduced many special cases that change how different features behave in different circumstances, placing restrictions on multiple inheritance, adding lambda functions, etc. I know that at some point in the future, when you say C++, everyone will assume C++11, much like when you say C nowadays you most probably mean C99. That makes me consider learning C++11. After all, if I want to continue writing code in C++, I may at some point need to start using those features simply because my colleagues have.

    Take C for example. After so many years, there are still many people learning and writing code in C. Why? Because the language is good. What good means is that it follows many of the rules for creating a good programming language. So besides being powerful (which, easy or hard, almost all programming languages are), C is regular and has few exceptions, if any. C++11, however, I don't think so. I'm not sure that the changes introduced in C++11 are making the language better.

    So the question is: Why would I learn C++11?

    Update: My original question in short was: "I like C++, but the new C++11 doesn't look good because of this and this and this. However, deep down something tells me I need to learn it. So, I asked this question here so that someone would help convince me to learn it." However, the zealous people here can't tolerate pointing out a flaw in their language and were not at all constructive in this manner. After the moderator edited the question, it became more like "So, how about this new C++11?", which was not at all my question. Therefore, in a day or two I am going to delete this question if no one comes up with an actual convincing argument.

    P.S. If you are interested in knowing what flaws I was talking about, you can edit my question and see the previous edits.
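
    For readers who haven't met the features the question alludes to, here is a small standalone illustration (standard C++11, not tied to any particular codebase) of two of them, lambdas and auto:

        // Two C++11 features the question mentions in passing: lambda
        // functions and type deduction with auto.
        #include <algorithm>
        #include <iostream>
        #include <vector>

        int main() {
            std::vector<int> v{5, 2, 9, 1};

            // Lambda passed straight to an algorithm; no named functor needed.
            std::sort(v.begin(), v.end(),
                      [](int a, int b) { return a < b; });

            // auto deduces the iterator type C++98 forced you to spell out.
            for (auto it = v.begin(); it != v.end(); ++it)
                std::cout << *it << ' ';
            std::cout << '\n';
            return 0;
        }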

  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained.

    Orchestration or Integration?

    A common thing to see with many people who are beginning either to build a new SOA-based infrastructure or to move an old system to be service oriented is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however it's important to remember that, although a hammer can be used to drive a screw into wood, that doesn't mean it's the best way to do it.

    Service integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation - a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process orchestration, however, is generally a higher-level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might take the earlier example of a product ordering service and couple it with a business rules service and a human task to handle edge cases. A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention.

    BPEL vs BPMN

    For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it's certainly an important consideration for those using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component "Mediator", whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line.

    Design for performance: read the complete article here.

    SOA & BPM Partner Community

    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: C2B2, SOA best practice, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG).

    Use of IMDG architectures has been key to handling today's web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties.

    CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these "multi-client" transactions at speed with no loss in ACID properties.

    Developing software around an IMDG like Oracle Coherence is an important choice for today's web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without using CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly.

    For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected].

    WebLogic Partner Community

    For regular information become a member of the WebLogic Partner Community. Please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: Coherence, cloudtran, cache, WebLogic Community, Oracle, OPN, Jürgen Kress

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I've been meaning to write for a while; good that it fits with this month's T-SQL Tuesday, hosted by Joey D'Antoni (@jdanton).

    Ever since I got into databases, I've been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I've also spent a long time as a developer, and appreciate that databases don't exactly fit within the stuff I learned in my first year of uni, particularly the "Algorithms and Data Structures" subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it's why I'm a fan of database internals, of indexes, latches, execution plans, and so on - the developer in me wants to be reassured that we're getting to the data as efficiently as possible.

    Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that it was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I'd looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn't exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance.

    SQL Server 2014 is different though, through Hekaton - its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you'd've made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it's durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and bw-trees (which avoid locking through the use of pointers), and each update is handled as a delete-and-insert.

    This is data the way that developers do it when they're coding for performance - the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don't support every feature that regular SQL tables do, this is still an excellent direction that has been taken.

    @rob_farley
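
    For readers who want to see what the post describes, here is a hedged sketch of the SQL Server 2014 syntax for such a table (the table and column names are invented, and it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup):

        -- Hedged example: a memory-optimized table with a hash index,
        -- made durable via the small log writes described above.
        CREATE TABLE dbo.SessionState
        (
            SessionId  INT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserName   NVARCHAR(100) NOT NULL,
            LastTouch  DATETIME2 NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);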

  • Is there a usage count for packages or programs?

    - by math
    Motivation: I want to remove applications I do not use, to speed up my package processing tasks like dist-upgrades and regular updates, but also to save disk space, among other reasons. I know this is a complex topic, so first I will ask my question and second I will give some answers I already found out.

    Question: How do I find out which packages I have not used at all? For example, I always use VLC, so I could remove the totem package (which I could have used some day, yes). Of course package dependencies could force me to have programs installed which I will never use.

    Notes:

    - Find the packages which consume much space via synaptic: select "Status" in the lower left, select "Installed" in the upper left, sort the column on "size" in the upper right. Then you can decide which big packages you really need.
    - Use aptitude autoremove.
    - Use ubuntu-tweak's Janitor for removing old kernel packages, old configs, apt-cache entries, etc.
    - Manually search for applications for a given task that you usually solve with your standard app, e.g. movie player, music player, office program, browser etc. (BTW: this is what I want help with in my question.)
    - When removing packages I always favour "apt-get purge" over "aptitude remove --purge", as aptitude will often also remove essential packages due to package dependencies. E.g. when removing "evolution" (as I use thunderbird), aptitude wants to also remove "ubuntu-desktop" and 756 other packages, while apt-get just removes evolution and its helper packages like evolution-common.
    - The Ubuntu lens gives me the most recently used applications, which are candidates for keeping :)
    - Employ deborphan, as I read in this related answer: How do I clean up my harddrive?
    - I should certainly keep essential packages: Keep only essential packages.

    This question is pretty much a duplicate of "How to see what installed packages I have never used for cleaning purposes" but that covers only a few aspects. However, one answer there suggests a program called unusedpkg, but the link seems down. There is also a program called Kleen (http://code.google.com/p/kleen/) but it won't compile in 11.10. I hacked it to compile, but the results are unusable: for example, the g++ package was marked as not used for 203 days, but I had actually used it seconds ago to compile Kleen itself ;) So don't use this tool.

    On http://wiki.debian.org/DebianPackageInformation I read that the package popularity-contest will produce log files with usage statistics. Unfortunately I didn't enable the popularity contest, so I can't find this log file.
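
    As a small, hedged addition (plain shell, standard tools only; the package names are examples, and access times are only meaningful if the filesystem is not mounted with noatime): binaries' last-access timestamps give a rough signal of what has actually been run, and dpkg can map binaries back to packages.

        # "Last accessed" timestamps for a couple of candidate programs:
        ls -lu /usr/bin/totem /usr/bin/vlc

        # Which package owns a binary?
        dpkg -S /usr/bin/totem

        # Largest installed packages first, to pick review candidates:
        dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -rn | head -20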

  • PASS Summit 2011: Save Money Now

    - by Bill Graziano
    Register by March 31st and save $200. On April 1st we increase the price. On July 1st we increase it again. We have regular price bumps all the way through to the Summit. You can save yourself $200 if you register by Thursday.

    In two years of marketing for PASS and a year of finance I've learned a fair bit about our pricing, why we do this and how you react to it. Let me help you save some money!

    Price bumps drive registrations. We see big spikes in the two weeks prior to a price increase. Having a deadline with a cost attached is a great motivator to get people to take action.

    Registering early helps you and it helps PASS. You get the exact same Summit at a cheaper rate. PASS gets smoother cash flow and a better idea of how many people to expect. We also get people that are already registered that will tell their friends about the conference.

    This tiered pricing lets us serve those that are very price conscious. They can register early and take advantage of these discounts. I know there are people that pay for this conference out of their own pockets. This is a great way for those people to reduce the cost of the conference. (And remember for next year that our cheapest pricing starts right after the Summit and usually goes up around the first of the year.)

    We also get big price bumps after we announce the program and the pre-conference sessions. If you wrote down the 50 or so best known speakers in the SQL Server community I'm guessing we'll have nearly all of them at the conference. We did last year. I expect we will this year too. We're going to have good sessions. Why wait? Register today.

    If you want to attend a pre-conference session you can always add it to your registration later. Pre-con prices don't change. It's very easy to update your registration and add a pre-conference session later.

    I want as many people as possible to attend the Summit. It's been a great experience for me and I hope it will be for you. And if you are going to go, do yourself a favor and save some money. Register today!

  • Java Cloud Service for developers

    - by JuergenKress
    The advent of cloud computing has reinvented application development for many companies. "That's the beauty of the cloud," says Cameron Purdy, vice president of development, Oracle. "It dramatically improves developer productivity because they can do what they do best without having to manage complex development, testing, staging, and production environments." The key is to find a platform that doesn't impose proprietary restrictions or force developers to learn new tools.

    For example, Oracle Java Cloud Service is an enterprise-grade platform as a service for building and deploying Java EE, Oracle WebLogic Server, and Oracle Application Development Framework (Oracle ADF) applications. "It's designed to be flexible and easy to use," says Purdy. "And it is also a standards-based solution - it's not proprietary and there is no cloud lock-in. Developers get instant access to an enterprise-grade environment for a simple, monthly subscription."

    Oracle Java Cloud Service instances are created with just a few clicks, so businesses can create a rich application development environment within minutes. Running on Oracle WebLogic Server and Oracle Exalogic, the underlying infrastructure also leverages Oracle Fusion Middleware's integration with common services. For example, instances come integrated and preconfigured with optimized Oracle Database and Oracle Identity Management configurations. Based on Oracle Enterprise Manager, the Oracle Java Cloud Service console lets customers easily manage and monitor their Oracle Java Cloud Service instances.

    The open nature of Oracle Java Cloud Service lets developers integrate through Web services such as SOAP and REST APIs, as well as use their favorite developer tools, whether they are out-of-the-box tools such as Maven and Ant or the productivity features built into Oracle JDeveloper, Oracle Enterprise Pack for Eclipse, or NetBeans IDE. The service allows for the seamless movement of applications between on-premise Oracle WebLogic Server domains and instances of Oracle Java Cloud Service within Oracle Cloud. This approach allows the flexibility to mix and match the use of on-premise environments with cloud instances for development, test, and production environments.

    Visit the Oracle Java Cloud Service site to learn more and watch videos about the service.

    WebLogic Partner Community

    For regular information become a member of the WebLogic Partner Community. Please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: java, cloud, oracle cloud, java cloud, WebLogic Community, Oracle, OPN, Jürgen Kress

  • Does the Ubuntu One sync work?

    - by bisi
    I have been on this for several hours now, trying to get a simple second folder to sync with my (paid) account. I cannot tell you how many times I removed all devices, removed stored passwords, killed all processes of u1, logged out and back in online... and still, the tick in the file browser (Synchronize this folder) is loading and loading and loading. Also, I have logged out and rebooted countless times. And this is after me somehow managing to get the u1 preferences to finally "connect" again. I have also checked the status of your services, and none are close to what I am experiencing. And I have checked the suggested related questions above! So please, just confirm whether it is a problem on my side, or a problem on your side.

    EDIT: In the meantime, here is what has changed, on top of what is mentioned just above.

    • My files went from 0MB to 71.9MB, and the number is still rising.
    • My first folder of 400.2MB is being filled with the data as I write this. The second folder has the folder sub-structure in place.
    • Both folders now show in the File Browser that they will be synchronized.

    I believe that right now it is all back to normal and working fine, and I guess that's what a good night's sleep can do ;). And we're now only back to the point where synchronizing is slow, but that will pick up with the release of Natty (https://wiki.ubuntu.com/UbuntuOne/FAQ/WhyIsItTakingSoLongForMyFilesToSync).

    But to get to the questions: My About dialog says I use 11.04, Natty Narwhal, but I am quite sure the last distribution I installed was 10.10. Folder A is 400.2MB, and Folder B is 29.5MB. I am on a DSL line, behind a regular fritz.box setup. No proxy servers are in use, and I did not install any particular firewall features. No physical firewall, just the router (on which I have a TV signal as well), and 2 switches to get to this floor. Status: inactive. The ubuntuone-indicator opens the same window as when I click on my name in the top-right corner and select Ubuntu One, or choose Ubuntu One in the Control Center. It wasn't supposed to go further than this, was it?

  • Defining a service layer: the text-based adventure

    - by Stacy Vicknair
    Applications these days have more options than ever for a user interface, and the number of options is only going to grow. A successful product might require native applications for mobile devices, a regular web implementation, or even a gaming console. These systems often will be centralized and data driven. The solution is a fairly solitary one: a service layer! Simply put, take what's shared and put it behind a physical or abstract layer that defines the boundary between the specific user interface and the shared content.

    I know, I know, none of this is complicated. But sometimes it can be difficult to discern what belongs on which side of the line. For instance, say we're creating a service that will provide content for both an ASP.NET MVC application and a WP7 application. Although the content served to each application is the same, there are different paradigms and patterns for displaying that data in the different environments. In ASP.NET MVC, you may create a model specific to a page that combines necessary information. In the WP7 application you might require different sets of data that you will connect via MVVM with the view.

    The general rule of thumb is that any shared content, business rules, or data should exist separately. Any element that is specific to the current UI implementation should be included in a separate library or with the UI implementation itself. The WP7 application doesn't need my MVC-specific model classes. My MVC application doesn't require those INotifyPropertyChanged viewmodels that the WP7 application depends on. In both cases, there should be additional processing done above the service layer to massage the data to the application's specific needs.

    Service-ocalypse: the text based adventure

    What helps me the most in deciding whether something belongs coupled to the UI implementation or in the shared implementation is thinking of the simplest implementation you could have: a console application. You might have played a game like Peasant's Quest. The console app is the text-based adventure game version of your application. If your service was consumed in its simplest form, you would simply have a console-based API for it that issues requests. Maybe those requests aren't SWIM TO BOAT, but they might be CREATE USER JOHN. If I issue a request, I expect that request to be issued to the service. If the service has any exceptions or issues with my input, that business logic should be encapsulated in that service, not implemented in the UI. The service layer should be your functional application in its entirety, and anything above that layer should only assist with the display of that information.
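
    To make the boundary concrete, here is a small, hedged sketch (plain Java; every name is invented, not from the post): the shared service exposes UI-agnostic operations, and the "text-based adventure" console client is just one thin consumer on top of it.

        // Hedged sketch: a UI-agnostic service boundary plus the simplest
        // possible consumer - a console client issuing "CREATE USER JOHN".
        import java.util.Scanner;

        interface UserService {                 // the shared service layer
            String createUser(String name);     // business rules live here
        }

        class InMemoryUserService implements UserService {
            @Override
            public String createUser(String name) {
                if (name == null || name.isEmpty()) {
                    return "ERROR: user name required"; // validation stays below the UI
                }
                return "OK: created user " + name;
            }
        }

        public class TextAdventureClient {
            public static void main(String[] args) {
                UserService service = new InMemoryUserService();
                Scanner in = new Scanner(System.in);
                System.out.println("> try: CREATE USER JOHN");
                while (in.hasNextLine()) {
                    String[] words = in.nextLine().trim().split("\\s+");
                    if (words.length == 3 && words[0].equalsIgnoreCase("CREATE")
                            && words[1].equalsIgnoreCase("USER")) {
                        // The client only parses and displays; the service
                        // decides what is valid.
                        System.out.println(service.createUser(words[2]));
                    } else {
                        System.out.println("unknown command");
                    }
                }
            }
        }

    Swapping the console for MVC or WP7 changes only the consumer; the UserService boundary, and everything behind it, stays identical.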

  • Why can't the IT industry deliver large, faultless projects quickly as in other industries?

    - by MainMa
    After watching National Geographic's MegaStructures series, I was surprised how fast large projects are completed. Once the preliminary work (design, specifications, etc.) is done on paper, the realization of huge projects itself takes just a few years or sometimes a few months. For example, the Airbus A380 was "formally launched on Dec. 19, 2000", and by early March 2005 the aircraft was already being tested. The same goes for huge oil tankers, skyscrapers, etc. Comparing this to the delays in the software industry, I can't help wondering why most IT projects are so slow, or more precisely, why they cannot be as fast and faultless, at the same scale, given enough people? Projects such as the Airbus A380 present both: Major unforeseen risks: while this is not the first aircraft built, it still pushes the limits of the technology, and things which worked well for smaller airliners may not work for the larger one due to physical constraints; in the same way, new technologies are used which had not been used before, because, for example, they were not available in 1969 when the Boeing 747 was designed. Risks related to human resources and management in general: people quitting in the middle of the project, inability to reach a person because she's on vacation, ordinary human errors, etc. With those risks, people still complete projects like those large airliners in a very short period of time, and despite the delivery delays, those projects are still hugely successful and of a high quality. When it comes to software development, the projects are hardly as large and complicated as an airliner (both technically and in terms of management), and carry slightly fewer unforeseen risks from the real world. Still, most IT projects are slow and late, and adding more developers to the project is not a solution (going from a team of ten developers to two thousand will sometimes deliver the project faster, sometimes not, and sometimes will only harm the project and increase the risk of not finishing it at all). Those which are delivered often contain a lot of bugs, requiring successive service packs and regular updates (imagine "installing updates" on every Airbus A380 twice per week to patch the bugs in the original product and prevent the aircraft from crashing). How can such differences be explained? Is it due exclusively to the fact that the software development industry is too young to be able to manage thousands of people on a single project in order to deliver large-scale, nearly faultless products very fast?

    Read the article

  • Is it OK to repeat code for unit tests?

    - by Pete
    I wrote some sorting algorithms for a class assignment, and I also wrote a few tests to make sure the algorithms were implemented correctly. My tests are only about 10 lines long, and there are 3 of them, but only 1 line changes between the 3, so there is a lot of repeated code. Is it better to refactor this code into another method that is then called from each test? Wouldn't I then need to write another test to test the refactoring? Some of the variables can even be moved up to the class level. Should testing classes and methods follow the same rules as regular classes/methods? Here's an example:

        [TestMethod]
        public void MergeSortAssertArrayIsSorted()
        {
            int[] a = new int[1000];
            Random rand = new Random(DateTime.Now.Millisecond);
            for (int i = 0; i < a.Length; i++)
            {
                a[i] = rand.Next(Int16.MaxValue);
            }
            int[] b = new int[1000];
            a.CopyTo(b, 0);
            List<int> temp = b.ToList();
            temp.Sort();
            b = temp.ToArray();
            MergeSort merge = new MergeSort();
            merge.mergeSort(a, 0, a.Length - 1);
            CollectionAssert.AreEqual(a, b);
        }

        [TestMethod]
        public void InsertionSortAssertArrayIsSorted()
        {
            int[] a = new int[1000];
            Random rand = new Random(DateTime.Now.Millisecond);
            for (int i = 0; i < a.Length; i++)
            {
                a[i] = rand.Next(Int16.MaxValue);
            }
            int[] b = new int[1000];
            a.CopyTo(b, 0);
            List<int> temp = b.ToList();
            temp.Sort();
            b = temp.ToArray();
            InsertionSort merge = new InsertionSort();
            merge.insertionSort(a);
            CollectionAssert.AreEqual(a, b);
        }
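
    One way to answer the question (a sketch of a possible refactoring, not the poster's code; the helper name is illustrative): extract the shared arrange/assert steps into a method that accepts the sort as a delegate, so each test keeps only the one line that differs. This assumes the helper sits in the same MSTest class as the tests above.

        // Hypothetical helper: builds the random input, computes the expected
        // result with a trusted reference sort, then runs the algorithm under test.
        private static void AssertSortsCorrectly(Action<int[]> sort)
        {
            int[] actual = new int[1000];
            Random rand = new Random(DateTime.Now.Millisecond);
            for (int i = 0; i < actual.Length; i++)
            {
                actual[i] = rand.Next(Int16.MaxValue);
            }
            int[] expected = (int[])actual.Clone();
            Array.Sort(expected);   // reference implementation we already trust

            sort(actual);           // the one varying line, injected by each test

            CollectionAssert.AreEqual(expected, actual);
        }

        [TestMethod]
        public void MergeSortAssertArrayIsSorted()
        {
            AssertSortsCorrectly(a => new MergeSort().mergeSort(a, 0, a.Length - 1));
        }

        [TestMethod]
        public void InsertionSortAssertArrayIsSorted()
        {
            AssertSortsCorrectly(a => new InsertionSort().insertionSort(a));
        }

    The helper needs no test of its own: it is exercised by every test that calls it, and if it were wrong, those assertions would fail loudly.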

    Read the article

  • Fast Data Executive Round Table FY14 event kit

    - by JuergenKress
    We are very interested in running joint marketing events with you, our partners! At our SOA Community Workspace (SOA Community membership required) you can find a new Fast Data Executive Round Table FY14 event kit. This event is designed for senior IT staff and executives, for the purposes of education, awareness, and thought leadership around the subject of big data, and a specific flavor of big data - Fast Data - that has begun to spark the imagination of many Oracle customers. Fast Data is not new. It's a term initially coined by Ovum's Tony Baer as a way to represent the collection of 'high velocity' solutions with respect to big data. For Oracle, the Fast Data campaign in FY13 began as a way to tie a broader set of solutions together (SOA/Business Process Management, Data Integration and Business Analytics) under a set of use cases focused on real-time, high-velocity data. It has helped to give Oracle a leap-frog advantage over many of the niche integration vendors (e.g. Informatica, Pega, Tibco, Software AG, Terracotta) who haven't been able to address these types of end-to-end use cases, which rely on the combination of filtering, in-memory data processing, correlation, real-time data movement and transformation, end-to-end analytics, and business process management. Only Oracle can address all the dimensions of fast data, and only Oracle can provide a set of engineered solutions to address this space. This event is designed to continue that thought-leadership momentum and raise awareness of what Oracle Fast Data solutions are designed to solve. It is designed to highlight real customer solutions and articulate the business benefits that fast data can address. This is not an event that gets into the esoteric technical standards of Hadoop, NoSQL, and in-memory data grids. This is an event that instead gets into the heart of the business problems that big data has left unaddressed and charts the path for next steps in fast data. Get the Fast Data Executive Round Table FY14 event kit here. We can support such events with: Oracle speakers (contact your partner manager); marketing budget (contact your A&C marketing manager); event location (free use of Oracle Customer Visitor Centers conference rooms); promotion of your event at events.oracle.com (http://tinyurl.com/eventspecialized); and an e-blast inviting customers to your event (contact your A&C marketing manager). For additional marketing kits, e.g. for Business Process Management, please visit our SOA Community Workspace. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • All hail the Excel Queen

    - by Tim Dexter
    An excellent question this past week from dear ol' Blighty; actually from Brian at Nextgen Clearing Ltd in the big smoke (London). Brian was developing an Excel template and wanted to be able to reference the data fields multiple times inside the Excel template. Damn good question, and I of course had some wacky solutions, from macros and cell referencing in Excel to pre-processing the data with an XSL stylesheet to copy the data multiple times so it could be referenced multiple times. All completely outlandish; enter our Queen of Excel, Shirley from the development team. Shirley is singlehandedly responsible for the Excel templates; I put her through six months of hell a few years back with a host of Excel template requirements. She was more than up to the challenge and has developed some great features. One of those is the ability to use the hidden XDO_METADATA sheet to map the data to custom named fields so they can be used multiple times in the template. So simple and very neat! Excel template and regular Excel users will know that you can only use the naming function once, i.e. the names have to be unique across the workbook, so you cannot reuse a cell/group name. To get around this, you can come up with as many cell names as you want and map them in the XDO_METADATA sheet to the data columns/fields in your XML data set. For example:

        XDO_?DEPTNO_SUMMARY?    <?DEPTNO?>
        XDO_?DNAME_SUMMARY?     <?DNAME?>
        XDO_GROUP_?G_D_DETAIL?  <xsl:for-each-group select=".//G_D" group-by="./DEPTNO">
        XDO_?DEPTNO_DETAIL?     <?DEPTNO?>

    As you can see, DEPTNO has been referenced twice and mapped to different named values in the left-hand column. These values can then be used to name individual cells in the Excel template. You'll also notice a mix of Publisher <? ... ?> and native XSL commands, so the world is your oyster on the mapping and the complexity you might need for calculations or string manipulation. Shirley has kindly built out a sample Excel template, data and result here so you can see how it all hangs together. The XDO_METADATA sheet is hidden; just right-click on the sheet names and use the Unhide command to show it.
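
    To make the flow concrete (my illustration, with hypothetical cell addresses; the mapping entries are the ones from the post): each alias in the left-hand column of XDO_METADATA is applied to a template cell as an ordinary Excel defined name, via the Name Box or Name Manager, and the two aliases below both resolve to the same <?DEPTNO?> element in the data.

        Template cell   Defined name                Fills from
        B2              XDO_?DEPTNO_SUMMARY?        <?DEPTNO?>  (summary section)
        B7              XDO_?DEPTNO_DETAIL?         <?DEPTNO?>  (repeated inside the G_D group)

    Because the aliases, not the data field names, must be unique, you can mint as many of them as the template needs and point them all at the same underlying element.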

    Read the article

< Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >