Search Results

Search found 363 results on 15 pages for 'jean bernard pellerin'.

Page 7/15 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Conventional Parallel Inserts do Exist in Oracle 11

    - by jean-pierre.dijcks
    Had an interesting chat with Greg about this topic, and a search turned up a link that discusses it in some detail (no reason for me to repeat it here). insert /*+ noappend parallel(t1) */ into t1 select /*+ parallel(t2) */ * from t2 generates a LOAD TABLE CONVENTIONAL plan operation and does give you a parallel insert without doing a direct-path insert. As this is missing from the official documentation, it is probably something few people know exists, so kudos to Randolf Geist.

    Read the article

  • Upgrade failed, now impossible to restart

    - by Jean Claude Dispaux
    I have an Aspire One with Ubuntu that I use only when traveling, i.e. seldom. Yesterday I tried to start it, and it informed me that I had to install a new release of Ubuntu. The download went fine, and I left it running for the night. In the morning I found error messages. I tried to restart, but nothing works any longer. The only backup I have is two USB keys made by the person who installed Ubuntu, labeled Recovery Ubuntu 8 and Ubuntu 9.10 respectively. Just now I plugged in the "8" key, pressed F12 and told the machine to boot from the USB key. It has been running for an hour; the screen still says Ubuntu and the USB key flashes red. By the way, I have no precious data on this machine, and I do not care about losing data. Please advise on what to do now. Thanks.

    Read the article

  • Dependencies not met while installing digikam 2.9

    - by Jean-Mi
    I'm trying to install digikam 2.9 from http://ppa.launchpad.net/philip5/kubuntu-backports/ubuntu. Here is the message I got: The following packages have unmet dependencies: digikam: Depends: libkdecore5 (>= 4:4.7.0) but 4:4.8.5-0ubuntu0.1 is to be installed; Depends: libkdeui5 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed; Depends: libkfile4 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed; Depends: libkhtml5 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed; Depends: libkio5 (>= 4:4.7.0) but 4:4.8.5-0ubuntu0.1 is to be installed; Depends: libkipi9 (>= 4:4.8.80) but 4:4.8.4d-precise~ppa1 is to be installed; Depends: libknotifyconfig4 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed; Depends: libkparts4 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed; Depends: libnepomuk4 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed; Depends: libphonon4 (>= 4:4.2.0) but 4:4.7.0really4.6.0-0ubuntu1 is to be installed; Depends: libqt4-dbus (>= 4:4.5.3) but 4:4.8.1-0ubuntu4.2 is to be installed; Depends: libqt4-qt3support (>= 4:4.5.3) but 4:4.8.1-0ubuntu4.2 is to be installed; Depends: libqt4-sql (>= 4:4.5.3) but 4:4.8.1-0ubuntu4.2 is to be installed; Depends: libqt4-xml (>= 4:4.5.3) but 4:4.8.1-0ubuntu4.2 is to be installed; Depends: libqtcore4 (>= 4:4.8.0) but 4:4.8.1-0ubuntu4.2 is to be installed; Depends: libqtgui4 (>= 4:4.8.0) but 4:4.8.1-0ubuntu4.2 is to be installed; Depends: libsolid4 (>= 4:4.7) but 4:4.8.5-0ubuntu0.1 is to be installed; Depends: digikam-data (= 4:2.9.0-precise~ppa1kde49) but 4:2.9.0-precise~pp Can someone help? JM

    Read the article

  • Simple Architecture Verification

    - by Jean Carlos Suárez Marranzini
    I just made an architecture for an application whose job is scoring, saving and loading tennis games. The architecture has 2 kinds of elements: components & layers. Components: standalone elements that can be consumed by other components or by layers. They might also consume functionality from the model/bottom layer. Layers: software components whose functionality rests on previous layers (except for the model layer). -Layers: -Models: data and its behavior. -Controllers: a layer that allows interaction between the views and the models. -Views: the presentation layer for interacting with the user. -Components: -Persistence: makes sure the game data can be stored away for later retrieval. -Time Machine: records changes in the game through time so it's possible to navigate the game back and forth. -Settings: contains the settings that determine how some of the game logic will apply. -Game Engine: contains all the game logic, which it applies to the game data to determine the path the game should take. This is an image of the architecture (I don't have enough rep to post images): http://i49.tinypic.com/35lt5a9.png The requirements this architecture should satisfy are the following: save & load games. Move through game history and see how the scoreboard changes as the game evolves. Tie-breaks must be properly managed. Games must be classified by hit type. Every point can be modified. Match name and player names must be stored. Game logic must be configurable by the user. I would really appreciate any kind of advice or comments on this architecture, to see if it is well built and makes sense as a whole. I took the idea from this link: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
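
    As a rough sketch of how these pieces could fit together, here is a minimal Python illustration; it is not from the original post, and every class and method name in it is hypothetical:

        # Illustrative sketch only; names and signatures are assumptions, not the poster's design.

        class GameModel:
            """Model layer: game data and its behavior (match name, players, points)."""
            def __init__(self, match_name, player_a, player_b):
                self.match_name = match_name
                self.players = (player_a, player_b)
                self.points = []  # every point is kept so it can be modified later

        class GameEngine:
            """Component: applies the (user-configurable) game logic to the model."""
            def __init__(self, settings):
                self.settings = settings  # e.g. tie-break rules, scoring variants

            def score_point(self, model, winner, hit_type):
                model.points.append({"winner": winner, "hit_type": hit_type})
                # ...apply tennis scoring rules (games, sets, tie-breaks) here...

        class TimeMachine:
            """Component: records snapshots so the game can be navigated back and forth."""
            def __init__(self):
                self.history = []

            def record(self, model):
                self.history.append(list(model.points))

            def state_at(self, index):
                return self.history[index]

        class GameController:
            """Controller layer: mediates between the views and the model/components."""
            def __init__(self, model, engine, time_machine):
                self.model, self.engine, self.time_machine = model, engine, time_machine

            def point_scored(self, winner, hit_type):
                self.engine.score_point(self.model, winner, hit_type)
                self.time_machine.record(self.model)

    In this shape, a Persistence component would only need to serialize GameModel, and the views would talk exclusively to GameController.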

    Read the article

  • Winner of the 2012 Government Big Data Solutions Award

    - by Jean-Pierre Dijcks
    Hot off the press: The winner of the 2012 Government Big Data Solutions Award is the National Cancer Institute!! Read all the details on CTOLabs.com. A short excerpt to whet your appetite: "... This solution, based on the Oracle Big Data Appliance with the Cloudera Distribution of Apache Hadoop (CDH), leverages capabilities available from the Big Data community today in pioneering ways that can serve a broad range of researchers. The promising approach of this solution is repeatable across many other Big Data challenges for bioinformatics, making this approach worthy of its selection as the 2012 Government Big Data Solution Award." Read the entire post. Congrats to the entire team!!

    Read the article

  • undefined control sequence in a NOWEB document

    - by Jean Baldraque
    I'm writing a TeX-noweb document. I compile it with noweave -tex -filter "elide comment:*" texcode.nw > documentation.tex but when I try to compile the resulting file with xetex -halt-on-error documentation.tex I get the following error message: ! Undefined control sequence. <argument> ...on}\endmoddef \nwstartdeflinemarkup \nwenddeflinemarkup It seems that \nwenddeflinemarkup is not recognized. If I delete every occurrence of \nwstartdeflinemarkup and \nwenddeflinemarkup from the document, it compiles without errors. What could be the problem?
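
    One workaround sometimes suggested (an assumption on my part, not something confirmed in the question) is to make sure the noweb support macros match the noweave version, or to give the two line-markup hooks empty definitions before the first chunk, since they only decorate definition lines:

        % Hypothetical workaround: define the markup hooks as no-ops if they are missing.
        \ifx\nwstartdeflinemarkup\undefined \def\nwstartdeflinemarkup{}\fi
        \ifx\nwenddeflinemarkup\undefined \def\nwenddeflinemarkup{}\fi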

    Read the article

  • Big data: An evening in the life of an actual buyer

    - by Jean-Pierre Dijcks
    Here I am, and this is an actual story of one of my evenings, trying to spend money with a company and ultimately failing. I just gave up and bought a service from another vendor, not the incumbent. Here is that story and how I think big data could actually fix this (and potentially prevent some of this from happening). In the end this story should illustrate how big data can benefit me (get me what I want without causing grief) and the company I am trying to buy something from. Note: lots of details left out, I have no intention of being the annoyed blogger moaning about a specific company. What did I want to get? We watch TV, we have internet and we do have a land line. The land line is from a different vendor than the TV and the internet. I decided that this makes no sense and that I was going to get a bundle (no need to infer who this is, I just picked the generic bundle word as this is what I want to get) of all three services, as this seems to save me money. I also do not want to talk to people; I just want to click on a website when I feel like it and get it all sorted. I do think that is a reasonable expectation. I want to just do my shopping at 9.30pm while watching silly reruns on TV. Problem 1 - Bad links. So, I'm an existing customer of the company I want to buy my bundle from. I go to the website, I click on offers. Turns out they are offers for new customers. After grumbling about how good they are, I click on offers for existing customers. Bummer, it goes to offers for new customers, so I click again on the link for offers for existing customers. No cigar... it just does not work. Big data solutions: 1) Do not show an existing customer the offers for new customers unless they are the same => This is only partially doable without login, but if a customer logs in the application should always know that this is an existing customer. In general, imagine I do this from my home, going through the internet service of this vendor to their domain... an instant filter should move me into the "existing customer" route. 2) Flag dead or incorrect links => I've clicked the link for "existing customer offers" at least 3 times in under 5 seconds... Identifying patterns like this is easy in Hadoop and can very quickly produce a list of potentially incorrect links. No need for real-time fixing; just the fact that this link can be proactively fixed across my entire web domain is a good thing. Preventative maintenance! Problem 2 - Purchase cannot be completed. Apart from the fact that the browsing path to actually get to what I want is poorly designed, my purchase never gets past a specific point. In other words, I put something into my shopping cart, and when I want to move on the application either crashes (sending me to an error page), hangs, or drops into something like chat. So I try again, and again and again. I think I tried this entire path (while being logged in!!) at least 10 times over the course of 20 minutes. I also clicked on the feedback button and, frustrated as I was, tried to explain that this did not work... Big data solutions: 1) This web site does shopping cart analysis. I got an email the next day stating I have things in my shopping cart, just click here to complete my purchase. After the above experience, this just added insult to injury... 2) What should have happened is a Hadoop job going over all logged-in customers that are on the buy flow.
    It should flag anyone who keeps trying (multiple attempts from the same user to do the same thing), analyze the shopping cart and the clicks to identify what the customer wants, take in the feedback provided (note: always own your own website feedback, never just farm this out!!) and, in a short turnaround time (30 minutes to 2 hours or so), email me a link to complete my purchase. Not a link to my shopping cart 12 hours later, but a link to actually achieve what I wanted... Why should this company go through the big data effort? I do believe this is relatively easy to do using our Oracle Event Processing and Big Data Appliance solutions combined. It is almost so simple (to my mind) that it makes no sense that this is not in place. But now I am ranting... Why is this interesting? It is because of $$$$. I tried really hard: I did this all in the evening, and again in the morning before going to work, and I kept on failing. But I really wanted this to work... so an email that said "sorry, we noticed you tried to get a bundle (the log knows what I wanted and where I failed, so this is easy to generate), here is the link to click and complete your purchase, and here are 2 movies on us as an apology" would have kept me as a customer and brought in the additional $$$$ per month for the next couple of years. It would also lead to upsell on my phone package, etc. Instead, I went to a completely different company and bought the service from them. Lost money for company A, negative sentiment for company A, and me telling this story at the water cooler, so I'm influencing more people to think negatively about company A. All in all, a loss of easy money and a ding in sentiment and image, where a relatively simple solution exists and can be put in place with the software I describe routinely in this blog... For those who are coming to Openworld and see value in solving the above, or are thinking about how to solve this, come visit us in Moscone North - Oracle Red Lounge or in the Engineered Systems Showcase.
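
    As a toy illustration of the kind of pattern detection described above, here is a minimal Python sketch; the field names, thresholds and log format are all hypothetical, and a real deployment would run the equivalent logic as a Hadoop job over the full web logs rather than in memory:

        from collections import defaultdict
        from datetime import timedelta

        def flag_repeated_clicks(click_log, window=timedelta(seconds=5), min_clicks=3):
            """Return (user, url) pairs clicked at least min_clicks times within window.

            click_log is an iterable of (user_id, url, timestamp) tuples,
            with timestamp given as a datetime.
            """
            clicks = defaultdict(list)
            for user, url, ts in click_log:
                clicks[(user, url)].append(ts)

            flagged = set()
            for key, times in clicks.items():
                times.sort()
                # Slide a fixed-size window over the sorted timestamps.
                for i in range(len(times) - min_clicks + 1):
                    if times[i + min_clicks - 1] - times[i] <= window:
                        flagged.add(key)
                        break
            return flagged

    The same shape of job (group by user, order by time, look for repeated failures on the same step) covers both the dead-link case and the abandoned-purchase case; only the follow-up action differs.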

    Read the article

  • Delete button in Ubuntu One Notes

    - by Jean MC13
    For the wishlist... The 'delete' button in Ubuntu One Notes is exactly in the same place as the 'save' button. So I saved a long note, but as my finger clicked two times on the save button, the second click landed on the 'delete' one! Lost, with no way to get it back. <:-(( This annoying behavior could easily be changed: put the delete button in another place; ask for confirmation before deleting; offer a way to cancel (an 'undelete' function).

    Read the article

  • Globacom and mCentric Deploy BDA and NoSQL Database to analyze network traffic 40x faster

    - by Jean-Pierre Dijcks
    In a fast-evolving market, speed is of the essence. mCentric and Globacom leveraged Oracle Big Data Appliance and Oracle NoSQL Database to save over 35,000 call-processing minutes daily and analyze network traffic 40x faster. Here are some highlights from the profile: Why Oracle “Oracle Big Data Appliance works well for very large amounts of structured and unstructured data. It is the most agile events-storage system for our collect-it-now and analyze-it-later set of business requirements. Moreover, choosing a prebuilt solution drastically reduced implementation time. We got the big data benefits without needing to assemble and tune a custom-built system, and without the hidden costs required to maintain a large number of servers in our data center. A single support license covers both the hardware and the integrated software, and we have one central point of contact for support,” said Sanjib Roy, CTO, Globacom. Implementation Process It took only five days for Oracle partner mCentric to deploy Oracle Big Data Appliance and perform the software install and configuration, certification, and resiliency testing. The entire process, from site planning to phase-I go-live, was executed in just over ten weeks, well ahead of the four months allocated to complete the project. Oracle partner mCentric leveraged Oracle Advanced Customer Support Services’ implementation methodology to ensure configurations were tailored for peak performance, all patches were applied, and software and communications were consistently tested using proven methodologies and best practices. Read the entire profile here.

    Read the article

  • Speak now! Call for Papers at Oracle Openworld is now open

    - by Jean-Pierre Dijcks
    Present Your Thoughts to Thousands of Oracle Customers, Developers, and Partners Do you have an idea that could improve best practices? A real-world experience that could shed new light on IT? The Oracle OpenWorld call for papers is now open. This is your opportunity to speak your mind to the world’s largest gathering of the most-knowledgeable IT decision-makers, leading-edge developers, and advanced technologists. So take a look at our criteria and join us at Oracle OpenWorld. We look forward to hearing from you. Register Early and Save See and learn about the newest products. Meet experts and business leaders. All for less. Register for Oracle OpenWorld before March 30, 2012, and you’ll save up to US$800 off the registration. Register now.

    Read the article

  • Two interesting big data sessions around Openworld

    - by Jean-Pierre Dijcks
    For those who want to talk (not listen) about big data, here are 2 very cool sessions: BOF9877 - a birds-of-a-feather session around all things big data. It is on Monday, Oct 1, 6:15 PM - 7:00 PM - Marriott Marquis - Golden Gate. While all guests on the panel are special, we will have a very special guest on the panel. He is a proud owner of a Big Data Appliance (see here). Then there is a Big Data SIG meeting (the invite from Gwen): I'd like to invite everyone to our OOW12 meet-up. We'll meet on Tuesday, October 2nd, 8:45 to 9:45 at Moscone West Level 3, Overlook 3. We will network, socialize and discuss plans for the group. Which topics interest us for webinars? At which conferences do we want to meet? What other activities are we interested in? We can also discuss big data topics, show off our great work, and seek advice on the challenges. Other than figuring out what we are collectively interested in, the discussion will be pretty open. Here is the official invite. See you at Openworld!!

    Read the article

  • Call for Papers: Oracle OpenWorld 2011

    - by jean-pierre.dijcks
    OpenWorld 2011 is now open for the public to submit session proposals. We would like to encourage our customers and partners to participate in this "call for papers" (CFP) process. The CFP for the general public (non-Oracle-employee submitters) closes on March 27, 2011. Here are the details: Conference location: Moscone Convention Center, San Francisco, CA. Conference dates: Sunday - Thursday, October 2 - 6, 2011. Conference website: http://www.oracle.com/us/openworld CFP website: https://oracleus.wingateweb.com/portal/cfp/ Paper submission key dates: Call for Papers begins Wednesday, March 9; Call for Papers ends Sunday, March 27, 11:59 pm PDT; notifications for accepted and declined submissions are sent at the end of May. For questions regarding the Call for Papers, send an email to [email protected]

    Read the article

  • Read how a customer uses Oracle NoSQL Database

    - by Jean-Pierre Dijcks
    For those who have had the pleasure to be in SF for Oracle Openworld, you might have seen or heard about this story already. If you did not, here is a great story on how to use Oracle NoSQL Database. Apart from all the cool technology, I'm just excited that this is a company founded by a football international and dealing with sports data, games and other cool things. Like an all things cool combo in one place.

    Read the article

  • Using R on your Oracle Data Warehouse

    - by jean-pierre.dijcks
    Since it is Predictive Analytics World in our backyard (or are we San Francisco’s backyard…?) I figured it is well worth the time to dust off some old but important news. With big data (should we start calling it “any data analytics” instead?) being the buzzword and analytics the key operative goal, not moving data around is becoming more and more critical to the business users. Why? Because instead of spending time on moving data around into your next analytics server you should be running analytics on those CPUs. You could always do this with Oracle Data Mining within the Oracle Database. But a lot of folks want to leverage R as their main tool. Well, this article describes how you can do this, since 2010… As Casimir Saternos concludes in the article: “There is a growing awareness of the need to effectively analyze astronomical amounts of data, much of which is stored in Oracle databases. Statistics and modeling techniques are used to improve a wide variety of business functions. ODM accessed using the R language increases the value of your data by uncovering additional information. RODM is a powerful tool to enable your organization to make predictions, classify data, and create visualizations that maximize effectiveness and efficiencies.” Happy Analysis!

    Read the article

  • E-Book on big data (featuring Analysts, Customers and more)

    - by Jean-Pierre Dijcks
    As we are gearing up for Openworld, here is a nice E-book on big data to start paging through. It contains Gartner's take on big data, customer and partner interviews and a lot more good info. Enjoy the read so you come prepared for Openworld!! Read the E-Book here. For those coming to Oracle Openworld (or the Americas Cup races around the same time), you can find big data sessions via this URL. Enjoy!!

    Read the article

  • Big Data Sessions at Openworld 2012

    - by Jean-Pierre Dijcks
    If you are coming to San Francisco and you are interested in all aspects of big data, this Focus On Big Data is a must-have document. Some (other) highlights: A performance demo of a full-rack Big Data Appliance in the engineered systems showcase A set of hands-on labs on how to go from a NoSQL DB to an effective analytics play on big data Much, much more See you all in a few weeks in SF!

    Read the article

  • Multi-threading in an MMORPG server

    - by jean
    In an MMORPG, there is a tick function that updates every object's state in a map. The function is triggered by a timer at a fixed interval, so each map's update can be dispatched to a different thread. On the other side, the server handles incoming player packets with its own threads: I/O threads. Generally, the handler for an incoming packet runs in the I/O threads. So there is a problem: thread synchronization. I have considered two methods: Synchronize with a mutex. The I/O thread locks a mutex before executing a handler function, and the map thread locks the same mutex before it executes the map's update. Execute all handler functions in the map's thread: the I/O thread only queues the incoming handler and lets the map thread pop the queue and then call the handler function. These two have a disadvantage: delay. For method 1, if the map's tick function is running, then all clients' requests need to wait for the lock to be released. For method 2, if the map's tick function is running, all clients' requests need to wait for the next tick to be handled. Of course, there is another method: add locks to the functions that use data accessed in both the I/O thread and the map thread. But this is hard to maintain and easy to get wrong: it requires carefully checking whether each variable is accessed by both kinds of thread. My problem is: is there a better way to do this? Note that a map is a logical concept here, meaning no interaction can happen between two maps except transport. An I/O thread is a thread in the third-party network library used to handle client requests.
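
    For what it is worth, here is a minimal Python sketch of method 2, a thread-safe queue of handlers that the map thread drains at the start of every tick; it only illustrates the pattern and is not code from the question:

        import queue
        import time

        class MapWorker:
            def __init__(self, tick_interval=0.1):
                self.pending = queue.Queue()   # handlers posted by I/O threads
                self.tick_interval = tick_interval
                self.running = True

            def post(self, handler, *args):
                """Called from I/O threads: queue a packet handler for the map thread."""
                self.pending.put((handler, args))

            def run(self):
                """Map thread: drain queued handlers, then update all objects."""
                while self.running:
                    start = time.monotonic()
                    while True:
                        try:
                            handler, args = self.pending.get_nowait()
                        except queue.Empty:
                            break
                        handler(*args)  # runs in the map thread, so no extra locking needed
                    self.update_objects()
                    elapsed = time.monotonic() - start
                    time.sleep(max(0.0, self.tick_interval - elapsed))

            def update_objects(self):
                pass  # per-object tick logic goes here

        # Usage (in the server): threading.Thread(target=MapWorker().run, daemon=True).start()

    With this arrangement the worst-case delay for a client request is one tick interval plus the update time, which is exactly the trade-off the question already identifies for method 2.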

    Read the article

  • Too much memory consumed during TFS automated build

    - by Bernard Chen
    We're running TFS 2010 Standard Edition, and we've set up an automated build to run whenever someone checks in code. We run through all of the automated tests (built with MSTest) as part of the build. We've configured the build to run the tests as a 64-bit process, but the QTAgent.exe that runs the tests grows in memory while the tests are running. It is currently reaching 8GB for the ~650 tests we have, and the process has slowed significantly since we went from 450 tests to 650 tests. When we run all of the tests in the local development environment, memory seems to be freed at least with each TestClass and never exceeds a certain level, and the time to run all of the tests has not increased significantly. Is there a way to configure the build service to free up memory with each Test or each TestClass? With the way things are currently running, the build process gets very slow when we start to run out of memory on the machine. Edit: I found the MSTest invocation in the build log, ran it manually and saw the same runaway memory behavior. I removed the /publish, /publishbuild, /teamproject, /platform, and /flavor parameters from the invocation of MSTest, in case the test runner was holding onto results until the end, but the behavior didn't change. I ran the same command line on a dev box, separate from the build server, and the memory was freed up frequently. It seems there must be something wrong or different about the build server that is causing it to behave differently, but I'm stumped where to look. I've looked at qtagent.exe.config, mstest.exe.config, and the versions of both executables. What else might affect this?

    Read the article

  • Importing a tab-based outline txt file into a Word outline

    - by Bernard Vander Beken
    Given a text file containing an outline with tabs to indent each level, I would like to import this into a Word 2007 document so that each indentation level is converted to a H1, H2, etc. heading level. I tried copy-pasting the text into the outline view and opening the text file via File, Open. Neither gave the expected result. Level 1 Level 2 Level 3 Level 1 Level 2 Note: I am using spaces instead of tabs to indent this sample.
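
    If scripting the conversion outside Word is an option, here is a short sketch using the python-docx library (an assumption on my part; the question does not mention it) that counts leading tabs and maps each outline line to the matching heading level:

        from docx import Document  # pip install python-docx

        def outline_to_docx(txt_path, docx_path):
            """Convert a tab-indented outline to a .docx where depth N becomes Heading N."""
            doc = Document()
            with open(txt_path, encoding="utf-8") as f:
                for line in f:
                    if not line.strip():
                        continue
                    depth = len(line) - len(line.lstrip("\t"))  # number of leading tabs
                    level = min(depth + 1, 9)                   # Word supports Heading 1..9
                    doc.add_heading(line.strip(), level=level)
            doc.save(docx_path)

        # Hypothetical file names for illustration:
        outline_to_docx("outline.txt", "outline.docx")

    The resulting .docx should open in Word 2007 with the heading structure already in place, so the outline view shows the levels directly.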

    Read the article

  • QoS / PBR Routing Questions

    - by Bernard
    I have a 50 Mbps satellite link and a 10 Mbps microwave link supplying a very remote location. Behind these links, I have a 6,400-seat network, with about 3,000 users signed in at any one time. My goal is to send all of the VoIP traffic (Google Chat, Magic Jack, Skype, Speakeasy, Vonage, Vonage PC, Yahoo) through the microwave link, which has 100 ms latency. The rest of the traffic can use any remaining bandwidth of the microwave link, with the excess being diverted to the higher-latency (600 ms) satellite connection. The problem I've had so far is that most automatic routing configurations weigh bandwidth heavily for preference, and I only want latency considered. Additionally, I don't know if this can even be handled with the routing hardware I have at my disposal (Cisco 3640, 3745, & 3845). Any recommendations (or really good starting points) would be greatly appreciated.

    Read the article

  • Choosing the right email server.

    - by bernard-productions
    My company is switching email servers and I wanted more input from people who know more about IT. We are a small company that would only have 7-8 users. We use MS Outlook and need a compatible solution. I'm not sure what server we are switching from, but I have been looking at Google Apps. Exchange is a little more expensive, but we are considering that as well. Any suggestions or recommendations regarding these two or others?

    Read the article

  • How do I change the format of group by data in Excel 2003 pivot tables

    - by Bernard
    I have data that lists the value of a contract per contract number, and I've created a pivot table that counts the number of contracts valued at 0 - 10, 11 - 20, etc. I want to be able to format the groups, e.g. $0 - $10, $11 - $20. I've tried formatting the underlying data as currency and formatting the column in the pivot table, but it still shows as 0 - 10, 11 - 20. Also, I have a column in the pivot table that says Total, which is the count of the number of contracts in that range, i.e. Value Total 0 - 10 1 11 - 20 1. It's an autogenerated column heading that Excel put in. How do I change this to say "Number of contracts"? I want it changed because when I chart the pivot table, the series is called Total :(

    Read the article

  • 'Unsaved Theme' is not appearing when I make a change to a theme in Windows 7

    - by Bernard
    According to personal experience and Microsoft documentation, when an aspect of a theme (wallpaper, border colour, etc.) is changed in the 'Personalization' screen of Windows 7, a theme called Unsaved Theme is supposed to be created and appear under the 'My Themes' heading. However, my 'My Themes' section is always empty, and if I change the background by clicking on the Background link at the bottom of the Personalization page, the background changes, but 'Harmony' still shows as the selected theme and no 'Unsaved Theme' is created. I have no idea how long this has been happening. I would like to create some themes, but Windows never acknowledges my changes as it is supposed to.

    Read the article
