Search Results

Search found 23182 results on 928 pages for 'worst case'.


  • Oracle Database 12c By Example – SQL Developer and Multitenant

    - by thatjeffsmith
    As you may have heard, Oracle Database 12c is now available. In addition to the binaries and docs going out, we also published a few new Oracle By Example (OBE) chapters. You can find those links here on our product page. Do you know who found these, practically the minute they were published? An enterprising DBA-extraordinaire who just happened to be presenting at the ODTUG KScope13 conference in New Orleans. He thought it would be a good idea to download the new software over hotel Wi-Fi, install and create a new multitenant database, watch a few OBEs, and then demo that live for his ‘SQL Developer for DBAs’ session. Pretty crazy, right? Well, he did it, and I was there to watch. Way cool. You can listen to @leight0nn tell his story in his own words via this ODTUG interview with @oraclenered. In case you’re too giddy to sit through the video, I’ll give you a preview – he successfully cloned a pluggable database in about a minute with only a couple of clicks using Oracle SQL Developer 3.2.20.09 while connected to a 12c database.

    Read the article

  • Won't boot after installing Ubuntu 12.04 successfully

    - by Matt
    I installed 12.04 successfully and rebooted (I took out my installation CD), and selected the newly installed Linux partition to boot from rEFIt. Then it just comes up with this error message: Error loading operating system which could not be more vague. Take that back. I guess it could say just "error." I don't even get to the boot prompt which limits what I can do. I cannot boot into rescue mode. I tried boot-repair, but it took more than 24 hours to check the system configuration, so I gave up on that. I'm running a Mac Mini with its main OS being Mac OS X 10.5.8. I have an alternate OS Windows XP installed, which was virtually destroyed by this Linux installation. I sacrificed my working, speedy Windows partition for something that won't even boot up. What was I thinking. My Mac partition is slow as crap. I've tried installing 12.04 many times with two different disks. The first time, I had one partition for Linux, then I had 2 (swap+main), then 3 (swap, main and BIOS), then 4 which is what I have now (swap, main, BIOS, and boot/grub). The only way I could get through the install without GRUB giving up was if I created a separate partition for it. Which was pointless, because it did install successfully, but it still doesn't boot up at all. Could rEFIt be booting off of the BIOS or one of the other partitions? Because if that's the case, there is no alternative, because Mac itself without rEFIt refuses to recognize a Linux ext4 (or 2 or 3) format partition. Apple always has to make everything so difficult. If I'm not mistaken, rEFIt is the only application of its kind for Mac. I can boot off of the CD back to the install/try screen. This is extremely upsetting, can you guys help? Please?

    Read the article

  • Demystifying "chunked level of detail"

    - by Caius Eugene
    I'm just recently trying to make sense of implementing a chunked level of detail system in Unity. I'm going to be generating four mesh planes, each with a height map, but I guess that isn't too important at the moment. I have a lot of questions after reading up about this technique; I hope this isn't too much to ask all in one go, but I would be extremely grateful for someone to help me make sense of it. 1: I can't understand at which point in the chunked LOD pipeline the mesh gets split into chunks. Is this during the initial mesh generation, or is there a separate algorithm which does this? 2: I understand that a quadtree data structure is used to store the chunked LOD data. I think I'm missing the point a bit, but is the quadtree storing vertex and triangle data for each subdivision level? 3a: How is the camera distance usually calculated? When reading up about quadtrees, axis-aligned bounding boxes are mentioned a lot. In this case would each chunk have a collision bounding box to detect that the camera or player is nearby? Or is there a better way of doing this? (A raycast maybe?) 3b: Do the chunks calculate the camera distance themselves? 4: Does each chunk have the same "resolution"? For example, if at the top level the mesh is 32x32, will each subdivided node also be 32x32? Example below:
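
    (Added sketch, not the asker's missing example.) A minimal illustration of how chunked-LOD write-ups usually describe chunk selection: every quadtree node covers a square region at the same grid resolution, deeper nodes just cover smaller squares, and a node is refined when the camera is closer than some multiple of its size. All names and numbers below are illustrative assumptions, not Unity API.

        # Hypothetical sketch: distance-based chunk selection in a chunked-LOD quadtree.
        from dataclasses import dataclass, field
        import math

        @dataclass
        class Chunk:
            center: tuple          # (x, z) center of the square this node covers
            size: float            # edge length of the square
            level: int             # 0 = root (coarsest)
            children: list = field(default_factory=list)  # 4 children, or [] for a leaf

            def split(self, max_level: int):
                """Build the quadtree up front; each child covers a quarter of the parent."""
                if self.level >= max_level:
                    return
                half, quarter = self.size / 2.0, self.size / 4.0
                cx, cz = self.center
                for dx in (-quarter, quarter):
                    for dz in (-quarter, quarter):
                        child = Chunk((cx + dx, cz + dz), half, self.level + 1)
                        child.split(max_level)
                        self.children.append(child)

            def distance_to(self, cam) -> float:
                # Cheap stand-in for a point-to-AABB distance test.
                return math.hypot(cam[0] - self.center[0], cam[1] - self.center[1])

        def select_chunks(node, cam, detail=2.0, out=None):
            """Recurse while the camera is within detail * size; otherwise draw this node."""
            if out is None:
                out = []
            if node.children and node.distance_to(cam) < detail * node.size:
                for child in node.children:
                    select_chunks(child, cam, detail, out)
            else:
                out.append(node)          # render this chunk's fixed-resolution mesh
            return out

        root = Chunk(center=(0.0, 0.0), size=1024.0, level=0)
        root.split(max_level=4)
        visible = select_chunks(root, cam=(10.0, 30.0))
        print(len(visible), "chunks selected")

    In this sketch every node keeps the same grid resolution and the camera test is done per node against that node's own bounds, which is one common way the technique is described; whether the split happens at generation time or lazily is a separate design choice.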

    Read the article

  • Startup/Shutdown time in Xubuntu is increasing!

    - by Ankit
    I am a novice Xubuntu user on a dual-boot machine. The other OS I have is Windows 7. When I first began using Xubuntu, I had really fast startup and shutdown (much, much faster than Windows 7 :) ). However, as I started using it more and more for my work, these times started rising. I do not have any problems with the execution speed of running applications. My main concern is the shutdown time. It has now gone above the Windows shutdown time [the startup time has only partially increased compared to shutdown]. I checked some similar questions like this one. However, they do not seem to answer my concern, as I feel the users there experience a long wait before the screen goes blue. In my case, the screen goes blue (the desktop session ends and a blue screen with a moving slider appears) pretty fast. However, it remains blue for a long time. Another answer that I saw on Google was to use dmesg and then stop some services that I do not want. However, being a novice, I could not completely understand what that meant.

    Read the article

  • Time Machine for Windows

    - by Kevin L.
    A simple Google search for "Time Machine for Windows" results in a flurry of different little apps. But instead of relying on forum anecdotes and advertisements, I call on the much wiser Super User beta community for some depth on this one. Having Time Machine running on Leopard is like a warm, fuzzy blanket of comfort that I never got with RAID, rsync, or SyncToy on Windows. I'm not asking the community what the "best" backup software for Windows is, but instead: Is there any true Time Machine clone for Windows, one that includes as many of the following as possible: Completely transparent, "set-it-and-forget-it" backup Incremental backups (changes only) for every hour for a day, every day for a month, and every week until the backup disk is full Ability to rebuild from this backup disk in case of main drive meltdown (the backup doesn't have to be bootable; neither are Time Machine disks) Extremely easy to use UI (target user == wife). Bonus points for a beautiful UI
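
    (Added sketch.) For a sense of the mechanism behind the "incremental backups" bullet, here is a rough sketch of the hard-link snapshot trick used by Time Machine and rsync --link-dest: unchanged files are hard-linked to the previous snapshot and changed files are copied, so every snapshot looks complete while only deltas consume space. The paths and the change test are illustrative assumptions, not any particular product's behavior.

        # Hypothetical sketch: hard-link snapshots in the style of Time Machine / rsync --link-dest.
        import os
        import shutil
        import time
        from typing import Optional

        def _unchanged(a: str, b: str) -> bool:
            sa, sb = os.stat(a), os.stat(b)
            return sa.st_size == sb.st_size and int(sa.st_mtime) == int(sb.st_mtime)

        def snapshot(source: str, backup_root: str, previous: Optional[str]) -> str:
            """Create a new snapshot directory; link unchanged files to the previous one."""
            dest = os.path.join(backup_root, time.strftime("%Y-%m-%d-%H%M%S"))
            for dirpath, _dirnames, filenames in os.walk(source):
                rel = os.path.relpath(dirpath, source)
                os.makedirs(os.path.join(dest, rel), exist_ok=True)
                for name in filenames:
                    src_file = os.path.join(dirpath, name)
                    new_file = os.path.join(dest, rel, name)
                    old_file = os.path.join(previous, rel, name) if previous else None
                    if old_file and os.path.exists(old_file) and _unchanged(src_file, old_file):
                        os.link(old_file, new_file)       # unchanged: no extra disk space used
                    else:
                        shutil.copy2(src_file, new_file)  # new or modified file
            return dest

        # Example: run hourly from a scheduler, passing the path of the last snapshot.
        # latest = snapshot(r"C:\Users\me\Documents", r"E:\Backups", previous=latest)

    Pruning old snapshots to the hourly/daily/weekly schedule described above would be a separate step run against the backup_root directory.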

    Read the article

  • Simultaneous AI in turn based games

    - by Eduard Strehlau
    I want to hack together a roguelike. Now I thought about entity and world representation and got to a quite big problem. If you want all the AI to act simultaneously you would normally (in cellular automata, for example) just copy the cell buffer and let all actions of individual cells depend on the copy. Actions which are no longer valid because some cell before the one you are currently operating on changed the original environment (blocking the path) are just ignored or reapplied against the "current" (between turns) environment. After all cells have acted you copy the current map to the buffer again. Now, for an environment with complex AI and big (datawise) entities, the copying would take too long. So I thought you could put every action an entity makes into a queue (making no changes to the environment) and execute the whole queue after everyone has taken their move. Every interaction on this queue is between really interacting entities, so if an entity tries to attack another entity it sends a message to it; the consequences of the attack would be visible next turn, either by just examining the entity or by asking the entity for data. This would remove problems like what happens if an entity dies in the middle of the queue but still has actions queued or is messaged later on (all messages to it would go to null, and the messages from it would either just be sent or deleted; I haven't decided yet). But what would happen if a monster spawns a fireball which by itself tracks the player (in the same turn)? Should I add the fireball to the environment beforehand, i.e. make a change to the environment before executing the action list, or just add the ball to the "needs update" list as a special case, so it doesn't exist in the environment yet still operates on it, spawning after the action list is evaluated? Are there any solutions or papers on this subject which I can take a look at? EDIT: I don't need information on writing a roguelike; I need information on turn-based AI with respect to a complex environment.
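
    (Added sketch.) A minimal illustration of the deferred-action idea described above, under the assumption that actions are recorded as messages during a planning phase and only applied afterwards, with actions involving entities that died in the meantime simply dropped. All names are illustrative.

        # Hypothetical sketch: plan phase queues actions without touching the world,
        # resolve phase applies them and silently drops actions from/to dead entities.
        from collections import deque

        class Entity:
            def __init__(self, name, hp):
                self.name, self.hp = name, hp
                self.alive = True

            def plan(self, world):
                """Decide on an action based on the *current* world, but do not apply it."""
                target = world.nearest_enemy(self)
                if target is not None:
                    return ("attack", self, target, 3)   # (verb, actor, target, damage)
                return None

        class World:
            def __init__(self, entities):
                self.entities = entities
                self.pending = deque()

            def nearest_enemy(self, me):
                others = [e for e in self.entities if e is not me and e.alive]
                return others[0] if others else None

            def take_turn(self):
                # 1) Planning: every entity reads the world; nobody mutates it yet.
                for e in self.entities:
                    if e.alive:
                        action = e.plan(self)
                        if action:
                            self.pending.append(action)
                # 2) Resolution: apply queued actions; skip ones involving the dead.
                while self.pending:
                    verb, actor, target, amount = self.pending.popleft()
                    if not (actor.alive and target.alive):
                        continue                      # the "message goes to null" case
                    if verb == "attack":
                        target.hp -= amount
                        if target.hp <= 0:
                            target.alive = False
                # Entities spawned this turn (e.g. a fireball) could be appended to
                # self.entities here, so they first act on the *next* turn.

        world = World([Entity("goblin", 5), Entity("player", 10)])
        world.take_turn()
        print([(e.name, e.hp, e.alive) for e in world.entities])

    Handling the fireball by appending it during resolution (as in the comment above) is one of the two options the question describes; the other is to pre-insert it into the environment before the queue is executed.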

    Read the article

  • Why We Should Learn to Stop Worrying and Love Millennials

    - by HCM-Oracle
    By Christine Mellon Much is said and written about the new generations of employees entering our workforce, as though they are a strange specimen, a mysterious life form to be “figured out,” accommodated and engaged – at a safe distance, of course.  At its worst, this talk takes a critical and disapproving tone, with baby boomer employees adamantly refusing to validate this new breed of worker, let alone determine how to help them succeed and achieve their potential.   The irony of our baby-boomer resentments and suspicions is that they belie the fact that we created the very vision that younger employees are striving to achieve.  From our frustrations with empty careers that did not fulfill us, from our opposition to “the man,” from our sharp memories of our parents’ toiling for 30 years just for the right to retire, from the simple desire not to live our lives in a state of invisibility, came the seeds of hope for something better. One characteristic of Millennial workers that grew from these seeds is the desire to experience as much as possible.  They are the “Experiential Employee”, with a passion for growing in diverse ways and expanding personal and professional horizons.  Rather than rooting themselves in a single company for a career, or even in a single career path, these employees are committed to building a broad portfolio of experiences and capabilities that will enable them to make a difference and to leave a mark of significance in the world.  How much richer is the organization that nurtures and leverages this inclination?  Our curmudgeonly ways must be surrendered and our focus redirected toward building the next generation of talent ecosystems, if we are to optimize what future generations have to offer.   Accelerating Professional Development In spite of our Boomer grumblings about Millennials’ “unrealistic” expectations, the truth is that we have a well-matched set of circumstances.  We have executives-in-waiting who want to learn quickly and a concurrent, urgent need to ramp up their development time, based on anticipated high levels of retirement in the next 10+ years.  Since we need to rapidly skill up these heirs to the corporate kingdom, isn’t it a fortunate coincidence that they are hungry to learn, develop and move fluidly throughout our organizations??  So our challenge now is to efficiently operationalize the wisdom we have acquired about effective learning and development.   We have already evolved from classroom-based models to diverse instructional methods.  The next step is to find the best approaches to help younger employees learn quickly and apply new learnings in an impactful way.   Creating temporary or even permanent functional partnerships among Millennial employees is one way to maximize outcomes.  This might take the form of 2 or more employees owning aspects of what once fell under a single role.  While one might argue this would mean duplication of resources, it could be a short term cost while employees come up to speed.  And the potential benefits would be numerous:  leveraging and validating the inherent sense of community of new generations, creating cross-functional skills with broad applicability, yielding additional perspectives and approaches to traditional work outcomes, and accelerating the performance curve for incumbents through Cooperative Learning (Johnson, D. and Johnson R., 1989, 1999).  
This well-researched teaching strategy, where students support each other in the absorption and application of new information, has been shown to deliver faster, more efficient learning, and greater retention. Alternately, perhaps short term contracts with exiting retirees, or former retirees, to help facilitate the development of following generations may have merit.  Again, a short term cost, certainly.  However, the gains realized in shortening the learning curve, and strengthening engagement are substantial and lasting. Ultimately, there needs to be creative thinking applied for each organization on how to accelerate the capabilities of our future leaders in unique ways that mesh with current culture. The manner in which performance is evaluated must finally shift as well.  Employees will need to be assessed on how well they have developed key skills and capabilities vs. end-to-end mastery of functional positions they have no interest in keeping for an entire career. As we become more comfortable in placing greater and greater weight on competencies vs. tasks, we will realize increased organizational agility via this new generation of workers, which will be further enhanced by their natural flexibility and appetite for change. Revisiting Succession  For many years, organizations have failed to deliver desired succession planning outcomes.  According to CEB’s 2013 research, only 28% of current leaders were pre-identified in a succession plan. These disappointing results, along with the entrance of the experiential, Millennial employee into the workforce, may just provide the needed impetus for HR to reinvent succession processes.   We have recognized that the best professional development efforts are not always linear, and the time has come to fully adopt this philosophy in regard to succession as well.  Paths to specific organizational roles will not look the same for newer generations who seek out unique learning opportunities, without consideration of a singular career destination.  Rather than charting particular jobs as precursors for key positions, the experiences and skills behind what makes an incumbent successful must become essential in succession mapping.  And the multitude of ways in which those experiences and skills may be acquired must be factored into the process, along with the individual employee’s level of learning agility. While this may seem daunting, it is necessary and long overdue.  We have talked about the criticality of competency-based succession, however, we have not lived up to our own rhetoric.  Many Boomers have experienced the same frustration in our careers; knowing we are capable of shining in a particular role, but being denied the opportunity due to how our career history lined up, on paper, with documented job requirements.  These requirements usually emphasized past jobs/titles and specific tasks, versus capabilities, drive and willingness (let alone determination) to learn new things.  How satisfying would it be for us to leave a legacy where such narrow thinking no longer applies and potential is amplified? Realizing Diversity Another bloom from the seeds we Boomers have tried to plant over the past decades is a completely evolved view of diversity.  Millennial employees assume a diverse workforce, and are startled by anything less.  Their social tolerance, nurtured by wide and diverse networks, is unprecedented.  College graduates expect a similar landscape in the “real world” to what they experienced throughout their lives.  
They appreciate and seek out divergent points of view and experiences without needing any persuasion.  The face of our U.S. workforce will likely see dramatic change as Millennials apply their fresh take on hiring and building strong teams, with an inherent sense of inclusion.  This wonderful aspect of the Millennial wave should be celebrated and strongly encouraged, as it is the fulfillment of our own aspirations. Future Perfect The Experiential Employee is operating more as a free agent than a long term player, and their commitment will essentially last as long as meaningful organizational culture and personal/professional opportunities keep their interest.  As Boomers, we have laid the foundation for this new, spirited employment attitude, and we should take pride in knowing that.  Generations to come will challenge organizations to excel in how they identify, manage and nurture talent. Let’s support and revel in the future that we’ve helped invent, rather than lament what we think has been lost.  After all, the future is always connected to the past.  And as so eloquently phrased by Antoine Lavoisier, French nobleman, chemist and politico:  “Nothing is Lost, Nothing is Created, and Everything is Transformed.” Christine has over 25 years of diverse HR experience.  She has held HR consulting and corporate roles, including CHRO positions for Echostar in Denver, a 6,000+ employee global engineering firm, and Aepona, a startup software firm, successfully acquired by Intel. Christine is a resource to Oracle clients, to assist in Human Capital Management strategy development and implementation, compensation practices, talent development initiatives, employee engagement, global HR management, and integrated HR systems and processes that support the full employee lifecycle. 

    Read the article

  • Ubuntu and racadm

    - by lmqcn
    I recently purchased a used PowerEdge 1850 server and it came with a DRAC card. After wiping the HDD and installing Ubuntu Server 12.04.3 LTS amd64 on it, I am now trying to gain access to the DRAC, which I believe is version 4. I have properly configured the DRAC to use its own IP on my LAN, and when I point my browser to the IP address, I am greeted with the DRAC login page (it has the Dell logo and everything). However, after trying the credentials of root/calvin, I was denied access. So I think that the previous owners had set their own password. After doing some reading, it appears that I can reset the credentials to the default using racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword but upon entering the command, I get this error: bash: /usr/sbin/racadm: No such file or directory This holds true even if I run sudo su prior to running the racadm command. If, however, I run sudo racadm config -g cfgUserAdmin -o cfgUserAdminPassword -i 1 newpassword there are no errors. Yet, when I try to log into the DRAC via the web interface using the credentials of root/newpassword I am still not granted access. I installed the Dell utilities via the guide at https://wiki.ubuntu.com/HardwareSupportMachinesServersDellNotes. I first tried to install the 64-bit version that is in the Dell repositories, but after that was unsuccessful, I just followed the guide verbatim. No errors were produced in either case. I even followed the information at the bottom of the guide by executing sudo pppd /dev/ttyS1 1382400 crtscts noipdefault noauth lock persist connect 'chat -v "" CLIENT CLIENTSERVER "\\c"' but obviously replacing the /dev/ttyS1 with the correct information for my system. ls -l /usr/sbin/ | grep racadm yields -rwxr-xr-x 1 root root 87930 Sep 16 04:03 racadm I have tried these credentials after each attempt at changing the password: root/calvin root/newpassword admin/calvin admin/newpassword All have been unsuccessful. What is the next course of action that I should take?

    Read the article

  • Bluetooth adapter turned from working fine to unrecognized

    - by easoncxz
    I had been using Bluetooth fine, with devices working, but today when I turned on my computer again, Bluetooth strangely failed. There is a Bluetooth icon on the top bar showing "bluetooth on", but if I click on the "bluetooth settings" item, a System Settings window shows up with a Bluetooth on-off switch which is disabled (i.e. fixed to off). More information about my case: I am a new Linux user, coming from Windows, and do not know supposedly-obvious commands. I am using a laptop. It initially didn't have Bluetooth; I bought a built-in type (instead of USB type) Bluetooth module and added it inside the laptop. Hence, I do not have a specific Fn+* key for Bluetooth. In Windows, I needed to install an additional driver that was intended for other machines in my laptop's series which have factory built-in Bluetooth modules. The Fn+* key seemed to only affect Wi-Fi under Ubuntu. I have been successfully using a Magic Mouse with my later-added built-in Bluetooth module/adapter on both Windows and Ubuntu. I have been trying to tweak the Magic Mouse scrolling speed with commands like rmmod something and modprobe hid_magicmouse --scroll_speed=45 --scroll_acceleration=30 or something, and then added a file `/etc/modprobe.d/magicmouse.conf`. The mouse seemed to be working fine with these changes. Now if I run commands like hcitool dev, the shell tells me that I do not have any "Devices" or "adapters". I seem to have BlueZ installed, because when I type "blue" and then tab-autocomplete, a bunch of commands like bluez-test-device pop up. -- update -- some commands and their results: easoncxz@eason-Aspire-4741-ubuntu:/etc$ hcitool dev Devices: easoncxz@eason-Aspire-4741-ubuntu:/etc$ hcitool scan Device is not available: No such device easoncxz@eason-Aspire-4741-ubuntu:/etc$ rfkill list 0: phy0: Wireless LAN Soft blocked: no Hard blocked: no 1: acer-wireless: Wireless LAN Soft blocked: no Hard blocked: no 2: acer-bluetooth: Bluetooth Soft blocked: no Hard blocked: no easoncxz@eason-Aspire-4741-ubuntu:/etc$ rfkill list 0: phy0: Wireless LAN Soft blocked: yes Hard blocked: no 1: acer-wireless: Wireless LAN Soft blocked: yes Hard blocked: no 2: acer-bluetooth: Bluetooth Soft blocked: yes Hard blocked: no

    Read the article

  • Enigmail - how to encrypt only part of the message?

    - by Lukasz Zaroda
    When I confirmed my OpenPGP key on Launchpad I got a mail from them that was only partially encrypted with my key (only a few paragraphs inside the message). Is it possible to encrypt only a chosen part of the message with Enigmail? Or what would be the easiest way to accomplish it? Added #1: I found a pretty convenient way of producing ASCII-armoured encrypted messages using the Nautilus interface (useful for those who for some reason don't like to work with the terminal). You need to install the Nautilus-Actions Configuration Tool and add a script there with a name like "Encrypt in ASCII" and these parameters: path: gpg parameters: --batch -sear %x %f The trick is that you can now create a file whose extension is the name of your recipient, fill it with your message, right-click it in Nautilus, choose "Encrypt in ASCII", and you will have an encrypted ASCII file whose content you can (probably) just copy into your message. If anybody knows a more convenient solution, please share it. Added #1B: In the above case, if you care about the security of your messages, it's worth turning off the invisible backup files that gedit creates every time you create a new document, or just remembering to delete them.

    Read the article

  • Using an Apt Repository for Paid Software Updates

    - by Scott Warren
    I'm trying to determine a way to distribute software updates for a hosted/on-site web application that may have weekly and/or monthly updates. I don't want the customers who use the on-site product to have to worry about updating it manually; I just want it to download and install automatically, a la Google Chrome. I'm planning on providing an OVF file with Ubuntu and the software installed and configured. My first thought on how to distribute the software is to create six Apt repositories/channels (not sure which would be better at this point) that will be accessed through SSH using keys, so if a customer doesn't renew their subscription we can disable their account: Beta - Used internally on test data to check the package for major defects. Internal - Used internally on live data to check the package for defects (dogfooding stage). External 1 - Deployed to 1% of our user base (randomly selected) to check for defects. External 9 - Deployed to 9% of our user base (randomly selected) to check for defects. External 90 - Deployed to the remaining 90% of users. Hosted - Deployed to the hosted environment. Sign-off will be required at each stage before moving to the next repository, in case problems are reported. My questions to the community are: Has anyone tried something like this before? Can anyone see a downside to this type of procedure? Is there a better way?
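
    (Added sketch.) One way to make the 1% / 9% / 90% split deterministic is to hash each customer ID into a bucket, so a given customer always lands in the same ring and their appliance can be pointed at the matching repository. This is only an illustrative sketch of the ring-assignment idea, with made-up names; it is not part of any apt tooling.

        # Hypothetical sketch: stable assignment of customers to rollout rings.
        import hashlib

        RINGS = [
            ("external-1",  1),    # 1% of customers
            ("external-9",  9),    # next 9%
            ("external-90", 90),   # remaining 90%
        ]

        def ring_for(customer_id: str) -> str:
            """Map a customer ID to a bucket in [0, 100) and pick the matching ring."""
            digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
            bucket = int(digest, 16) % 100
            threshold = 0
            for name, share in RINGS:
                threshold += share
                if bucket < threshold:
                    return name
            return RINGS[-1][0]

        # A provisioning script could use this to decide which repository line to write,
        # e.g. something like "deb ssh://updates.example.com/external-9 stable main"
        # (the URL is made up for illustration).
        print(ring_for("customer-42"))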

    Read the article

  • bash code in rc.local not executing after bootup

    - by mrTomahawk
    Does anyone know why a system would not execute the script code within rc.local on bootup? I have a post-configuration bash script that I want to run after the initial install of VMware ESX (Red Hat), and for some reason it doesn't seem to execute. I set it up to log the start of its execution and even its progress so that I can see how far it gets in case it fails at some point, but even when I look at that log, I find that it didn't even start executing the script code. I already checked that the script has execute permissions (755); what else should I be looking at? Here are the first few lines of my code: #!/bin/sh echo >> /tmp/configLog "" echo >> /tmp/configLog "Entering maintenance mode"

    Read the article

  • BizTalk: Dynamic SMTP Port: Unknown Error Description

    - by Leonid Ganeline
    Today I investigated a strange error while working with a dynamic SMTP port.   Event Type: Error Event Source: BizTalk Server 2006 Event Category: BizTalk Server 2006 Event ID: 5754 Date: ******** Time: ********AM User: N/A Computer: ******** Description: A message sent to adapter "SMTP" on send port "*********" with URI "mailto:********.com" is suspended. Error details: Unknown Error Description  MessageId:  {********} InstanceID: {********}   My code was pretty simple and the source of the error was hidden somewhere inside it.   msg_MyMessage(SMTP.CC) = var_CC; msg_MyMessage(SMTP.From) = var_From; msg_MyMessage(SMTP.Subject) = var_Subject; msg_MyMessage(SMTP.EmailBodyText) = var_Message;    // #1    msg_MyMessage(SMTP.SMTPHost) = " localhost "; msg_MyMessage(SMTP.SMTPAuthenticate) = 0; When I added line #2, this frustrating error disappeared.    msg_MyMessage(SMTP.EmailBodyTextCharset) = "UTF-8"; // #2 Conclusion: If we use the SMTP.EmailBodyText property, we must also set the SMTP.EmailBodyTextCharset property. To me it looks like a bug in BizTalk. [Maybe it is "by design", but in this case give us a useful error text!!!] And don't ask me how much time I've spent on this investigation.

    Read the article

  • Email client supporting multiple accounts

    - by TGP1994
    I've been using Microsoft Outlook for a very long time, although one thing that has bugged me is how multiple email accounts are handled. As far as I can tell, there isn't a set and straightforward way of managing multiple accounts in one instance of Outlook. For example, when I create an email, saving it as a draft will by default dump it into the first personal folder that I have open, which, in my current case, is not where I want it. I would like all trash, spam, drafts, contacts, etc. to be handled on a PF-by-PF basis. Now to my question: Is there a way to accomplish the task of email account "segregation" in Outlook (2007 is my current version), or is there another client that handles this in a more organized fashion? Note: I don't use most of the features in Outlook (I hardly even need special formatting for my messages); I generally just send and read mail and get a few attachments, so leaving Outlook wouldn't be too much of a stretch for me.

    Read the article

  • MightyMintyBoost Is a 3-in-1 Gadget Charger

    - by ETC
    If you’re looking for a versatile battery booster, this DIY 3-in-1 solar/USB/wall-current charger known as the MightyMintyBoost will top off your phone, MP3 player, and other gadgets with ease. Instructables user Honus didn’t just build the MightyMintyBoost to geek out and show off his electronics project skills (although it’s certainly a nifty little project for doing so); he’s serious about solar power and the impact clean energy has: Apple has sold over 30 million iPodTouch/iPhone units- imagine charging all of them via solar power…. If every iPhone/iPodTouch sold was fully charged every day (averaging the battery capacity) via solar power instead of fossil fuel power we would save approximately 50.644gWh of energy, roughly equivalent to 75,965,625 lbs. of CO2 in the atmosphere per year. Granted that’s a best case scenario (assuming you can get enough sunlight per day and approximately 1.5 lbs. CO2 produced per kWh used.) Of course, that doesn’t even figure in all the other iPods, cell phones, PDAs, microcontrollers (I use it to power my Arduino projects) and other USB devices that can be powered by this charger- one little solar cell charger may not seem like it can make a difference but add all those millions of devices together and that’s a lot of energy! His MightyMintyBoost is a battery booster for devices that can charge via USB, and it accepts incoming current from the solar panel on top (or, on cloudy days, it can be charged via a wall charger or the USB port on your computer). Hit up the link below to see his full build guide and create your own MightyMintyBoost. MightyMintyBoost [Instructables]

    Read the article

  • Oracle Partner Days and Oracle Days are coming to a city in EMEA near you!

    - by Javier Puerta
    Oracle Partner Days A new round of Oracle Partner Days is coming to a large number of European cities. These events are exclusive to Oracle partners and will deliver real business return on your OPN membership. You will hear about the business opportunities coming from adoption of the entire Oracle stack, the latest product value propositions and related sales strategy, and be able to connect directly with Oracle executives and find new business opportunities with other partners in your region. The EMEA Oracle Partner Days are local/regional live events targeting the key contacts in sales and consultancy, delivering Oracle strategy, engagement around the several perspectives of the Oracle portfolio, executive keynotes, and deep-dive business content-related breakout sessions. The first city will be Frankfurt, on Oct. 29. Check the full list to find an Oracle Partner Day in a city near you. Oracle Days Oracle Days will be hosted after Oracle OpenWorld across EMEA, throughout October and November. By attending an Oracle Day, customers and partners can: learn how to leverage the power of the Oracle stack, by hearing customer case studies about successful business transformation and by following cross-stack solution tracks within the agenda; discuss key issues for business and IT executives in cloud, big data, social, and mobile solutions, and network with peers who are facing the same challenges; meet Oracle experts and watch live demos of new products; and get the latest news from Oracle OpenWorld. See the full calendar and cities here

    Read the article

  • Supervisor VS cronjob

    - by Guandalino
    I'm currently using supervisor to monitor a process and restart it when it stops for some reason. The problem is that if supervisor itself crashes, the process is no longer monitored. So I thought of scheduling a cron job to check that supervisor is running, and restart it if necessary. The next thing I'm considering is to get rid of supervisor and check my process directly from the cron job. I read that supervisor sometimes uses too much memory (to be verified, though). What are the pros of having supervisor versus a cron job monitoring the process?
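
    (Added sketch.) A rough illustration of the cron-driven check described above, assuming supervisord writes a pidfile at /var/run/supervisord.pid; the pidfile path and restart command are assumptions that would need to match the actual install.

        # Hypothetical watchdog, intended to be run every few minutes from cron, e.g.:
        #   */5 * * * * /usr/bin/python /usr/local/bin/check_supervisord.py
        import os
        import subprocess

        PIDFILE = "/var/run/supervisord.pid"                       # assumed location
        RESTART = ["supervisord", "-c", "/etc/supervisord.conf"]   # assumed command

        def is_running() -> bool:
            """True if the pidfile names a live process."""
            try:
                with open(PIDFILE) as f:
                    pid = int(f.read().strip())
                os.kill(pid, 0)            # signal 0 = existence check, sends nothing
                return True
            except (OSError, ValueError):
                return False

        if not is_running():
            subprocess.call(RESTART)

    The trade-off the question asks about is visible here: cron only gives a polling interval (minutes of downtime in the worst case), while supervisor restarts the child immediately; combining the two covers both failure modes.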

    Read the article

  • apt-get does not work with proxy

    - by tommyk
    For the command sudo apt-get update I get the following error: W: Failed to fetch http://ch.archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/binary-i386/Packages.gz 407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. ) I am running Ubuntu 10.10 installed on Windows XP using VirtualBox. For internet connections I am using a proxy server with authentication. I tried to use the gnome-network-proxy tool to set the proxy settings system-wide. After that, /etc/environment was updated with an http_proxy variable in the format http://my_proxy:port/; there was no authentication data. I checked this with Firefox. The browser asked me for a login and password and everything worked fine. That was unfortunately not the case for apt-get. I have also tried to do as described here. Unfortunately it does not work. Might it be related to the fact that the proxy is in a Windows domain? Any ideas? EDIT: My proxy name is http-proxy. Is '-' a special character here?

    Read the article

  • Is there a way to batch create DNS slave zones on a new slave DNS server?

    - by Josh
    I currently have a DNS server which is serving as a master DNS server for a number of our domains. I want to set up a brand new secondary DNS server. Is there any way I can automatically have BIND on the new server act as a secondary for all the domains on the primary server? In case it matters, I have Webmin on the primary server. I believe Webmin has an option to create a zone as a secondary on another server when creating a new master zone on one server, but I don't know of any way to batch create secondary zones for a number of existing master zones. Maybe I'm missing something. Is there a way to "batch create" DNS slave zones on a brand new slave DNS server for all the DNS zones on an existing master?
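
    (Added sketch.) A common low-tech approach is to script the named.conf fragment: feed a list of zone names (for example pulled from the master's configuration) into a small generator and include the output on the secondary. The master IP, file names, and include path below are assumptions for illustration.

        # Hypothetical sketch: emit BIND slave-zone stanzas for every zone in zones.txt.
        MASTER_IP = "192.0.2.10"        # assumed address of the existing master

        TEMPLATE = """zone "{zone}" {{
            type slave;
            masters {{ {master}; }};
            file "slaves/{zone}.db";
        }};
        """

        with open("zones.txt") as f:                     # one zone name per line
            zones = [line.strip() for line in f if line.strip()]

        with open("secondary-zones.conf", "w") as out:
            for zone in zones:
                out.write(TEMPLATE.format(zone=zone, master=MASTER_IP))

        # On the secondary, something like the following would pull the file in:
        #   include "/etc/bind/secondary-zones.conf";   (in named.conf.local)

    The master also needs to allow zone transfers to the new secondary (allow-transfer), and the secondary needs write access to the directory the slave zone files are stored in.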

    Read the article

  • Additional new content SOA & BPM Partner Community

    - by JuergenKress
    Oracle Fusion Middleware 12c (12.1.2.0.0) Released - Download (OTN, eDelivery) Whitepaper: Next Generation Service Integration Platform - PDF SOA Maturity This article in the Industrial SOA series offers exploration of the fundamentals of applying a factory approach to modern service-oriented software development. Read the article. Enterprise Service Bus The fifth article in the Industrial SOA series answers to some of the most important questions about the use of an enterprise service bus, using concrete examples to clarify areas of application that can be deemed correct for ESBs. Read the article. DevOps, Cloud, and Role Creep DevOps and cloud computing are changing the IT industry - and changing IT roles. An panel of community members discusses what’s happening and how it might affect your job. Listen to the podcast. Industrial SOA - Now chapters 1 to 5 available | Torsten Winterberg White Paper: Cloud Integration - A Comprehensive Solution White Paper: Next Generation Service Integration Platform : SOA Suite on Exalogic IT Briefcase Interview: An Integrated Approach to Mobile, Cloud, and API Management Technologies with Oracle Fusion Middleware Webcast: Oracle Cloud Integration – Information Week Webcast eBook: Oracle SOA Suite – In the Customers’ Words Podcast: Cloud Integration Transitioning from TIBCO to Oracle SOA Suite – Part 1 Events: Oracle Simplifying Integration of Cloud and On-Premise New B2B Book Published for Oracle SOA B2B 11g Get Fast-Data Accelerator in Your Hands Today: Mobile Data Offloading for Telecom Fast Data Accelerator - Blog New Oracle Process Accelerators in Financial Services & Teleco Detect, Analyze, Act Fast with BPM Improving the Quality of Healthcare with BPM Engineers Australia Improves and Automates Business Processes and Completes Engineer Enrollments up to 90% Faster with Middleware Platform - Case Study | PPT Specialized Partner Ataway on BPM Practice - Video eProseed Delivers Processes Skillfully with Oracle BPM Suite - Video Yarra Valley Water Uses SOA and BPM for Orchestration, Re-use and Visibility - Video Victoria University Discusses Oracle SOA & Oracle BPM - Video SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Using gluLookAt to move the camera in a 2D iPhone game?

    - by Mr.Gando
    Hey guys, I'm trying to use gluLookAt to move the camera in my iPhone game, but every time I've tried to use gluLookAt my screen just goes "blank" (grey in this case). I'm trying to render a simple triangle and to move the camera. This is my code: to set up my scene I do: glViewport(0, 0, backingWidth, backingHeight); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glRotatef(-90.0, 0.0, 0.0, 1.0); //using iPhone in horizontal mode glOrthof(-240, 240, -160, 160, -1, 1); glMatrixMode(GL_MODELVIEW); then my "triangle rendering" code looks like: GLfloat triangle[] = {0, 100, 100, 0, -100, 0,}; glClearColor(0.7, 0.7, 0.7, 1.0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnableClientState(GL_VERTEX_ARRAY); glColor4f(1.0, 0.0, 0.0, 1.0); glVertexPointer(2, GL_FLOAT, 0, &triangle); glDrawArrays(GL_TRIANGLES, 0, 6); glDisableClientState(GL_VERTEX_ARRAY); This draws a red triangle in the middle of the screen. When I try to apply gluLookAt (I got the implementation of the function from Cocos2D, so I assume it's correct), I do: glMatrixMode(GL_MODELVIEW); glLoadIdentity(); gluLookAt(0,0,1,0,0,0,0,0,1); // try to move the camera a bit ? GLfloat triangle[] = {0, 100, 100, 0, -100, 0,}; glClearColor(0.7, 0.7, 0.7, 1.0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnableClientState(GL_VERTEX_ARRAY); glColor4f(1.0, 0.0, 0.0, 1.0); glVertexPointer(2, GL_FLOAT, 0, &triangle); glDrawArrays(GL_TRIANGLES, 0, 6); glDisableClientState(GL_VERTEX_ARRAY); This leaves me with a grey screen (glClearColor is grey). I've tried all sorts of things and read what I've found about gluLookAt on the net, but no luck :(. If someone could explain to me or show me how to move the camera in a top-down fashion (Zelda, etc.), I would really appreciate it. Thanks!

    Read the article

  • Are very short or abbreviated method/function names that don't use full words bad practice or a matter of style?

    - by Alb
    Is there nowadays any case for brevity over clarity with method names? Tonight I came across the Python function repr(), which seems like a bad name for a method to me. It's not an English word. It is apparently an abbreviation of 'representation', and even if you can deduce that, it still doesn't tell you what the method does. A good method name is subjective to a certain degree, but I had assumed that modern best practices agreed that names should be at least full words and descriptive enough that you could easily find a method when looking for it. Method names made from words help your code read like English. repr() seems to have no advantage as a name other than being short, and IDE auto-complete makes brevity a non-issue. An additional reason given in an answer is that Python names are brief so that you can do many things on one line. Surely the better way is to just extract the many things into their own functions, and repeat until lines are not too long. Are these names just a hangover from the Unix way of doing things? Commands with names like ls, rm, ps and du (if you can call those names) were hard to find and hard to remember. I know that the everyday usage of commands such as these is different from methods in code, so whether those are bad names is a separate matter.
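
    (Added example.) For readers who haven't met it, repr() simply asks an object for its developer-facing representation via __repr__, as opposed to str()'s user-facing form, which is part of why the short name gives so little away:

        # What repr() actually does: return the "developer" representation of an object.
        class Point:
            def __init__(self, x, y):
                self.x, self.y = x, y

            def __repr__(self):
                # Convention: unambiguous, ideally something you could paste back into code.
                return f"Point(x={self.x}, y={self.y})"

            def __str__(self):
                return f"({self.x}, {self.y})"

        p = Point(2, 3)
        print(str(p))    # (2, 3)            -- friendly, user-facing form
        print(repr(p))   # Point(x=2, y=3)   -- unambiguous, debugging-oriented form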

    Read the article

  • Using Exception Handler in an ADF Task Flow

    - by anmprs
    Problem Statement: An exception thrown in a task flow gets wrapped in an exception that gives an unintelligible error message to the user. Figure 1 Solution 1: Overwriting the error message with a user-friendly error message. Figure 2 Steps to code: 1. Generating an exception: Write a method that throws an exception and drop it in the task flow. 2. Adding an Exception Handler: Write a method (example below) to overwrite the error in the bean or data control and drop the method in the task flow. Figure 3 This method is marked as the Exception Handler by Right-Click on method > Mark Activity > Exception Handler, or by the button that is displayed in this screenshot. Figure 4 The final task flow should look like this. This will overwrite the exception with the error message in Figure 2. Note: There is no need for a control flow between the two method calls (as shown below). Figure 5 Solution 2: Re-routing the task flow to display an error page. Figure 6 Steps to code: 1. This is the same as step 1 of Solution 1. 2. Adding an Exception Handler: The exception handler is not always a method; in this case it is implemented on a task flow return. The task flow looks like this. Figure 7 In the figure below you will notice that the task flow return points to a control flow ‘error’ in the calling task flow. Figure 8 This control flow in turn goes to a view ‘error.jsff’ which contains the error message that one wishes to display. This can be seen in the figure below. (‘withErrorHandling’ is a call to the task flow in Figure 7.) Figure 9

    Read the article

  • How do I convince my team that a requirements specification is unnecessary if we adopt user-stories?

    - by Nupul
    We are planning to adopt user-stories to capture stakeholder 'intent' in a lightweight fashion rather than a heavy SRS (software requirements specification). However, it seems that though they understand the value of stories, there is still a desire to 'convert' the stories into an SRS-like language with all the attributes, priorities, inputs, outputs, source, destination, etc. User-stories 'eliminate' the need for a formal SRS-like artifact to begin with, so what's the point in having an SRS? How should I convince my team (who are all very qualified CS folks, by the way, both by education and practice) that the SRS would be 'eliminated' if we adopted user-stories for capturing the functional requirements of the system? (NFRs etc. can be captured too, but that's not the intent of the question.) So here's my 'work-flow' argument: capture initial requirements as user-stories and later elaborate them into use-cases (which are required to be documented at a low level, i.e. describing interactions with the UI prototypes/mockups, and are a deliverable post-deployment). Thus going from user-stories to use-cases rather than user-stories to SRS to use-cases. How are you all currently capturing user-stories at your workplace (if at all), and how do you suggest I 'make a case' for the absence of an SRS in the presence of user-stories?

    Read the article

  • Easy Transfer from a dead computer

    - by Nathan DeWitt
    I had a computer that electrocuted me, and the company sent me a new one. The hard drive from the old computer works fine and is in my new computer. I would like to transfer my files from the old drive to the new one, preferably using Easy Transfer (the old and new computers were both Win7). When I go through the Easy Transfer wizard, it assumes my old computer is running and that I can run a process to back up all my data to a single file. However, in my case I have the system drive in my new computer and want to pull the data off it. I would like to avoid rebooting the old computer, to avoid damage to myself or my data. I would also like to avoid booting into the old system drive, as my new hardware is significantly different and I imagine I'll run into some missing-hardware issues. What's the easiest way to get my data off this drive?

    Read the article
