Search Results

Search found 22627 results on 906 pages for 'program transformation'.

Page 552/906

  • Ubuntu 12.04 LTS won't install - never finishes, please help

    - by Richard Higgins
    Want to try Ubuntu after using Windows for 30 years. Tried to install it 5 times on a Lenovo X120e notebook and twice on a Lenovo M57 desktop. No luck, worse than what Microsoft puts you through. I burned 12.04 LTS to disc. It installs up to the "Who Are You?" screen, then stops. I accepted the recommended computer name and lower-case user name, and chose "log me in automatically." After that there is no progress bar, no rotating or pulsing button, nothing to indicate that Ubuntu has not died or fallen asleep. Is that how it is written? I have never heard of a program that would take a long time to install while a user looked at a locked, dead screen. I just bought the M57 desktop for my son. It came with Ubuntu 10-something. I wanted to upgrade to 12.04 but it crashed, twice, to a DOS screen saying the PC lacked a certain "init" file. Various help screen commands did not help. On the X120e, I thought a partially failed Ubuntu install was causing the problem, so I removed the drive, deleted the Ubuntu partition, and replaced it. But I got the same result. After I fill in my name and accept the computer and user name, the "continue" button does not appear to work. I can go "back" but not forward. I have waited torturous hours. It doesn't take more than two hours to install, does it? It is my own fault because of the high expectations I had for a sensible, hassle-free installation, but I am immensely disappointed. Thank you for any response.

    Read the article

  • I want to be a programmer, work in a corporate environment, earn well, learn fast and eventually become a great programmer [on hold]

    - by Shin San
    I'll try to keep this simple: I'm 29, been dabbling with computers for the past 10 years, had entry-level jobs in tech support for different apps, been fixing computers for a while and now want to specialize in something. I'm not a 100% stranger to programming but haven't gone past if/then/else with anything. A bit of JavaScript, PHP, Python and currently checking out the "SELECT" statement in SQL :)) I'm curious about programming, I enjoy it and I'm thinking of making a living out of it. So, while I'm at it, why not earn a bit more than the average Joe? So, that's why I'm checking what the best solution, the best learning path and the most useful languages are, considering: a) how easy/fast you can find a job by knowing it, b) how much I would be able to earn, c) how fast I can learn it. By reading 10-20 articles online I've come up with an example, but I'm here for some expert advice. Example (ratings from the a) and b) points of view): #1 SQL; #2 Java; #3 HTML (please don't start the markup language debate); #4 JavaScript. From these ratings, I'd say a good way to go is to learn HTML/CSS/(JavaScript or PHP) for the web part of apps, some SQL/MySQL/whateverSQL for holding data, and loads of Java for the program itself. Please let me know if this is a good idea and, if so, what the order for learning all of the above should be. Else, please let me know a better way and why it would be better. Many thanks for taking the time to read my question. Best wishes to you guys. Edit: if I think Java + SQL + HTML&JavaScript is the way to go, does the order I learn them in matter? Or can I try to learn them all at once?

    Read the article

  • C# 4.0 Optional/Named Parameters Beginner's Tutorial

    - by mbcrump
    One of the interesting features of C# 4.0 is support for both named and optional arguments. They are often very useful together, but they are actually two quite different things. Optional arguments give us the ability to omit arguments in method invocations. Named arguments allow us to specify arguments by name instead of by position. Code using named parameters is often more readable than code relying on argument position. These features were long overdue, especially in regard to COM interop. Below, I have included some examples to help you understand them more in depth. Please remember to target the .NET 4 Framework when trying these samples.
    Code Snippet
        using System;

        namespace ConsoleApplication3
        {
            class Program
            {
                static void Main(string[] args)
                {
                    //C# 4.0 Optional/Named Parameters Tutorial

                    Foo();                              //Prints to the console | Return Nothing 0
                    Foo("Print Something");             //Prints to the console | Print Something 0
                    Foo("Print Something", 1);          //Prints to the console | Print Something 1
                    Foo(x: "Print Something", i: 5);    //Prints to the console | Print Something 5
                    Foo(i: 5, x: "Print Something");    //Prints to the console | Print Something 5
                    Foo("Print Something", i: 5);       //Prints to the console | Print Something 5
                    Foo2(i3: 77);                       //Prints to the console | 77
                //  Foo(x: "Print Something", 5);       //Positional parameters must come before named arguments. This will error out.
                    Console.Read();
                }

                static void Foo(string x = "Return Nothing", int i = 0)
                {
                    Console.WriteLine(x + " " + i + Environment.NewLine);
                }

                static void Foo2(int i = 1, int i2 = 2, int i3 = 3, int i4 = 4)
                {
                    Console.WriteLine(i3);
                }
            }
        }

    Read the article

  • Move a 2D square on the y axis on Android with GLES2

    - by Dan
    I am trying to create a simple game for Android. To start, I am trying to make the square move down the y axis, but the way I am doing it doesn't move the square at all, and I can't find any tutorials for GLES20. The onDrawFrame function in the render class updates the user's position based on acceleration due to gravity, gets the transform matrix from the user class (which is used to move the square down), and then the program draws it. All that happens is that the square is drawn; no motion happens.
        public void onDrawFrame(GL10 gl) {
            user.update(0.0, phy.AccelerationDewToGravity);

            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT); // Redraws the black background

            GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT, false, 12, user.SquareVB); //triangleVB);
            GLES20.glEnableVertexAttribArray(maPositionHandle);

            GLES20.glUniformMatrix4fv(maPositionHandle, 1, false, user.getTransformMatrix(), 0);

            GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
        }
    The update function in the player class is:
        public void update(double vh, double vv) {
            Vh += vh; // Increase horizontal velocity
            Vv += vv; // Increase vertical velocity

            //Matrix.translateM(mMMatrix, 0, (int)Vh, (int)Vv, 0);
            Matrix.translateM(mMMatrix, 0, mMMatrix, 0, (float)Vh, (float)Vv, 0);
        }
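    A likely culprit, offered as a hedged aside: in the draw code above the matrix is uploaded through maPositionHandle, which is an attribute location rather than a uniform location, so the transform may never reach the shader. Below is a minimal sketch of the usual GLES20 pattern; the shader variable names vPosition and uMVPMatrix and the class layout are illustrative assumptions, not names taken from the question or the engine.
        import android.opengl.GLES20;
        import android.opengl.Matrix;

        // Minimal sketch (not the poster's code): the transform goes through a
        // uniform location obtained with glGetUniformLocation, and the vertex
        // shader is assumed to declare "attribute vec4 vPosition;" and
        // "uniform mat4 uMVPMatrix;".
        class SquareRenderer {
            private int mProgram;          // assumed: an already linked shader program
            private int maPositionHandle;
            private int muMVPMatrixHandle;
            private final float[] mModelMatrix = new float[16];

            void initHandles() {
                maPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
                muMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
                Matrix.setIdentityM(mModelMatrix, 0);
            }

            void drawSquare(java.nio.FloatBuffer squareVB, float vy) {
                // Accumulate the fall: translate the model matrix along the y axis.
                Matrix.translateM(mModelMatrix, 0, 0f, vy, 0f);

                GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT, false, 12, squareVB);
                GLES20.glEnableVertexAttribArray(maPositionHandle);

                // Upload through the uniform handle, not the attribute handle.
                GLES20.glUniformMatrix4fv(muMVPMatrixHandle, 1, false, mModelMatrix, 0);
                GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
            }
        }
    For the upload to have any visible effect, the vertex shader also has to apply the uniform, e.g. gl_Position = uMVPMatrix * vPosition;.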

    Read the article

  • Cannot install Ubuntu on an Acer Aspire One 756

    - by Byron807
    I have used Ubuntu before, in virtual machines, but today I decided to make the leap and I bought a netbook to install Ubuntu as a "real" OS alongside Windows. The netbook I bought is an Acer Aspire One 756, with a 64-bit Intel processor, 4GB RAM, and Windows 8 as the default OS. I have now encountered several obstacles that actually prevent me from installing Ubuntu 12.10. Here are all the things I have tried so far: Used a live CD, in combination with a USB DVD drive. (I should point out that the Aspire One does not have an optical drive.) The computer does not boot in Ubuntu; the drive keeps spinning, but nothing happens, even though I changed the boot order in the BIOS. Used a USB drive created via the tool available on pendrivelinux.com. Again, I've made changes to the BIOS to make sure the computer tries to boot from USB before using the built-in HDD. The results vary in this case: sometimes, the computer keeps rebooting like crazy until I remove the USB drive, at which point the computer boots into Windows 8, as expected. If I use a different USB drive, I get an error message that says that the USB drive has been blocked due to "the current security policy". Tried to install Ubuntu via Wubi. The program appears to install something, but at some point during the installation process, I get a non-specified error message and nothing else happens. I am not sure if these are known issues; in any case, searching the forum has not yielded any results, so I thought I should simply describe my problem here in the hope that this question has not been answered before. I would greatly appreciate any help with this annoying problem. Of course, if anything is unclear, do not hesitate to ask for further details.

    Read the article

  • Masters vs. PhD - long [closed]

    - by Sterling
    I'm 21 years old and a first-year master's computer science student. Whether or not to continue with my PhD has been plaguing me for the past few months. I can't stop thinking about it and am extremely torn on the issue. I have read http://www.cs.unc.edu/~azuma/hitch4.html and many, many other masters vs. PhD articles on the web. Unfortunately, I have not yet come to a conclusion. I was hoping that I could post my ideas about the issue on here in hopes to 1) get some extra insight on the issue and 2) make sure that I am correct in my assumptions. Hopefully people who have experience in the respective fields can tell me if I am wrong, so I don't make my decision based on false ideas. Okay, to get this topic out of the way - money. Money isn't the most important thing to me, but it is still important. It's always been a goal of mine to make 6 figures, but I realize that will probably take me a long time with either path. According to most online salary calculating sites, the average starting salary for a software engineer is ~60-70k. The PhD program here is 5 years, so that's about 300k I am missing out on by not going into the workforce with a masters. I have only ever had ~1k at one time in my life, so 300k is something I can't even really accurately imagine. I know that I wouldn't have it all at once, obviously, but just knowing I would be earning that is kinda crazy to me. I feel like I would be living quite comfortably by the time I'm 30 years old (but risk being too content too soon). I would definitely love to have at least a few years of my 20s to spend with that kind of money before I have a family to spend it all on. I haven't grown up very financially stable, so it would be so nice to just spend some money…get a nice car, buy a new guitar or two, eat some good food, and just be financially comfortable. I have always felt like I deserved to make good money in my life, even as a kid growing up, and I just want to have it be a reality. I know that either path I take will make good money by the time I'm ~40-45 years old, but I guess I'm just sick of not making money and am getting impatient about it. However, a big idea pushing me towards a PhD is that I feel the masters path would give me a feeling of selling out if I have the capability to solve real questions in the computer science world. (Pretty straightforward - not much to elaborate on, but this is a big deal.) Now onto other aspects of the decision. I originally got into computer science because of programming. I started in high school and knew very soon that it was what I wanted to do for a career. I feel like getting a masters and being a software engineer in the industry gives me much more time to program in my career. In research, I feel like I would spend more time reading, writing, trying to get grant money, etc. than I would coding. A guy I work with in the lab just recently published a paper. He showed it to me and I was shocked by it. The first two pages were littered with equations and formulas. Then the next page or so was followed by more equations and formulas that he derived from the previous ones. That was his work - breaking down and creating all of these formulas for robotic arm movement. And whenever I read computer science papers, they all seem to follow this pattern. I always pictured myself coding all day long…not proving equations and things of that nature. I know that's only one part of computer science research, but that part bores me.
    A couple of cons on each side. PhD: I don't really enjoy writing or feel like I'm that great at technical writing. Whenever I'm in groups to make something, I'm always the one who does the large majority of the work and then gives it to my team members to write up a report. Presenting is different though - I don't mind presenting at all as long as I have a good grasp on what I am presenting. But writing papers seems like such a chore to me. And because of this, the "publish or perish" phrase really turns me off from research. Another bad thing - I feel like if I am doing research, most of it would be done alone. I work best in small groups. I like to have at least one person to bounce ideas off of when I am brainstorming. The idea of being a part of some small elite group to build things sounds ideal to me. So being able to work in small groups for the majority of my career is a definite plus. I don't feel like I can get this doing research. Masters: I read a lot online that most people come in as engineers and eventually move into management positions. As of now, I don't see myself wanting to be a part of management. Let's say my company wanted to make some new product or system - I would get much more pride, enjoyment, and overall satisfaction from saying "I made this" rather than "I managed a group of people that made this." I want to be a big part of the development process. I want to make things. I think it would be great to be more specialized than other people. I would rather know everything about something than something about everything. I have always been that way - I was a great pitcher during my baseball years, but not so good at everything else; great at certain classes in school, but not so good at others; etc. To think that my career would be the same way sounds okay to me. Getting a PhD would point me in this direction. It would be great to be someone people look towards and come to for help because I was such an important contributor to a very specific field, such as artificial neural networks or robotic haptic perception. From what I gather about the software industry, being specialized can be a very bad thing because of the speed of new technology. When it comes to being employed, I have pretty conservative views. I don't want to change companies every 5 years. Maybe this is something everyone wishes, but I would love to just be an important person in one company for 10+ (maybe 20-25+ if I'm lucky!) years if the working conditions were acceptable. I feel like that is more possible as a PhD though, being a professor or researcher. The more I read about people in the software industry, the more it seems like most software engineers bounce from company to company at a rapid pace. Some even work like a hired gun from project to project, which is NOT what I want AT ALL. But finding a place to make great and important software would be great, if that actually happens in the real world. I'm a very competitive person. I thrive on competition. I don't really know why, but I have always been that way even as a kid growing up. Competition always gave me a reason to practice that little extra every night, always push my limits, etc. It seems to me like there is no competition in the research world. It seems like everyone is very relaxed as long as research is being conducted. The only competition is if someone is researching the same thing as you, and it's whoever can finish and publish first (but everyone seems careful to check for that circumstance).
    The only noticeable competition to me is just with yourself and your own discipline. I like the idea that in the industry there is real competition between companies to put out the best product or be put out of business. I feel like this would constantly be pushing me to be better at what I do. One thing that is really pushing me towards a PhD is the lifetime of the things you make. I feel like if you make something truly innovative in the industry…just some really great new application or system…there is a shelf life of about 5-10 years before someone just does it faster and more efficiently. But with research work, you could create an idea or algorithm that lasts decades. For instance, the A* search algorithm was described in 1968 and is still widely used today. That is amazing to me. In the words of Palahniuk, "The goal isn't to live forever, it's to create something that will." Over anything, I just want to do something that matters. I want my work to help and progress society. Seriously, if I'm stuck programming GUIs for the next 40 years…I might shoot myself in the face. But then again, I hate the idea that less than 1% of the population will come into contact with my work and even fewer will understand its importance. So if anything I have said is false, then please inform me. If you think I come off as more of a masters or a PhD person, let me know. If you want to give me some extra insight or add on to any point I made, please do. Thank you so much to anyone for any help.

    Read the article

  • How do I fix the "Setup did not find any hard disk drives installed in your computer" error during a Windows XP install?

    - by CT
    I just bought a nettop. It came with WinXP Home. I first installed Win 7 on it. I wasn't that happy with the performance, so I decided to go back to XP. I am using an external DVD drive and a Win XP Pro disc. I boot from the DVD drive and during the install I get this error: "Setup did not find any hard disk drives installed in your computer. Make sure any hard disk drives are powered on and properly connected to your computer, and that any disk-related hardware configuration is correct. This may involve running a manufacturer-supplied diagnostic or setup program. Setup cannot continue. To quit Setup, press F3." This is the nettop in question: http://www.newegg.com/Product/Product.aspx?Item=N82E16883103228

    Read the article

  • Are there any concrete examples of where a parallelizing compiler would provide a value-adding benefit?

    - by jamie
    Paul Graham argues that: It would be great if a startup could give us something of the old Moore's Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. ... The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There's a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility. But is it really impossible? Can someone provide a concrete example where a parallelizing compiler would solve a pain point? Web apps don't appear to be a problem: just run a bunch of Node processes. Real-time raytracing isn't a problem: the programmers are writing multi-threaded, SIMD assembly language quite happily (indeed, some might complain if we make it easier!). The holy grail is to be able to accelerate any program, be it MySQL, Garage Band, or Quicken. I'm looking for a middle ground: is there a real-world problem that you have experienced where a "smart-enough" compiler would have provided a real benefit, i.e. one that someone would pay for? A good answer is one where there is a process where the computer runs at 100% CPU on a single core for a painful period of time. That time might be 10 seconds, if the task is meant to be quick. It might be 500ms if the task is meant to be interactive. It might be 10 hours. Please describe such a problem. Really, that's all I'm looking for: candidate areas for further investigation. (Hence, raytracing is off the list because all the low-hanging fruit have been feasted upon.) I am not interested in why it cannot be done. There are a million people willing to point to the sound reasons why it cannot be done. Such answers are not useful.
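    To make the kind of workload concrete, here is a minimal Java sketch (illustrative only, not taken from the question): a data-parallel reduction that pins a single core, alongside the hand-parallelized form that a "sufficiently smart" compiler would ideally derive on its own.
        import java.util.stream.IntStream;

        // Illustrative sketch: the same reduction written sequentially (one core
        // at 100%) and hand-parallelized over the fork/join common pool, which is
        // the transformation an auto-parallelizing compiler would have to infer.
        public class HotLoop {
            public static void main(String[] args) {
                double[] data = new double[20_000_000];
                for (int i = 0; i < data.length; i++) data[i] = i * 0.5;

                // Sequential: a single core does all the work.
                double seq = 0.0;
                for (double x : data) seq += Math.sqrt(x) * Math.sin(x);

                // Manually parallelized equivalent.
                double par = IntStream.range(0, data.length)
                        .parallel()
                        .mapToDouble(i -> Math.sqrt(data[i]) * Math.sin(data[i]))
                        .sum();

                // Results can differ slightly because floating-point addition is not
                // associative and the parallel sum reorders the terms.
                System.out.printf("sequential=%f parallel=%f%n", seq, par);
            }
        }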

    Read the article

  • Video game "Gish" will only launch from the command line

    - by aberration
    Platform: Lubuntu 11.10 x64
    Program: Gish
    When I try to launch Gish from the command line (/opt/gish/gi.sh), there are no problems. But when I try to launch it from the LXDE menu, it will not start. Contents of /usr/share/applications/gish.desktop:
        [Desktop Entry]
        Categories=Game;ActionGame;AdventureGame;ArcadeGame;
        Exec=/opt/gish/gi.sh
        Path=/opt/gish
        Icon=x-gish
        Terminal=false
        Type=Application
        Name=Gish
    I tried changing Terminal=false to Terminal=true to debug it, but then I just got a blank terminal, and the game didn't start. Edit: Here is some additional information, as requested by Eliah Kagan below: I tried editing /usr/share/applications/gish.desktop, as recommended, but it had no effect. However, ~/.xsession-errors contained the following errors:
        [: 8: x86_64: unexpected operator
        ./gish_32: error while loading shared libraries: libGL.so.1: wrong ELF class: ELFCLASS64
    I think there's a problem with the /opt/gish/gi.sh shell script. These are its contents:
        cd /opt/gish/
        MACHINE_TYPE=`uname -m`
        if [ ${MACHINE_TYPE} == 'x86_64' ]; then
          ./gish_64
        else
          ./gish_32
        fi
    I'm not too familiar with Bash, so hopefully someone else can point out the error. I have a 64-bit machine. I think that when the script is run from the command line, it's properly launching the 64-bit version (/opt/gish/gish_64), but when it's run from the LXDE menu, it's launching the 32-bit version (/opt/gish/gish_32), which is causing the libGL.so.1 error. However, this may be related to my libGL.so.1 problems with 2 other games.

    Read the article

  • An Oracle Intern's Story by Samarth Varshney

    - by user769227
    I have written a short write-up about my experience at Oracle and am attaching some pics along with it: I joined Oracle on 5th January 2011 as part of my internship program at BITS Pilani Goa Campus. In the short period of six months, I had the most beautiful and interesting time of my life. It was fun to work at Oracle, thanks to the whole team. I had an excellent manager, simple and sophisticated, who gave me the utmost enthusiasm to work. I gained a lot of knowledge during my internship, thanks to my colleagues. They were very helpful and motivated us (interns) in every possible way. In the initial stages of work, in which you know almost nothing, they helped me gain knowledge at a rapid pace. Thanks to the vast database of study material on the Oracle site, I could start on my project in a very short time. For me, the time flew like anything and made the 6 months look like a few days. It was probably due to the team that the work was so much fun. We had our deadlines but had full freedom as to how to work and when to work. I don't remember a single instance in which I was working and not listening to songs. I mean, it will always be a time to remember. I hope to join this company and make this time last forever. Samarth

    Read the article

  • ERROR: 6553609 You are not authorized to perform the current operation

    - by Tim
    I'm trying to back up a sub-site in my portal using Smigrate. When logged into the server and running the following command:
        D:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\60\BIN>smigrate -w -f C:\CH_Test_04_backup.fwp -u domain\user -pw **
        20 Nov 2009 13:57:06 -0000 Site collection opened
        20 Nov 2009 13:57:07 -0000 Authenticating against the web server.
        20 Nov 2009 13:57:07 -0000 Already authenticated against the web server.
        20 Nov 2009 13:57:07 -0000 ERROR (possibly fatal): ERROR: 6553609 You are not authorized to perform the current operation.
        20 Nov 2009 13:57:07 -0000 Site collection closed
    I get an error (ERROR: 6553609 You are not authorized to perform the current operation) and the site does not get backed up. I've tried the solutions outlined in various KB articles but had no luck with them. Can anyone help? Regards, Tim

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1: Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end state of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor cycle takes time and electricity and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know what the borders of the "safety zone" are exactly. Some examples of possible boundaries: I/O; exceptions/errors; interfaces with programs written in other languages; interfaces with other machines (physical, virtual, or theoretical). Special thanks to @JimmaHoffa for his comment which started this question!
    Part 2: Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?
    Summary: I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
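    One boundary case worth making concrete: even when every value is immutable, two writers that both want to publish a new version of shared state still need a coordination primitive, typically a compare-and-set retry loop rather than a lock. A minimal Java sketch with illustrative names follows; it is not tied to any example from the question.
        import java.util.concurrent.atomic.AtomicReference;

        // Illustrative sketch: an immutable value plus a CAS retry loop. The value
        // itself needs no lock, but publishing a new version of shared state still
        // requires an atomic primitive (or a lock) so updates are not lost.
        final class Counter {
            final long value;
            Counter(long value) { this.value = value; }
            Counter increment() { return new Counter(value + 1); }
        }

        public class CasPublish {
            private static final AtomicReference<Counter> SHARED =
                    new AtomicReference<>(new Counter(0));

            static void bump() {
                while (true) {
                    Counter current = SHARED.get();
                    Counter next = current.increment();        // pure, no side effects
                    if (SHARED.compareAndSet(current, next)) {
                        return;                                // published successfully
                    }
                    // Another thread won the race; loop and retry against its result.
                }
            }

            public static void main(String[] args) throws InterruptedException {
                Runnable worker = () -> { for (int i = 0; i < 100_000; i++) bump(); };
                Thread a = new Thread(worker);
                Thread b = new Thread(worker);
                a.start(); b.start(); a.join(); b.join();
                System.out.println(SHARED.get().value); // 200000 every run
            }
        }
    Without the compareAndSet (for example, a plain volatile field written as shared = shared.increment()), one thread's update can silently overwrite the other's, even though every Counter is immutable.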

    Read the article

  • Breakout ball collision detection, bouncing against the walls

    - by Sri Harsha Chilakapati
    I'm currently trying to program a breakout game to distribute as an example game for my own game engine: http://game-engine-for-java.googlecode.com/ But the problem here is that I can't get the bouncing condition working properly. Here's what I'm using:
        public void collision(GObject other) {
            if (other instanceof Bat || other instanceof Block) {
                bounce();
            } else if (other instanceof Stone) {
                other.destroy();
                bounce();
            }
            //Breakout.HIT.play();
        }
    And here's my bounce() method:
        public void bounce() {
            boolean left = false;
            boolean right = false;
            boolean up = false;
            boolean down = false;
            if (dx < 0) {
                left = true;
            } else if (dx > 0) {
                right = true;
            }
            if (dy < 0) {
                up = true;
            } else if (dy > 0) {
                down = true;
            }
            if (left && up) {
                dx = -dx;
            }
            if (left && down) {
                dy = -dy;
            }
            if (right && up) {
                dx = -dx;
            }
            if (right && down) {
                dy = -dy;
            }
        }
    The ball bounces off the bat and the blocks, but when the block is on top of the ball it won't bounce and moves upwards out of the game. What am I missing? Is there anything else to implement? Please help me. Thanks
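    For comparison, a common way to make the reflection depend on which face was actually hit is to compare the penetration depths on the two axes and flip only the velocity component along the shallower axis. A minimal sketch follows, assuming simple axis-aligned rectangles with x, y, width and height; the class and field names are illustrative and not part of the engine's API.
        // Illustrative sketch (not GObject's API): reflect along the axis of
        // shallower overlap, so a hit on the block's bottom face flips dy and a
        // hit on a side face flips dx. This covers the "block above the ball" case.
        final class Box {
            float x, y, width, height;
            Box(float x, float y, float w, float h) { this.x = x; this.y = y; this.width = w; this.height = h; }
        }

        final class BallPhysics {
            float dx, dy; // ball velocity

            void bounceOff(Box ball, Box block) {
                float ballCenterX = ball.x + ball.width / 2f;
                float ballCenterY = ball.y + ball.height / 2f;
                float blockCenterX = block.x + block.width / 2f;
                float blockCenterY = block.y + block.height / 2f;

                // How deep the ball has penetrated the block on each axis.
                float overlapX = (ball.width + block.width) / 2f - Math.abs(ballCenterX - blockCenterX);
                float overlapY = (ball.height + block.height) / 2f - Math.abs(ballCenterY - blockCenterY);

                if (overlapX < overlapY) {
                    dx = -dx; // hit a left or right face
                } else {
                    dy = -dy; // hit the top or bottom face
                }
            }
        }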

    Read the article

  • Security Audit Failures in Event Viewer on Windows Server 2008 R2

    - by Jacob
    When I am looking at the security tab of the event viewer on a Windows Server 2008 R2 machine, I am seeing a ton of Audit Failures with Event ID 4776:
        The computer attempted to validate the credentials for an account.
        Authentication Package: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
        Logon Account: randy
        Source Workstation: HPDB1
        Error Code: 0xc0000064
    I verified that the account "randy" exists in my Active Directory. From my understanding, there have not been any recent password changes. Is there any way to get detailed information on this error? I am wondering what program is requesting this information. Also, is there any way to clear this error up? I was thinking about resetting the password and changing it back to the original.

    Read the article

  • Mavericks Dock Lag

    - by Syle
    I'm having a weird bug on my MacBook Pro. When I start the machine, the dock has some graphic lag, and the same happens when I open and close windows. The solution is simple: just open AutoCAD and close it, and the bug is gone. The same goes if I open Autodesk Maya or another similar 3D program. My question here is not how to fix it, but why it happens. It's like Mavericks at startup thinks: "Well, you can only use a little memory of the graphics card!" But if I open AutoCAD, it's like: "Oh, you are opening AutoCAD, so it's better to start boosting the graphics card and give you more memory." I bet there is something that is forcing Mavericks to think like this. Any help is welcome! =)

    Read the article

  • Generalist Languages: Dying or Alive and Well?

    - by dsimcha
    Around here, it seems like there's somewhat of a consensus that generalist programming languages (those that try to be good at everything: support multiple paradigms, support both very high- and very low-level programming, and so on) are a bad idea, and that it's better to pick the right tool for the job and use lots of different languages. I see three major areas where this is flawed:
    Interfacing multiple languages is always at least a source of friction and is sometimes practically impossible. How severe a problem this is depends on how fine-grained the interfacing is. Near the boundary between the two languages, though, you're basically limited to the intersection of their features, and you have to care about things like binary interfaces that you usually wouldn't. Passing complex data structures (i.e. not just primitives and arrays of primitives) between languages is almost always a hassle. Furthermore, shifting between different syntaxes, different conventions, etc. can be confusing and annoying, though this is a fairly minor complaint.
    Requirements are never set in stone. I hate picking a language thinking it's the right tool for the job, then realizing that, when some new requirement surfaces, it's actually a terrible choice for that requirement. This has happened to me several times before, usually when working with languages that are very slow, very domain-specific and/or have very poor concurrency/parallelism support.
    When you program in a language for a while, you start to build up a personal toolbox of small utility functions/classes/programs. The value of these goes drastically down if you're forced to use a different language than the one you've accumulated all this code in.
    What am I missing here? Why shouldn't more focus be placed on generalist languages? Are generalist languages as a category dying or alive and well?
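    To make the first point concrete, here is a minimal Java-calling-native sketch; the library name "signal_native" and the method names are hypothetical, assumed only for illustration. Primitives and arrays of primitives cross the boundary comfortably, while richer structures have to be flattened or marshalled by hand on both sides.
        // Illustrative sketch of a language boundary. The native method signature
        // is limited to types both sides understand; the assumed native library
        // and its methods are hypothetical, not a real API.
        public class NativeBridge {
            static {
                System.loadLibrary("signal_native"); // assumed native library
            }

            // Easy case: a double[] maps directly onto a C pointer plus a length.
            public static native double[] lowPassFilter(double[] samples, double cutoffHz);

            // Richer structures (here, named channels) don't cross the boundary
            // directly, so they get flattened to primitive arrays before the call.
            public static double[] filterAll(java.util.Map<String, double[]> channels, double cutoffHz) {
                return channels.values().stream()
                        .map(samples -> lowPassFilter(samples, cutoffHz))
                        .flatMapToDouble(java.util.stream.DoubleStream::of)
                        .toArray();
            }
        }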

    Read the article

  • NEW! Oracle Unified Business Process Management Specialization!

    - by michaela.seika(at)oracle.com
    Be one of the very first to become an Oracle Unified Business Process Management Specialist! Check out the Oracle Unified Business Process Management Knowledge Zone and go to the Specialization criteria to learn how you can become a BPM Specialized Partner. Pass the following assessment tests and exam to meet the competency criteria:
    · Oracle Unified Business Process Management Suite 11g Sales Specialist
    · Oracle Unified Business Process Management Suite 11g PreSales Specialist Assessment
    · Oracle Unified Business Process Management Suite 11g Essentials (1Z1-560) Exam
    Go to the OPN Competency Center to access the Specialist Guided Learning Paths and Boot Camp to get the product information that can help you pass the tests:
    · Oracle Unified Business Process Management 11g Sales Specialist GLP
    · Oracle Unified Business Process Management 11g PreSales Specialist GLP
    · Oracle Unified Business Process Management 11g Implementation Specialist GLP
    · Oracle Unified Business Process Management Suite 11g Implementation Boot Camp
    The Oracle Unified Business Process Management Suite 11g Essentials (1Z1-560) Exam is available in beta testing. Pass the exam to become an Oracle Unified Business Process Management Certified Implementation Specialist! As an incentive we are offering FREE beta exam vouchers to early-adopter Partners. As there are a limited number of free vouchers available, requests will be processed on a first-come, first-served basis. To request a voucher, send an e-mail to [email protected] specifying the exam name and your contact information: name, job role, and company name. Register for the OU beta exam at the nearest Pearson VUE testing center.
    For More Information:
    · Oracle Certification Program Beta Exams
    · OPN Certified Specialist exams
    · OPN Certified Specialist FAQ

    Read the article

  • Reinventing the Wheel, why should I?

    - by Mercfh
    So I have this problem. It may be my OCD (I have OCD; it's not severe.....but it makes me very..let's say specific about certain things, programming being one of them) or it may be the fact that I graduated college and still feel "meh" at programming. Reading This made me think "OH that's me!" but that's not really my main problem. My big problem is....anytime I'm using a high-level language/API/etc. I always think to myself that I'm not really "programming". I know I know...it sounds stupid. But I feel like....if I can't figure out how to do it at the lowest level then I'm not really "understanding" it. I do this for just about every new technology I learn. I look at the lowest level and try to understand it. Sometimes I do.....most of the time I don't, I mean I've only really been programming for 4 years (at college, if you even call it programming.....our university's program was "meh"). For instance I do a little bit of embedded programming (with the Atmel AVR 8-bit/Arduino stuff). And I can't bring myself to use the C compiler, even though it's 8 million times easier than using assembly......it's stupid I know... Does anyone else feel like this? I think it's just my OCD that makes me feel this way....but has anyone else ever felt like they need to go down to the lowest level of the language to even be satisfied with using it? I apologize for the very, very odd question, but I think it really hinders me in getting deeply invested in a programming language and making a real application of my own. (it's silly I know)

    Read the article

  • Starting with a text-based MUD/MUCK game

    - by Scott Ivie
    I’ve had this idea for a video game in my head for a long time but I’ve never had the knowledge or time to get it done. I still don’t really, but I am willing to dedicate a chunk of my time to this before it’s too late. Recently I started studying Lua script for a program called “MUSH Client”, which works for MU* telnet-style text games. I want to use the GUI capabilities of MUSH Client with a MU* server to create a basic game, but here is my dilemma. I figured this could be a suitable starting place for me. BUT… Because I’m not very programmer-savvy yet, I don’t know how to download/install/use the MU* server software. I was originally considering Protomuck because a few of the MU*s I was more impressed with began there. http://www.protomuck.org/ I downloaded it, but I guess I'm too used to GUI-style programs, so I'm having great difficulty figuring out what to do next. Does anyone have any suggestions? Does anyone even know what I'm talking about? heh..

    Read the article

  • How is software used in critical life-or-death systems tested?

    - by waiwai933
    An airplane, as opposed to, for example, a website, is a system where any failure in certain systems is completely unacceptable, since errors in e.g. flight monitoring can cause the autopilot to malfunction and do a dive. Obviously, this doesn't happen since the brilliant engineers at Boeing and Airbus have checks in the autopilot to make sure it doesn't suddenly decide a dive is a perfectly acceptable and safe maneuver. Or perhaps the computer crashes, and the pilots in the newer fly-by-wire aircraft can no longer actually fly the plane. Of course, there are various safety procedures and redundancies built into these systems to prevent a crash (of both the software and the aircraft). However, on the other hand, it's quite obvious that software isn't perfect—both open source and closed source software do crash regularly, and only the simplest "Hello World" program doesn't fail. How can the engineers who design the software systems in the aeronautic, medical, and other life-or-death industries manage to test their software so that it doesn't fail (and if it does fail, at least fail gracefully)? I'm desperately hoping that you're not all going to go: "Oh, I work for Boeing/Airbus/(some other company) and it's not! Have fun on your next flight/hospital visit."

    Read the article

  • Designing a spawning system

    - by Vlad
    I played this game recently: http://www.kongregate.com/games/JuicyBeast/knightmare-tower and I am amazed by the way different monsters are being spawned. I personally developed my own shooter game, and I added a time-based but also a count-based spawning system. By count-based I mean that when there are 5 enemies on stage, spawning stops. But this is one example. My question is: how are these spawning mechanisms built? Is there some pattern or some theory for how they are built? Are there some online materials/pages where I can improve my knowledge? To summarize, let's just say we have 6 types of monsters. I start the game and kill monsters of type 1, 2 and 3 all the time. Once I pass the first ceiling, like in the game above, monster type 4 appears. And so on. As I progress through the game, the same system of 6 types of monsters stays, but they become more and more resilient and dangerous. So I must also improve to be able to destroy the same monsters, now stronger. My question is simple: are there some theories built or written for developing this type of intelligent system? Note: This is a general question, not tied to a specific game or to how exactly the game should work. I am capable of programming my own mechanisms, but I think I need some help. Thanks.
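    One simple pattern that matches the behaviour described above is a tier table driven by progress: elapsed time (or height/score) unlocks more monster types, a cap limits how many are alive at once, and the same monsters get stronger as progress grows. A minimal Java sketch with illustrative names, not tied to any particular engine:
        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        // Illustrative sketch of a progress-driven spawner: a spawn interval
        // (time-based), a live-enemy cap (count-based), tier unlocking, and
        // stat scaling so the same monster types get tougher over time.
        final class Monster {
            final int type;
            int health;
            boolean dead;

            Monster(int type, int health) { this.type = type; this.health = health; }

            static int baseHealth(int type) { return 10 * type; } // illustrative scaling
        }

        final class Spawner {
            private static final int MAX_ALIVE = 5;           // count-based limit
            private static final double SPAWN_INTERVAL = 2.0; // seconds, time-based limit

            private final Random rng = new Random();
            private final List<Monster> alive = new ArrayList<>();
            private double spawnTimer = 0.0;

            void update(double dt, double progress, List<Monster> stage) {
                alive.removeIf(m -> m.dead);
                spawnTimer += dt;
                if (spawnTimer < SPAWN_INTERVAL || alive.size() >= MAX_ALIVE) {
                    return;
                }
                spawnTimer = 0.0;

                // Each "ceiling" of progress unlocks one more monster type, up to 6.
                int unlockedTypes = Math.min(6, 3 + (int) (progress / 100.0));
                int type = 1 + rng.nextInt(unlockedTypes);

                // Same monsters, but stronger as progress grows.
                double healthScale = 1.0 + progress / 200.0;
                Monster m = new Monster(type, (int) (Monster.baseHealth(type) * healthScale));
                alive.add(m);
                stage.add(m);
            }
        }
    Tuning then happens in one place: the interval, the cap, the unlock thresholds and the scaling curve, which is also where difficulty curves from playtesting usually end up encoded.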

    Read the article

  • Custom daemon script: works, but does not run at boot / startup

    - by pearjoint
    This is Ubuntu 10.10 Maverick. I have the following shell script in init.d that I want to run as a "daemon" (really a background service with start/stop/restart) at system startup. There is a symlink in rc3.d. I tried 4 and 5 too. (Ideally this would initialize before graphical login happens and before a user logs in.) IMPORTANT: the script works 100% as expected and required when testing it with service MetaLeapDaemon start and service MetaLeapDaemon stop. (This shell script calls a Python program which makes sure the appropriate .pid files are both created at startup and deleted at exit.) So generally it works fine, but now my only issue is why it will not be run at any of the run levels I tried. I know for sure it isn't run because the log file it normally creates does not get created. As you can see (by the lack of any uid:gid args in the start-stop-daemon commands), this would currently run only under root; is this forbidden in a default setup? Here's the script, pretty much your run-of-the-mill daemon script really:
        #! /bin/sh

        DAEMON=/opt/metaleap/_core/daemon/MetaLeapDaemon.py
        NAME=MetaLeapDaemon
        DESC="MetaLeapDaemon"

        test -f $DAEMON || exit 0

        set -e

        case "$1" in
          start)
                start-stop-daemon --start --pidfile /var/run/$NAME.pid --exec $DAEMON
                ;;
          stop)
                start-stop-daemon --stop --pidfile /var/run/$NAME.pid
                ;;
          restart)
                start-stop-daemon --stop --pidfile /var/run/$NAME.pid
                sleep 1
                start-stop-daemon --start --pidfile /var/run/$NAME.pid --exec $DAEMON
                ;;
          *)
                N=/etc/init.d/$NAME
                echo "Usage: $N {start|stop|restart}" >&2
                exit 1
                ;;
        esac

        exit 0

    Read the article

  • Allow WRITE access to local folders in 2003 SBS AD

    - by Dan M.
    Have an SBS2003 client with a mess of a domain that is in the process of being cleaned. But, for the life of me, I cannot find a setting that will allow write access to the local hard disk for domain users with redirected profiles (to the server). This is needed only for one program that will not follow a symbolic link to the network path; instead it seems to be hard-coded to the %appdata% folder, but only on the C: drive.... So the question is: how can I allow "Domain Users" write access to the local %appdata% directory? I have tried setting it manually on a machine but it kept resetting to RO no matter how many times I tried. Every time I would uncheck the RO property, it would reset sometime right after I hit OK. Thanks in advance! Dan

    Read the article

  • Estimate of Hits / Visits / Uniques in order to fall within a given Alexa Tier?

    - by Alex C
    I was wondering if anyone could offer up rough estimates of how many hits a day move you into a given Alexa rank?
    Top 5,000
    Top 10,000
    Top 50,000
    Top 100,000
    Top 500,000
    Top 1,000,000
    I know this is incredibly subjective, and thus the broad brush strokes with the number ranges... BUT I've got a site currently ranked just over 1.2M worldwide and over 500k in the USA (http://www.alexa.com/siteinfo/fstr.net). Pretty cool for something hand-built on weekends (pat self on back). I was applying to an ad platform and was told that their program doesn't accept webmasters who have an Alexa rank of greater than 100,000. (Time to take back that pat on the back I guess.) I know that my hits in the last 30 days are somewhere on the order of 15,000 uniques and 20,000 pageviews. So I'm wondering how much harder I have to work to achieve my next "goals"? I'd like to break into the top million, then re-evaluate from there. It'd be nice to know what those targets translate into (very roughly of course). I imagine that Alexa ranks and tiers become very much exponential as you move up the ranks, but even hearing anecdotal evidence from other webmasters would be really useful to me. (i.e.: I have a site that is ranked X and it got Y hits in the last 30 days) Thanks :) - Alex

    Read the article

  • Unlock file on Windows Server 2003 by remote desktop without rebooting

    - by BalusC
    We have several Windows Server 2003 machines running, each with its own purpose. There are scheduled jobs which synchronize some files over SFTP using WinSCP. Every so often a newly copied file is left locked in the "inbox" folder without any reason. The machine's own background task (programmed in Java) then can't move it to the "processed" folder anymore after processing it. Manually moving it only yields the well-known error message "Cannot move [filename]: it is being used by another person or program." The only resort is to reboot the machine, but we would of course like to avoid that. Any suggestions? I tried Unlocker, which works fine locally on WinXP, but doesn't work on those Win2K3 machines over remote desktop (the unlock option doesn't show up in the right-click context menu). I tried Process Explorer as well, as described in this blog article, but it caused the server to crash (not sure if that's because it's executed through remote desktop).

    Read the article
