Search Results

Search found 3695 results on 148 pages for 'failure'.

Page 87/148 | < Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >

  • USB blocks suspend on a Gigabyte GA-890GPA-UD3H with ATI SB700/SB800

    - by poolie
    Following on from question 12397, I'd still like to get suspend working on my Phenom II X6 / GA-890GPA desktop machine running current Maverick. When I run pmi action suspend the machine doesn't crash, but it also doesn't suspend. The kernel logs show: PM: Syncing filesystems ... done. PM: Preparing system for mem sleep Freezing user space processes ... (elapsed 0.02 seconds) done. Freezing remaining freezable tasks ... (elapsed 0.01 seconds) done. PM: Entering mem sleep Suspending console(s) (use no_console_suspend to debug) pm_op(): usb_dev_suspend+0x0/0x20 returns -2 PM: Device usb8 failed to suspend async: error -2 PM: Some devices failed to suspend PM: resume of devices complete after 0.430 msecs PM: resume devices took 0.000 seconds PM: Finishing wakeup. Restarting tasks ... done. PM: Syncing filesystems ... I've tried disconnecting all the USB devices, and then connecting in to run pmi over ssh, and I get the same failure. With everything unplugged, I see the following usb devices: Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub and lspci shows the physical devices are: 00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 00:14.5 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller 00:16.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:16.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 02:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) Booting with no_console_suspend makes no difference.
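
    A quick way to narrow down which hub or controller is refusing to sleep is to dump each USB device's wakeup setting from sysfs and match it against the "Device usb8 failed to suspend" line. The sketch below is only a diagnostic aid and assumes the standard sysfs layout under /sys/bus/usb/devices; it is not part of pm-utils.

        #!/usr/bin/env python3
        # Sketch: list every USB device the kernel knows about together with
        # its wakeup setting, so the bus reported in the suspend failure can
        # be matched to a physical controller/port. Paths are the usual sysfs
        # layout; adjust if your kernel differs.
        import os

        SYSFS_USB = "/sys/bus/usb/devices"

        def read(path):
            try:
                with open(path) as f:
                    return f.read().strip()
            except IOError:
                return "n/a"

        for dev in sorted(os.listdir(SYSFS_USB)):
            base = os.path.join(SYSFS_USB, dev)
            print("%-10s wakeup=%-8s %s" % (dev,
                  read(os.path.join(base, "power", "wakeup")),
                  read(os.path.join(base, "product"))))

    Writing "disabled" into the power/wakeup file of the suspect hub before running pmi again is one way to test whether that particular device is the blocker.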

    Read the article

  • Jenkins Parameterized Trigger + Copy Artifact

    - by Josh Kelley
    I'm working on setting up Jenkins to handle our release builds. A release build consists of a Windows installer that includes some binaries that must be built on Linux. Here's what I have so far: The Windows portion and Linux portion are set up as separate Jenkins projects. The Windows project is parameterized, taking the Subversion tag to build and release. As part of its build, the Windows project triggers a build of that same Subversion tag for the Linux project (using the Parameterized Trigger plugin) then copies the artifacts from the Linux project (using the Copy Artifact plugin) to the Windows project's workspace so that they can be included in the Windows installer. Where I'm stuck: Right now, Copy Artifact is set up to copy the last successful build. It seems more robust to configure Copy Artifact to copy from the exact build that Parameterized Trigger triggered, but I'm having trouble figuring out how to make that work. There's an option for a "build selector" parameter that I think is intended to help with this, but I can't figure out how it's supposed to be set up (and blindly experimenting with different possibilities is somewhat painful when the build takes an hour or two to find success or failure). How should I set this up? How does build selector work?
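
    For what it's worth, the "exact build" pairing can also be sketched outside the two plugins via Jenkins' standard remote API: trigger the Linux job with the tag, note which build number actually carries that tag, and copy artifacts from that number instead of lastSuccessfulBuild. The job name, the SVN_TAG parameter and the polling logic below are assumptions for illustration only (authentication and CSRF crumbs are omitted).

        # Sketch: pin the downstream build instead of relying on lastSuccessfulBuild.
        # Assumes a parameterized job "linux-build" with an SVN_TAG parameter and
        # anonymous access; adapt URLs, auth and error handling for a real setup.
        import time
        import requests  # third-party: pip install requests

        JENKINS = "http://jenkins.example.com"
        JOB = "linux-build"
        TAG = "tags/release-1.2.3"

        # Trigger the parameterized build (the same endpoint the Parameterized
        # Trigger plugin ultimately calls).
        requests.post("%s/job/%s/buildWithParameters" % (JENKINS, JOB),
                      params={"SVN_TAG": TAG})

        # Poll until a build carrying our tag shows up, then remember its number.
        build_no = None
        while build_no is None:
            time.sleep(30)
            info = requests.get("%s/job/%s/lastBuild/api/json" % (JENKINS, JOB)).json()
            params = [p for act in info.get("actions", [])
                      for p in act.get("parameters", [])]
            if any(p.get("name") == "SVN_TAG" and p.get("value") == TAG for p in params):
                build_no = info["number"]

        print("copy artifacts from %s #%d rather than lastSuccessfulBuild" % (JOB, build_no))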

    Read the article

  • How to install .deb file from within preinst script

    - by Ashwin D
    I have my own application packaged using dpkg. The application depends on several deb files which I'm trying to install from within the preinst script of my application. The preinst script checks if a dependent deb file is installed; if not, it goes on to install it using the dpkg -i command. This is repeated for all the dependent deb files needed by the main application. When I try to install the main application using dpkg -i, the command returns a failure when trying to execute the preinst script. Below is that error message: dpkg: error: dpkg status database is locked by another process. I deleted the /var/lib/dpkg/lock file and retried to install the application, but to no avail. If I run the preinst script separately like any other shell script, it runs without any issue. All the deb files will be installed properly. So, the issue is only when this preinst script is being run automatically by the dpkg -i command. I'm lost trying to determine the root cause. If anyone can shed some light on what the real issue might be, their help will be greatly appreciated. Thank you. Ashwin

    Read the article

  • Problem with dpkg-preconfigure, how to correct?

    - by Eric Wilson
    I was trying to install TeamViewer, and I followed the instructions here even though they specify 11.10 instead of 12.04 (what I'm running). In particular, I executed: $ wget http://www.teamviewer.com/download/teamviewer_linux.deb $ sudo dpkg -i teamviewer_linux.deb The dpkg command failed, and after this point my packaging system has been broken. The software center instructs me to try: $ sudo apt-get -f install which leads to Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages will be REMOVED: teamviewer7:i386 0 upgraded, 0 newly installed, 1 to remove and 17 not upgraded. 9 not fully installed or removed. Need to get 89.0 kB of archives. After this operation, 81.9 MB disk space will be freed. Do you want to continue [Y/n]? y Get:1 http://us.archive.ubuntu.com/ubuntu/ precise/main dash amd64 0.5.7-2ubuntu2 [89.0 kB] Fetched 89.0 kB in 1s (83.9 kB/s) E: Sub-process /usr/sbin/dpkg-preconfigure --apt || true returned an error code (100) E: Failure running script /usr/sbin/dpkg-preconfigure --apt || true At this point I'm stumped.

    Read the article

  • Cannot log in to the Dashboard / Unable to find the server at mykeystoneurl

    - by neo0
    I installed Dashboard following this guide: http://wiki.openstack.org/OpenStackDashboard Everything went fine, but when I run the server, I cannot log in with the username and password from the DATABASES config in local_settings.py. Here's my config: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'dashboarddb', 'USER': 'nova', 'PASSWORD': 'nova', 'HOST': 'localhost', 'default-character-set': 'utf8' }, } When I run the Dashboard server and enter the username + password, it returns this error in the browser: Unable to find the server at mykeystoneurl (HTTP 400) And on the command line: DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar. DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar. Validating models... 0 errors found Django version 1.3.1, using settings 'openstack_dashboard.settings' Development server is running at http://0.0.0.0:8888/ Quit the server with CONTROL-C. Request returned failure status. Traceback (most recent call last): File "/home/us/horizon/.venv/src/python-keystoneclient/keystoneclient/client.py", line 121, in request body = json.loads(body) File "/usr/lib/python2.7/json/__init__.py", line 326, in loads return _default_decoder.decode(s) File "/usr/lib/python2.7/json/decoder.py", line 366, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded [06/Mar/2012 15:20:03] "POST /auth/login/ HTTP/1.1" 200 3735 I also tried logging in as "admin" with the password "password" or "secrete", but it didn't work. What's wrong? Thank you!
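
    The literal "mykeystoneurl" in the error suggests the Keystone endpoint in local_settings.py is still set to its placeholder; the DATABASES block only configures Django's database backend, while the login form authenticates against Keystone. A sketch of the relevant settings (the names follow the Essex-era local_settings.py.example; the host, port and role below are assumptions for your deployment):

        # local_settings.py (sketch) - point Horizon at a real Keystone endpoint
        # instead of the "mykeystoneurl" placeholder. Values are examples only.
        OPENSTACK_HOST = "127.0.0.1"   # host where keystone is running
        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

        # The username/password typed into the login form are then validated
        # by Keystone, not against the MySQL user configured in DATABASES.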

    Read the article

  • Offshoring: does it ever work?

    - by DanSingerman
    I know there has been a fair amount of discussion on here about outsourcing/offshoring, and the general opinion seems to be that at best it is difficult, and at worst it fails. I have direct experience of offshoring myself; a previous company where I was a dev manager wanted to send some development offshore, and we ran a pilot scheme to see how well it would work. Of course it was a complete failure, although it is not completely clear to me whether this was down to the offshore devs being less talented, the process, or other factors (no doubt it was really a combination). I can see as a business how offshoring looks attractive (much lower day rate), but as far as I can see, the only way it could possibly work is if you do exceptionally detailed design up front, with incredibly detailed specifications; and by the time you have invested in producing that, you have probably spent nearly as much as if you had written the actual code locally (which I think is an instance of No Silver Bullet) So, what I want to know is, does anyone here have any experience of offshoring actually working ever? Especially if there are any success stories of it working in a semi-agile way? I know there are developers here from all over the World; has anyone worked on an offshore project they consider successful?

    Read the article

  • Are there any real-world cases for C++ without exceptions?

    - by Martin
    In When to use C over C++, and C++ over C? there is a statement regarding code size / C++ exceptions: Jerry answers (among other points): (...) it tends to be more difficult to produce truly tiny executables with C++. For really small systems, you're rarely writing a lot of code anyway, and the extra (...) to which I asked why that would be, to which Jerry responded: the main thing is that C++ includes exception handling, which (at least usually) adds some minimum to the executable size. Most compilers will let you disable exception handling, but when you do the result isn't quite C++ anymore. (...) which I do not really doubt on a technical real world level. Therefore I'm interested (purely out of curiosity) to hear from real world examples where a project chose C++ as a language and then chose to disable exceptions. (Not just merely "not use" exceptions in user code, but disable them in the compiler, so that you can't throw or catch exceptions.) Why does a project choose to do so (still using C++ and not C, but no exceptions) - what are/were the (technical) reasons? Addendum: For those wishing to elaborate on their answers, it would be nice to detail how the implications of no-exceptions are handled: STL collections (vector, ...) do not work properly (allocation failure cannot be reported) new can't throw Constructors cannot fail

    Read the article

  • Why doesn't there seem to be any development in the field of 3D VR gear, especially with regard to gaming?

    - by neuviemeporte
    I remember that way back around 1995, there was this big craze with VR in the media, and a whole bunch of (mostly mediocre) games labeled as "virtual-reality-interactive-movie (...)" were published. If I recall correctly, the first 3D VR helmet was called VFX-1 and was sold bundled with Descent and some dedicated joystick. I never owned one, and I read just one review which was mostly enthusiastic, but pointed to some weak points, like the eyes getting tired after an hour or so of playing. Then the whole thing basically flickered down and died. I suppose the main reason it wasn't successful was that the hardware of the day was not powerful enough, the VR gear's design wasn't perfected to make it comfortable and natural to use, and the companies that made it failed to market it successfully. What I can't understand is why there isn't any development in the field today. There is some VR-ish hardware, mostly targeted at the consoles (Kinect, Wii remote, TrackIR), but all projects for creating a 3D head-mounted display system seem to be in early infancy; they appear once at a trade show somewhere and aren't heard of again. I think it could work great with head tracking in some of today's shooters, flight sims (TrackIR is nice but the movement scale translation is awkward) and other games with an FPP POV. Is there any technological reason why decent VR headgear can't be made today, or is it just that nobody really cares/everyone is scared to repeat the '90s failure?

    Read the article

  • Lenovo Y460 Intel Driver Secondary Display Flickering

    - by ultimatebuster
    This is a part of the massive dump of problems I'm encountering with my Lenovo Y460 and Ubuntu. Problem: ATI PowerXpress doesn't really work. It doesn't work as I have to use the open source driver with hacks. I turned off the ATI card at boot; details on how I accomplished that: http://ubuntuforums.org/showthread.php?p=10955831#post10955831 Installing the ATI drivers results in a failure of the Intel drivers to work with Ubuntu Classic (all animations have to be turned off). Any way to fix this problem to allow switchable graphics to work? The problem above has been fixed by FGLRX (Catalyst 11.6); is it compatible with kernel 2.6.39? However, there's another issue. If I connect my secondary monitor (VGA 17'') while using the Intel driver, I am not able to use that screen, as there's flickering and tearing, making the screen blurry and unusable. Here's the fglrxinfo: $ fglrxinfo display: :0.0 screen: 0 OpenGL vendor string: Tungsten Graphics, Inc OpenGL renderer string: Mesa DRI Intel(R) Ironlake Mobile GEM 20100330 DEVELOPMENT OpenGL version string: 1.4 (2.1 Mesa 7.10.2) Any fixes for that? Potential related bug report on Launchpad: https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/750259 However, I can't confirm it because the video showing that bug is much more dramatic than what I have; mine is tiny flickering that isn't captured by video cameras (I've tried), but it is enough to make the screen blurry for humans.

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-26

    - by Bob Rhubart
    Oracle Introduces Free Version of Oracle Application Development Framework Several community bloggers have already written about Oracle Application Development Framework (ADF) Essentials, the free version of Oracle ADF. Here's the official press release. ADF Essentials - Quick Technical Review | Andrejus Baranovskis "This post is just a quick review for ADF Essentials on Glassfish," says Oracle ACE Director Andrejus Baranovskis. "I will do a proper performance test soon to compare ADF performance on (...)" 5 ways to think like a cloud architect | ZDNet "Is enterprise architecture ready for the cloud? Is the cloud ready for EA?" Joe McKendrick asks. "Cloud represents a different way of thinking. But we've been here before." Configuring trace file size and number in WebCenter Content 11g | Kyle Hatlestad A quick tip from Oracle Fusion Middleware A-Team member Kyle Hatlestad. Thought for the Day "Elegance is not a dispensable luxury but a factor that decides between success and failure." — Edsger W. Dijkstra (May 11, 1930 – August 6, 2002) Source: SoftwareQuotes.com

    Read the article

  • when I type apt-get -f install, I get the error message

    - by gene
    xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed. Also, I cannot upgrade my software; it says that the package system is broken, with this detailed information: The following packages have unmet dependencies: xserver-xorg-core: Depends: xserver-common (>= 2:1.11.4-0ubuntu10.8) but 2:1.11.4-0ubuntu10.8 is installed When I issue sudo apt-get update, the output seems fine; the source is (sorry, the output has too many links for me to post it in full): http://archive.ubuntu.com Reading package lists... Done ====================== When I issue sudo apt-get dist-upgrade, the output is: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: xserver-xorg-core : Breaks: xserver-xorg-video-5 E: Unmet dependencies. Try using -f. ================== When I issue 'sudo apt-get -f install', the output is: dpkg: dependency problems prevent configuration of xserver-xorg-video-radeon: xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed. xserver-xorg-video-radeon (1:6.12.1-0ubuntu2) provides xserver-xorg-video-5. dpkg: error processing xserver-xorg-video-radeon (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: xserver-xorg-video-radeon E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Two graphical entities, smooth blending between them (e.g. asphalt and grass)

    - by Gabriel Conrad
    Suppose that in a scenario there are, among other things, a tarmac strip and a meadow. The tarmac has an asphalt texture and its model is a long triangle strip that might bifurcate at some point into other, tinier strips, and suppose that the meadow is covered with grass. What can be done to make the two graphical entities seem less like they were cut out from a photo and just pasted one on top of the other at the edges? To better understand the problem, picture a strip of asphalt and a plane covered with grass. The grass texture should also "enter" the tarmac strip a little bit at the edges (i.e. a feathering effect). My ideas involve two approaches: put two textures on the tarmac entity, but that involves a serious restriction in how the strip is modeled and how its texture coordinates are mapped; or try to apply a post-processing filter that mimics a bloom effect where "grass" is used instead of light. This could be a terrible failure to achieve correct results. So, is there a better or at least a more obvious way that's widely used in the game dev industry?
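
    To make the first idea concrete, here is a tiny sketch in pygame (chosen only for brevity; the same idea maps to a shader that blends two textures by a mask): the asphalt strip gets a per-pixel alpha ramp at its edges so the grass underneath shows through and the hard seam disappears. The texture file names and the feather width are placeholders.

        # Sketch: feather the edges of an asphalt strip over grass by writing an
        # alpha ramp into the strip. Assumes grass.png and asphalt.png exist;
        # sizes and the feather width are illustrative only.
        import pygame

        FEATHER = 24  # pixels of blend on each long edge of the strip

        pygame.init()
        screen = pygame.display.set_mode((640, 480))
        grass = pygame.image.load("grass.png").convert()
        asphalt = pygame.image.load("asphalt.png").convert_alpha()

        # Opaque in the middle of the strip, fading to 0 over FEATHER pixels
        # at the top and bottom edges.
        w, h = asphalt.get_size()
        for y in range(h):
            edge_dist = min(y, h - 1 - y)
            alpha = min(255, int(255 * edge_dist / float(FEATHER)))
            for x in range(w):
                r, g, b, _ = asphalt.get_at((x, y))
                asphalt.set_at((x, y), (r, g, b, alpha))

        screen.blit(grass, (0, 0))
        screen.blit(asphalt, (0, 200))  # the ramp lets grass bleed into the tarmac
        pygame.display.flip()
        pygame.time.wait(3000)

    For a bifurcating triangle-strip mesh rather than a 2D blit, the equivalent trick is to carry the same 0-to-1 blend factor in a vertex attribute or a mask texture and mix the grass and asphalt textures in the fragment shader, which avoids both the modeling restriction and the post-processing route.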

    Read the article

  • Disaster Recovery Example

    Previously, I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring. Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases that are stored off site. The next phase is the file replication of data amongst our web servers, which are also backed up daily at our colocation facility. In addition to the files located on the server, files are also stored locally on development machines, and again backed up using version control software. Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution and increased availability. With our servers behind a load balancer, our system has the ability to accept multiple requests simultaneously because the load is split between multiple servers. Plus, if one server is slow or experiencing a failure, the traffic is diverted amongst the other servers connected to the load balancer, allowing the affected server to get back online. The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS. According to Disasterrecovery.org, disaster recovery planning is the way companies successfully manage crises with minimal cost and effort and maximum speed compared to others that are forced to make decisions out of desperation when disasters occur. In addition, Sun Guard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing and utilities, as well as indirect costs, such as floor space, power protection, physical and information security, and management. Also, availability requirements need to be determined per application and system, as well as the strategies for recovery.
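
    As a toy illustration of the load-balancing behaviour described above (requests spread across members, traffic diverted away from a member that fails its health check), here is a minimal round-robin sketch; the server list and the /health URL are made up for the example.

        # Minimal sketch of health-checked round-robin selection, showing how
        # traffic skips a failed member. Hostnames and the /health endpoint
        # are placeholders.
        import itertools
        import urllib.request

        SERVERS = ["http://web1.example.com", "http://web2.example.com",
                   "http://web3.example.com"]
        _cycle = itertools.cycle(SERVERS)

        def healthy(server, timeout=2):
            """Crude health check: the member answers its status page."""
            try:
                urllib.request.urlopen(server + "/health", timeout=timeout)
                return True
            except Exception:
                return False

        def pick_server():
            """Return the next healthy member in round-robin order."""
            for _ in range(len(SERVERS)):
                candidate = next(_cycle)
                if healthy(candidate):
                    return candidate
            raise RuntimeError("no healthy servers available")

        print(pick_server())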

    Read the article

  • Why can't tuxboot and ubuntu play well together?

    - by mmr
    I'm trying to get clonezilla to run off of a usb stick, and it seems that the right way to do that is via tuxboot. Tuxboot is not compilable on Ubuntu. I used git to get it from the repository and then ran the 'install' script (because building it is apparently not allowed, since the build script just tries to install Windows things). Qmake-linux wants my qmake executable to be in the same directory as the stuff I pulled down, and let's just say that if there's a way to do this easily, I ain't seein' it. So then I download the linux file, the most recent of which is tuxboot-linux-25. Try to run it, get a failure that libpng12.so.0 isn't found. OK, then I go to install that via the instructions I found on the web, but firefox seems to have already deleted them from my history (yay!) Then I add the /usr/local/lib directory to ldconfig via emacs (had to install that too, of course): http://ubuntuforums.org/showthread.php?t=369848 I still get the error that libpng12.so.0 cannot be opened because 'No such file or directory'. ldconfig -p | grep libpng shows that the library is there, but it still doesn't seem to be findable. What to do next? (For the record, doing this in Windows is painless -- download, click, and it's done. But I'm trying to be all linuxy and get away from Windows for this...)
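
    When ldconfig -p lists the library but the program still reports 'No such file or directory', one thing worth ruling out is an architecture mismatch between the downloaded tuxboot binary and the libpng12 copies on disk (the loader message is the same in both cases). A small diagnostic sketch, assuming the usual ELF layout and library directories:

        # Sketch: try to load libpng12 and report the ELF class (32- vs 64-bit)
        # of any copies found. The directory list is an assumption; extend it
        # for your system.
        import ctypes
        import glob
        import os

        LIB = "libpng12.so.0"
        SEARCH_DIRS = ["/lib", "/usr/lib", "/usr/local/lib",
                       "/lib/i386-linux-gnu", "/usr/lib/i386-linux-gnu",
                       "/lib/x86_64-linux-gnu", "/usr/lib/x86_64-linux-gnu"]

        try:
            ctypes.CDLL(LIB)
            print("the dynamic loader can open %s from this process" % LIB)
        except OSError as err:
            print("loader failed: %s" % err)

        for d in SEARCH_DIRS:
            for path in glob.glob(os.path.join(d, LIB + "*")):
                with open(path, "rb") as f:
                    header = f.read(5)
                # Byte 4 of an ELF header (EI_CLASS): 1 = 32-bit, 2 = 64-bit.
                bits = {1: "32-bit", 2: "64-bit"}.get(header[4], "unknown")
                print("%s: %s" % (path, bits))

    If the copies found do not match the architecture of the tuxboot binary itself, that mismatch (rather than the ldconfig path) is the direction to look in.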

    Read the article

  • Ubuntu One has high CPU usage but no syncing

    - by Peter
    Over the weekend I updated my computer to Windows 8. So far Ubuntu One had been running smoothly, but ever since the update (a clean, new install) Ubuntu One doesn't sync any more. In Windows 7 it would start to sync at full internet speed as soon as I dropped a file. But now in Windows 8, as soon as I drop a new file into the Ubuntu One folder, CPU usage goes up to about 50 % and no network traffic occurs. It stays like that for a couple of minutes, CPU usage goes back to normal and then the client says that all is in sync - which isn't true. Is it too early for Windows 8? Do others have the same problem, or is there something I can do about it? I tried a couple of different things and realized that if the file size is 20 MB, the files get uploaded. The original file was 1.5 GB. It also didn't work with 200, 100 and 50 MB files. But even with 20 MB files, the upload is very slow and not steady. The log gives plenty of this error: - twisted - ERROR - Failure: exceptions.TypeError, the meaning of which I don't know. By the way, the account works just fine on the Ubuntu 12.04 partition. Any help is greatly appreciated. -Peter

    Read the article

  • Problem with A* implementation in pygame

    - by piyush3dxyz
    Yesterday I decided to make an RTS game in pygame (pygame is best). I figured out many components of an RTS game, like unit selection, health and resources, but there is one thing I still do not understand: A* pathfinding in pygame. I have also done a little bit of research on the wiki, articles and papers, but still can't figure out the problem. Here is the pseudocode for the wiki A* implementation: function A*(start,goal) closedset := the empty set // The set of nodes already evaluated. openset := {start} // The set of tentative nodes to be evaluated, initially containing the start node came_from := the empty map // The map of navigated nodes. g_score[start] := 0 // Cost from start along best known path. // Estimated total cost from start to goal through y. f_score[start] := g_score[start] + heuristic_cost_estimate(start, goal) while openset is not empty current := the node in openset having the lowest f_score[] value if current = goal return reconstruct_path(came_from, goal) remove current from openset add current to closedset for each neighbor in neighbor_nodes(current) if neighbor in closedset continue tentative_g_score := g_score[current] + dist_between(current,neighbor) if neighbor not in openset or tentative_g_score <= g_score[neighbor] came_from[neighbor] := current g_score[neighbor] := tentative_g_score f_score[neighbor] := g_score[neighbor] + heuristic_cost_estimate(neighbor, goal) if neighbor not in openset add neighbor to openset return failure
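
    For reference, below is a minimal, runnable realization of that pseudocode in plain Python using heapq; the grid format (0 = walkable, 1 = blocked), 4-way movement and unit step cost are assumptions, and the result is simply a list of (x, y) cells that a pygame sprite could follow.

        # Minimal grid A* following the wiki pseudocode above.
        import heapq

        def heuristic(a, b):
            # Manhattan distance: admissible for 4-way movement on a grid.
            return abs(a[0] - b[0]) + abs(a[1] - b[1])

        def a_star(grid, start, goal):
            rows, cols = len(grid), len(grid[0])
            open_heap = [(heuristic(start, goal), start)]  # (f_score, node)
            came_from = {}
            g_score = {start: 0}
            closed = set()

            while open_heap:
                _, current = heapq.heappop(open_heap)
                if current == goal:
                    path = [current]              # walk came_from backwards
                    while current in came_from:
                        current = came_from[current]
                        path.append(current)
                    return path[::-1]
                if current in closed:
                    continue
                closed.add(current)

                x, y = current
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if not (0 <= nx < cols and 0 <= ny < rows) or grid[ny][nx]:
                        continue
                    tentative = g_score[current] + 1
                    if tentative < g_score.get((nx, ny), float("inf")):
                        came_from[(nx, ny)] = current
                        g_score[(nx, ny)] = tentative
                        heapq.heappush(open_heap,
                                       (tentative + heuristic((nx, ny), goal), (nx, ny)))
            return None  # the pseudocode's "failure"

        grid = [[0, 0, 0, 0],
                [1, 1, 0, 1],
                [0, 0, 0, 0]]
        print(a_star(grid, (0, 0), (0, 2)))  # start and goal are (x, y) cells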

    Read the article

  • Software Center - "Items cannot be installed or removed until package catalog is repaired"

    - by Stephanie
    I tried to install Back In Time and now I keep getting the message 'Items cannot be installed or removed until package catalog is repaired'. I have tried sudo apt-get install -f and then get Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: backintime-gnome The following packages will be upgraded: backintime-gnome 1 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 1 not fully installed or removed. Need to get 0 B/39.4 kB of archives. After this operation, 24.6 kB of additional disk space will be used. Do you want to continue [Y/n]? When I answer Y, I get the following message: dpkg: dependency problems prevent configuration of backintime-gnome: backintime-gnome depends on backintime-common (= 1.0.7); however: Version of backintime-common on system is 1.0.8-1. dpkg: error processing backintime-gnome (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: backintime-gnome E: Sub-process /usr/bin/dpkg returned an error code (1) stephanie@stephanie-ThinkPad-T61:~$ sudo dpkg --configure -a dpkg: dependency problems prevent configuration of backintime-gnome: backintime-gnome depends on backintime-common (= 1.0.7); however: Version of backintime-common on system is 1.0.8-1. dpkg: error processing backintime-gnome (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: backintime-gnome

    Read the article

  • Creating a SQL Azure Database Should be Easier

    - by Ken Cox [MVP]
    Every time I try to create a database + tables + data for Windows Azure SQL I get errors. One of them is 'Filegroup reference and partitioning scheme is not supported in this version of SQL Server.' It’s partly due to my poor memory (since I’ve succeeded before) and partly due to the failure of tools that should be helping me. For example, when I want to create a script from an existing database on my local workstation, I use SQL Server Management Studio (currently v 11.0.2100.60). I go to Tasks > Generate Scripts which brings up the nice Generate and Publish Scripts wizard. When I go into the Advanced button, under Script for Server Version, why don’t I see SQL Azure as an option by now? The tool should be sorting this out for me, right? Maybe this is available in SQL Server Data Tools? I haven’t got into that yet. Just merge the functionality with SSMS, please. Anyway, I pick an older version of SQL for the target and still need to tweak it for Azure. For example, I take out all the “[dbo].” stuff. Why is it put there by the wizard? I also have to get rid of "ON [PRIMARY]" to deal with the error I noted at the top. Yes, there’s information on what a table needs to look like in SQL Azure, but the tools should know this so I don’t have to mess with it.
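
    Until the tooling does it automatically, the hand edits described above can at least be scripted. Below is a rough sketch that post-processes an SSMS-generated .sql file, stripping the "ON [PRIMARY]" filegroup clauses and the "[dbo]." prefixes; the file names are placeholders and the substitutions are naive (they would also touch those tokens inside string literals), so review the output before running it against SQL Azure.

        # Rough sketch: strip the script fragments SQL Azure (circa 2012) rejects.
        # Input/output names are placeholders; the substitutions are deliberately
        # simple and may over-match inside string literals.
        import re

        with open("generated_by_ssms.sql", encoding="utf-8") as f:
            script = f.read()

        # Drop filegroup placement clauses.
        script = re.sub(r"\s+ON\s+\[PRIMARY\]", "", script, flags=re.IGNORECASE)
        script = re.sub(r"\s+TEXTIMAGE_ON\s+\[PRIMARY\]", "", script, flags=re.IGNORECASE)

        # Drop the schema prefix the wizard insists on adding.
        script = script.replace("[dbo].", "")

        with open("azure_ready.sql", "w", encoding="utf-8") as f:
            f.write(script)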

    Read the article

  • New SPC2 benchmark- The 7420 KILLS it !!!

    - by user12620172
    This is pretty sweet. The new SPC2 benchmark came out last week, and the 7420 not only came in 2nd out of ALL speed scores, but came in #1 for price per MBPS. Check out this table. The 7420 score of 10,704 makes it really fast, but that's not the best part. The price one would have to pay in order to beat it is ridiculous. You can go see for yourself at http://www.storageperformance.org/results/benchmark_results_spc2 The only system on the whole page that beats it was over twice the price per MBPS. Very sweet for Oracle. So let's see, the 7420 is the fastest per $. The 7420 is the cheapest per MBPS. The 7420 has incredible, built-in features, management services, analytics, and protocols. It's extremely stable and as a cluster has no single point of failure. It won the Storage Magazine award for best NAS system this year. So how long will it be before it's the number 1 NAS system in the market? What are the biggest hurdles still stopping the widespread adoption of the ZFSSA? From what I see, it's three things: 1. Administrators' comfort level with older legacy systems. 2. Politics 3. Past issues with Oracle Support. I see all of these issues crop up regularly. Number 1 just takes time and education. Number 3 takes time with our new, better, and growing support team. Many of them came from Oracle and there were growing pains when they went from a straight software model to having to also support hardware. Number 2 is tricky, but it's the job of the sales teams to break through the internal politics and help their clients see the value in Oracle hardware systems. Benchmarks like this will help.

    Read the article

  • Ubuntu 11.10 loads from live usb fine, but boots to black screen from harddrive. Why?

    - by Estel
    A few days ago I had a hard drive failure; that drive had been running Windows XP (32-bit) just fine. The second hard drive in my computer held a few unimportant files, so I formatted it in the Ubuntu setup and installed 11.10 without a hitch. I had been using it for about a week, but decided to install Windows 7 (64-bit) in order to utilize networking with my home server (running Windows Server 2000). My system is 64-bit based, and thus I had no problems installing other than a basic RAM error that required me to remove my RAM down to a single stick. I played with the settings in Windows 7 for around an hour before I shut down. After reinstalling the RAM, Windows 7 would not boot. I then assumed that something about my system was rejecting Win7, and I reinstalled Ubuntu. However, now Ubuntu (11.10) boots into a black screen, and I've already attempted activating the grub menu with the shift key and following the steps listed here: https://wiki.ubuntu.com/X/Troubleshooting/BlankScreen but nothing seems to work. I've reinstalled twice now, with the same result each time. Now, the very odd part about this whole scenario is that the USB I installed from has no problems booting as a live USB. This puzzles me greatly, because the hard drive boots straight to a black screen while the live USB loads normally. At this point, my only theory is that the boot sector of the hard disk was somehow corrupted with Win7, and that Ubuntu was unable to completely write through. I used Darik's Boot and Nuke to wipe the drive, but was met with an error; this also puzzles me because the hard disk has no problems reading or writing. Any suggestions/comments are appreciated. If you have a theory, I will be more than happy to oblige. Additional information: Intel Core2 Duo e6400 2.13GHz nVidia GeForce 7-series (7900 GS) 4 GB DDR2 333MHz (2x 2GB) Dell XPS 410 BIOS Revision 2.5.3

    Read the article

  • Areas of support needed when attempting to roll out a new software system

    In general, I think most people tend to be resistant to new systems, or even change, because they fear the unknown. Change means that their normal routine will be interrupted until they learn to conform to the new routine and it, in turn, becomes the normal one. In addition, the fear of failure also generates resistance to change. Why would a worker want to move away from a process that has worked successfully for them in the past? Their fears overshadow any benefits a change to a new system or business process will bring to their work life. Areas of support needed when attempting to roll out a new software system: Executive/Upper Management Support - If there is no support from the top of an organization, how will employees be supportive of the new system? Proper Training - Employees need to train on a new system prior to its rollout. The more training employees receive on a new system, the more comfortable they will be with it and the more accepting of the change, because they can see how the changes will benefit them. Employee Incentives - One way to reinforce the need for employees to use a new system is to offer incentives to ensure that the system will be used. Employee Discipline/Termination - If employees are adamantly refusing to use the new system after several warnings, they need to be formally reprimanded. If this does not work, the employer is forced to replace them.

    Read the article

  • Turn O&M Operations into Optimized Projects with Oracle Primavera

    - by mark.kromer
    Oracle enterprise project portfolio management with Primavera is much more than optimizing project performance and eliminating project failure on new projects, capital programs, etc. A very common use case that we see is small-scale, frequent and recurring projects based on ongoing operations and maintenance. As opposed to assigning resources to various activities when you are building a new network infrastructure, for example, Oracle has teamed up the Primavera and E-Business Suite teams to provide direct integration: work orders from Oracle's Enterprise Asset Management (eAM) system populate into Primavera P6 project schedules. So now that your network infrastructure build-out project is complete, planners and operations managers can use the world-class what-if and scheduling capabilities in Primavera tools to assign work orders, maximize resource utilization and reuse templates for typical O&M operations in Primavera, and share that back to the operations teams using eAM for maintenance. Also, large-scale maintenance operations related to large assets in the asset lifecycle will include phase-outs, shutdowns and turnarounds, which are classic maintenance projects (as opposed to building something new) for which Oracle Primavera with Oracle E-Business Suite provides full coverage, optimizing the ALM processes in your business. Read more about these new capabilities from Oracle in the ERP space in the Oracle eAM data sheet.

    Read the article

  • WebMatrix 'The Site has Stopped' Fix

    - by Tarun Arora
    I just got started with Azure Web Sites by creating a website from the WordPress template. Next I tried to install WebMatrix so that I could run the website locally. Every time I tried to run my website from WebMatrix I hit the message "The following site has stopped 'xxx'". Step 00 – Analysis It took a bit of time to figure out that WebMatrix makes use of IIS Express. But it was easy to figure out that IIS Express was not showing up in the system tray when I started WebMatrix. This was a good indication that IIS Express was having some trouble starting up. So, I opened a CMD prompt and tried to run IISExpress.exe; this resulted in an error message. So, I ran IISExpress.exe /trace:Error, which gave a more detailed reason for the failure. Step 1 – Fixing "The following site has stopped 'xxx'" Further analysis revealed that the IIS Express config file had been corrupted. So, I navigated to C:\Users\<UserName>\Documents\IISExpress\config and deleted the files applicationhost.config, aspnet.config and redirection.config (please take a backup of these files before deleting them). Come back to CMD and run IISExpress /trace:Error. IIS Express successfully started and parked itself in the system tray. I opened up WebMatrix and clicked Run, and this time the default site successfully loaded up in the browser without any failures. Step 2 – Download the WordPress Azure Web Site using WebMatrix Because the config files 'applicationhost.config', 'aspnet.config' and 'redirection.config' were deleted, I lost the settings of my Azure-based WordPress site that I had downloaded to run from WebMatrix. This was simple to sort out: Open up WebMatrix and go to the Remote tab, then click Download. Export the PublishSettings file from the Azure Management Portal and upload it in the pop-up you get when you click Download in the previous step. Now you should have your Azure WordPress website all set up and running from WebMatrix. Enjoy!

    Read the article

  • License validation and calling home

    - by VitalyB
    I am developing an application that, when bought, can be activated using a license. Currently I am doing offline validation, which is a bit troubling to me. I am aware there is nothing to be done against cracks (i.e. modified binaries); however, I am thinking of trying to discourage license-key pirating. Here is my current plan: When the user activates the software, and after offline validation is successful, it tries to call home and validate the license. If home approves of the license, or if home is unreachable, or if the user is offline, the license gets approved. If home is reached and says the license is invalid, validation fails. The licensed application calls home the same way every time during startup (in the background). If a license is revoked (i.e. pirated or generated via a keygen), the license gets deactivated. This should help with piracy of licenses: an invalid license will be disabled, and a valid license that was pirated can be revoked (and its legal owner supplied with a new license). Pirate users will be forced to use cracked versions, which are usually version-specific and harder to reach. While it generally sounds good to me, I have some concerns: Users tend to not like home-calling and online validation. Would that kind of validation bother you, even though in the case of offline/failure the application stays licensed? It is clear that the whole scheme can be thwarted by going offline/firewall/etc. I think that the hassle of doing one of these is great enough to discourage casual license sharing, but I am not sure. As it goes in general with licensing and DRM variations, I am not sure the time I spend on that kind of protection isn't better spent on improving my product. I'd appreciate your input and thoughts. Thanks!
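
    The decision logic in that plan (approve on success or on any failure to reach home, reject only on an explicit "revoked" answer) is small enough to sketch. The endpoint, payload and timeout below are placeholders, and the offline signature check is left as a stub.

        # Sketch of the fail-open / fail-closed decision described above:
        # the license stays valid unless the server explicitly rejects it.
        # URL, payload format and timeout are illustrative placeholders.
        import json
        import urllib.request

        ACTIVATION_URL = "https://licensing.example.com/check"

        def offline_valid(license_key):
            """Stub for the existing offline check (signature, checksum, ...)."""
            return bool(license_key)

        def license_ok(license_key, timeout=5):
            if not offline_valid(license_key):
                return False
            try:
                req = urllib.request.Request(
                    ACTIVATION_URL,
                    data=json.dumps({"key": license_key}).encode("utf-8"),
                    headers={"Content-Type": "application/json"})
                with urllib.request.urlopen(req, timeout=timeout) as resp:
                    answer = json.loads(resp.read().decode("utf-8"))
            except Exception:
                # Home unreachable, user offline, server error: fail open,
                # exactly as the plan above describes.
                return True
            # Only an explicit rejection (revoked / keygen'd key) deactivates.
            return answer.get("status") != "revoked"

        print(license_ok("ABCD-1234-EFGH"))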

    Read the article
