Search Results

Search found 4866 results on 195 pages for 'crm failure'.

Page 123 of 195

  • @CodeStock 2012 Review: Leon Gersing ( @Rubybuddha ) - "You"

    "YOU"Speaker: Leon GersingTwitter: @Rubybuddha Site: http://about.me/leongersing I honestly had no idea what I was getting in to when I sat down in to this session. I basically saw the picture of the speaker and knew that it would be a good session. I was completely wrong; it was the BEST SESSION of CodeStock 2012.  In fact it was so good, I texted another coworker attending the conference to get over and listen to Leon. Leon took on the concept of growth in the software development community. He specifically referred David Hansson in his ability to stick to his beliefs when the development community thought that he was crazy for creating Ruby on Rails. If you do not know this story Ruby on Rails is one of the fastest growing web languages today. In addition, he also touched on the flip side of this argument in that we must be open to others ideas and not discard them so quickly because we all come from differing perspectives and can add value to a project/team/community. This session left me with two very profound concepts/quotes: “In order to learn you must do it badly in front of a crowed and fail.” - @Rubybuddha I can look back on my career so far and say that he is correct; I think I have learned the most after failing, especially when I achieved this failure in front of other. “Experts must be able to fail.” - @Rubybuddha I think we can all learn from our own mistakes but we can also learn from others. When respected experts fail it is a great learning opportunity for the entire team as well as the person who failed. When expert admit mistakes and how they worked through them can be great learning tools for other developers so that they know how to avoid specific scenarios and if they do become stuck in the same issue they will know how to properly work their way out of them.

    Read the article

  • Another Exchange 2003 to Exchange 2010 mail flow issue

    - by Ryan Roussel
    During a migration recently, we came across another internal mail routing issue. The symptoms were identical to my previous post about Exchange internal mail routing: mail was flowing from 2010 to 2003 and from 2010 to the internet, but not from 2003 to 2010. I went through the normal checklist, looking at permissions, DNS, and the routing group connectors. I verified through the 2003 ESM that both servers listed in the routing group connectors were the routing masters in their respective routing groups. I also verified that inheritable permissions were enabled for the Exchange 2003 server object in the schema. No luck with either. For my previous post about this issue, in which inheritable permissions were the culprit, see: Exchange 2010, Exchange 2003 Mail Flow issue. And for Routing Group issues: Exchange 2007 Routing Group Connector Mayhem. I finally enabled logging on the SMTP virtual server on Exchange 2003 and the Default Receive Connector on 2010 and sent a few test e-mails, where I found 2003 was having issues authenticating to 2010. By default, 2003 uses Exchange Server Authentication to communicate with 2010. The exact error, found in the SMTP logs on the Exchange 2003 side, was: 4.7.0 Temporary Authentication Failure. After searching on this error, I found the solution: the "Access this computer from the network" user right in the local computer policy on the Exchange 2010 server had been changed from the default. The network administrator had modified the Default Domain policy and changed this user right assignment to list only Domain Users. The fix was to clear this setting in the Default Domain policy, force gpupdate to refresh the group policy settings, then ensure the appropriate users and groups were listed. This immediately fixed the problem, and the Exchange 2003 server was able to route mail to the Exchange 2010 mailboxes. The default user rights assignments for "Access this computer from the network" are, on workstations and servers: Administrators, Backup Operators, Power Users, Users, Everyone; and on domain controllers: Administrators, Authenticated Users, Everyone. More can be found here: http://technet.microsoft.com/en-us/library/cc740196(WS.10).aspx

    Read the article

  • "Oracle Coherence 3.5" Book - My Humble Review

    - by [email protected]
    After reviewing the book in more detail, I say again that it is a great guide for sure. Lots of important concepts that can sometimes be confusing are reviewed in depth, including all types of caching schemes and backing maps, and the cache topologies with their corresponding performance characteristics and very useful "When to use it?" sections. Functionalities that are very desirable or heavily used are reviewed with examples and implementation best practices, including data affinity, querying, pagination, indexes, aggregations, event processing (listening and triggering), data persistence, and security. Regarding the networking and architecture topics, Coherence*Extend is exhaustively reviewed, including C++ and .NET clients, with very good tips and examples, even including source code. Personally, I am also glad to see that address providers (the <address-provider> tag), a new feature in Coherence 3.5 that offers a way to programmatically provide well-known addresses for connecting to the cluster, are mentioned in the book, because they provide new functionality to satisfy some special configuration requirements, for example: providing a way to switch Extend nodes in case of failure; implementing custom load balancing algorithms and/or dynamic discovery of TCP/IP connection acceptors; and dynamically assigning TCP address and port settings when binding to a server socket. Another very interesting and useful section is the "Coherent Bank Sample Application", a great tutorial that is useful for understanding how Coherence interacts with third-party products and establishes a clear integration with them, including the use of non-Oracle products like MS Visual Studio.

    Read the article

  • Problems with Ubuntu and AMD A10-4655M APU

    - by Robert Hanks
    I have a new HP Sleekbook 6z with the AMD A10-4655M APU. I tried installing Ubuntu with wubi -- the first attempt ended up with an 'AMD unsupported hardware' watermark that I wasn't able to remove (it appeared when I tried to update the drivers as Ubuntu suggested). On the second attempted install Ubuntu installed (I stayed away from the suggested drivers), but the performance was extremely poor -- as in Windows Vista poor. I am not sure what the solution is -- whether I need to wait for a kernel update in Ubuntu or whether there are other solutions -- I realise this is a new APU for the market. I would love to have Ubuntu 12.04 up and running -- Windows 7 does very well with this new processor, so Ubuntu should, well, be lightning fast. The trial on the Sleekbook with the Ubuntu 12.10 Alpha 2 release was a complete failure. I created a bootable USB. Using either the 'Try Ubuntu' or 'Install Ubuntu' option resulted in the usual purple Ubuntu splash screen, followed by nothing... as in a black screen without any hint of life. Interestingly, one can hear the Ubuntu intro sound. In case you are wondering, this same USB was subsequently tried on another computer with an Intel Atom processor. It worked flawlessly. Lastly, the second trial on the Sleekbook gave the same results as described in the first paragraph. Perhaps 12.10 Beta will overcome this issue, or the finalised 12.10 release in October. I don't have the expertise to know what the cause of the behaviour is -- the issue could be something else entirely. Sadly, the Windows 7 performance is very good with this processor -- very similar to, and in some instances better than, the 2nd-generation Intel i5 based computer I use at my workplace. Whatever the cause of the performance with Ubuntu 12.04 or 12.10 Alpha 2, the situation doesn't bode well for Ubuntu. Ubuntu aside, the HP Sleekbook is a good performer for the price. I am certain that once the Ubuntu issue is worked on and solutions arise, the Ubuntu performance will be better than ever.

    Read the article

  • USB blocks suspend on a Gigabyte GA-890GPA-UD3H with ATI SB700/SB800

    - by poolie
    Following on from question 12397, I'd still like to get suspend working on my Phenom II X6 / GA-890GPA desktop machine running current Maverick. When I run pmi action suspend the machine doesn't crash, but it also doesn't suspend. The kernel logs show: PM: Syncing filesystems ... done. PM: Preparing system for mem sleep Freezing user space processes ... (elapsed 0.02 seconds) done. Freezing remaining freezable tasks ... (elapsed 0.01 seconds) done. PM: Entering mem sleep Suspending console(s) (use no_console_suspend to debug) pm_op(): usb_dev_suspend+0x0/0x20 returns -2 PM: Device usb8 failed to suspend async: error -2 PM: Some devices failed to suspend PM: resume of devices complete after 0.430 msecs PM: resume devices took 0.000 seconds PM: Finishing wakeup. Restarting tasks ... done. PM: Syncing filesystems ... I've tried disconnecting all the USB devices, and then connecting in to run pmi over ssh, and I get the same failure. With everything unplugged, I see the following usb devices: Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub and lspci shows the physical devices are: 00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 00:14.5 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller 00:16.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:16.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 02:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) Booting with no_console_suspend makes no difference.

    Read the article

  • How is software used in critical life-or-death systems tested?

    - by waiwai933
    An airplane, as opposed to, for example, a website, is a system where any failure in certain systems is completely unacceptable, since errors in e.g. flight monitoring can cause the autopilot to malfunction and do a dive. Obviously, this doesn't happen since the brilliant engineers at Boeing and Airbus have checks in the autopilot to make sure it doesn't suddenly decide a dive is a perfectly acceptable and safe maneuver. Or perhaps the computer crashes, and the pilots in the newer fly-by-wire aircraft can no longer actually fly the plane. Of course, there are various safety procedures and redundancies built into these systems to prevent a crash (of both the software and the aircraft). However, on the other hand, it's quite obvious that software isn't perfect—both open source and closed source software do crash regularly, and only the simplest "Hello World" program doesn't fail. How can the engineers who design the software systems in the aeronautic, medical, and other life-or-death industries manage to test their software so that it doesn't fail (and if it does fail, at least fail gracefully)? I'm desperately hoping that you're not all going to go: "Oh, I work for Boeing/Airbus/(some other company) and it's not! Have fun on your next flight/hospital visit."

    Read the article

  • Resolution stuck in 640x480 in grub, 11.04 and 12.04

    - by user89797
    I have three operating systems on my machine: Windows 7 x64, and Ubuntu 11.10 and 12.04, both x64 as well. All three were running at the full resolution for my monitor, as was the Grub 1.99 boot screen. After booting into Windows, I rebooted my machine and found my Grub resolution was suddenly 640x480. Booting into both versions of Ubuntu, I find myself stuck at that resolution as well. I made no driver changes recently, and hadn't even booted into the 11.10 build in a month or more. I've gone through both proprietary Nvidia driver options for my card (GeForce 9800GT) as well as the open source drivers in 12.04, to no avail. I can't figure out what could have caused this change in both versions of Ubuntu and Grub simultaneously. Windows 7 is unaffected, so I think that safely rules out hardware failure. EDIT: OK, so I couldn't boot any graphical live disks. I tried Ubuntu 12.04 i386 and x64 as well as 12.10 beta x64, and all of them would flash the initial logo, go to a blank screen with a flashing cursor in the upper left, and then my display would die. I managed to boot 12.04 Server and get into recovery. I reinstalled Grub and went into recovery mode for my 12.04 build. If I boot in safe graphics mode I can get 1280x768, but as soon as I reboot it's broken again. I've tried reinstalling the Nvidia drivers, and that leaves me with a system stuck at a maximum of 640x480. None of these changes have had any impact on the 11.10 build, which is still stuck at 640x480. Given that I can push a somewhat higher resolution in 12.04, and the full resolution in Windows 7, I'm pretty convinced it's not an issue of my monitor failing. It must be something to do with the graphics drivers. I can't figure out what the issue could be, though. I'm especially perplexed that I can't boot any live images.

    Read the article

  • CIFS shares do not mount after upgrade to 12.10 from 12.04

    - by Mothball
    I have seen issues close to my problem, but no one seems to have a definitive answer as to what is going on and why the failure occurs. I have a number of NAS devices on my home network, and on previous installs of 12.04 and earlier, mounting at login worked using this entry for each in fstab: //servername/sharename /media/windowsshare cifs guest,uid=1000,iocharset=utf8,codepage=cp850,cp850 0 0 Now when I use this, 12.10 reports the standard "cannot mount, bad option... blah blah...". The kern log reports that the CIFS option "codepage" is unknown; I changed the entry to "unicode" and received the same error message. There are no other error messages or log entries that would indicate another issue, but this is the statement I used for quite a while with version 12.04 and before. Is the codepage option obsolete in 12.10/CIFS now? Is there a codepage support program that I must load? Is there some kind of helper program that is required to support the codepage option? A current review of the man pages at samba.org makes no mention of the option "codepage". Extremely confused - any help/insight would be greatly appreciated.

    Read the article

  • Philosophy behind the memento pattern

    - by TheSilverBullet
    I have been reading up on the memento pattern from various sources on the internet. Differing information from different sources has left me confused about why this pattern is actually needed. The dofactory implementation says that the primary intention of this pattern is to restore the state of the system. Wiki says that the primary intention is to be able to restore the changes made to the system. This gives a different impression, suggesting that it is possible for a system to have a memento implementation with no need to restore, and that the ability to restore is just one feature of it. OODesign says that it is sometimes necessary to capture the internal state of an object at some point and have the ability to restore the object to that state later in time, which is useful in case of error or failure. So, my question is: why exactly do we use this pattern? Is it to save previous states, or to promote encapsulation between the Caretaker and the Memento? Why is this type of encapsulation so important? Edit: For those visiting, check out this Implementation!
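
    A minimal sketch may make the two readings concrete (the class names Editor and History below are hypothetical, not taken from any of the sources quoted above): the Originator is the only object that looks inside a memento, so the Caretaker can keep an undo history without ever seeing, or depending on, the Originator's internals.

        from copy import deepcopy

        class Memento:
            """Opaque snapshot of the Originator's state."""
            def __init__(self, state):
                self._state = deepcopy(state)

            def _get_state(self):
                # Meant to be read back only by the Originator.
                return deepcopy(self._state)

        class Editor:
            """Originator: owns the state and knows how to snapshot/restore it."""
            def __init__(self):
                self.text = ""

            def type(self, words):
                self.text += words

            def save(self):
                return Memento({"text": self.text})

            def restore(self, memento):
                self.text = memento._get_state()["text"]

        class History:
            """Caretaker: stores mementos but never inspects their contents."""
            def __init__(self):
                self._snapshots = []

            def push(self, memento):
                self._snapshots.append(memento)

            def pop(self):
                return self._snapshots.pop()

        editor, history = Editor(), History()
        editor.type("Hello")
        history.push(editor.save())      # snapshot before a risky change
        editor.type(", world -- oops")
        editor.restore(history.pop())    # roll back; History never saw the text
        assert editor.text == "Hello"

    Read either way, the encapsulation is the point: without it the Caretaker would need to know the Originator's internal structure in order to save it, and every undo/restore feature would break whenever that structure changed.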

    Read the article

  • Jenkins Parameterized Trigger + Copy Artifact

    - by Josh Kelley
    I'm working on setting up Jenkins to handle our release builds. A release build consists of a Windows installer that includes some binaries that must be built on Linux. Here's what I have so far: The Windows portion and Linux portion are set up as separate Jenkins projects. The Windows project is parameterized, taking the Subversion tag to build and release. As part of its build, the Windows project triggers a build of that same Subversion tag for the Linux project (using the Parameterized Trigger plugin) then copies the artifacts from the Linux project (using the Copy Artifact plugin) to the Windows project's workspace so that they can be included in the Windows installer. Where I'm stuck: Right now, Copy Artifact is set up to copy the last successful build. It seems more robust to configure Copy Artifact to copy from the exact build that Parameterized Trigger triggered, but I'm having trouble figuring out how to make that work. There's an option for a "build selector" parameter that I think is intended to help with this, but I can't figure out how it's supposed to be set up (and blindly experimenting with different possibilities is somewhat painful when the build takes an hour or two to find success or failure). How should I set this up? How does build selector work?

    Read the article

  • Cannot log in to Dashboard / Unable to find the server at mykeystoneurl

    - by neo0
    I installed the Dashboard following this guide: http://wiki.openstack.org/OpenStackDashboard Everything went fine, but when I run the server, I cannot log in with the username and password from the DATABASES config in local_settings.py. Here's my config: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'dashboarddb', 'USER': 'nova', 'PASSWORD': 'nova', 'HOST': 'localhost', 'default-character-set': 'utf8' }, } When I run the Dashboard server and enter the username + password, it returns this error in the browser: Unable to find the server at mykeystoneurl (HTTP 400) And on the command line: DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar. DEBUG:openstack_dashboard.settings:Running in debug mode without debug_toolbar. Validating models... 0 errors found Django version 1.3.1, using settings 'openstack_dashboard.settings' Development server is running at http://0.0.0.0:8888/ Quit the server with CONTROL-C. Request returned failure status. Traceback (most recent call last): File "/home/us/horizon/.venv/src/python-keystoneclient/keystoneclient/client.py", line 121, in request body = json.loads(body) File "/usr/lib/python2.7/json/__init__.py", line 326, in loads return _default_decoder.decode(s) File "/usr/lib/python2.7/json/decoder.py", line 366, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode raise ValueError("No JSON object could be decoded") ValueError: No JSON object could be decoded [06/Mar/2012 15:20:03] "POST /auth/login/ HTTP/1.1" 200 3735 I also tried logging in as "admin" with the password "password" or "secrete", but it didn't work. What's wrong? Thank you!
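
    Not an authoritative answer, just a hedged hunch: the literal host name "mykeystoneurl" in the error suggests that OPENSTACK_KEYSTONE_URL in local_settings.py is still set to a placeholder, and the DATABASES block above typically only backs Django sessions, not authentication. A sketch of the Keystone-related settings a Horizon local_settings.py of that era expects, with placeholder values:

        # local_settings.py -- hypothetical excerpt; adjust host/port to where
        # Keystone actually runs. Setting names follow the Horizon sample file.
        OPENSTACK_HOST = "127.0.0.1"
        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

        # The DATABASES block from the post typically only backs Django sessions;
        # the username/password typed at the login page are verified by Keystone
        # at OPENSTACK_KEYSTONE_URL, not against this MySQL account.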

    Read the article

  • How to install .deb file from within preinst script

    - by Ashwin D
    I have my own application packaged using dpkg. The application depends on several deb files, which I'm trying to install from within the preinst script of my application. The preinst script checks whether a dependent deb file is installed; if not, it installs it using the dpkg -i command. This is repeated for all the dependent deb files needed by the main application. When I try to install the main application using dpkg -i, the command returns a failure when trying to execute the preinst script. Below is that error message: dpkg: error: dpkg status database is locked by another process I deleted the /var/lib/dpkg/lock file and retried installing the application, but to no avail. If I run the preinst script separately, like any other shell script, it runs without any issue and all the deb files are installed properly. So the issue occurs only when this preinst script is being run automatically by the dpkg -i command. I'm lost trying to determine the root cause. If anyone can shed some light on what the real issue might be, their help will be greatly appreciated. Thank you. Ashwin

    Read the article

  • Offshoring: does it ever work?

    - by DanSingerman
    I know there has been a fair amount of discussion on here about outsourcing/offshoring, and the general opinion seems to be that at best it is difficult, and at worst it fails. I have direct experience of offshoring myself; a previous company where I was a dev manager wanted to send some development offshore, and we ran a pilot scheme to see how well it would work. Of course it was a complete failure, although it is not completely clear to me whether this was down to the offshore devs being less talented, the process, or other factors (no doubt it was really a combination). I can see how, as a business, offshoring looks attractive (much lower day rate), but as far as I can see, the only way it could possibly work is if you do exceptionally detailed design up front, with incredibly detailed specifications; and by the time you have invested in producing that, you have probably spent nearly as much as if you had written the actual code locally (which I think is an instance of No Silver Bullet). So, what I want to know is: does anyone here have any experience of offshoring actually working? Especially if there are any success stories of it working in a semi-agile way? I know there are developers here from all over the world; has anyone worked on an offshore project they consider successful?

    Read the article

  • Problem with dpkg-preconfigure, how to correct?

    - by Eric Wilson
    I was trying to install TeamViewer, and I followed the instructions here even though they specify 11.10 instead of 12.04 (which is what I'm running). In particular, I executed: $ wget http://www.teamviewer.com/download/teamviewer_linux.deb $ sudo dpkg -i teamviewer_linux.deb The dpkg command failed, and since then my packaging system has been broken. The software center instructs me to try: $ sudo apt-get -f install which leads to Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages will be REMOVED: teamviewer7:i386 0 upgraded, 0 newly installed, 1 to remove and 17 not upgraded. 9 not fully installed or removed. Need to get 89.0 kB of archives. After this operation, 81.9 MB disk space will be freed. Do you want to continue [Y/n]? y Get:1 http://us.archive.ubuntu.com/ubuntu/ precise/main dash amd64 0.5.7-2ubuntu2 [89.0 kB] Fetched 89.0 kB in 1s (83.9 kB/s) E: Sub-process /usr/sbin/dpkg-preconfigure --apt || true returned an error code (100) E: Failure running script /usr/sbin/dpkg-preconfigure --apt || true At this point I'm stumped.

    Read the article

  • Are there any real-world cases for C++ without exceptions?

    - by Martin
    In "When to use C over C++, and C++ over C?" there is a statement with respect to code size / C++ exceptions. Jerry answers (among other points): "(...) it tends to be more difficult to produce truly tiny executables with C++. For really small systems, you're rarely writing a lot of code anyway, and the extra (...)" to which I asked why that would be, to which Jerry responded: "the main thing is that C++ includes exception handling, which (at least usually) adds some minimum to the executable size. Most compilers will let you disable exception handling, but when you do the result isn't quite C++ anymore. (...)" which I do not really doubt on a technical, real-world level. Therefore I'm interested (purely out of curiosity) to hear about real-world examples where a project chose C++ as a language and then chose to disable exceptions. (Not merely "not using" exceptions in user code, but disabling them in the compiler, so that you can't throw or catch exceptions at all.) Why does a project choose to do so (still using C++ and not C, but no exceptions) - what are/were the (technical) reasons? Addendum: For those wishing to elaborate on their answers, it would be nice to detail how the implications of no-exceptions are handled: STL collections (vector, ...) do not work properly (allocation failure cannot be reported); new can't throw; constructors cannot fail.

    Read the article

  • Lenovo Y460 Intel Driver Secondary Display Flickering

    - by ultimatebuster
    This is part of the massive dump of problems I'm encountering with my Lenovo Y460 and Ubuntu. Problem: ATI PowerXpress doesn't really work; I have to use the open source driver with hacks, with the ATI card turned off at boot. Details on how I accomplished that: http://ubuntuforums.org/showthread.php?p=10955831#post10955831 Installing the ATI drivers results in a failure of the Intel drivers to work with Ubuntu Classic (all animations have to be turned off). Is there any way to fix this problem so that switchable graphics works? The problem above has been fixed by fglrx (Catalyst 11.6); is it compatible with kernel 2.6.39? However, there's another issue. If I connect my secondary monitor (a 17'' VGA display) while using the Intel driver, I am not able to use that screen, as there's flickering and tearing making the screen blurry and unusable. Here's the fglrxinfo: $ fglrxinfo display: :0.0 screen: 0 OpenGL vendor string: Tungsten Graphics, Inc OpenGL renderer string: Mesa DRI Intel(R) Ironlake Mobile GEM 20100330 DEVELOPMENT OpenGL version string: 1.4 (2.1 Mesa 7.10.2) Any fixes for that? Potentially related bug report on Launchpad: https://bugs.launchpad.net/ubuntu/+source/xserver-xorg-video-intel/+bug/750259 However, I can't confirm it, because the video showing that bug is much more dramatic than what I have; mine is tiny flickering that isn't captured by video cameras (I've tried), but is enough to make the screen blurry to the eye.

    Read the article

  • Why doesn't there seem to be any development in the field of 3D VR gear, especially with regard to gaming?

    - by neuviemeporte
    I remember that way back around 1995 there was this big craze with VR in the media, and a whole bunch of (mostly mediocre) games labeled as "virtual-reality-interactive-movie (...)" were published. If I recall correctly, the first 3D VR helmet was called VFX-1 and was sold bundled with Descent and some dedicated joystick. I never owned one, and I read just one review, which was mostly enthusiastic but pointed to some weak points, like the eyes getting tired after an hour or so of playing. Then the whole thing basically flickered down and died. I suppose the main reasons it wasn't successful were that the hardware of the day was not powerful enough, the VR gear's design wasn't refined enough to make it comfortable and natural to use, and the companies that made it failed to market it successfully. What I can't understand is why there isn't any development in the field today. There is some VR-ish hardware, mostly targeted at the consoles (Kinect, Wii Remote, TrackIR), but all projects to create a 3D head-mounted display system seem to be in early infancy; they appear once at a trade show somewhere and aren't heard from again. I think it could work great with head tracking in some of today's shooters, flight sims (TrackIR is nice, but the movement scale translation is awkward) and other games with a first-person POV. Is there any technological reason why decent VR headgear can't be made today, or is it just that nobody really cares / everyone is scared to repeat the '90s failure?

    Read the article

  • when I type apt-get -f install, I get the error message

    - by gene
    xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed. Also, I cannot upgrade my software; it says that the package system is broken, with this detail information: The following packages have unmet dependencies: xserver-xorg-core: Depends: xserver-common (>= 2:1.11.4-0ubuntu10.8) but 2:1.11.4-0ubuntu10.8 is installed When I issue sudo apt-get update, the output seems fine; the source is (sorry, the output has too many links that I cannot post): http://archive.ubuntu.com Reading package lists... Done ====================== When I issue sudo apt-get dist-upgrade, the output is: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: xserver-xorg-core : Breaks: xserver-xorg-video-5 E: Unmet dependencies. Try using -f. ================== When I issue 'sudo apt-get -f install', the output is: dpkg: dependency problems prevent configuration of xserver-xorg-video-radeon: xserver-xorg-core (2:1.11.4-0ubuntu10.8) breaks xserver-xorg-video-5 and is installed. xserver-xorg-video-radeon (1:6.12.1-0ubuntu2) provides xserver-xorg-video-5. dpkg: error processing xserver-xorg-video-radeon (--configure): dependency problems leaving unconfigured No apport report written because the error message indicates it's a followup error from a previous failure. Errors were encountered while processing: xserver-xorg-video-radeon E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-26

    - by Bob Rhubart
    Oracle Introduces Free Version of Oracle Application Development Framework Several community bloggers have already written about Oracle Application Development Framework (ADF) Essentials, the free version of Oracle ADF. Here's the official press release. ADF Essentials - Quick Technical Review | Andrejus Baranovskis "This post is just a quick review for ADF Essentials on Glassfish," says Oracle ACE Director Andrejus Baranovskis. "I will do a proper performance test soon to compare ADF performance on 5 ways to think like a cloud architect | ZDNet "Is enterprise architecture ready for the cloud? Is the cloud ready for EA?" Joe McKendrick asks. "Cloud represents a different way of thinking. But we've been here before." Configuring trace file size and number in WebCenter Content 11g | Kyle Hatlestad A quick tip from Oracle Fusion Middleware A-Team member Kyle Hatlestad. Thought for the Day "Elegance is not a dispensable luxury but a factor that decides between success and failure." — Edsger W. Dijkstra (May 11, 1930 – August 6, 2002) Source: SoftwareQuotes.com

    Read the article

  • Two graphical entities, smooth blending between them (e.g. asphalt and grass)

    - by Gabriel Conrad
    Suppose that in a scene there are, among other things, a tarmac strip and a meadow. The tarmac has an asphalt texture, and its model is a long triangle strip that might bifurcate at some point into other, smaller strips; suppose the meadow is covered with grass. What can be done to make the two graphical entities seem less like they were cut out of a photo and pasted one on top of the other at the edges? To better understand the problem, picture a strip of asphalt and a plane covered with grass. The grass texture should also "enter" the tarmac strip a little bit at the edges (i.e. a feathering effect). My ideas involve two approaches: put two textures on the tarmac entity, but that imposes a serious restriction on how the strip is modeled and how its texture coordinates are mapped; or try to apply a post-processing filter that mimics a bloom effect where "grass" is used instead of light, which could fail terribly to achieve correct results. So, is there a better, or at least a more obvious, way that's widely used in the game dev industry?
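
    For what it's worth, the usual engine answer to this is a blend (splat) map: a per-texel weight that is 1.0 on the tarmac, 0.0 in the meadow, and feathered over a few texels at the border, so neither extra geometry restrictions nor a post-processing pass is needed. A small NumPy sketch of the per-pixel math (array names and the feather width are illustrative assumptions; in a real engine this runs in the fragment shader):

        import numpy as np

        def edge_alpha(dist_to_edge, feather=8.0):
            """Smoothstep falloff: 1.0 well inside the tarmac, 0.0 past the edge.
            dist_to_edge is the signed distance in texels from the tarmac border,
            positive towards the road centre."""
            t = np.clip(dist_to_edge / feather, 0.0, 1.0)
            return t * t * (3.0 - 2.0 * t)

        def splat(asphalt, grass, alpha):
            """asphalt, grass: HxWx3 float textures; alpha: HxW weights in [0, 1]."""
            a = alpha[..., None]                      # broadcast to HxWx1
            return a * asphalt + (1.0 - a) * grass

    The weight can equally come from a hand-painted splat texture instead of a distance function, which is how terrain engines typically let artists control exactly how far the grass "enters" the tarmac.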

    Read the article

  • Why can't tuxboot and Ubuntu play well together?

    - by mmr
    I'm trying to get Clonezilla to run off of a USB stick, and it seems that the right way to do that is via tuxboot. Tuxboot is not compilable on Ubuntu. I used git to get it from the repository, and then ran the 'install' script (because building it is apparently not allowed, since the build script just tries to install Windows things). qmake-linux wants my qmake executable to be in the same directory as the stuff I pulled down, and let's just say that if there's a way to do this easily, I ain't seein' it. So then I downloaded the Linux file, the most recent of which is tuxboot-linux-25. I try to run it and get a failure that libpng12.so.0 isn't found. OK, then I went to install that via the instructions I found on the web, but Firefox seems to have already deleted them from my history (yay!). Then I added the /usr/local/lib directory to ldconfig via emacs (had to install that too, of course): http://ubuntuforums.org/showthread.php?t=369848 I still get the error that libpng12.so.0 cannot be opened because 'No such file or directory'. ldconfig -p | grep libpng shows that the library is there, but it still doesn't seem to be findable. What to do next? (For the record, doing this in Windows is painless -- download, click, and it's done. But I'm trying to be all linuxy and get away from Windows for this...)

    Read the article

  • Ubuntu One has high CPU usage but no syncing

    - by Peter
    Over the weekend I updated my computer to Windows 8. So far Ubuntu One had been running smoothly, but ever since the update (a clean, new install) Ubuntu One doesn't sync any more. On Windows 7 it would start to sync at full internet speed as soon as I dropped a file. But now on Windows 8, as soon as I drop a new file into the Ubuntu One folder, CPU usage goes up to about 50% and no network traffic occurs. It stays like that for a couple of minutes, CPU usage goes back to normal, and then the client says that everything is in sync - which isn't true. Is it too early for Windows 8? Do others have the same problem, or is there something I can do about it? I tried a couple of different things and realized that if the file size is 20 MB, the files do get uploaded. The original file was 1.5 GB. It also didn't work with 200, 100 and 50 MB files. But even with 20 MB files, the upload is very slow and not steady. The log gives plenty of this error: - twisted - ERROR - Failure: exceptions.TypeError the meaning of which I don't know. By the way, the account works just fine on the Ubuntu 12.04 partition. Any help is greatly appreciated. -Peter

    Read the article

  • Disaster Recovery Example

    Previously, I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring. Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases, which are stored off site. The next phase is the file replication of data amongst our web servers, which are also backed up daily by our colocation provider. In addition to the files located on the servers, files are also stored locally on development machines, and again backed up using version control software. Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution and increased availability. With our servers behind a load balancer, our system has the ability to accept multiple requests simultaneously because the load is split between multiple servers. Plus, if one server is slow or experiencing a failure, the traffic is diverted amongst the other servers connected to the load balancer, allowing the affected server to get back online. The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS. According to Disasterrecovery.org, disaster recovery planning is the way companies successfully manage crises with minimal cost and effort and maximum speed, compared to others that are forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing and utilities, as well as indirect costs such as floor space, power protection, physical and information security, and management. Availability requirements also need to be determined per application and system, as well as the strategies for recovery.
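
    As a toy illustration of the failover behaviour described above (the server names and health check are made up, and this is nothing like a production load balancer), a round-robin balancer simply skips pool members that fail their health check, so requests keep flowing while one node is down:

        from itertools import cycle

        SERVERS = ["web1", "web2", "web3"]   # hypothetical pool
        DOWN = {"web2"}                      # pretend web2 just failed its probe

        def healthy(server):
            return server not in DOWN

        _rotation = cycle(SERVERS)

        def pick_server():
            """Round-robin over the pool, diverting traffic away from unhealthy nodes."""
            for _ in range(len(SERVERS)):
                candidate = next(_rotation)
                if healthy(candidate):
                    return candidate
            raise RuntimeError("no healthy servers in the pool")

        print([pick_server() for _ in range(4)])   # ['web1', 'web3', 'web1', 'web3']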

    Read the article

  • New SPC2 benchmark- The 7420 KILLS it !!!

    - by user12620172
    This is pretty sweet. The new SPC2 benchmark came out last week, and the 7420 not only came in 2nd of ALL speed scores, but came in #1 for price per MBPS. Check out this table. The 7420 score of 10,704 makes it really fast, but that's not the best part. The price one would have to pay in order to beat it is ridiculous. You can go see for yourself at http://www.storageperformance.org/results/benchmark_results_spc2 The only system on the whole page that beats it was over twice the price per MBPS. Very sweet for Oracle. So let's see: the 7420 is the fastest per $. The 7420 is the cheapest per MBPS. The 7420 has incredible built-in features, management services, analytics, and protocols. It's extremely stable and, as a cluster, has no single point of failure. It won the Storage Magazine award for best NAS system this year. So how long will it be before it's the number 1 NAS system in the market? What are the biggest hurdles still stopping the widespread adoption of the ZFSSA? From what I see, it's three things: 1. Administrators' comfort level with older legacy systems. 2. Politics. 3. Past issues with Oracle Support. I see all of these issues crop up regularly. Number 1 just takes time and education. Number 3 takes time with our new, better, and growing support team; many of them came from Oracle, and there were growing pains when they went from a straight software model to having to also support hardware. Number 2 is tricky, but it's the job of the sales teams to break through the internal politics and help their clients see the value in Oracle hardware systems. Benchmarks like this will help.

    Read the article
