Search Results

Search found 21141 results on 846 pages for 'old mac'.


  • My LaCie 500 GB not mounted on 11.10

    - by pooo
    My external USB drive was recognized by 10.x versions of Ubuntu, but since 11.x I have been stuck. I have tried everything I read in the forums, but I still get the same errors:

        [ 4956.401052] usb 2-1.4: new high speed USB device number 14 using ehci_hcd
        [ 4956.539216] scsi14 : uas
        [ 4956.740955] scsi 14:0:0:0: Direct-Access LaCie Rugged FW USB3 1081 PQ: 0 ANSI: 4
        [ 4963.256055] scsi 14:0:0:0: uas_eh_abort_handler tag 0
        [ 4963.256076] scsi 14:0:0:0: uas_eh_device_reset_handler tag 0
        [ 4963.256085] scsi 14:0:0:0: uas_eh_target_reset_handler tag 0
        [ 4963.256091] scsi 14:0:0:0: uas_eh_bus_reset_handler tag 0
        [ 4963.328122] usb 2-1.4: reset high speed USB device number 14 using ehci_hcd
        [ 4963.468743] scsi 14:0:0:0: Device offlined - not ready after error recovery
        [ 4963.468813] scsi 14:0:0:0: rejecting I/O to offline device
        [ 4963.468831] scsi 14:0:0:0: rejecting I/O to offline device
        [ 4963.469204] scsi 14:0:0:1: uas_sense_old: urb length 26 disagrees with IU sense data length 510, using 18 bytes of sense data
        [ 4963.512104] sd 14:0:0:0: Attached scsi generic sg3 type 0
        [ 4994.253779] sd 14:0:0:0: uas_eh_abort_handler tag 0
        [ 4994.253802] sd 14:0:0:0: uas_eh_device_reset_handler tag 0
        [ 4994.253809] sd 14:0:0:0: uas_eh_target_reset_handler tag 0
        [ 4994.253815] sd 14:0:0:0: uas_eh_bus_reset_handler tag 0
        [ 4994.325880] usb 2-1.4: reset high speed USB device number 14 using ehci_hcd
        [ 4994.466488] sd 14:0:0:0: Device offlined - not ready after error recovery
        [ 4994.466555] sd 14:0:0:0: rejecting I/O to offline device
        [ 4994.466573] sd 14:0:0:0: rejecting I/O to offline device
        [ 4994.466582] sd 14:0:0:0: rejecting I/O to offline device
        [ 4994.466588] sd 14:0:0:0: [sdc] READ CAPACITY failed
        [ 4994.466593] sd 14:0:0:0: [sdc] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [ 4994.466600] sd 14:0:0:0: [sdc] Sense not available.
        [ 4994.466608] sd 14:0:0:0: rejecting I/O to offline device
        [ 4994.466616] sd 14:0:0:0: [sdc] Write Protect is off
        [ 4994.466622] sd 14:0:0:0: [sdc] Mode Sense: 00 00 00 00
        [ 4994.466629] sd 14:0:0:0: rejecting I/O to offline device
        [ 4994.466635] sd 14:0:0:0: [sdc] Asking for cache data failed
        [ 4994.466640] sd 14:0:0:0: [sdc] Assuming drive cache: write through
        [ 4994.467003] sd 14:0:0:0: [sdc] Attached SCSI disk

    If I try the drive on an old Ubuntu release, it mounts fine.
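    The uas_eh_*_handler lines above show the drive being handled by the uas (USB Attached SCSI) module rather than plain usb-storage, and these reset/offline loops are a known symptom of that driver on kernels of this era. A hedged sketch of the usual workaround - an assumption worth testing, not a confirmed fix for this exact drive - is to blacklist uas so the kernel falls back to usb-storage:

        # Blacklist the uas module so the kernel falls back to usb-storage.
        echo "blacklist uas" | sudo tee /etc/modprobe.d/blacklist-uas.conf
        # Rebuild the initramfs so the blacklist also applies at boot.
        sudo update-initramfs -u
        # Reboot (or unload uas and replug the drive), then confirm that
        # dmesg shows usb-storage instead of uas when the disk attaches.
        dmesg | tail -n 30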

    Read the article

  • Migrating Forms to Java or ADF, the truth and no FUD

    - by Grant Ronald
    The question about migrating Forms to Java (or ADF or APEX) comes up time and time again, so I wanted to pull some core information together in a single blog post to address it. The first question I always ask is "WHY" - Forms may still be a viable option for you, so "if it ain't broke, don't fix it". Bottom line: whatever anyone tells you, it's going to take considerable effort and cost to migrate from Forms to something else, so the business is going to want to know WHY you'd spend all those hard-earned dollars switching away from something that might have been serving you quite adequately. Second point: if you are going to switch, I would encourage you NOT to build a Forms clone. So many times I see people trying to build an ADF application that EXACTLY mimics the Forms model - ADF is NOT a Forms clone. You should be building to the sweet spot of your target technology, not to your 20-year-old client/server technology. This is also the chance for the business to embrace change, so maybe look at new processes, channels and technology options that weren't available when you first developed your Forms applications. To help you understand what is involved, I've put together a number of resources:

        Thinking about migration of Forms to Java, ADF or APEX - read this to prepare yourself
        Oracle Forms to ADF: When, Why and How - an overview of our vision, directly from Oracle Product Management
        Redeveloping a Forms Application with Oracle JDeveloper and Oracle ADF - a conference session from myself and Lynn Munsinger on how ADF can be used in a Forms migration/rewrite

    As someone who manages both the Forms and ADF Product Management teams, I have a foot in both camps and am happy to see you use either tool. However, I want you to be able to make an informed decision, and my hope is that these information sources will help you do that.

    Read the article

  • Kubuntu 12.04 - DNS Issues

    - by AndrewJesaitis
    Starting yesterday (6/11/12), I've been having many network problems. When requesting a page in Chrome, the page hangs on "Sending request" and eventually times out. I'm within a VPN that has its own DNS server. I've tried to set my DNS manually, both through the Network-Manager applet and by editing /etc/network/interfaces. Having no luck, I unlinked the resolv.conf symlink and dumped the contents of my old resolv.conf into a plain file. Again having no luck, I deactivated the dnsmasq server in /etc/NetworkManager/NetworkManager.conf by commenting out the dns=dnsmasq line:

        $ cat NetworkManager.conf
        [main]
        plugins=ifupdown,keyfile
        #dns=dnsmasq
        no-auto-default=D0:67:E5:EA:B6:6B,

        [ifupdown]
        managed=false

        $ nm-tool
        NetworkManager Tool
        State: connected (global)

        - Device: eth0 [Wired connection 1] -------------------------------------------
          Type:            Wired
          Driver:          tg3
          State:           connected
          Default:         yes
          HW Address:      D0:67:E5:EA:B6:6B

          Capabilities:
            Carrier Detect:  yes
            Speed:           1000 Mb/s

          Wired Properties
            Carrier:         on

          IPv4 Settings:
            Address:         192.168.254.122
            Prefix:          24 (255.255.255.0)
            Gateway:         192.168.254.2
            DNS:             192.168.254.1

    What is strange is that the network will work fine for a few minutes, then start to time out, and a few minutes later it will work again. I'm unable to hit internal or external sites while it is timing out. When I dig local sites, I receive no answer, but I do receive an answer for google.com. At this point I would usually blame the DNS server, especially since things work when I change to Google's DNS server - but I need to use our internal DNS to reach our internal sites. Nobody else is having issues, and they are all using DHCP; that group includes one user who is on 11.04. At this point I'm at a loss for what to do, so any help would be appreciated.
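    One way to narrow this down is to query the internal DNS server directly, bypassing whatever resolver path NetworkManager has set up. A sketch, using the DNS address from the nm-tool output above and a hypothetical internal hostname:

        # Query the internal DNS server directly (192.168.254.1 is from nm-tool).
        dig @192.168.254.1 intranet.example.com
        # Query through the locally configured resolver for comparison.
        dig intranet.example.com
        # If the direct query keeps working while the second one times out
        # intermittently, the local resolver path is the likely culprit.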

    Read the article

  • Google authorship verification issue

    - by Fraser
    I'm trying to get my blog content author-verified so my face gets into the Google search results. I managed to achieve this a few weeks back: when testing my content in the Google authorship testing tool, it reported that I had been verified, and I could see my mug in the results. All I had to do was wait a couple of weeks before I started popping up in the search results (I think?). However, I seem to have thrown a spanner in the works. I set up Google Apps for my domain and merged my old Google+ profile into my Google Apps account. This seemed to reset my Google+ profile (no biggy, since it was a new profile and only had one connection). I re-set up my G+ account and tied it all in to my blog and its content. I am now seeing some very strange behaviour. If you take a look at one of my blog posts through the snippet testing tool: http://www.google.com/webmasters/tools/richsnippets?url=http%3A%2F%2Fblog.fraser-hart.co.uk%2Fjquery-fullscreen-background-slideshow%2F&html= you will see that it is not recognising me as an author. However, when you enter my profile URL (https://plus.google.com/108765138229426105004) into the "Authorship verification by email" input, you will see that it does in fact recognise it as verified. Now, if you try to verify the same page again, it reverts back to unverified. I thought I might just have to wait it out, but it has been over a week now, and previously (before I merged my profile) verification happened instantaneously. Has anyone experienced this bizarre behaviour before? What is happening here? More importantly, is there anything I can do to resolve it? (Apologies for the long and boring question.) Cheers!

    Read the article

  • DIA2012

    - by Chris Kawalek
    If you've read this blog before, you probably know that Oracle desktop virtualization is used to demonstrate Oracle Applications at many different trade shows. This week, the Oracle desktop team is at DIA2012 in Philadelphia, PA. The DIA conference is a large event, hosting about 7,000 professionals in the pharmaceutical, biotechnology, and medical device fields. Healthcare and associated fields are leveraging desktop virtualization because the model is a natural fit for their high security requirements: keeping all the data on the server, rather than distributing it on laptops or PCs that could be stolen, makes a lot of sense when you're talking about patient records and other sensitive information. We're proud to be supporting the Oracle Health Sciences team at DIA2012 by hosting all of the Oracle healthcare-related demos on a central server and providing simple, smart-card-based access using our Sun Ray Clients. And remember that you're not limited to using just Sun Ray Clients - you can also use the Oracle Virtual Desktop Client and freely move your session between your iPad, your Windows or Linux PC, your Mac, and Sun Ray Clients. It's a truly mobile solution for an industry that requires mobile, secure access in order to remain compliant. We also have an informative PDF on Oracle desktop virtualization and Oracle healthcare that you can have a look at. (Many thanks to Adam Workman for the pics from the show!) -Chris For more information, please go to the Oracle Virtualization web page, or follow us on Twitter, Facebook, or YouTube, or sign up for our Newsletter.

    Read the article

  • I need advice on laptop purchase for university [closed]

    - by Systemic33
    I'm currently in university studying Computer Science/IT/Information Technology, and this first year I've managed with the laptop I had: an ASUS Eee PC 1000H with a 10.1" screen. But this is getting way too underpowered and small for programming anything beyond quick introductory exercises, so I'm looking to buy a more suitable laptop. It's not supposed to be a desktop replacement, since I've already got a pretty good desktop with a 24" monitor. The kind of laptop I want to buy is one suited for university. If it bears any significance, I'm working in Java at the moment, but I will likely work with lots of other things, including web development. I'm looking to spend about $1700, plus or minus. It should be powerful and big enough for working on programming projects, as well as the usual university stuff like MATLAB, Maple, etc., out "in the field", and sometimes for maybe a week when visiting my parents. What I'm looking at right now is the ASUS Zenbook UX31A with the 1920 x 1080 resolution on a 13.3" IPS display, but I'm kinda nervous that it will be too petite for programming. In essence, I'm looking for a powerful computer that has good enough battery life and looks good. I would love suggestions or any type of feedback, either a better choice or input on what it's like programming on 13" laptops. Many thanks in advance to anyone who even went through all that! PS. I don't want a Mac, or my inner karma would commit seppuku xD But experiences from working on the 13" MacBook Air would be roughly equivalent to the Zenbook I'm considering, so I would love to hear those too. tl;dr The quick brown fox jumps over the lazy dog ;)

    Read the article

  • can't access SAMBA shares on UBUNTU-server from other computers

    - by larand
    Installed Ubuntu Server 12.04 and configured /etc/samba/smb.conf as:

        #======================= Global Settings =======================
        [global]
           workgroup = HEMMA
           server string = %h server (Samba, Ubuntu)
           security = user
           wins support = yes
           dns proxy = no
           log file = /var/log/samba/log.%m
           max log size = 1000
           syslog = 0
           panic action = /usr/share/samba/panic-action %d
           encrypt passwords = no
           passdb backend = tdbsam
           obey pam restrictions = yes
           unix password sync = yes
           passwd program = /usr/bin/passwd %u
           passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
           pam password change = yes
           map to guest = bad user

           ############ Misc ############
           usershare allow guests = yes

        #======================= Share Definitions =======================
        [printers]
           comment = All Printers
           browseable = no
           path = /var/spool/samba
           printable = yes
           guest ok = no
           read only = yes
           create mask = 0700

        # Windows clients look for this share name as a source of downloadable
        # printer drivers
        [print$]
           comment = Printer Drivers
           path = /var/lib/samba/printers
           browseable = yes
           read only = yes
           guest ok = no

        [Bilder original]
           comment = Original bilder
           path = /mnt/bilder/org
           browseable = yes
           read only = no
           guest ok = no
           create mask = 0755

        [Bilder publika]
           comment = Bilder för allmän visning
           path = /mnt/bilder/public
           browseable = yes
           read only = yes
           guest ok = yes

        [Musik]
           comment = Musik
           path = /mnt/music/public
           browseable = yes
           read only = yes
           guest ok = yes

    I have a network set up around a Huawei B593 4G router; some computers are connected by WiFi and others by LAN, and the server is connected by LAN. One computer running Windows XP can see the server but is not allowed to access the shares. Another computer on the WiFi net running Windows 7 cannot see the server at all, but it can ping the server, and I can see SMB protocol traffic when sniffing with Wireshark. I don't primarily want to use passwords; computers on the LAN and WiFi should be able to connect without any login procedure. I'm sure my config is not sufficient, but I find it hard to work out what I should do - there is a lot of documentation on the net, but most of it is old and none of it has been of any help. I'm also confused by the fact that I cannot see the server from my Win7 machine even though it communicates with the Samba server. I would be very happy if anyone could shed some light on this mess.
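    A few server-side sanity checks worth running before changing the config further (a sketch; the commands are standard Samba tools, and the share name is taken from the config above):

        testparm -s                 # validate smb.conf and print the effective settings
        sudo smbstatus              # list clients currently connected to the server
        smbclient -L localhost -N   # anonymous share listing: does guest browsing work locally?
        # From the Windows 7 machine, browsing by name often fails even when SMB
        # itself works; try the server's IP address directly, e.g. \\192.168.x.x\Musik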

    Read the article

  • Bug fix for Eclipse runtime plugin

    - by Peter Benedikovic
    This blog post is intended to inform you about a bug fix that solves this issue. One important note before continuing: Linux and Mac users do not need to read further, because this bug appears only on Windows. The problem was that the runtime plugin registered a new runtime and server each time Eclipse started, so users ended up with a server view full of duplicate entries. I have created a new runtime plugin, which is now available at the update site http://download.java.net/glassfish/eclipse/indigo (or the same URL ending with juno for Juno users). You will still need to uninstall the buggy plugin and, optionally but recommended, remove the runtimes created by it. Here is the guide to installing the bug fix:

        1. Uninstall the buggy runtime plugin via the menu Help->About Eclipse->Installation Details.
        2. Remove the runtimes created by the old plugin via Window->Preferences->Server->Runtime Environment. After pressing the Remove button you may be asked whether you also want to remove the servers based on the runtime being removed; it is recommended to do so.
        3. Install the new runtime plugin via Help->Install New Software.

    You may ask why I haven't provided the fix as an update to the buggy plugin, installable via Eclipse's Check for Updates feature. There are two main reasons: the bug fix is needed only by Windows users, so I didn't want to bother everyone else with an update to a working plugin; and the runtime plugin had a structure that was not really suitable for Eclipse updates. That structure has now changed, so future bugs (I am sure there will be no such thing ;)) can be fixed by a standard update. Have a good one!

    Read the article

  • How do I use depth testing and texture transparency together in my 2.5D world?

    - by nbolton
    Note: I've already found an answer (which I will post after this question) - I was just wondering if I was doing it right, or if there is a better way. I'm making a "2.5D" isometric game using OpenGL ES (JOGL). By "2.5D", I mean that the world is 3D, but it is rendered using 2D isometric tiles. The original problem I had to solve was that my textures had to be rendered in order (from back to front), so that the tiles overlapped properly to create the right effect. After some reading, I quickly realised that this is the "old hat" 2D approach. It became difficult to do efficiently, since the 3D world can be modified by the player (so stuff can appear anywhere in 3D space), so it seemed logical to take advantage of the depth buffer. That meant I didn't have to worry about rendering things in the correct order. However, I faced a problem: if you use GL_DEPTH_TEST and GL_BLEND together, you get an effect where objects are blended with the background before they are "sorted" by z order (meaning you get a weird kind of overlap where the transparency should be). Here's some pseudo code that should illustrate the problem (incidentally, I'm using libgdx for Android):

        create() {
            // ...
            // some other code here
            // ...
            Gdx.gl.glEnable(GL10.GL_DEPTH_TEST);
            Gdx.gl.glEnable(GL10.GL_BLEND);
        }

        render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
            // ...
            // bind texture and create vertices
            // ...
        }

    So the question is: how do I solve the transparency overlap problem?
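    For what it's worth, the usual fix for this situation (and likely close to the answer the poster found) is alpha testing: discard fully transparent fragments before they write to the depth buffer, so they never occlude the sprites behind them. A hedged sketch against the same GLES 1.x API used above; the 0.5f threshold is an arbitrary choice:

        create() {
            Gdx.gl.glEnable(GL10.GL_DEPTH_TEST);
            Gdx.gl.glEnable(GL10.GL_BLEND);
            // Discard fragments whose alpha is at or below the threshold, so
            // transparent texels never write depth and never occlude anything.
            Gdx.gl.glEnable(GL10.GL_ALPHA_TEST);
            Gdx.gl.glAlphaFunc(GL10.GL_GREATER, 0.5f);
        }

    This works cleanly for cut-out transparency (texels that are either fully opaque or fully transparent); smooth partial transparency still needs back-to-front sorting of the translucent geometry.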

    Read the article

  • Cloning existing software for commercial purposes - legal implications

    - by user2036256
    I have been asked to clone some existing software for a company. Basically it's an old 16-bit DOS console app, supplied free of charge in, I believe, the late '80s. Having replaced the machine that needs to run it with a box running Win7 x64, they can't get it to work; it crashes every couple of minutes under DOSBox. The company that supplied it appears to no longer exist - if it did, the company asking me to do this would almost certainly know about it. It is undetermined whether they have gone entirely or are just trading under a different name; if the latter, they seem to have withdrawn from the market related to this product (because, again, it is a niche area and we should know about everyone in it). What is the status of this with regard to copyright etc.? The main concern for the company involved is that they want an identical interface to what they already have, so I would have to clone it entirely. Having no source code or indication of the underlying mechanisms, I would be writing everything from scratch. Is an interface covered by copyright, and does that still hold 30 years later? What is the assumed license when none at all is provided? Under UK law, would I be under any serious risk were I to take on the project? How would this pan out if I then decided to sell the software on to other companies? Thanks

    Read the article

  • Would adding award points or game features to workplace software be viewed poorly amongst the programming community?

    - by Eric P
    So one of my responsibilities at work is to build an internal tool that helps the workers enter in all their information. It's an enterprise application, similar to a Windows Forms database tool - not much different from a Word + Excel combo application. The average person in this workgroup is a 20-40-year-old woman or a chatty male type, and I know all of these people are heavily involved with Facebook on a daily basis. How bad would it be if I styled my new interface to be similar to Facebook? People could get award points when they fill out different types of forms and basically compete against each other as if it were a game. When someone completed one, it would be posted on their wall, and everyone could comment on and like it, just as on Facebook - it would be like doing peer review for fun. The rewards would be outstanding, I would imagine: these people are so into Facebook and Facebook games that productivity would rise as they compete to earn points and achievements. Would this be taking advantage of people by tricking them into working harder by giving them a game, or would it be viewed as something that improves happiness at work?

    Read the article

  • Installer gets stuck with a greyed-out forward button

    - by TRiG
    I have a CD with Ubuntu 10.10 and a laptop with Ubuntu 8.10. The laptop had all sorts of crud on it, and anything I wanted to keep was backed up on an external drive, so I was happy to do a wipe and reinstall instead of an upgrade. After a bit of faffing about trying to work out how to get the thing to boot from the CD drive, I did that. The screen comes up with the options Try Ubuntu and Install Ubuntu; I choose to install, and to overwrite my current installation. So far so good. I then get a progress bar labelled something like "copying files" (I forget the exact wording) and further screens to fill in my location, keyboard locale, username and password. On each of these screens there are forward and back buttons. On the last screen (password), the forward button is greyed out. Well, I think to myself, no doubt it will become active when that copying-files progress bar completes. The progress bar never completes. It hangs, and the label changes from "copying files" to a chirpy "ready when you are". The forward button remains greyed out. The back button is as unhelpful as you'd expect it to be, and there's nothing else to click. We have reached an impasse. I tried restarting the laptop, to test whether it had actually installed properly. It hadn't. I tried to run Ubuntu live from the CD, to test whether the disc was damaged. That wouldn't work either, but I suspect that's just because the laptop is old and has a slow disk drive. I'm typing this question on another computer using the Ubuntu live CD, and it's working fine, so there's nothing wrong with the CD.
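    Since the live CD works elsewhere, the stall during "copying files" may point at the laptop's own hardware rather than the disc. A hedged sketch of two checks worth running; the device name is an assumption, so verify it with sudo fdisk -l first:

        # The Ubuntu CD's boot menu has a built-in "Check disc for defects" entry;
        # run that on the laptop's own drive to rule out read errors there.
        # From a live session, scan the target hard disk for failing sectors,
        # which can hang the installer's copy step:
        sudo badblocks -sv /dev/sda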

    Read the article

  • Modernising settings, packages

    - by Sam Brightman
    The update manager (possibly combined with the computer janitor) does a reasonable job of bringing packages up to date with a new release, removing ones that have been replaced by different projects, and so on. However, I'm left with the lingering feeling that quite a few settings linger from old releases. For example, some packages I installed myself may still be hanging around even though the functionality is now provided by default. Another example is that my user doesn't get the new theme, and the panel bar is a mess; I can compare against an inactive user on the same system, where everything seems tidier. There are also things like the explosion of System Preferences entries, and user groups (the inactive, more recently created user is in groups that the older, active user isn't). In other areas (e.g. the default font) I do seem to get the new defaults. Another example is Spotlight-equivalent search: I remember Beagle and Tracker - I remember removing Tracker when it used all system RAM and swap for two entire release cycles - but I don't know what I'm "supposed" to be using now. Is there even a default indexing search installed and exposed? aptitude install ubuntu-desktop doesn't do anything, so the basics are in place package-wise. Is there any way to update my settings to the modern "Ubuntu way" without reinstalling from scratch? Can I do so selectively, i.e. see the differences? Most of the time, package management on Linux is an absolute joy compared to the alternatives, but if the desktop gets messed up after only a release or two, we're back to reinstalling just like Windows.
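    There is no single "reset to current defaults" switch that I know of, but you can at least see which settings diverge by comparing against a pristine account. A rough sketch, assuming a scratch user named testuser (log in as that user once so its config is created) and a GNOME-era home layout:

        sudo adduser testuser                         # fresh account with stock settings
        # Compare configuration trees file by file (names only, not contents):
        sudo diff -rq /home/testuser/.config ~/.config | less
        sudo diff -rq /home/testuser/.gconf  ~/.gconf  | less
        # Renaming a dotfile directory out of the way and logging in again gives
        # you that subsystem's defaults back without touching anything else.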

    Read the article

  • Is djvubundle available in Ubuntu?

    - by Tim
    The official web page says:

        Assembling DjVu Images into Multipage Documents

        The batch compressors distributed as part of the DjVuText and DjVuLayered packages can directly produce a multipage DjVu file when fed with multiple input files. The files produced are smaller than if the pages were compressed separately, because the compressor can extract and share redundant information across multiple pages. Individually compressed DjVu pages can be assembled into multipage documents using the free package DjVuMulti.

        To assemble a bunch of DjVu images into a single BUNDLED document, simply type:
            djvubundle page1.djvu page2.djvu ... pageN.djvu document.djvu
        To assemble a bunch of DjVu images into an INDIRECT document, type:
            djvujoin page1.djvu page2.djvu ... pageN.djvu documentdir/index.djvu
        where documentdir must be an existing directory into which all the individual page files will be copied.
        To disassemble a BUNDLED document into an INDIRECT one, simply say:
            djvujoin document.djvu documentdir/indexfile.djvu
        To convert a multipage document from one of the old 2.0 multipage formats, do:
            djvureindex olddocument newdocument
        The programs djvujoin and djvubundle supersede the 2.0 programs djvuindex and djvumerge.

    I couldn't find djvujoin or djvubundle for Ubuntu, and djvulibre doesn't have them either. Am I missing something? Thanks.
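    For what it's worth, djvubundle and djvujoin come from the old DjVuMulti toolchain, which does not appear to be packaged for Ubuntu; djvulibre's djvm tool covers the same bundling job. A sketch with hypothetical file names:

        sudo apt-get install djvulibre-bin
        djvm -c document.djvu page1.djvu page2.djvu page3.djvu   # -c: create a BUNDLED document
        djvm -l document.djvu                                    # list the pages it contains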

    Read the article

  • Should I start making connections even if I'm not ready for a job yet?

    - by James
    The first job is always the hardest to get, and I'm no exception. I'm 23 years old and have no college degree, but I plan on going to college this year if all goes well (CS, of course). I'm self-studying Java right now; I know most of the topics related to the language besides the more advanced ones, and I'm beginning to look at open source projects. I would like to find a job (at least a part-time job) after a year or two, when I'll have gained more experience and learned more about Java technologies and the other technologies that interest me. Finding a job will be a bit difficult, because most people (or a lot of them, at least) at my current age already have two years or more of experience, so I will be somewhat disadvantaged. Should I start building connections and joining websites such as LinkedIn? I never bothered to look into it, because I'm not much of a social network person. If I contribute to open source projects and create personal projects for two years, can I apply for jobs that require 1-2 years of experience? Does this kind of experience count?

    Read the article

  • Which shopping cart / ecommerce platform to choose?

    - by fabien7474
    I need to build an ecommerce website on a tight budget and schedule. I have never done that before, so I googled my options, and I have concluded that the following are no longer valid candidates:

        Magento: steep learning curve
        osCommerce: old, badly designed, buggy and not user-friendly
        Zen Cart, CRE Loaded, CubeCart: based on osCommerce
        VirtueMart, Ubercart, eCart: based on a CMS (Joomla, Drupal, WordPress) that is not necessary for my use case

    So I finally narrowed my choices down to these solutions:

        PrestaShop: easy to use, great templating engine (Smarty), but many modules are not free yet indispensable
        OpenCart: security issues and not great support from the main developer (see here and here)

    So, as you can see, I am a little bit confused, and if you can help me choose an easy-to-use, lightweight and cheap (not necessarily free) ecommerce solution, I would really appreciate it. By the way, I am a Java/Grails programmer, but I am also familiar with PHP and .NET (not with Python or Ruby/Rails). EDIT: It seems that this question is more appropriate for the Webmasters Stack Exchange site, so please move it to where it belongs (I cannot do that) instead of downvoting it. BTW, I have found a quite similar and quite popular question on SO (http://stackoverflow.com/questions/3315638/php-ecommerce-system-which-one-is-easiest-to-modify).

    Read the article

  • Cross-Platform Migration using Heterogeneous Data Guard

    - by Roy F. Swonger
    Most people think of Data Guard as a disaster recovery solution, and it certainly excels in that role. However, did you know that you can also use Data Guard for platform migration under some conditions? While you would normally have your primary and standby Data Guard systems running on the same OS and hardware platform, there are some heterogeneous combinations of primary and standby systems that are supported by Data Guard Physical Standby. One example of heterogeneous Data Guard support is the ability to go between Linux and Windows on many processor architectures. Another is the support for environments that run HP-UX on both PA-RISC and Itanium hardware. Brand new in 11.2.0.2 is the ability to have both SPARC Solaris and IBM AIX on Power Systems in the same Data Guard environment. See My Oracle Support note 413484.1 for all the details about supported platform combinations. So, why mention this in an upgrade blog? Simple: much of the time required for a platform migration is usually spent copying files from one system to another. If you are moving between systems that are supported by heterogeneous Data Guard, then you can reduce that migration downtime to a matter of minutes. This can be a big win when downtime is at a premium (and isn't downtime always at a premium?). In addition, you get the benefit of being able to keep the old and new environments synchronized until you are sure the migration is successful! A great case study of using Data Guard for a technology refresh is located on this OTN page; the case study showing CERN's methodology isn't highlighted as a link on the overview page, but it is clickable. As always, make sure you are fully versed in the details and restrictions by reading the available documentation and MOS notes. Happy migrating!
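    To make the "matter of minutes" point concrete: once the standby on the new platform has caught up, the cutover itself is the standard Data Guard switchover pair. A minimal sketch, run from SQL*Plus as SYSDBA - this is the generic 11g procedure, not CERN's exact steps:

        -- On the current primary (old platform):
        ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
        -- On the standby (new platform), after the primary's switchover completes:
        ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
        ALTER DATABASE OPEN;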

    Read the article

  • Double audio CD ripping weirdness

    - by jqno
    Since I installed Ubuntu 12.04, Rhythmbox, Banshee and Sound Juicer have started acting weird around double CDs, and specifically around CD #2 of a double CD. Sometimes they show the information for CD #1: track names, durations, and even the track count are incorrect. Sometimes they first show the tracks for CD #1, then continue onto CD #2 if CD #2 has more tracks than CD #1. Sound Juicer seems unable to find any track durations at all, even for single CDs. Obviously, this is a pain when I'm trying to rip double CDs, and I have a fair number of them that I want to rip. This happens on both my machines (a slightly aging iMac and a one-year-old Sony Vaio); on previous versions of Ubuntu this never happened, on the same machines. So I suspect 12.04 is using a different library for extracting audio CD data. Just for kicks, I tried Linux Mint 13, and there it works correctly, even though it claims to be based on Ubuntu 12.04 and should therefore be using (partially) the same software. So if the Mint guys can fix it, I should be able to do it too, right? So, my question: what changed in 12.04 that could cause this? And more importantly, what can I do to fix it?

    Read the article

  • Which one is better offline method for large scale application

    - by Manish Pansiniya
    We have a big data-management website used by several properties. Some of our customers have downtime (they can't access the net for an hour or two). We want our site to support offline data viewing and inventory management (typical data search and add/remove), so that when a user goes back online we can sync the changes to our central database. Customers use several platforms, such as Windows and iOS. We've been looking into several different options; here are the major choices:

        Develop an offline web app supported in HTML5, with a 'fallback' mechanism that interacts with data from the application cache, as explained in the HTMLGoodies introduction to offline web applications (http://www.htmlgoodies.com/html5/tutorials/introduction-to-offline-web- applications-using-).
        Develop a desktop-based cross-platform solution; I remember the old Mono, which has been popular. There are existing discussions on this ("What do you suggest for cross platform apps, including web" and "Technology choice for cross platform development (desktop and phone)?").

    I realize the desktop solution might be hard to maintain, could result in compatibility issues, and would demand test environments. Can HTML5 handle moderate-to-high complexity and data flow, or would it be better to rely on a desktop-based app for better scalability and performance?
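    For the HTML5 route, the 'fallback' mechanism in the first option revolves around a cache manifest. A minimal sketch with hypothetical file names; the page opts in with <html manifest="offline.appcache">:

        CACHE MANIFEST
        # v1 - bump this comment to force clients to refresh the cached files
        CACHE:
        app.js
        styles.css
        FALLBACK:
        / /offline.html
        NETWORK:
        *

    Files under CACHE: are stored for offline use; FALLBACK: serves offline.html for any page that can't be fetched; NETWORK: * lets everything else go to the network when it is available.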

    Read the article

  • Lost files after installing Ubuntu

    - by Joshua Rosato
    I installed Ubuntu on my laptop over Windows; I had two partitions on one hard disk. It seems like my second partition is gone, with all my files. How can I recover the old files? They weren't on the same partition as Windows. I read that the partition has probably just not been mounted, so I ran sudo fdisk -l to find all the partitions and then ran sudo mount. However, I can't tell from the results of sudo mount what is not mounted, and I am also unsure how to mount the partition once I find it.

    sudo fdisk -l results:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0002c6dc

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   486322175   243160064   83  Linux
        /dev/sda2       486324222   488396799     1036289    5  Extended
        /dev/sda5       486324224   488396799     1036288   82  Linux swap / Solaris

    sudo mount results:

        /dev/sda1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/cgroup type tmpfs (rw)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
        none on /sys/fs/pstore type pstore (rw)
        systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
        gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=joshy1)
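    Note that the fdisk output above lists only Linux partitions covering the whole disk, which suggests the installer repartitioned the drive rather than leaving a second partition unmounted. If so, mounting won't help, but recovery tools might. A hedged sketch - and stop writing to the disk first, since every write reduces the chance of recovery:

        sudo apt-get install testdisk
        sudo testdisk /dev/sda    # scan for the lost partition and restore the partition table
        sudo photorec /dev/sda    # or carve recoverable files out to another drive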

    Read the article

  • CEO Taken Captive in His Own Factory?

    - by Stephen Slade
    Last Friday was no ordinary day for Chip Starnes, the 42-year-old owner of the Specialty Medical Supplies factory in China. He recently announced the movement of some of the production of the company's diabetes-testing equipment from Beijing to Mumbai, India. Of the 110 employees at the facility, about 80 protested by blocking the doors and refusing to let him out; he has now been trapped in his office for several days. The employees thought the factory was closing, but Mr. Starnes said it was not. Misinformation? Poor communications? Work stoppage. This is a good example of supply chain disruption: parked cars blocking the entrance to the facility, front gates chained closed, the CEO a prisoner in his own factory. Chip Starnes was presented with documents to sign, in Chinese, which he did not understand, indicating he would pay severance and meet other demands, possibly bankrupting the company. If you depend on supply from China and other foreign suppliers, how reliable are your sources? For example, how are the shop-floor employee relations? Is it possible to predict these types of HR risks and plan around them? What are your contingencies? It's important to ask the right questions and hear good answers. Having tools in place to rapidly evaluate, assess and react to these disruptions is the key to survival. Hear how leading organizations are reinforcing their supply chains and mitigating risk through technology with Oracle's latest release of Oracle Supply Chain Management. Source: WSJ p. B1, June 25, 2013

    Read the article

  • Why would 70-persistent-net.rules have no effect?

    - by Wes Felter
    I've got a Saucy server with a lot of NICs, and they end up with weird names like "rename19". I know interface names can be changed by modifying the /etc/udev/rules.d/70-persistent-net.rules file. The first clue that something is wrong is that that file did not exist, even though it's supposed to be created automatically. So I decided to write my own, based on advice from Linux From Scratch:

        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.0", NAME="eth0"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.1", NAME="eth1"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.2", NAME="eth2"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:06:00.3", NAME="eth3"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.0", NAME="mezz0"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:0c:00.1", NAME="mezz1"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.0", NAME="slot1a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:1b:00.1", NAME="slot1b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.0", NAME="slot2a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:20:00.1", NAME="slot2b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.0", NAME="slot3a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:11:00.1", NAME="slot3b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.0", NAME="slot4a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:8b:00.1", NAME="slot4b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.0", NAME="slot5a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:90:00.1", NAME="slot5b"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.0", NAME="slot6a"
        ACTION=="add", SUBSYSTEM=="net", BUS=="pci", KERNELS=="0000:95:00.1", NAME="slot6b"

    (I'm matching on PCI IDs instead of MAC addresses because I have multiple identical machines that I want to apply this configuration to.) After rebooting, nothing has changed. It's like these rules aren't even being read. There's not much going on in dmesg either:

        $ dmesg | grep udev
        [    3.196629] systemd-udevd[323]: starting version 204
        [    6.719140] systemd-udevd[550]: starting version 204
        [   38.695050] init: udev-fallback-graphics main process (1658) terminated with status 1
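    One likely culprit: the BUS== key used in those Linux From Scratch-style examples was removed from udev well before version 204, so each rule is rejected at parse time and silently ignored. If I read the udev release notes correctly, the modern spelling uses SUBSYSTEMS== for the parent bus. A sketch of the first rule rewritten; the same change would apply to the rest:

        ACTION=="add", SUBSYSTEM=="net", SUBSYSTEMS=="pci", KERNELS=="0000:06:00.0", NAME="eth0"

    You can also test a rule against a device in place, without rebooting:

        sudo udevadm test /sys/class/net/rename19 2>&1 | less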

    Read the article

  • 2D platformers: why make the physics dependent on the framerate?

    - by Archagon
    "Super Meat Boy" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel-perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast. Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly [fps/60] as fast.) What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system-specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60fps, it'll break spectacularly. Why should the rendering step in any way influence the physics calculations? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old-school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a patformer in that vein without having the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine? Thank you, and sorry if the question was confusing.

    Read the article

  • Data Pump: Consistent Export?

    - by Mike Dietrich
    Ouch ... I have to admit it, as I did say in several workshops in the past weeks that a Data Pump export with expdp is per se consistent. Well ... I thought it was ... but it's not. Thanks to a customer who is doing a large Unicode migration at the moment: we were discussing parameters in expdp's par file, I asked my colleagues, and I did some research on MOS. Here are the results of my "research": MOS Note 377218.1 has a nice example showing that a Data Pump export of a partitioned table, with DELETEs running against that table, is inconsistent. Background: back in the old 9i days, when Data Pump was designed, flashback technology wasn't as popular and well known as it is today, and UNDO usage was the major concern - a consistent-by-default export would have relied heavily on UNDO. That's why - similar to good ol' exp - the export doesn't operate in consistent mode by default. To get a consistent Data Pump export with expdp, you'll have to set FLASHBACK_TIME=SYSTIMESTAMP in your parameter file. The export will then be consistent as of the timestamp when the process was started. You could use FLASHBACK_SCN instead, and determine the SCN beforehand, if you'd like to be exact. So sorry if I had proclaimed a feature which unfortunately is not there by default - Mike
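    For completeness, a minimal parameter-file sketch showing the setting in context - the schema, file and directory names here are made up, and FLASHBACK_TIME is the one line that matters:

        # consistent.par
        SCHEMAS=app_owner
        DIRECTORY=DATA_PUMP_DIR
        DUMPFILE=app_owner.dmp
        FLASHBACK_TIME=SYSTIMESTAMP

    Then run the export with: expdp system PARFILE=consistent.par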

    Read the article

  • Debugging Windows Service Timeout Error 1053

    - by Joe Mayo
    If you ever receive an Error 1053 for a timeout when starting a Windows Service you've written, the underlying problem might not have anything to do with a timeout. Here's the error, so you can compare it to what you're seeing:

        ---------------------------
        Services
        ---------------------------
        Windows could not start the Service1 service on Local Computer.

        Error 1053: The service did not respond to the start or control request in a timely fashion.
        ---------------------------
        OK
        ---------------------------

    Searching the Web for a solution to this error in your Windows Service will result in a lot of wasted time and advice that won't help. Sometimes you might get lucky if your problem is exactly the same as someone else's, but that isn't always the case. One of the solutions you'll see has to do with a known error on Windows Server 2003 that's fixed by a patch to the .NET Framework 1.1 - which won't work here; as I write this blog post, I'm using the .NET Framework 4.0, which is a tad bit past that timeframe. Most likely, the basic problem is that your service is throwing an exception that you aren't handling. To debug this, wrap your service initialization code in a try/catch block and log the exception, as shown below:

        using System;
        using System.Diagnostics;
        using System.ServiceProcess;

        namespace WindowsService
        {
            static class Program
            {
                static void Main()
                {
                    try
                    {
                        ServiceBase[] ServicesToRun;
                        ServicesToRun = new ServiceBase[]
                        {
                            new Service1()
                        };
                        ServiceBase.Run(ServicesToRun);
                    }
                    catch (Exception ex)
                    {
                        EventLog.WriteEntry("Application", ex.ToString(), EventLogEntryType.Error);
                    }
                }
            }
        }

    After you uninstall the old service, redeploy the service with these modifications, and receive the same error message as above, look in the Event Viewer's Application log. You'll see what the real problem is, which is the underlying reason for the timeout. Joe

    Read the article
