Search Results

Search found 29317 results on 1173 pages for 'device control'.

Page 688/1173 | < Previous Page | 684 685 686 687 688 689 690 691 692 693 694 695  | Next Page >

  • mediatomb fails with "respawning too fast, stopped"

    - by felix
    When I try to start mediatomb it fails. I see this in dmesg [...]:

        [916349.374331] init: mediatomb main process ended, respawning
        [916349.394462] init: mediatomb main process (880) terminated with status 1
        [916349.394512] init: mediatomb main process ended, respawning
        [916349.414598] init: mediatomb main process (882) terminated with status 1
        [916349.414647] init: mediatomb respawning too fast, stopped

    My current /etc/init/mediatomb.conf looks like this:

        description "MediaTomb UPnP media server"
        author "Daniel van Vugt <vanvugt in launchpad>"

        start on (local-filesystems and net-device-up IFACE!=lo)
        stop on runlevel [!2345]
        respawn

        env CONFIGXML=/etc/mediatomb/config.xml
        env LOGFILE=/var/log/mediatomb.log
        env DEFAULT=/etc/default/mediatomb

        script
            [ -r $DEFAULT ] && . $DEFAULT
            [ ! $USER ] && USER=root
            [ ! $GROUP ] && GROUP=$USER
            if [ -n "$INTERFACE" ]; then
                INTERFACE_ARG="-e $INTERFACE"
                $ROUTE_ADD $INTERFACE
            fi
            exec mediatomb \
                -c $CONFIGXML \
                -u $USER \
                -g $GROUP \
                -l $LOGFILE \
                $INTERFACE_ARG \
                $OPTIONS
        end script

        post-stop script
            [ -r $DEFAULT ] && . $DEFAULT
            if [ -n "$INTERFACE" ]; then
                $ROUTE_DEL $INTERFACE
            fi
        end script

    Read the article

  • Texture quality issues with libgdx

    - by user1876708
    I have drawn several vector objects and characters (in Adobe Illustrator) for my game on Android. They are all scalable to any size without any quality loss (of course, it's vector ^^). I tried to simulate my gameboard directly in Illustrator just before setting up my assets in libgdx to implement them in my game. I set all the objects at the right size, so that they fit perfectly on the XHDPI device I am running my test on. As you can see it works great (for me at least ^^); the PNG quality is good, as expected! So I exported all my PNGs at this size, set up my assets in libgdx and built my game APK. And here is a screenshot of my gameboard (don't pay attention to the differences in placement and objects, but look at the objects present on both screenshots). As you can see, I have a loss of PNG quality in the game. It can be seen clearly on the hedgehog PNG, but also (though not as obviously) on the mushroom (look at the outline) and the hole PNG. If you really pay attention, on every object you can see pixels that are not visible on my first screenshot. And I just can't figure out why this is happening. If you have any ideas, you are very welcome! Thanks. PS: You can compare the two gameboards more clearly via these two links (look at them at 100%, displayed at high resolution): Good quality link, from Illustrator - Poor quality link, from the game.

    Second phase of tests: We display an object (the hedgehog) on our main menu screen to see how it looks. The thing is that it looks the way it is supposed to, which means high quality with no visible pixels. The hedgehog PNG is coming from an atlas:

        layer.addActor(hedgehog);

    No loss of quality with this method. So we think the problem is coming from the method we are using to display it on our gameboard:

        blocks[9][3] = new Block(TextureUtils.hedgehog, new Vector2(9, 3));

    The block gets its size from the vector we are associating with it, but we have a loss of quality with this method.
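    A common cause of this kind of pixelation, worth ruling out first, is that the texture is drawn at a size other than its native resolution while still using libgdx's default nearest-neighbour filtering. A minimal sketch of enabling bilinear filtering (the class name FilterSetup and the file names "hedgehog.png" / "game.atlas" are made up for the example; the atlas loop mirrors whatever TextureUtils loads):

        import com.badlogic.gdx.ApplicationAdapter;
        import com.badlogic.gdx.graphics.Texture;
        import com.badlogic.gdx.graphics.g2d.TextureAtlas;

        public class FilterSetup extends ApplicationAdapter {
            private Texture hedgehog;
            private TextureAtlas atlas;

            @Override
            public void create() {
                // Standalone texture: bilinear filtering smooths scaling artifacts.
                hedgehog = new Texture("hedgehog.png");
                hedgehog.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);

                // Atlas-based textures: apply the same filter to every backing texture page.
                atlas = new TextureAtlas("game.atlas");
                for (Texture page : atlas.getTextures()) {
                    page.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
                }
            }
        }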

    Read the article

  • XNA - Error while rendering a texture to a 2D render target via SpriteBatch

    - by Jared B
    I've got this simple code that uses SpriteBatch to draw a texture onto a RenderTarget2D:

        private void drawScene(GameTime g)
        {
            GraphicsDevice.Clear(skyColor);
            GraphicsDevice.SetRenderTarget(targetScene);
            drawSunAndMoon();
            effect.Fog = true;
            GraphicsDevice.SetVertexBuffer(line);
            effect.MainEffect.CurrentTechnique.Passes[0].Apply();
            GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);
            GraphicsDevice.SetRenderTarget(null);
            SceneTexture = targetScene;
        }

        private void drawPostProcessing(GameTime g)
        {
            effect.SceneTexture = SceneTexture;
            GraphicsDevice.SetRenderTarget(targetBloom);
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, null, null, null);
            {
                if (Bloom)
                    effect.BlurEffect.CurrentTechnique.Passes[0].Apply();
                spriteBatch.Draw(
                    targetScene,
                    new Rectangle(0, 0, Window.ClientBounds.Width, Window.ClientBounds.Height),
                    Color.White);
            }
            spriteBatch.End();
            BloomTexture = targetBloom;
            GraphicsDevice.SetRenderTarget(null);
        }

    Both methods are called from my Draw(GameTime gameTime) function. First drawScene is called, then drawPostProcessing is called. The thing is, when I run this code I get an error on the spriteBatch.Draw call:

        The render target must not be set on the device when it is used as a texture.

    I already found the solution, which is to draw the actual render target (targetScene) to the texture so it doesn't create a reference to the loaded render target. However, to my knowledge, the only way of doing this is to write:

        GraphicsDevice.SetRenderTarget(outputTarget)
        SpriteBatch.Draw(inputTarget, ...)
        GraphicsDevice.SetRenderTarget(null)

    Which encounters the same exact problem I'm having right now. So, the question I'm asking is: how would I render inputTarget to outputTarget without reference issues?

    Read the article

  • Register for a free webcast presented by ISC2: Identity Auditing Techniques for Reducing Operational Risk and Internal Delays

    - by Darin Pendergraft
    Join us tomorrow, June 26 @ 10:00 am PST for Part 1 of a 3-part security series co-presented by ISC2. Part 1 will focus on identity auditing techniques and will be delivered by Neil Gandhi, Principal Product Manager at Oracle, and Brandon Dunlap, Managing Director at Brightfly. Register for Part 1: Identity Auditing Techniques for Reducing Operational Risk and Internal Delays ... Part 2 will focus on how mobile device access is changing the performance and workloads of IDM directory systems and will be delivered by Etienne Remillon, Senior Principal Product Manager at Oracle, and Brandon Dunlap, Managing Director at Brightfly. Register for Part 2: Optimizing Directory Architecture for Mobile Devices and Applications ... Finally, Part 3 will focus on what you need to do to support native mobile communications and security protocols and will be presented by Sid Mishra, Senior Principal Product Manager at Oracle, and Brandon Dunlap, Managing Director at Brightfly. Register for Part 3: Using New Design Patterns to Improve Mobile Access Control

    Read the article

  • Data Networks Visualized via Light Paintings [Video]

    - by ETC
    All around you are wireless data networks: cellular networks, Wi-Fi networks, a world of wireless communication. Check out this awesome video of network signals mapped over a cityscape. What would happen if you made a device that allowed you to map signal strength onto film? In the following video electronics tinkerers craft an LED meter and use it to paint onto long exposure photographs with phenomenal results. Immaterials: light painting Wi-Fi [via Make]

    Read the article

  • Taking web sites offline for demonstration

    While working in software development in general, and in web development for a couple of customers, it is quite common that you need to provide a test bed where the client can get an image, or better said, a feeling for the visions and ideas you are talking about. Usually here at IOS Indian Ocean Software Ltd. we set up a demo web site on one of our staging servers, and provide credentials to the customer to access and review our progress and work ad hoc. This gives us the highest flexibility on both sides, as the test bed is simply online and available 24/7. We can update the structure, the UI and data at any time, and the client is able to view it whenever it suits her/him best.

    Limited or no online connectivity

    But what is going to happen when your client is not able to be online - no matter for what reasons; here are some of the more obvious ones:

    - No internet connection (permanently or temporarily)
    - Expensive connection, i.e. mobile data package, stay at a hotel, etc.
    - Presentation devices at an exhibition, i.e. using tablets or iPads
    - Being abroad for a certain time, and only occasionally online
    - No network coverage, especially on mobile
    - Bad infrastructure, like in Third World countries
    - Providing a catalogue on CD or USB pen drive

    Anyway, it doesn't really matter. We should be able to provide a solution for the circumstances of our customers.

    Presentation during an exhibition

    Recently, we had the following request from a customer: "Is it possible to let us have a desktop version of ResortWork.co.uk that we can use for demo purposes at the forthcoming Ski Shows? It would allow us to let stand visitors browse the sites on an iPad to view jobs and training directory course listings." Yes, sure, we can do that. Eventually, you might wonder why they don't simply use 3G-enabled iPads for that purpose. As stated above, there might be several reasons for that - low coverage, expensive data packages, etc. Anyway, it is not a question of how to circumvent the request but of delivering a solution for it.

    Possible solutions... or not?

    We already did offline websites earlier, and even established complete mirrors of one or two web sites on our systems. There are actually several possibilities to handle this kind of request, and it mainly depends on the system or device the offline site should be available on. Here, it is clearly expressed that we have to address this on an Apple iPad - well actually, I think that they'd like to use multiple devices during their exhibitions. Following is an overview of possible solutions depending on the technology or device in use, and how it can be done:

    - Replication of source files and database: The above mentioned web site is running on ASP.NET, IIS and SQL Server. In case a laptop or slate runs a Windows OS, the easiest way would be to take a snapshot of the source files and database, and transfer them as a local installation to those Windows machines. This approach would be fully operational on the local machine.
    - Saving pages for offline usage: This is actually a quite tedious job but still practicable for small web sites.
    - Tool-based approach to 'harvest' the web site: There are quite some tools in the wild that could handle this job, namely wget, httrack, web copier, etc.
    - Screenshots bundled as a PDF document: Not really... ;-)
    - Creating a screencast or video: Simply navigate through your website and record your desktop session.
    Actually, we are using this kind of approach to track down difficult problems in order to see and understand exactly what the user was doing to cause an error. Of course, this list isn't complete and I'd love to get more of your ideas in the comments section below the article.

    Preparations for offline browsing

    The original website is dynamic and data-driven by ASP.NET, and looks like this: [screenshot of the original site]. As we have to put the result onto iPads we are going to choose the tool-based approach to 'download' the whole web site for offline usage. Again, depending on the complexity of your web site you might have to check which of the applications produces the best results for you. My usual choice is wget, but in this case we ran into problems related to the rewriting of hyperlinks. As a consequence we opted for HTTrack. HTTrack comes in different flavours: a console application, but also either a GUI (WinHTTrack on Windows) or a Web client (WebHTTrack on Linux/Unix/BSD). Here's a brief description taken from the original website about HTTrack:

    "HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the 'mirrored' website in your browser, and you can browse the site from link to link, as if you were viewing it online."

    And there is extensive documentation for all options and switches online. The general recommendation is to go through the HTTrack Users Guide by Fred Cohen. It covers all the initial steps you need to get up and running. Be aware that it will take quite some time to get all the necessary resources down to your machine. Actually, for our customer we ran the tool directly on their web server to avoid unnecessary traffic and bandwidth. After a couple of runs and some additional fine-tuning - explicit inclusion or exclusion of various externally linked web sites - we finally had a more or less complete offline version available. A very handsome feature of HTTrack is the error/warning log after completing the download. It contains detailed information about errors that appeared on the pages and the links within the pages that have been processed:

        Error: "Bad Request" (400) at link www.resortwork.co.uk/job-details_Ski_hire:tech_or_mgr_or_driver_37854.aspx (from www.resortwork.co.uk/Jobs_A_to_Z.aspx)
        Error: "Not Found" (404) at link www.247recruit.net/images/applynow.png (from www.247recruit.net/css/global.css)
        Error: "Not Found" (404) at link www.247recruit.net/activate.html (from www.247recruit.net/247recruit_tefl_jobs_network.html)

    In our situation, we took the records of HTTP 400/404 errors and passed them to the web development department. Improvements are to be expected soon. ;-)

    Quality assurance on the full-featured desktop

    Unfortunately, the generated output of HTTrack was still incomplete, but luckily only images were missing. Being directly on the web server we simply copied the missing images from the original source folder into our offline version. After that, we created an archive and transferred the file securely to our local workspace for further review and checks.
    From that point on, it wasn't necessary to get any more files from the original web server, and we could focus completely on the process of browsing and navigating through the offline version to isolate visual differences and functional problems. As said, the original web site runs on ASP.NET Web Forms and uses postback calls for interaction like search, pagination and partly for navigation. This is the main area for improving the offline experience. Of course, same as for standard web development, it is advisable to test with various browsers, and strangely we discovered that the offline version looked pretty good in Firefox, Chrome and Safari, but not in Internet Explorer. A quick look at the HTML source shed some light on this: there are conditional CSS inclusions based on the user agent. HTTrack does not act as Internet Explorer, so we didn't have the necessary overrides for this browser. Not problematic after all in our case, but you might have to pay attention to this and get the IE-specific files explicitly. And while having a look at the source code, we also found out that HTTrack actually modifies the generated HTML output. On several occasions we discovered that <div> elements were converted into <table> constructs for no obvious reason; even nested structures.

    Search 'e'nd destroy - sed (or Notepad++) to the rescue

    During our intensive root cause analysis of a couple of HTML/CSS problems that needed some extra attention, it is very helpful to be familiar with an editor that allows search and replace over multiple files, like sed - the stream editor for filtering and transforming text on Linux - or my personal favourite, Notepad++ on Windows. This allowed us to quickly fix a lot of anchors with onclick attributes and JavaScript code that still pointed to ASP.NET files instead of their generated HTML counterparts, like so:

        grep -lr -e '.aspx' * | xargs sed -i -e 's/.aspx/.html?/g'

    The additional question mark after the HTML extension helps to separate the query string from the actual target and solved all our missing hyperlinks very fast. The same can be done in Notepad++ on Windows, too. Just use the 'Replace in files' feature and you are settled - especially in combination with regular expressions (regex).

    Landscape of browsers

    Okay, after several runs of HTML/CSS code analysis, searching and replacing some strings in a pool of more than 4,000 files, we finally had a very good match of an offline browsing experience in Firefox and Chrome on Linux. Next, we transferred that modified set of files to a Windows 8 machine for review in Firefox, Chrome and Internet Explorer 7 to 10, and a Mac mini running Mac OS X 10.7 to check the output in Safari and again in Chrome. Besides IE, for the reasons already mentioned above, the results were identical. And last but not least it was time to check the web site on tablets. Please continue reading in the following articles:

    - Taking web sites offline for demonstration on Galaxy Tablet
    - Taking web sites offline for demonstration on iPad

    Read the article

  • Test a simple multi-player (up to four players) Android game on a single developer machine

    - by Kush
    I'm working on a multi-player Android game (very simple - it doesn't use any game engine). The game is based on Java sockets. Four devices connect to the game server and a new thread manages their session. The game server will serve many such sessions (having 4 players each). What I'm worried about is the testing of this game. I know it is possible to run multiple Android emulators, but my development laptop is very limited in capabilities (3 GB RAM, 2 GHz Intel Core2Duo and on-board graphics). I'm already using Ubuntu to develop the game so that I have more user memory available than I'd have with Windows. Hence, the laptop will burn to death running 4 emulator instances. I don't have access to any Android device, nor do I have another machine with a higher configuration. And I still have to develop and test this game. P.S.: I'm a CS student and currently don't work anywhere, and this game is a college project, so if there are any paid solutions, I cannot afford them. What can I do to test the app seamlessly? The ability to test even only 4 clients (i.e. only 1 session) would suffice; it's alright if I can't simulate a real environment with some 10-20 active game sessions (having 4 players each).
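    Since the clients talk to the server over plain Java sockets, one way to exercise a full four-player session without running a single emulator is a headless test harness that opens four socket connections from the development machine. A minimal sketch - the host, port and the line-based "JOIN" message are assumptions to be replaced by the game's actual protocol:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.Socket;

        public class FakePlayers {
            public static void main(String[] args) throws InterruptedException {
                for (int i = 0; i < 4; i++) {
                    final int player = i;
                    new Thread(() -> {
                        try (Socket socket = new Socket("localhost", 5000);
                             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                             BufferedReader in = new BufferedReader(
                                     new InputStreamReader(socket.getInputStream()))) {
                            out.println("JOIN player" + player);   // hypothetical join message
                            String line;
                            while ((line = in.readLine()) != null) {
                                System.out.println("player" + player + " <- " + line);
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }).start();
                }
                Thread.sleep(60_000); // keep the harness alive for a minute of play
            }
        }

    Increasing the loop count would also let the server be exercised with several concurrent sessions, at a fraction of the memory a single emulator needs.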

    Read the article

  • different data with same title and keywords

    - by Junaid Saeed
    Here is my scenario: I have a website where I redirect my users based on the device they are using. Let's say a user is visiting from an iPad: I take him directly to the page of iPad wallpapers, the user selects the iPad version, and I take the user to the gallery of wallpapers where the user can select and download any wallpaper. Every wallpaper is at the required resolution; I have my reasons for doing this. Now the thing is, there are different resolution versions of an image appearing in 5 different sections of my website, each having their own view page. There is only one record in the database table for the image, and based on my consistent naming convention for the images, I pick the required one. This means that when 5 different pages are generated in 5 categorized sections of the website, due to the shared DB record, the keywords, the titles and every single detail of the 5 pages are the same, besides the resolution of the image and the section-specific details that the page has. And yes, the pages also have different paths, like:

        wallpapers.com\ipad-1\cars\Ferrari-dino.html
        wallpapers.com\ipad-2\cars\Ferrari-dino.html
        wallpapers.com\ipad-3\cars\Ferrari-dino.html
        wallpapers.com\ipad-4\cars\Ferrari-dino.html
        wallpapers.com\ipad-5\cars\Ferrari-dino.html

    Now this is my scenario. How do search engines see it and how do they rank it? Is it a good, normal or bad SEO practice? If bad, how dangerous is it for my site's SEO? I need your comments on my scenario.

    Read the article

  • External USB Drive

    - by ErocM
    I have a server that I hooked up an external USB drive to. It was formatted in Windows and has files on it already. I'm new to Ubuntu so please be patient... I have two questions: Will Ubuntu see the drive since it was formatted in Windows? How do I mount this drive or rather, how do I know it's seen by Ubuntu? Thanks! I did fdisk -l and this is what I have, but I don't see it. It's a 1 TB drive:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0001eb47

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      499711      248832   83  Linux
        /dev/sda2          501758   625141759   312320001    5  Extended
        /dev/sda5          501760   625141759   312320000   8e  Linux LVM

        Disk /dev/sdc: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sdc doesn't contain a valid partition table

        Disk /dev/sdb: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sdb doesn't contain a valid partition table

        Disk /dev/mapper/ubuntu-root: 316.6 GB, 316577677312 bytes
        255 heads, 63 sectors/track, 38488 cylinders, total 618315776 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/mapper/ubuntu-root doesn't contain a valid partition table

        Disk /dev/mapper/ubuntu-swap_1: 3217 MB, 3217031168 bytes
        255 heads, 63 sectors/track, 391 cylinders, total 6283264 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

    This is an external USB hard drive, not a thumb drive :)

    Read the article

  • Azure Florida Association

    - by Dave Noderer
    Herve Roggero, SQL Azure MVP, has created a virtual community to focus on Azure. Here is the outline from Herve:

    User Group Name: Azure Florida Association
    Purpose: Start a virtual Florida user group that targets the Azure platform
    Venues: Most meetings will be virtual; however I plan to host a few physical events across Florida if possible from time to time; physical events may be a few hours long with potentially more than one speaker
    Possible Topics: The topics will touch Azure generally speaking, but can cover a wide array of concerns such as integration, data migration, hosting, security, scalability, mobile device integration, successful ventures/lessons learned, cross-cloud integration patterns, testing in the cloud, deployment management, reporting...
    Target Members: Architects, Developers, IT Managers
    Membership: Membership will be free; virtual events will be free; physical events may involve a minimal cover charge
    Speakers: If you are interested in speaking or if you have topic ideas, please let me know
    Frequency: Initially these meetings will be held every other month

    The first meeting will be held on January 25, 2012 at 4PM EST. Vikas Sahni, SQL Azure MVP, will be presenting on Demystifying SQL Azure. Vikas will introduce SQL Azure, its value proposition, usage scenarios, concepts and architecture, what is there and what is not, including tips and tricks. The actual meeting link will be available in January, but please join the LinkedIn group now to be kept informed of this and future events: http://www.linkedin.com/groups?gid=4177626.

    Read the article

  • 2D graphics - why use spritesheets?

    - by Columbo
    I have seen many examples of how to render sprites from a spritesheet, but I haven't grasped why it is the most common way of dealing with sprites in 2D games. I have started out with 2D sprite rendering in the few demo applications I've made by treating each animation frame for any given sprite type as its own texture - and this collection of textures is stored in a dictionary. This seems to work for me, and suits my workflow pretty well, as I tend to make my animations as gif/mng files and then extract the frames to individual PNGs. Is there a noticeable performance advantage to rendering from a single sheet rather than from individual textures? With modern hardware that is capable of drawing millions of polygons to the screen a hundred times a second, does it even matter for my 2D games, which just deal with a few dozen 50x100px rectangles? The implementation details of loading a texture into graphics memory and displaying it in XNA seem pretty abstracted. All I know is that textures are bound to the graphics device when they are loaded, then during the game loop the textures get rendered in batches. So it's not clear to me whether my choice affects performance. I suspect that there are some very good reasons most 2D game developers seem to be using them; I just don't understand why.
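    The usual reason is draw-call batching: a sprite batch can only merge consecutive draws that share the same underlying texture, so frames packed into one sheet are drawn as regions of a single texture and never force a flush, while frames stored as separate textures can force a flush (an extra draw call) on every texture switch. The question above is about XNA, but the mechanism is the same in other 2D frameworks; a libgdx sketch is used here purely as an illustration (file and region names are made up):

        import com.badlogic.gdx.graphics.Texture;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;
        import com.badlogic.gdx.graphics.g2d.TextureAtlas;
        import com.badlogic.gdx.graphics.g2d.TextureRegion;

        public class SheetVsTextures {
            // Individual textures: each draw can hit a different texture,
            // so the batch has to flush between them (one draw call per sprite).
            void drawFromSeparateTextures(SpriteBatch batch, Texture walk0, Texture walk1) {
                batch.draw(walk0, 0, 0);
                batch.draw(walk1, 64, 0);   // texture switch -> implicit flush
            }

            // Spritesheet: both frames are regions of the same backing texture,
            // so the batch can submit them together in a single draw call.
            void drawFromAtlas(SpriteBatch batch, TextureAtlas atlas) {
                TextureRegion frame0 = atlas.findRegion("walk", 0);
                TextureRegion frame1 = atlas.findRegion("walk", 1);
                batch.draw(frame0, 0, 0);
                batch.draw(frame1, 64, 0);  // same texture -> no flush
            }
        }

    For a few dozen small sprites the difference is unlikely to be measurable, which is probably why the dictionary-of-textures approach has worked fine so far.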

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-13

    - by Bob Rhubart
    TGIF!

    Mobile Commerce and Engagement Stats | @digbymobile (www.digby.com) - Solution architects take note: mobile is shaping your future.
    OTN Architect Day - Reston, VA - May 16 (www.oracle.com) - The live one-day event in Reston, VA brings together architects from a broad range of disciplines and domains to share insights and expertise in the use of Oracle technologies to meet the challenges today's solution architects regularly face. Registration is free, but seating is limited.
    BPEL 11.1.1.6 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations | Steven Chan (blogs.oracle.com) - A load of links and useful information from Steven Chan.
    OTN: There's an App for That (blogs.oracle.com) - Get your OTN developer community content on the go with this free app for your mobile device.
    Five Best Practices for Going Mobile | John Brunswick (blogs.oracle.com) - John Brunswick offers some strategic considerations for delivering products, services, and information to mobile constituents.
    Why My Slime Mold is Better than Your Hadoop Cluster | Todd Hoff (highscalability.com) - What architects can learn from naturally occurring, self-propelled goop.
    ADF version of "Modern" dialog windows | Martin Deh (blogs.oracle.com) - Martin Deh describes how to use OOTB ADF components and CSS3 style elements to create iOS-style UI elements.
    Perfect fit: The cloud and SOA -- but don't call it that | David Linthicum (www.infoworld.com) - "The fact of the matter," says David Linthicum, "is that the best and most effective way to move to the cloud for an enterprise whose technology platforms reflect decades of enterprise IT neglect is to use SOA as an approach and process. Just don't call it 'SOA.'"

    Thought for the Day: "There are two major products that come out of Berkeley: LSD and UNIX. We don't believe this to be a coincidence." — Jeremy S. Anderson

    Read the article

  • Black screen appears when booting new install of 11.10 on my desktop, cannot access Grub menu to fix

    - by Cee
    I installed 11.10 on my desktop PC but get a black screen after the BIOS screen when I try to boot it. I was able to run 10.04.04 on my hard drive before installing 11.10, and I am also able to use 11.10 on my USB pendrive and CD-ROM. I've tried unplugging all USB devices before booting and also upgrading from 11.10 to 11.10. Holding the Shift key from the BIOS screen doesn't allow me to access the GRUB menu to try:

    - Highlight the first entry, press "e" to edit it.
    - Navigate to the words "quiet splash", delete them and type "nomodeset" in their place (without quotes).
    - Press Ctrl + X to continue boot.
    - Once on the desktop, go to System > Administration > Additional Drivers and activate the recommended drivers.

    So, running 11.10 on my pendrive, I tried editing /etc/default/grub, commenting out the GRUB_HIDDEN_TIMEOUT setting by putting a '#' in front of it to display the grub menu, and setting GRUB_TIMEOUT to a value greater than or equal to 1, e.g. GRUB_TIMEOUT=10. However, when I run sudo update-grub, I get:

        /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?)

    I get the same error with update-grub after:

        sudo mount /dev/sda1 /mnt

    and after:

        sudo grub-install --root-directory=/mnt /dev/sda
        reboot
        sudo update-grub

    Other suggestions to fix the update-grub problem:

    - Open Synaptic, then purge all the related grub installed packages, reinstall grub-pc, and finally: sudo update-grub
    - Or use Grub Customizer (http://ubuntuforums.org/showthread.php?t=1195275)

    What would be the best way to approach this? I'm concerned about purging "all the related grub installed packages", but if it's true some files are corrupted this would seem necessary. Also, was I executing the correct commands, i.e. with mount and grub-install, before running update-grub?

    Read the article

  • Keeping Screen Aspect Ratio While Staying in Center

    - by David Dimalanta
    I saw and tried this suggestion on PISTACHIO BRAINSTORMIN* on how to make a good, adaptive screen ratio. For every different screen size, let's say I put a perfect circle as a Texture in libgdx and played it on screen. Here's the blueberry image example, and it's perfectly round: When I played it on the Google Nexus 7, the circle turned into a slightly oblong shape, as if it had been flattened a bit. Please observe the snapshot below and you can see the blueberry is almost, but not quite, perfectly round: Now, when I tried the suggested code for the aspect ratio, the perfect circle was retained, but another problem occurred. The problem is that instead of a centered view, the view is moved to the right, leaving half of the screen black. It looks like this: Here is my code using the suggested screen aspect ratio approach.

    Class fields:

        // Ingredients Needed for Screen Aspect Ratio
        private static final int VIRTUAL_WIDTH = 720;
        private static final int VIRTUAL_HEIGHT = 1280;
        private static final float ASPECT_RATIO = ((float) VIRTUAL_WIDTH)/((float) VIRTUAL_HEIGHT);
        private Camera Mother_Camera;
        private Rectangle Viewport;

    render():

        // Camera updating...
        Mother_Camera.update();
        Mother_Camera.apply(Gdx.gl10);
        // Reseting viewport...
        Gdx.gl.glViewport((int) Viewport.x, (int) Viewport.y, (int) Viewport.width, (int) Viewport.height);
        // Clear previous frame.
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

    show():

        Mother_Camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);

    Was this code useful for screen aspect-ratio fixing, or is it statically dependent on the actual device's width and height?

    *see http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/#comment-317
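    The half-black screen is typically what happens when the Viewport rectangle is never recomputed and re-centred for the actual device size, so glViewport receives an offset or size that doesn't match the screen. The approach referenced above recomputes the viewport in resize(); a minimal sketch of that calculation follows (the class and method names are made up, the constants mirror the fields above):

        import com.badlogic.gdx.math.Rectangle;

        public class ViewportHelper {
            // Virtual resolution from the snippet above.
            private static final int VIRTUAL_WIDTH = 720;
            private static final int VIRTUAL_HEIGHT = 1280;
            private static final float ASPECT_RATIO = (float) VIRTUAL_WIDTH / (float) VIRTUAL_HEIGHT;

            // Call this from resize(width, height) and store the result in the Viewport field.
            public static Rectangle computeViewport(int width, int height) {
                float screenRatio = (float) width / (float) height;
                float scale;
                float cropX = 0f;
                float cropY = 0f;

                if (screenRatio > ASPECT_RATIO) {
                    // Screen is wider than the virtual screen: black bars left and right, view centred.
                    scale = (float) height / (float) VIRTUAL_HEIGHT;
                    cropX = (width - VIRTUAL_WIDTH * scale) / 2f;
                } else if (screenRatio < ASPECT_RATIO) {
                    // Screen is taller than the virtual screen: black bars top and bottom, view centred.
                    scale = (float) width / (float) VIRTUAL_WIDTH;
                    cropY = (height - VIRTUAL_HEIGHT * scale) / 2f;
                } else {
                    scale = (float) width / (float) VIRTUAL_WIDTH;
                }

                return new Rectangle(cropX, cropY, VIRTUAL_WIDTH * scale, VIRTUAL_HEIGHT * scale);
            }
        }

    With the crop offsets applied, the scaled virtual screen is centred on the device instead of being pushed to one side.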

    Read the article

  • Cannot run update due to a dpkg error with burg-theme-minimal-sir

    - by boywithaxe
    I cannot run an update, or indeed run apt-get remove, due to a dpkg error with a package that's part of super-boot-manager. Running an update returns:

        dpkg: error processing burg-theme-minimal-sir (--configure):
         subprocess installed post-installation script returned error exit status 1

    I tried removing this package alone, with the same error; trying to remove super-boot-manager returns:

        (Reading database ... 225474 files and directories currently installed.)
        Removing burg-theme-minimal-sir ...
        Generating burg.cfg ...
        /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified.
        Try `/usr/sbin/burg-probe --help' for more information.
        dpkg: error processing burg-theme-minimal-sir (--remove):
         subprocess installed post-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Removing super-boot-manager ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Processing triggers for hicolor-icon-theme ...
        Errors were encountered while processing:
         burg-theme-minimal-sir
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I'm sort of stuck now and Google has failed me. Has anyone encountered this problem before? Or does anyone know a way of fixing this?

    Read the article

  • Acceptable sound quality: stereo needed for an Android game?

    - by Thomas Calc
    I have various simple short sound effects (damage sound, dying sound, thunderbolt, fanfare, breaking) for a game that is currently being developed for Android. I use OGG files: 96 kbps VBR, 44.1 kHz, 2 channels (that means stereo, right?). I read the other Stack Exchange topics about "acceptable sound quality", but they're too general and address too many things. My experience is that even at 80 kbps my effects sound OK. But I tested on a limited number of Android devices (including a Sony Ericsson Xperia Neo and an HTC Desire HD). My questions: For mobile phones and tablets, generally, what parameters are recommended? Won't my 80 kbps sounds be bad on a newer device (such as a modern tablet)? I don't hear any difference between stereo and mono (2 channels vs. 1 channel, right?) - is there any noticeable difference at all for mobile phones / tablets, in terms of the player experience? Is it worth it at all? I assume that stereo sounds take up much more memory (when they're decoded to PCM), despite the fact that the compressed OGG size is practically the same.

    Reacting to Roy T.'s great comment: Actually, I couldn't measure the PCM size (Android decodes OGG internally), but I thought that stereo would take more space than mono when uncompressed. After throwing out one of the WAV channels in Audacity and re-exporting it: the new WAV file size is half of what it was before, while the OGG file size is practically the same as before. The sound effects and game music were recorded by my friend, who is an experienced hobby musician/composer, but he knows little about computers and software, so he just gave me some high-quality WAV files generated via his hardware. These were stereo, but if I check them in Audacity, both channels appear to be exactly the same. Can I consider them the same (i.e. move to mono), or might there be some differences that are unnoticeable to the human eye?
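    One way to reason about the memory side of the question: the decoded footprint is determined by sample rate x channels x bytes per sample x duration, independently of how small the compressed OGG file is. A small back-of-the-envelope sketch (the 1.5-second duration is just an assumed example length for a short effect):

        public class PcmSizeEstimate {
            // Decoded PCM size = sampleRate * channels * bytesPerSample * seconds.
            static long pcmBytes(int sampleRate, int channels, int bytesPerSample, double seconds) {
                return (long) (sampleRate * channels * bytesPerSample * seconds);
            }

            public static void main(String[] args) {
                double seconds = 1.5; // assumed length of a short effect
                long mono   = pcmBytes(44_100, 1, 2, seconds);
                long stereo = pcmBytes(44_100, 2, 2, seconds);
                System.out.printf("mono:   %,d bytes (~%.0f KiB)%n", mono, mono / 1024.0);
                System.out.printf("stereo: %,d bytes (~%.0f KiB)%n", stereo, stereo / 1024.0);
                // Stereo is exactly twice the decoded footprint, even though the OGG
                // files on disk can end up roughly the same size after compression.
            }
        }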

    Read the article

  • Record 8 separate Line IN Channels from M-Audio Delta 1010 Card

    - by Peter Hoffmann
    I want to record the 8 separate Line IN channels of my M-Audio Delta 1010 card. The card is recognized nicely and I can record a single channel via:

        arecord -d 10 -f cd -t wav -D channel1 out2.wav

    I've set up the different channels in ~/.asoundrc. Now if I want to record a second channel in parallel (arecord -d 10 -f cd -t wav -D channel2 out2.wav) I get the error:

        arecord: main:564: audio open error: Device or resource busy

    As I understand it, the Delta 1010 is a single-access card, so only one application can access it at a time. Is this correct? The next step was to configure a dual-channel input in .asoundrc:

        # envy24 channel 1+2 only
        pcm.test {
            type plug
            ttable.0.0 1
            ttable.0.1 1
            slave.pcm ice1712
        }

    which works OK when I do:

        arecord -d 10 -f cd -t wav -D test -c 2 out.wav

    (BTW, can anyone point me to a tool to split a multi-channel wav into one file per channel?) But when I want to record the channels separately (with the -I option):

        arecord -d 10 -f cd -t wav -D test -c 2 -I channel1.wav channel2.wav

    I get no recordings. Did I miss something in the configuration, or what are my options to record all 8 channels via arecord? I have no experience with jackd. Is it an option to install jackd and record the line-ins via jackd?

    Read the article

  • How do I connect to my running VM via virsh?

    - by Avery Chan
    My VM has already been started via virsh start chameleon.ootbdev. When I do a virsh console chameleon.ootbdev I get the following output:

        Connected to domain chameleon.ootbdev
        Escape character is ^]
        error: internal error cannot find character device (null)

    Doing a Google search on this led me to this "solution". Unfortunately, editing the domain via virsh edit chameleon.ootbdev doesn't seem to stick. I suspect the issue is that I'm inserting the XML incorrectly: the instructions from the link ask me to insert the following XML into the domain XML file.

        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>

    I've posted my domain XML file to pastebin here. This is AFTER I've tried to insert the above XML. I inserted this XML after the </devices> block. My primary question is: How do I connect to the running VM? A secondary question would be: How do I edit the domain file with the above XML and get the changes to stick?

    Read the article

  • Wireless not working on desktop with Asus USB-N13 (B1) wireless adapter

    - by user900749
    I am trying to connect my desktop to a wireless network. I have purchased an ASUS USB-N13 B1 adapter. I have followed instructions for installing drivers and disabling conflicting drivers. I have thoroughly searched and could not find a solution. The adapter is recognized and powered on. I have entered the SSID and password information into the wireless network configuration. Other machines can connect to this wireless network, and the machine can connect online via ethernet without issue. Here is the output of some commands which summarize my configuration, and might give some clues:

        ~$ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.1 LTS"

        ~$ uname -a
        Linux petra 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        ~$ lsusb
        Bus 001 Device 006: ID 0b05:17ab ASUSTek Computer, Inc.

        ~$ dmesg
        [ 1883.823150] wlan0: authenticate with 48:5b:39:e7:25:5e (try 1)
        [ 1884.020027] wlan0: authenticate with 48:5b:39:e7:25:5e (try 2)
        [ 1884.220025] wlan0: authenticate with 48:5b:39:e7:25:5e (try 3)
        [ 1884.420023] wlan0: authentication with 48:5b:39:e7:25:5e timed out

    Any assistance would be appreciated, as I have been trying to get this machine online for several weeks now to no avail. Sincerely, Michael.

    Read the article

  • Copying files to zfs mountpoint doesn't work - the files aren't actually copied to the other filesystem

    - by user113904
    I have 3 x 4 TB disks in a NAS that I want to group together and access as if they were one whole 'unit' of some kind. I also have a 250 GB disk containing the OS - this is full of films and TV shows currently. I thought zfs sounded good, so after installing the ppa I created a raidz zpool:

        sudo zpool create store raidz /dev/sdb /dev/sdc /dev/sdd

    and set the mountpoint to /mnt/store:

        sudo zfs set mountpoint=/mnt/store /store

    I checked it was successful - I think it was:

        sudo zfs list
        NAME    USED  AVAIL  REFER  MOUNTPOINT
        store   266K  7.16T   170K  /mnt/store

    Then I wanted to move over a whole load of files from my home directory. I went to where the to-be-copied folder was (called media) and entered:

        sudo cp -R * /mnt/store
        cp: cannot create directory `/mnt/store/media': No space left on device

    It seems like it's not copying over to the new filesystem I made (or thought I did). I've never really done this type of thing until a few days ago, so I may be running before I can walk... is this not the right way to copy files across? I've only used Windows before, so the idea of mountpoints is a bit mind-boggling. I'm using XBMCbuntu 12 beta 2.0, which is based on 12.04. I will retry with normal Ubuntu 12.04 desktop to see if that's the problem. Thanks for the help!

    Read the article

  • Netbook performs hard shutdown without warning on low battery power

    - by Steve Kroon
    My Asus EEE netbook performs a hard shutdown when it reaches low battery power, without giving any warning - i.e. the power just goes off, without any shutdown process. I can't find anything in the syslog, and no error messages are printed before it happens. I've had this problem on previous (K)Ubuntu versions, and hoped updating to Ubuntu Precise would help resolve the issue, but it hasn't. The option in the Power application for "when power is critically low" is currently blank - the only options are a (grayed-out) hibernate and "Power off". I have re-installed indicator-power to no effect. The time remaining reported by acpi is unstable, as is the time remaining reported by gnome-power-statistics. (For example, running acpi twice in succession, I got 2h16min, and then 3h21min remaining. These sorts of jumps in the remaining time are also in the gnome-power-statistics graphs.) It might be possible to write a script to give me advance warning (as per @RanRag's comment below), but I would prefer to isolate why I don't get a critical battery notification from the system before this happens, so that I can take action as appropriate (suspend/shutdown/plug in power) when I get a notification. Some additional information on the battery:

        kroon@minia:~$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
          native-path:          /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/PNP0C0A:00/power_supply/BAT0
          vendor:               ASUS
          model:                1005P
          power supply:         yes
          updated:              Fri Aug 17 07:31:23 2012 (9 seconds ago)
          has history:          yes
          has statistics:       yes
          battery
            present:             yes
            rechargeable:        yes
            state:               charging
            energy:              33.966 Wh
            energy-empty:        0 Wh
            energy-full:         34.9272 Wh
            energy-full-design:  47.52 Wh
            energy-rate:         3.7692 W
            voltage:             12.61 V
            time to full:        15.3 minutes
            percentage:          97.248%
            capacity:            73.5%
            technology:          lithium-ion
          History (charge):
            1345181483  97.248  charging
            1345181453  97.155  charging
            1345181423  97.062  charging
            1345181393  96.970  charging
          History (rate):
            1345181483  3.769   charging
            1345181453  3.899   charging
            1345181423  4.061   charging
            1345181393  4.201   charging

        kroon@minia:~$ cat /proc/acpi/battery/BAT0/state
        present:                 yes
        capacity state:          ok
        charging state:          charging
        present rate:            332 mA
        remaining capacity:      3149 mAh
        present voltage:         12612 mV

        kroon@minia:~$ cat /proc/acpi/battery/BAT0/info
        present:                 yes
        design capacity:         4400 mAh
        last full capacity:      3209 mAh
        battery technology:      rechargeable
        design voltage:          10800 mV
        design capacity warning: 10 mAh
        design capacity low:     5 mAh
        cycle count:             0
        capacity granularity 1:  44 mAh
        capacity granularity 2:  44 mAh
        model number:            1005P
        serial number:
        battery type:            LION
        OEM info:                ASUS

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ are __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number where the macro is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expandable macro but an attribute, yet it serves exactly the same purpose. Now we get:

        CallerLineNumberAttribute  == __LINE__
        CallerFilePathAttribute    == __FILE__
        CallerMemberNameAttribute  == __FUNCTION__ (MSVC extension)

    The most important one is CallerMemberNameAttribute, which is very useful to implement the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and the property name is inserted as a string by the C# compiler at compile time.

        public string UserName
        {
            get { return _userName; }
            set
            {
                _userName = value;
                RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")!
            }
        }

        protected void RaisePropertyChanged([CallerMemberName] string member = "")
        {
            var copy = PropertyChanged;
            if (copy != null)
            {
                copy(this, new PropertyChangedEventArgs(member));
            }
        }

    Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose this feature for tracing, to get your hands on the method name of your caller, along with other information, very fast. All infos are added at compile time, which is much faster than other approaches like walking the stack. MSDN shows the usage of these attributes with an example:

        public static void TraceMessage(string message,
                                        [CallerMemberName] string memberName = "",
                                        [CallerFilePath] string sourceFilePath = "",
                                        [CallerLineNumber] int sourceLineNumber = 0)
        {
            Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
        }

    When I think of tracing I usually want an API which allows me to:

    - Trace method enter and leave
    - Trace messages with a severity like Info, Warning, Error

    When I print a trace message it is very useful to print out the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one stack frame and fetching the infos from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
    A usable trace API might therefore look like:

        enum TraceTypes
        {
            None = 0,
            EnterLeave = 1 << 0,
            Info = 1 << 1,
            Warn = 1 << 2,
            Error = 1 << 3
        }

        class Tracer : IDisposable
        {
            string Type;
            string Method;

            public Tracer(string type, string method)
            {
                Type = type;
                Method = method;
                if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
                {
                }
            }

            private bool IsEnabled(TraceTypes traceTypes, string Type, string Method)
            {
                // Do checking here if tracing is enabled
                return false;
            }

            public void Info(string fmt, params object[] args) { }
            public void Warn(string fmt, params object[] args) { }
            public void Error(string fmt, params object[] args) { }

            public static void Info(string type, string method, string fmt, params object[] args) { }
            public static void Warn(string type, string method, string fmt, params object[] args) { }
            public static void Error(string type, string method, string fmt, params object[] args) { }

            public void Dispose()
            {
                // trace method leave
            }
        }

    This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(... string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string, we would need to make them optional as well, which would mean the compiler would have to figure out what our trace message and arguments are (not likely), or we would need to specify everything explicitly just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure whether nobody inside MS agrees that the above API is reasonable to have, or (more likely) whether the whole talk about using this feature for diagnostic purposes was not a core feature at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params keyword, another set of optional arguments which are always filled by the compiler, but I do not know whether this is an easy one. The thing I am missing much more is the not-provided CallerType attribute, but not in the way you would think. In the API above I added some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check whether tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which basically allows me to attach run-time data to any existing type in a super fast way. The key to success is the usage of generics.

        class Tracer<T> : IDisposable
        {
            string Method;

            public Tracer(string method)
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                }
            }

            public void Dispose()
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                }
            }

            public static void Info(string fmt, params object[] args) { }

            /// <summary>
            /// Every type gets its own instance with a fresh set of variables to describe the
            /// current filter status.
            /// </summary>
            /// <typeparam name="UsingType"></typeparam>
            internal class TraceData<UsingType>
            {
                internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

                // Flag if we need to reinit the trace data in case of reconfigured trace settings at runtime
                public bool IsInitialized = false;

                // Enabled trace levels for this type
                public TraceTypes Enabled = TraceTypes.None;
            }
        }

    We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a TraceData static instance which, due to the nature of generics, is a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter performance for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list, you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type, TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead. But it is cumbersome to always write the generic type argument explicitly, and worse, if you refactor code and move parts of it to other classes, it might be that you can no longer configure tracing correctly. I would therefore like to decorate my type with an attribute

        [CallerType]
        class Tracer<T> : IDisposable

    to tell the compiler to fill in the generic type argument automatically:

        class Program
        {
            static void Main(string[] args)
            {
                using (var t = new Tracer()) // equivalent to new Tracer<Program>()
                {
                }
            }
        }

    That would be really useful and super fast, since you do not need to pass any type object around but you do have full type infos at hand. This change would be breaking if another non-generic type exists in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since you can already get conflicts today if two generic types of the same name are defined in different namespaces; this would only be a variation of that issue. When you think about this further, you can add more features, like tracing the exception in your Dispose method if the method is left with an exception, with that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can, e.g.:

    - Reconfigure tracing at run time.
    - Take a memory dump when a specific method is left with a specific exception.
    - Throw an exception when a specific trace statement is hit (useful for testing error conditions).
    - Execute a passed delegate which e.g. dumps additional state when enabled.
    - Write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside).
    - Write data to an output device.
    - ...
    This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (OK, with .NET 4 not anymore, unless you enable a compatibility flag), where you would like to have a minidump, or you have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time in the middle of a transaction. On my older machine, with this super fast approach, I get 50 million traces/s when tracing is disabled. When I know that tracing is enabled for this type, I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore since I do want to do something at that point. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred direct access to the MethodHandle rather than the stringified version of it. And I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site, to augment the static CLR type data with run-time data.

    Read the article

  • How To Access Your eBook Collection Anywhere in the World

    - by Jason Fitzpatrick
    If you have an eBook reader it’s likely you already have a collection of eBooks you sync to your reader from your home computer. What if you’re away from home or not sitting at your computer? Learn how to download books from your personal collection anywhere in the world (or just from your backyard). You have an eBook reader, you have an eBook collection, and when you remember to sync your books to the collection on your computer everything is rosy. What about when you forget or when the syncing process for your device is a bit of a hassle? (We’re looking at you, iPad.) Today we’re going to show you how to download eBooks to your eBook reader from anywhere in the world using a cross-platform solution.

    Read the article

  • Hardware settings reviewing

    - by dino99
    I get some hardware-related errors logged in dmesg:

        oem@dub:~$ dmesg | grep ata10
        [ 1.007989] ata10: PATA max UDMA/133 cmd 0xa800 ctl 0xa480 bmdma 0xa408 irq 18
        [ 1.691664] ata10: prereset failed (errno=-19)
        [ 1.691670] ata10: reset failed, giving up

        oem@dub:~$ dmesg | grep ata2
        [ 0.990290] ata2: SATA max UDMA/133 abar m1024@0xfebfb800 port 0xfebfb980 irq 45
        [ 1.688011] ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [ 1.688055] ata2.00: unsupported device, disabling
        [ 1.688057] ata2.00: disabled

    As I understand it, this is related to my old Seagate SATA HDD and the PATA CDROM. These errors are quite new, so I feel that their settings (dma, write-cache, ...) have been modified by some upgrades. I've already used hdparm to set write-cache off on the HDDs. But it seems like I need to review some other setting(s) too. With older distros it was easy to know about the hardware settings, but now on Quantal/Precise it is deeply hidden from the average user. So I would like to know how to view/modify these settings. About the CDROM reader, the problem is different: the system doesn't identify it with a UUID, but only with ATAPI or by-id:

        oem@dub:~$ dmesg | grep 'ATAPI'
        [ 1.308611] ata3.00: ATAPI: TSSTcorp CDDVDW SH-S203D, SB00, max UDMA/100

        oem@dub:~$ ls -l /dev/disk/by-id/
        ......
        lrwxrwxrwx 1 root root 9 oct. 1 06:42 ata-TSSTcorp_CDDVDW_SH-S203D -> ../../sr0
        .....

    Read the article

  • How to know which partition is which?

    - by user206870
    Well, I was just wondering which partition belongs to which system. On my computer I have Windows 7 and two Ubuntu systems (it was an accident, which is why I need to know which partition is which). So how do I know which one is which? PS: here are the outputs:

        jp@jp-Satellite-L555D:~$ sudo update-grub
        [sudo] password for jp:
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-3.11.0-12-generic
        Found initrd image: /boot/initrd.img-3.11.0-12-generic
        Found memtest86+ image: /boot/memtest86+.bin
        Found Windows 7 (loader) on /dev/sda1
        Found Windows 7 (loader) on /dev/sda2
        Found Windows Recovery Environment (loader) on /dev/sda3
        Found Ubuntu 13.10 (13.10) on /dev/sda7
        done

        jp@jp-Satellite-L555D:~$ sudo fdisk -l
        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xf6f5148e

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048     3074047     1536000   27  Hidden NTFS WinRE
        /dev/sda2         3074048   213421022  105173487+    7  HPFS/NTFS/exFAT
        /dev/sda3       469676032   488396799     9360384   17  Hidden HPFS/NTFS
        /dev/sda4       213422078   469676031   128126977    5  Extended
        /dev/sda5       300185600   463910911    81862656   83  Linux
        /dev/sda6       463912960   469676031     2881536   82  Linux swap / Solaris
        /dev/sda7       213422080   300185599    43381760   83  Linux

        Partition table entries are not in disk order

    Thanks to whoever can answer this. Another quick question: what is the extended partition?

    Read the article
