Search Results

Search found 21319 results on 853 pages for 'state management'.

  • What's the proper term for a function inverse to a constructor - to unwrap a value from a data type?

    - by Petr Pudlák
    Edit: I'm rephrasing the question a bit. Apparently I caused some confusion because I didn't realize that the term destructor is used in OOP for something quite different: a function invoked when an object is being destroyed. In functional programming we (try to) avoid mutable state, so there is no equivalent to it. (I added the proper tag to the question.)

    Instead, I've seen that the record field for unwrapping a value (especially for single-valued data types such as newtypes) is sometimes called a destructor, or perhaps a deconstructor. For example, in Haskell:

        newtype Wrap = Wrap { unwrap :: Int }

    Here Wrap is the constructor, and unwrap is what? The questions are:

    - What do we call unwrap in functional programming? Deconstructor? Destructor? Or some other term?
    - To clarify: is this (or other) terminology applicable to other functional languages, or is it used only in Haskell?
    - Is there any terminology for this in general, in non-functional languages?

    I've seen both terms used. For example: "Most often, one supplies smart constructors and destructors for these to ease working with them" on the Haskell wiki; or "The general theme here is to fuse constructor - deconstructor pairs like ..." in the Haskell wikibook (where it's probably meant in a somewhat more general sense); or, for

        newtype DList a = DL { unDL :: [a] -> [a] }

    "The unDL function is our deconstructor, which removes the DL constructor" in Real World Haskell.

  • Nested Entities and calculation on leaf entity property - SQL or NoSQL approach

    - by Chandu
    I am working on a hobby project called Menu/Recipe Management. This is how my entities and their relations look:

    - A Nutrient has properties Code and Value.
    - An Ingredient has a collection of Nutrients.
    - A Recipe has a collection of Ingredients and occasionally can have a collection of other Recipes.
    - A Meal has a collection of Recipes and Ingredients.
    - A Menu has a collection of Meals.

    (A diagram depicting these relations accompanied the original question.)

    In one of the pages, for a selected menu I need to display the effective nutrient information, calculated from its constituents (Meals, Recipes, Ingredients and the corresponding Nutrients). As of now I am using SQL Server to store the data, and I am navigating the chain from my C# code, starting from each meal of the menu and then aggregating the nutrient values. I think this is not an efficient way, as this calculation is done every time the page is requested, while the constituents change only occasionally.

    I was thinking about having a background service that maintains a table called MenuNutrients ({MenuId, NutrientId, Value}) and populates/updates this table with the effective nutrients whenever any component (Meal, Recipe, Ingredient) changes.

    I feel that a graph database would be a good fit for this requirement, but my exposure to NoSQL is limited. I want to know what the alternative solutions/approaches are to this requirement of displaying the nutrients of a given menu. I hope my description of the scenario is clear.
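    To make the rollup concrete, here is a minimal sketch of the recursive aggregation the question describes. It is in Python with simplified, hypothetical entity classes (the original project is in C#); the Meal and Menu levels would follow the same pattern as Recipe:

        from collections import defaultdict
        from dataclasses import dataclass, field

        # Hypothetical, simplified entities.
        @dataclass
        class Nutrient:
            code: str
            value: float

        @dataclass
        class Ingredient:
            nutrients: list

        @dataclass
        class Recipe:
            ingredients: list = field(default_factory=list)
            subrecipes: list = field(default_factory=list)

        def recipe_nutrients(recipe):
            # A recipe aggregates its ingredients and, occasionally, other recipes.
            totals = defaultdict(float)
            for ing in recipe.ingredients:
                for n in ing.nutrients:
                    totals[n.code] += n.value
            for sub in recipe.subrecipes:
                for code, value in recipe_nutrients(sub).items():
                    totals[code] += value
            return totals

        # Usage: a recipe containing a sub-recipe.
        soup = Recipe(ingredients=[Ingredient([Nutrient("protein", 2.0)])])
        stew = Recipe(ingredients=[Ingredient([Nutrient("protein", 5.0)])],
                      subrecipes=[soup])
        print(dict(recipe_nutrients(stew)))  # {'protein': 7.0}

    Running this walk on every page request is exactly the cost the question wants to avoid; precomputing its result into the proposed MenuNutrients table whenever a constituent changes trades a little storage and update logic for much cheaper reads.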

  • eSTEP Newsletter for the technical EMEA partner community

    - by mseika
    We are pleased to present to you the first issue of the eSTEP Newsletter, which is dedicated to supporting the technical EMEA partner community by providing more information on what is going on within the corporation, technical news regarding hardware, events, and all the important things which we think may be of interest to you.

    Invitation: STEP TechCast: Oracle Solaris 11 Express

    Get an insight into how Oracle Solaris 11 Express has raised the bar on the innovation introduced in Oracle Solaris 10. Learn about the new integrated features, such as:

    - network-based package management tools
    - improvements to built-in virtualization
    - new virtualised network architecture
    - security enhancements
    - file system evolution

    Learn how Oracle Solaris 11 Express greatly decreases planned system downtime, performs a completely safe system upgrade, achieves an unprecedented level of flexibility for application consolidation, and provides the highest levels of security in your datacenter.

    Date and time: Thursday, 7 July 2011, 13:00 - 14:00 CEST
    Speaker: Joost Pronk van Hoogeveen
    Target audience: Tech Presales

    Webcast coordinates: You will find the coordinates in the eSTEP portal under the Events tab. Use your email address and PIN eSTEP_2011 to get access.

    We are happy to get your comments and feedback.

  • Updated Virtual Machine for VS/TFS 2010

    - by Enrique Lima
    If you had downloaded the previous version of the virtual machines, then you are likely aware they are set to expire soon (12/15/2010). Brian Keller announced yesterday (blog post here) the availability of a VM refresh (new expiration set for 6/1/2011). What is part of the refresh? Here is the excerpt from Brian's post:

    "The version of this virtual machine which was refreshed on December 9, 2010, includes the following additions:

    - Visual Studio 2010 Feature Pack 2
    - Team Foundation Server 2010 Power Tools (September 2010 Release)
    - Visual Studio 2010 Productivity Power Tools (these are disabled in VS so that the screenshots of the hands-on-labs still match; you can quickly enable the Productivity Power Tools via Tools -> Extension Manager from within Visual Studio)
    - Test Scribe for Microsoft Test Manager
    - Visual Studio Scrum 1.0 Process Template
    - All Windows Updates through December 8, 2010
    - Lab Management GDR (KB983578)
    - Visual Studio 2010 Feature Pack 2 prerequisite hotfix (KB2403277)
    - Microsoft Test Manager hotfix (KB2387011)
    - Minor fit-and-finish fixes based on customer feedback
    - A new expiration date of June 1, 2011"

    The links to download the virtual machines are:

    Hyper-V: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=e0198b64-4acb-4709-b07f-359fb4d523bc&displaylang=en
    Windows Virtual PC (Win 7): http://www.microsoft.com/downloads/en/details.aspx?FamilyID=509c3ba1-4efc-42b5-b6d8-0232b2cbb26e&displaylang=en

  • The Home Stretch: NetBeans IDE 7.1 Release Candidate

    - by TinuA
    The first release candidate build of NetBeans IDE 7.1 is live and available for download, which means the big release (GA) is expected any day now.

    NetBeans IDE 7.1 delivers support for JavaFX 2.0, enabling the full compile, debug, and profile development cycle for JavaFX 2.0 applications and keeping developers in sync with the latest from the Java platform. Beyond JavaFX support, 7.1 also provides significant Swing GUI Builder enhancements, CSS3 support, and visual debugging tools for JavaFX and Swing user interfaces. And Git, a much anticipated feature, has been integrated into the IDE.

    "The entire NetBeans team is tremendously excited about this release, which provides developers with more state-of-the-art tools for building front-end clients," says NetBeans Engineering Director John Jullion-Ceccarelli. "Whether you are doing JavaFX, HTML5, Swing, or JSF, NetBeans 7.1 will let you quickly and easily develop great-looking and full-featured clients for your Java or PHP-based applications."

    But there's one more task to check off before general availability: the NetBeans team has launched a Community Acceptance Survey to get user feedback about the release candidate. Download the RC build, test it, and take the survey to let the team know if NetBeans IDE 7.1 is ready for its debut!

  • Virtualization in Ubuntu 11.10

    - by Mascarpone
    Since Ubuntu 11.10 uses a new kernel, it's very difficult to get decent support for virtualization. VirtualBox doesn't support guest additions for Ubuntu 11.10, so I can't copy to and from my Ubuntu desktop and Windows, which I absolutely require; plus, FreeBSD seems unable to use DHCP without guest additions. Virt-manager instead gives an error on launch:

        Unable to open a connection to the libvirt management daemon.
        Libvirt URI is: qemu:///system
        Verify that:
        - The 'libvirt-bin' package is installed
        - The 'libvirtd' daemon has been started
        - You are member of the 'libvirtd' group

        unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied

        Traceback (most recent call last):
          File "/usr/share/virt-manager/virtManager/connection.py", line 1146, in _open_thread
            self.vmm = self._try_open()
          File "/usr/share/virt-manager/virtManager/connection.py", line 1130, in _try_open
            flags)
          File "/usr/lib/python2.7/dist-packages/libvirt.py", line 102, in openAuth
            if ret is None:raise libvirtError('virConnectOpenAuth() failed')
        libvirtError: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: Permission denied

    The problem is solved by running virt-manager as root, but I don't like that. How do I change permissions so I can run virt-manager as a regular user? Is there a way to install guest additions on Ubuntu 11.10?

  • LOD in modern games

    - by Firas Assaad
    I'm currently working on my master's thesis about LOD and mesh simplification, and I've been reading many academic papers and articles on the subject. However, I can't find enough information about how LOD is being used in modern games. I know many games use some sort of dynamic LOD for terrain, but what about elsewhere? Level of Detail for 3D Graphics, for example, points out that discrete LOD (where artists prepare several models in advance) is widely used because of the performance overhead of continuous LOD. That book was published in 2002, however, and I'm wondering if things are different now. There has been some research into performing dynamic LOD using the geometry shader (this paper for example, with its implementation in ShaderX6); would that be used in a modern game?

    To summarize, my question is about the state of LOD in modern video games: what algorithms are used, and why? In particular, is view-dependent continuous simplification used, or does the runtime overhead make discrete models with proper blending and impostors a more attractive solution? If discrete models are used, is an algorithm (e.g. vertex clustering) used to generate them offline, do artists manually create the models, or is a combination of both methods used?

  • Help with dual booting Windows 8.1 Professional and Ubuntu 13.10

    - by user1292548
    I recently installed a clean version of Windows 8.1 Professional on my Lenovo Y500 (with a Samsung 256GB 840 Pro SSD). I have Windows all set up and running normally. I am trying to dual boot Windows 8.1 and Ubuntu 13.10, but the installation procedure doesn't offer the "Install alongside..." option, nor does it show my SSD partitions correctly when I choose the "Something Else" option. I have created a 25GB partition of free space in the Windows disk manager, but the Ubuntu installation screen shows the whole drive as free space. I have tried installing with a burned .ISO disk and a bootable USB; the results are the same for both.

    Windows Disk Management screen: http://imageshack.us/a/img855/9504/59zu.jpg
    The Ubuntu installation screen: http://imageshack.us/a/img62/2712/9g6i.jpg

    I ran into this problem before when trying to dual boot Ubuntu and Windows 7 Professional a month ago, but I gave up and never resolved the issue.

    --EDIT--
    I tried what Eero Aaltonen suggested, and this is my result:

        ubuntu@ubuntu:~$ sudo parted /dev/sda print
        Warning: /dev/sda contains GPT signatures, indicating that it has a GPT table.
        However, it does not have a valid fake msdos partition table, as it should.
        Perhaps it was corrupted -- possibly by a program that doesn't understand GPT
        partition tables. Or perhaps you deleted the GPT table, and are now using an
        msdos partition table. Is this a GPT partition table?
        Yes/No? yes
        Model: ATA Samsung SSD 840 (scsi)
        Disk /dev/sda: 256GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start  End  Size  File system  Name  Flags

        ubuntu@ubuntu:~$

  • How should I architect a personal schedule manager that runs 24/7?

    - by Crawford Comeaux
    I've developed an ADHD management system for myself that attempts to change multiple habits at once. I know this is counter to conventional wisdom, but I've tried the conventional approach for years and am now trying it my way. (I just wanted to say that to keep it from distracting people from the actual question.)

    Anyway, I'd like to write something to run on a remote server that monitors me, helps me build/avoid certain habits, etc. What this amounts to is a system that:

    - runs 24/7
    - may have multiple independent tasks to run at once
    - may have tasks that require other tasks to run first
    - lets tasks be scheduled by specific time, recurrence (i.e. "run every 5 mins"), or interval (i.e. "run from 2pm to 3pm")

    My first naive attempt at this was just a single PHP script scheduled to run every minute by cron (the language was chosen in order to use a certain library, but that's no longer necessary). The logic behind when to run this or that portion of code got hairy pretty quickly. So my question is: how should I approach this from here? I'm not tied to any one language, though I'm partial to Python/JavaScript.

    Thoughts: It could be done as a set of scripts that include a scheduling mechanism, with one script per bit of logic... but the idea just feels wrong to me. Building it as a daemon could be helpful, but I'm still unsure what to do about dozens of if-else statements for detecting the current time.
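    One common shape for such a daemon, sketched below in Python (a minimal sketch under the question's stated requirements; the task names and intervals are hypothetical), is to replace the pile of if-else time checks with data: keep a min-heap of (next run time, task) pairs, sleep until the earliest one is due, run it, and push it back if it recurs:

        import heapq
        import time
        from dataclasses import dataclass, field
        from typing import Callable, Optional

        @dataclass(order=True)
        class Task:
            next_run: float                                   # epoch seconds; the heap key
            name: str = field(compare=False)
            action: Callable[[], None] = field(compare=False)
            every: Optional[float] = field(default=None, compare=False)  # recurrence, seconds

        def run_scheduler(tasks):
            # Runs for as long as tasks remain, as a 24/7 daemon should.
            heapq.heapify(tasks)
            while tasks:
                task = heapq.heappop(tasks)
                delay = task.next_run - time.time()
                if delay > 0:
                    time.sleep(delay)
                task.action()
                if task.every is not None:   # recurring task: reschedule it
                    task.next_run += task.every
                    heapq.heappush(tasks, task)

        # Hypothetical usage: a reminder every 5 minutes plus a one-shot check-in.
        now = time.time()
        run_scheduler([
            Task(now + 300, "nudge", lambda: print("Check your task list"), every=300),
            Task(now + 5, "checkin", lambda: print("Evening check-in")),
        ])

    Dependencies ("task B requires task A first") can be layered on by having a task's action enqueue its dependents, and an interval like "run from 2pm to 3pm" becomes a pair of tasks that set and clear a flag.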

  • Ethernet Connection Unavailable

    - by fabikw
    I'm running Ubuntu Server 12.04 on a laptop with an Intel NIC (driver e1000e). When I connect the Ethernet port to the internet (college network, DHCP) it works out of the box. Now I'm trying to connect it to a networked USRP (if you want to know what it is). A friend of mine managed to do this on his laptop (running regular Ubuntu 12.04) just by setting up a new wired connection in the Network Manager with appropriate addresses. However, when I do the same, no wired connections are available. The output of nmcli -p dev is:

        ===========================================
          Status of devices
        ===========================================
        DEVICE   TYPE              STATE
        -------------------------------------------
        wlan0    802-11-wireless   connected
        eth0     802-3-ethernet    unavailable

    but the cable is connected to the device and the device is powered up. Any idea how to solve this?

    UPDATE: After stopping the network-manager service, setting up the connection manually, and starting the service again, it now detects the Ethernet connection. However, the device still can't receive data and doesn't answer pings.

    UPDATE 2: As suggested, I tried using a crossover cable, but the result is exactly the same. However, I found out that connecting the device to the dock (as opposed to directly to the laptop) works fine. I know that the Ethernet port in the laptop works, because connecting to the network through it works. Is it possible that the port in the laptop doesn't support Gb Ethernet (which is what the device requires) but the one in the dock does?

  • Can't connect to Wireless Network - Ubuntu 12.04 LTS & Sabrent A111N USB Dongle

    - by Ohgodwhy
    I've been trying to connect to this network for quite some time. I can't connect directly to the router with a wire, but I can access the router with other wireless devices without any issues. I had previously tried several other Wi-Fi NICs, but none of them would load properly. Today, I went and bought a new (supported) Sabrent A111N USB dongle, which said explicitly that it works with Linux 2.4+. I popped the dongle in and, lo and behold, it immediately said that there were available wireless connections. I selected my connection and tried to connect, but it just loops constantly, saying "Wireless Disconnected" and then attempting to connect again, over and over.

    ifconfig and iwconfig both show my device in a ready and working state. However, iwlist wlan0 scan says that there are no results found. I don't get it... At one point, I could see the computer in the router's DHCP client list, but it doesn't fully make the connection (something about a timeout?). Any help would be appreciated.

        Bus 001 Device 002: ID 0bda:8176 Realtek Semiconductor Corp. RTL8188CUS 802.11n WLAN

  • Glue Records creation

    - by FFrewin
    I need some information on the following issue, as I would like to get it clear in my mind. I have a VPS server. All my sites hosted on this VPS use .gr nameservers, like ns1.greekdomain.gr and ns2.greekdomain.gr. The .gr domain name is a domain I own with a Greek registrar.

    Now I want to move 2 websites with .co.uk domain names to my VPS. The .co.uk domain names are registered with a UK-based registrar. When I went into the domain management panel and changed the nameservers of my domains to my ns1/ns2.greekdomain.gr nameservers, the panel returned an error about invalid nameservers.

    After digging, I found that my nameservers are not valid because they do not exist as records in the .co.uk registry. And here my big trouble starts. The .co.uk registrar tells me that I have to ask my hosting provider / .gr registrar to create a new record in the .uk registry for my nameservers. The .gr registrar tells me that my UK registrar needs to create the record for my nameservers. At Nominet (the .co.uk registry), one employee tells me that I need to ask my UK registrar; another employee (who seemed not to understand what I was asking) told me that they cannot change my nameservers for me, and to contact anyone else (old hosting provider, UK registrar, .gr registrar) to help me with that.

    I can't get help from anybody. I have been trying since last week to transfer my websites to my VPS and I can't. So, the question is: who is responsible, and who is able, to create glue records for my nameservers?

  • Presentation Plugin for NetBeans IDE 7.2

    - by Geertjan
    I got some excellent help from Mark Stephens, who is from IDR Solutions, which produces JPedal. Using the LGPL version of JPedal, and code provided by Mark, it's now possible to right-click the node that appears in the Presentation Window... after which, using a file browser (to locate a file on disk) or a URL (a very simple check is done: the URL must start with "http" and end with "pdf"), you can now open PDF files as images (thanks to conversion from PDF to images done by JPedal) in NetBeans IDE, typically (I imagine) for presentation purposes.

    Note that you should consider the plugin to be in an "alpha" state. But, despite that, I've had good results. Try it, using the URL below as a control test (since it works fine for me), which produces the result shown above:

    http://edu.netbeans.org/contrib/slides/netbeans-platform/presentation-4-actions.pdf

    However, for some PDFs the plugin doesn't work, and I don't know why yet (I'm trying to figure it out with Mark); it results in this stack trace:

        java.lang.ArrayIndexOutOfBoundsException: 8
          at org.jpedal.objects.acroforms.formData.SwingData.completeField(Unknown Source)
          at org.jpedal.objects.acroforms.rendering.DefaultAcroRenderer.createField(Unknown Source)
          at org.jpedal.objects.acroforms.rendering.DefaultAcroRenderer.createDisplayComponentsForPage(Unknown Source)
          at org.jpedal.PDFtoImageConvertor.convert(Unknown Source)
          at org.jpedal.PdfDecoder.getPageAsImage(Unknown Source)
          at org.jpedal.PdfDecoder.getPageAsImage(Unknown Source)

    Here's the location of the plugin; install it into NetBeans IDE 7.2. Feedback is very welcome:

    http://plugins.netbeans.org/plugin/44525

  • Why would one bother marking up properly and semantically?

    - by Madara Uchiha
    Note that I (try to) mark up as semantically as possible because I like the way it looks and feels, not because I'm aware of any other stunning advantages. The point of my question is to be able to educate others.

    Well, I've seen a lot of articles and tutorials which often state, "Let's mark this up in the most semantic possible way." But a strange thought came to me: why? Why would one need (or want) to bother with the specific elements which convey the correct semantic meaning? Specifically, I'm referring to the new HTML5 elements, such as <time>, <output>, or <address>. Especially if the page "works" (it renders nicely in all browsers), why would I want to use elements like <time> or <address>, where nothing at all (or at worst, a generic <span>) works just as nicely? I'm asking this because I see a multitude of (very popular) websites (this one included) which do not follow these so-called best practices.

  • Discovery methods

    - by Owen Allen
    In Ops Center, asset discovery is a process in which the software determines what assets exist in your environment. You can't monitor an asset, or do anything to it through Ops Center, until it's discovered. I've seen a couple of questions about how to discover various types of assets, so I thought I'd explain the discovery methods and what they each do.

    - Find Assets: This discovery method searches for service tags on all known networks. Service tags are small files on some hardware and operating systems that provide basic identification info. Once a service tag has been found, you provide credentials to manage the asset. This method can discover assets quickly, but only if the target assets have service tags.

    - Add Assets with discovery profile: This method lets you specify targets by providing IP addresses, IP ranges, or hostnames, as well as the credentials needed to connect to and manage these assets. You can create discovery profiles for any type of asset.

    - Declare asset: This method lets you specify the details of a server, with or without a configured service processor. You can then use Ops Center to install a new operating system or configure the SP. This method works well for new hardware.

    These methods are all discussed in more detail in the Asset Management chapter of the Feature Reference guide.

  • NINTENDO, EDCON and ALLEGIS GROUP @ Oracle Open World 2012 Conference Session (CON9418): The Business Case for Oracle Exalogic: A Customer Perspective

    - by Sanjeev Sharma
    Are you looking to deliver breakthrough performance for packaged and custom applications? For many front-office applications such as Oracle WebCenter Sites, Oracle Transportation Management, and Oracle's ATG and Siebel product families, improved performance leads directly to greater revenue or cost savings from the business: a compelling proposition. For back-office applications, improved performance has tangible benefits in terms of footprint reductions. For all applications, Oracle Exalogic and Oracle Exadata provide an engineered solution that delivers shorter time to value and lower operational costs.

    Edcon is a leading clothing, footwear, and textiles (CFT) retailing group in southern Africa, trading through a range of retail formats. The company has grown from opening its first store in 1929 to ten retail brands trading in over 1000 stores in South Africa, Botswana, Namibia, Swaziland, and Lesotho. Edcon's retail business has, through recent acquisitions, added top stationery and houseware brands as well as general merchandise to its CFT portfolio. Edcon was looking to consolidate their existing middleware components (WebLogic and Oracle SOA) and retail applications (Retek, Siebel, and E-Business Suite) on a common platform, and turned to Oracle Exalogic. With Oracle Exalogic, Edcon is able to achieve significant hardware CAPEX savings, improve the response times of core business applications, and mitigate operating risk.

    Hear senior business leaders from Nintendo, Edcon, and Allegis Group discuss the business value of leveraging Oracle Exalogic at the following conference session at Oracle Open World 2012:

    Session: CON9418 - The Business Case for Oracle Exalogic: A Customer Perspective
    Date: Monday, 1 Oct 2012
    Time: 1:45 pm - 2:45 pm (PST)
    Venue: Moscone South (306)

  • Combining pathfinding with global AI objectives

    - by V_Programmer
    I'm making a turn-based strategy game using Java and LibGDX. Now I want to code the AI. I haven't written the AI code yet; I've simply designed it. The AI will have two components: one focused on tactics and resource management (create troops, determine who has the strategic advantage, detect important objectives, etc.) and an individual component, focused on assigning work to each unit, examining its possibilities, and moving the unit.

    Now I'm facing an important problem. The map where the action takes place is a grid-based map, and each terrain has a different movement cost. I've read about pathfinding, and I think A* is a very good option for determining a good route between two points. However, imagine I have a unit with movement = 5 (i.e., it can move 5 tiles of movement cost = 1). My tactical AI has found an objective at a distance of d = 20 tiles (Manhattan distance) from my unit. My problem is the following: the unit won't be able to reach the objective in one turn, so the AI will have to store a list of positions and execute them over several turns. I don't know how to solve this.

    PS. In my unit code, I have a list called "selectionMarks" which stores all the possible places where the unit can go this turn. These places are calculated recursively using a "getSelectionMarks" function. Any help is appreciated :D
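    A common way to handle this, sketched below in Python (a minimal sketch; the function and parameter names are hypothetical, and the A* path and per-tile costs are assumed to already be available), is to plan the full path once and then consume only as much of it per turn as the movement budget allows:

        def split_path_into_turns(path, tile_cost, movement):
            """Split an A* path (the tiles after the start tile) into per-turn
            segments, each costing at most `movement` points to traverse."""
            turns, current, budget = [], [], movement
            for tile in path:
                cost = tile_cost(tile)
                if cost > budget:        # can't enter this tile now: end the turn
                    turns.append(current)
                    current, budget = [], movement
                current.append(tile)
                budget -= cost
            if current:
                turns.append(current)
            return turns

        # Hypothetical usage: uniform cost 1, movement 5, a 12-tile path.
        path = [(x, 0) for x in range(1, 13)]
        print(split_path_into_turns(path, lambda t: 1, movement=5))
        # Three turns: 5 tiles, 5 tiles, then the remaining 2.

    Each turn the unit executes its next segment; the AI only needs to re-run A* when the world changes in a relevant way (the objective moves, or a tile on the stored path becomes blocked).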

  • Why do I get disk I/O errors booting the 3.2 kernel on a xen vps server?

    - by Doug
    I have a xen VPS, which I just upgraded to the new LTS release, 12.04 Precise Pangolin. However, I see this error on booting:

        [ 12.848076] end_request: I/O error, dev xvda, sector 12841
        [ 12.848093] end_request: I/O error, dev xvda, sector 12841
        [ 12.848103] Buffer I/O error on device xvda1, logical block 1605
        [ 12.848110] lost page write due to I/O error on xvda1
        [ 12.848129] Aborting journal on device xvda1.

    This results in / being mounted read-only. Reboot:

        [ 3.087257] EXT3-fs (xvda1): warning: ext3_clear_journal_err: Marking fs in need of filesystem check.
        [ 3.087677] EXT3-fs (xvda1): recovery complete
        [ 3.088514] EXT3-fs (xvda1): mounted filesystem with ordered data mode
        Begin: Running /scripts/local-bottom ... done.
        done.
        Begin: Running /scripts/init-bottom ... done.
        fsck from util-linux 2.20.1
        PRGMRDISK1 contains a file system with errors, check forced.
        Checking disk drives for errors. This may take several minutes.
        Press C to cancel all checks in progress
        PRGMRDISK1: ***** REBOOT LINUX *****
        PRGMRDISK1: 371152/6001184 files (2.8% non-contiguous), 4727949/12000000 blocks
        mountall: fsck / [308] terminated with status 3
        mountall: System must be rebooted: /
        [ 151.566949] Restarting system.

        Name          ID   Mem  VCPUs  State   Time(s)
        shadowmint   236  2048      1  --p---      0.0

    Reboot: back to step 1. This is definitely an issue with the 3.2 kernel, because booting the 3.0.0 or 2.6.38 kernel series makes this issue magically disappear. I'm certain this is some kind of weird xen thing, but I have no idea what. Anyone? Anyhow, until this is resolved I strongly recommend against upgrading if you're running a xen server.

  • Java Components Landing Page and Documentation Updates

    - by joni g.
    The new Java Components page provides access to the documentation for tools that are available for monitoring, managing, and testing Java applications. Documentation for the new versions of the following tools is available:

    - JavaTest Harness 4.6. The JavaTest harness is a general-purpose, fully featured, flexible, and configurable test harness that is suited for most types of unit testing. See the JavaTest tab for documentation.

    - SigTest 3.1. SigTest is a collection of tools that can be used to compare APIs and to measure the test coverage of an API. See the SigTest tab for documentation.

    The following tools are part of Oracle Java SE Advanced and Oracle Java SE Suite:

    - Java Mission Control and Java Flight Recorder 5.4 are supported in JDK 8u20. Java Flight Recorder and Java Mission Control together create a complete tool chain to continuously collect low-level, detailed runtime information, enabling after-the-fact incident analysis. See the JMC tab for documentation.

    - Advanced Management Console 1.0 is a new tool that is now available. AMC can be used to view information about the Java applets and Java Web Start applications running in your enterprise, and to create deployment rules and rule sets to manage the execution of these applications. See the AMC tab for documentation.

    - Usage Tracker tracks how Java Runtime Environments (JREs) are being used in your systems. See the Usage Tracker tab for documentation.

  • Accessing second hard drive

    - by Jonathan
    So I recently installed Ubuntu 10.10 64-bit on my computer. I installed it on my 60GB SSD, and during the installation it never acknowledged the existence of my second hard drive. The hard drive that I keep all my files on, and which I want to make my home folder if I can, is a Western Digital Caviar Black 1TB SATA 6Gb/s 64MB cache (WD1002FAEX). I've read the following: https://help.ubuntu.com/community/Mount but honestly cannot work out how to access the hard drive from my Ubuntu installation. I had Windows 7 64-bit prior to installing Ubuntu. I have backed up all the files on the hard drive, but if I could just access them straight off, that would be super cool. Does anyone know how I can use the second hard drive? Thank you for your help.

    EDIT: The following directories are currently in my /dev/ folder: ati/, block/, bsg/, bus/, char/, cpu/, isk/, input/, mapper/, net/, pktcdvd/, pts/, shm/, snd/, and usb/

    EDIT: Result from sudo fdisk -l:

        Disk /dev/sda: 60.0 GB, 60022480896 bytes
        255 heads, 63 sectors/track, 7297 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000d2dfd

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        6994    56174592   83  Linux
        /dev/sda2            6994        7298     2438145    5  Extended
        /dev/sda5            6994        7298     2438144   82  Linux swap / Solaris

    @djeykib So very close to fixing it... unfortunately on the last command you gave it says this:

        $ sudo apt-get install linux-lts-backport-natty
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package linux-lts-backport-natty

    Checking on http://www.ubuntuupdates.org/ppas reveals that it is only available for 10.04. Looks like I'll have to unplug and re-plug hardware if I still want it working :(

  • Which approach is the most maintainable?

    - by 2rs2ts
    When creating a product which will inherently suffer from regression due to OS updates, which of these is the preferable approach for reducing maintenance cost and the likelihood of needing refactoring, when considering the task of interpreting system state and settings for a lay user?

    1. Delegate the responsibility of interpreting the results of inspecting the system to the modules which perform those inspections, or
    2. Separate the concerns of interpretation and inspection into two modules?

    The first obviously creates a blob in which a lot of code would be verbose, redundant, and hard to grok; the second creates a strong coupling in which the interpretation module essentially has to know what it expects from the inspection routines, and will have to adapt to changes in the OS just as much as the inspection module does.

    I would normally choose the second option for the separation of concerns, foreseeing the possibility that inspection routines could be reused. But a developer updating the product to deal with a new OS feature would have to write not only an inspection routine but also an interpretation routine and link the two correctly; it gets worse for a developer who has to change which inspection routines are used to get a certain system setting, or worse yet, has to fix an inspection routine which broke after an OS patch. I wonder: is it better to have to patch one package a lot, or two packages, each somewhat less so?
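    To make the trade-off concrete, here is a minimal sketch (in Python, with hypothetical setting names; the question itself is language-agnostic) of the second option: inspection routines that only gather raw facts, and an interpretation layer that maps those facts to text for a lay user. The coupling the question worries about is visible in the string keys that both halves must agree on:

        # Inspection half: OS-facing; this is what an OS patch can break.
        import platform

        def inspect_firewall():
            # Hypothetical probe; a real routine would query the OS.
            return {"firewall.enabled": True}

        def inspect_os():
            return {"os.version": platform.release()}

        # Interpretation half: user-facing; keyed by the same fact names.
        MESSAGES = {
            "firewall.enabled": lambda v: "Your firewall is on." if v
                                          else "Your firewall is off!",
            "os.version": lambda v: "You are running version {}.".format(v),
        }

        def interpret(facts):
            return [MESSAGES[k](v) for k, v in facts.items() if k in MESSAGES]

        facts = {**inspect_firewall(), **inspect_os()}
        for line in interpret(facts):
            print(line)

    Under this split, supporting a new OS feature costs two small edits (a probe and a message), and the shared key namespace is the single contract that has to stay stable across OS patches.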

  • Develop in trunk and then branch off, or in release branch and then merge back?

    - by Torben Gundtofte-Bruun
    Say that we've decided on following a "release-based" branching strategy, so we'll have a branch for each release, and we can add maintenance updates as sub-branches from those. Does it matter whether we:

    - develop and stabilize a new release in the trunk and then "save" that state in a new release branch; or
    - first create that release branch and only merge into the trunk when the branch is stable?

    I find the former easier to deal with (less merging necessary), especially when we don't develop on multiple upcoming releases at the same time. Under normal circumstances we would all be working on the trunk, and only work on release branches if there are bugs to fix. What is the trunk actually used for in the latter approach? It seems almost obsolete, because I could create a future release branch based on the most recent release branch rather than on the trunk.

    Details based on a comment below: Our product consists of a base platform and a number of modules on top; each is developed and even distributed separately from the others. Most team members work on several of these areas, so there's partial overlap between people. We generally work only on one future release and not at all on existing releases. One or two people might work on a bugfix for an existing release for short periods of time. Our work isn't compiled; it's a mix of Unix shell scripts, XML configuration files, SQL packages, and more, so there's no way to have push-button builds that can be tested. That's done manually, which is a bit laborious. A release cycle is typically half a year or more for the base platform, and often one month for the modules.

  • Fusion Applications Announces the Online Feature Query Tool

    - by Richard Lefebvre
    Fusion Applications Development is pleased to announce the availability of a new online tool for viewing Fusion features. Oracle Product Features allows you to view new features in Fusion across multiple releases, families, and products. You can view the data online or download it in a variety of formats, including PDF and XLS. This easy-to-use tool covers the same content as, and therefore replaces, the PDF versions of the What's New documents.

    You can access Oracle Product Features from the Fusion Learning Center under Featured Assets > Product Features Query Tool. It can also be found under Release Readiness > Release Overview. Oracle Product Features will be available to customers and partners from My Oracle Support, oracle.com, and the Partner Network Fusion Learning Center in the near future.

    Oracle Product Features provides you with a high level of flexibility, allowing you to see only the content you want, whether for a single Fusion product family, such as Human Capital Management (HCM), across several releases, or for the entire listing of new features. Content currently incorporates all new features introduced in Releases 3, 4, and 5. Release 6 content will be added in the near future.

  • Java Magazine: Growing on Open

    - by Tori Wieldt
    The November/December issue of Java Magazine is now out, with several great Java stories, including:

    - Growing on Open: AgroSense provides an all-Java open source platform for sustainable farming and precision agriculture.
    - An Engine for Big Data: Hadoop uses Java for large-scale analytics.
    - JavaFX in Spring: Stephen Chin shows you why to use the Spring framework on the client.
    - JCP Executive Q&A: Mike Milinkovich: The Eclipse Foundation's executive director assesses the state of Java and the JCP.
    - Exploring Lambda Expressions for the Java Language and the JVM: Ben Evans, Martijn Verburg, and Trisha Gee help you get ready for lambda expressions in Java SE 8.
    - Get Started with Java SE for Embedded Devices on Raspberry Pi: We walk you through getting Linux and Java SE for Embedded Devices to run on the Raspberry Pi in less than an hour.
    - Java Nation: Get the news from JavaOne 2012 in San Francisco.

    Java Magazine is a bi-monthly online publication. It includes technical articles on the Java language and platform; Java innovations and innovators; JUG and JCP news; Java events; links to online Java communities; and videos and multimedia demos. Subscriptions are free. Do you have feedback about Java Magazine? Send a tweet to @oraclejavamag.

  • Five-year-old Ubuntu system - dist-upgrades always went fine, but some tasks remain

    - by knb
    I have a PC with a current Ubuntu distribution installed. I've upgraded many times since 5.10, and it always went well. However, some tools or features were left behind in an unsatisfactory state:

    - GRUB to GRUB 2: is it really necessary to switch the boot loader to GRUB 2 at some point? Upgrading this scares me a bit.
    - I still have ext3 devices: is it worth upgrading to ext4? Should I wait for btrfs?
    - Hibernation and suspend: they only worked in 5.10; since 6.04 they have been messed up. Should I really care? Any chance of repairing this myself, simply by cleaning up or hacking config files? It is a desktop PC after all, so energy-saving functionality is not really needed.
    - I am using VMware Workstation 6.5, and the latest kernel that supports it is 2.6.32. This is my default kernel now, ignoring 2.6.35. Am I missing anything important in the new kernel?
