Search Results

Search found 6103 results on 245 pages for 'logical tree'.

Page 45/245 | < Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >

  • How can I generate an Expression tree that queries an object with List<T> as a property?

    - by David Robbins
    Forgive my clumsy explanation, but I have a class that contains a List: public class Document { public int OwnerId { get; set; } public List<User> Users { get; set; } public Document() { } } public class User { public string UserName { get; set; } public string Department { get; set; } } Currently I use PredicateBuilder to perform dynamic queries on my objects. How can I turn the following LINQ statement into an Expression Tree: var predicate = PredicateBuilder.True<User>(); predicate = predicate.And<User>(user => user.Department == "HR"); var deptDocs = documents.AsQueryable() .Where(doc => doc.Users.AsQueryable().Count(predicate) > 0) .ToList(); In other words: var deptDocs = documents.HasUserAttributes("Department", "HR").ToList();
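
    A minimal C# sketch of one way to package this as a reusable extension method. The HasUsersInDepartment name and the demo data are illustrative assumptions (the question's HasUserAttributes signature is not reproduced here): the per-user filter is built as an expression tree, compiled, and applied to the nested Users list with Any() instead of Count(...) > 0.

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Linq.Expressions;

      public class User
      {
          public string UserName { get; set; }
          public string Department { get; set; }
      }

      public class Document
      {
          public int OwnerId { get; set; }
          public List<User> Users { get; set; } = new List<User>();
      }

      public static class DocumentQueries
      {
          // Hypothetical helper: keep documents containing at least one user in the given department.
          public static IQueryable<Document> HasUsersInDepartment(
              this IQueryable<Document> documents, string department)
          {
              // The per-user filter is built as an expression tree, then compiled for the
              // in-memory Users list; Any() replaces the Count(...) > 0 idiom.
              Expression<Func<User, bool>> userPredicate = u => u.Department == department;
              Func<User, bool> compiled = userPredicate.Compile();
              return documents.Where(doc => doc.Users.Any(compiled));
          }
      }

      public static class Demo
      {
          public static void Main()
          {
              var documents = new List<Document>
              {
                  new Document { OwnerId = 1, Users = { new User { UserName = "a", Department = "HR" } } },
                  new Document { OwnerId = 2, Users = { new User { UserName = "b", Department = "IT" } } }
              };
              var deptDocs = documents.AsQueryable().HasUsersInDepartment("HR").ToList();
              Console.WriteLine(deptDocs.Count); // 1
          }
      }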

    Read the article

  • Can GIT, Mercurial, SVN, or other version control tools work well when project tree has binary files

    - by Jian Lin
    Sometimes our project tree can have binary files, such as jpg, png, doc, xls, or pdf. Can GIT, Mercurial, SVN, or other tools do a good job when only part of a binary file is changed? For example, if the spec is written in .doc and it is part of the repository, then if it is 4MB, edited 100 times during the year but just for 1 or 2 lines each time, and checked in 100 times, it becomes 400MB. If it is 100 different .doc and .xls files, then it is 40GB... not a size that is easy to manage. I have tried GIT and Mercurial and see that they both seem to add a large amount of data even when 1 line is changed in a .doc or .pdf. Is there another way inside GIT, Mercurial, or SVN that can do the job?

    Read the article

  • How to display a tree of objects in a JTree?

    - by paul
    Imagine a collection of objects such as World, Country, Region and City. World contains a list of Country objects, Country contains a list of Region objects, etc. I would like to represent this structure in a JTree and be able to add, remove and move objects around the tree. Can I easily create a TreeModel from this structure? World would be the root object and I would need to perform some object-specific rendering. Anyone know of an appropriate tutorial?

    Read the article

  • Are there any XML Editors with FTP and file-tree browsing combined?

    - by JW
    Are there any (preferably free) XML editors combined with FTP and file-tree browsing, plus project-wide find and replace? I.e. a bit like what Dreamweaver MX is, but with fancier XML capabilities (XSLT/XSD). Perhaps even DW does this... I'm still on an older version. I'd like to keep a smooth flow between find-edit-view-upload. Any ideas? Background: I have converted most of the HTML files of my legacy site into XML (which match the directory structure of my 'public docs' folder). Part of a step towards turning it into completely dynamic data via an MVC / Front Controller pattern.

    Read the article

  • How to implement a tiered "selection tree" in Swing? (Or: is there an existing implementation?)

    - by Sbodd
    I need a Swing component that will let me display a tree-structured list of items, and allow the user to select or de-select an arbitrary subset of those items, with the ability to select or deselect an entire subtree's worth of components by picking that subtree's parent. (Basically, something similar to the Eclipse "Export JAR file" dialog - an image of the relevant dialog is here. I basically want the "Select resources to export" component, but for a Swing application.) I know I can do this by creating a custom TreeCellRenderer, a custom TreeCellEditor, and a custom TreeModel - but that seems like an awful lot of work. Are there any good off-the-shelf implementations that I can use? Thanks!

    Read the article

  • Constructor with fewer arguments from a constructor

    - by mike_hornbeck
    I have a constructor Tree(int a, int b, int c) and a second constructor Tree(int a, int b, int c, String s). How do I call the second constructor from the first, just to save writing all the logic? I thought about something like this, but it gives me a 'null' object: public Tree(int a, int b, int c){ Tree t1 = new Tree(a, b, c, "randomString"); }
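
    The usual fix is constructor chaining, so the shorter constructor delegates to the longer one instead of constructing a separate, discarded Tree. A minimal C# sketch follows (the capital-S String in the question suggests it may be Java, where the equivalent is calling this(a, b, c, "randomString") as the first statement of the shorter constructor); the field names here are illustrative.

      public class Tree
      {
          private readonly int a, b, c;
          private readonly string label;

          // All initialization logic lives in the longest constructor.
          public Tree(int a, int b, int c, string label)
          {
              this.a = a;
              this.b = b;
              this.c = c;
              this.label = label;
          }

          // The shorter constructor chains to the longer one with ": this(...)".
          // Creating "new Tree(a, b, c, ...)" inside the body, as in the question,
          // initializes a different object and leaves the current one untouched.
          public Tree(int a, int b, int c) : this(a, b, c, "randomString")
          {
          }
      }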

    Read the article

  • GitHub Api: How to get Root :tree_sha of a repository?

    - by Chris Jacob
    How do I get the Root :tree_sha of a GitHub repository via the GitHub API? The GitHub API help pages don't seem to explain this critical piece of information: http://develop.github.com/p/object.html "Can get the contents of a tree by tree SHA: tree/show/:user/:repo/:tree_sha" To get a listing of the root tree for the facebox project from our commit listing, we can call this: $ curl http://github.com/api/v2/yaml/tree/show/defunkt/facebox/a47803c9ba26213ff194f042ab686a7749b17476

    Read the article

  • Convert Subclass to Inherited Class

    - by Dave
    I have a C# .NET 2.0 project A that has a form (TreeForm) that uses Tree objects. I have a project B that has a class Oak that inherits Tree. When I try to reference and use the TreeForm in project B with my Oak objects, it's looking for objects of type Tree. I've tried casting the Oak objects to Tree, but I get the error "Cannot convert type Oak to Tree". How can I use my Oak objects in the TreeForm from project A?
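
    A minimal sketch of the inheritance setup that would let this work; the project and type names are assumptions, since neither project's code is shown. If Oak derives from the same Tree class that TreeForm expects, the upcast is implicit and no cast is needed. A "Cannot convert type 'Oak' to 'Tree'" error usually means Oak derives from a different Tree (for example, a second Tree class declared inside project B) than the one project A's TreeForm references.

      namespace ProjectA
      {
          public class Tree
          {
              public string Name { get; set; }
          }

          public class TreeForm
          {
              // Accepts the base type; any class derived from ProjectA.Tree may be passed in.
              public void ShowTree(Tree tree) { System.Console.WriteLine(tree.Name); }
          }
      }

      namespace ProjectB
      {
          // Derive from ProjectA.Tree (via a project reference to A), not from a second
          // Tree class defined in project B.
          public class Oak : ProjectA.Tree { }

          public static class Demo
          {
              public static void Main()
              {
                  var form = new ProjectA.TreeForm();
                  form.ShowTree(new Oak { Name = "oak" }); // implicit upcast, no explicit cast required
              }
          }
      }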

    Read the article

  • [C#] Problems with implementing generic IEnumerator and IComparable

    - by r0h
    Hi all! I'm working on an AVL Tree. The tree itself seems to be working, but I need an iterator to walk through the values of the tree. Therefore I tried to implement the IEnumerator interface. Unfortunately I get a compile-time error implementing IEnumerator and IComparable. First the code and below that the error. class AvlTreePreOrderEnumerator<T> : IEnumerator<T> where T :IComparable<T> { private AvlTreeNode<T> current = default(T); private AvlTreeNode<T> tree = null; private Queue<AvlTreeNode<T>> traverseQueue = null; public AvlTreePreOrderEnumerator(AvlTreeNode<T> tree) { this.tree = tree; //Build queue traverseQueue = new Queue<AvlTreeNode<T>>(); visitNode(this.tree.Root); } private void visitNode(AvlTreeNode<T> node) { if (node == null) return; else { traverseQueue.Enqueue(node); visitNode(node.LeftChild); visitNode(node.RightChild); } } public T Current { get { return current.Value; } } object IEnumerator.Current { get { return Current; } } public void Dispose() { current = null; tree = null; } public void Reset() { current = null; } public bool MoveNext() { if (traverseQueue.Count > 0) current = traverseQueue.Dequeue(); else current = null; return (current != null); } } The error given by VS2008: Error 1 The type 'T' cannot be used as type parameter 'T' in the generic type or method 'Opdr2_AvlTreeTest_Final.AvlTreeNode'. There is no boxing conversion or type parameter conversion from 'T' to 'System.IComparable'. For now I've not included the tree and node logic. If anybody thinks it is necessary to resolve this problem, just say so! Thx!
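
    That error usually comes from a constraint mismatch: the enumerator constrains T to IComparable<T>, but the AvlTreeNode<T> it instantiates was declared with a different constraint (plain IComparable), and the compiler cannot prove one satisfies the other. A minimal C# sketch follows; since the asker's AvlTreeNode isn't shown, the node class here is an assumption. It also initializes the current field with default(AvlTreeNode<T>) rather than default(T), which is a separate error in the posted code.

      using System;
      using System.Collections;
      using System.Collections.Generic;

      // Both declarations agree on "where T : IComparable<T>".
      public class AvlTreeNode<T> where T : IComparable<T>
      {
          public T Value;
          public AvlTreeNode<T> LeftChild;
          public AvlTreeNode<T> RightChild;
      }

      public class AvlTreePreOrderEnumerator<T> : IEnumerator<T> where T : IComparable<T>
      {
          // default(AvlTreeNode<T>) is null; default(T) does not compile, because the field holds a node, not a value.
          private AvlTreeNode<T> current = default(AvlTreeNode<T>);
          private readonly Queue<AvlTreeNode<T>> traverseQueue = new Queue<AvlTreeNode<T>>();

          public AvlTreePreOrderEnumerator(AvlTreeNode<T> root)
          {
              Visit(root);
          }

          // Pre-order: enqueue the node, then its left and right subtrees.
          private void Visit(AvlTreeNode<T> node)
          {
              if (node == null) return;
              traverseQueue.Enqueue(node);
              Visit(node.LeftChild);
              Visit(node.RightChild);
          }

          public T Current { get { return current.Value; } }
          object IEnumerator.Current { get { return Current; } }

          public bool MoveNext()
          {
              current = traverseQueue.Count > 0 ? traverseQueue.Dequeue() : null;
              return current != null;
          }

          public void Reset() { current = null; }
          public void Dispose() { current = null; }
      }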

    Read the article

  • SQL Server 2008 uses half the CPU’s

    - by ACALVETT
    I recently got my hands on a couple of 4 socket servers with Intel E7-4870's (10 cores per CPU), and with hyper-threading enabled that gave me 80 logical CPUs. The server has Windows 2008 R2 SP1 along with SQL 2008 (currently we cannot deploy SQL 2008 R2 for the application being hosted). When SQL Server started I noticed only 2 NUMA nodes were configured and 40 logical cores, where there should have been 4 NUMA nodes and 80 logical cores (see below). The problem is caused by the fact that...(read more)

    Read the article

  • SQL Server Split() Function

    - by HighAltitudeCoder
    Ever wanted a dbo.Split() function, but not had the time to debug it completely? Let me guess - you are probably working on a stored procedure with 50 or more parameters; two or three of them are parameters of differing types, while the other 47 or so are all of the same type (id1, id2, id3, id4, id5...). Worse, you've found several other similar stored procedures with the ONLY DIFFERENCE being the number of like parameters taped to the end of the parameter list. If this is the situation you find yourself in now, you may be wondering, "why am I working with three different copies of what is basically the same stored procedure, and why am I having to maintain changes in three different places? Can't I have one stored procedure that accomplishes the job of all three?" My answer to you: YES! Here is the Split() function I've created.

      /******************************************************************************
                                        Split.sql
      ******************************************************************************/
      /******************************************************************************
        Split a delimited string into sub-components and return them as a table.

        Parameter 1: Input string which is to be split into parts.
        Parameter 2: Delimiter which determines the split points in input string.
        Works with space or spaces as delimiter. Split() is apostrophe-safe.

        SYNTAX:
        SELECT * FROM Split('Dvorak,Debussy,Chopin,Holst', ',')
        SELECT * FROM Split('Denver|Seattle|San Diego|New York', '|')
        SELECT * FROM Split('Denver is the super-awesomest city of them all.', ' ')
      ******************************************************************************/
      USE AdventureWorks
      GO

      IF EXISTS
            (SELECT *
            FROM sysobjects
            WHERE xtype = 'TF'
            AND name = 'Split'
            )
      BEGIN
            DROP FUNCTION Split
      END
      GO

      CREATE FUNCTION Split (
            @InputString      VARCHAR(8000),
            @Delimiter        VARCHAR(50)
      )
      RETURNS @Items TABLE (
            Item              VARCHAR(8000)
      )
      AS
      BEGIN
            IF @Delimiter = ' '
            BEGIN
                  SET @Delimiter = ','
                  SET @InputString = REPLACE(@InputString, ' ', @Delimiter)
            END

            IF (@Delimiter IS NULL OR @Delimiter = '')
                  SET @Delimiter = ','

            --INSERT INTO @Items VALUES (@Delimiter)   -- Diagnostic
            --INSERT INTO @Items VALUES (@InputString) -- Diagnostic

            DECLARE @Item       VARCHAR(8000)
            DECLARE @ItemList   VARCHAR(8000)
            DECLARE @DelimIndex INT

            SET @ItemList = @InputString
            SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)
            WHILE (@DelimIndex != 0)
            BEGIN
                  SET @Item = SUBSTRING(@ItemList, 0, @DelimIndex)
                  INSERT INTO @Items VALUES (@Item)

                  -- Set @ItemList = @ItemList minus one less item
                  SET @ItemList = SUBSTRING(@ItemList, @DelimIndex+1, LEN(@ItemList)-@DelimIndex)
                  SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)
            END -- End WHILE

            IF @Item IS NOT NULL -- At least one delimiter was encountered in @InputString
            BEGIN
                  SET @Item = @ItemList
                  INSERT INTO @Items VALUES (@Item)
            END

            -- No delimiters were encountered in @InputString, so just return @InputString
            ELSE INSERT INTO @Items VALUES (@InputString)

            RETURN

      END -- End Function
      GO

      ---- Set Permissions
      --GRANT SELECT ON Split TO UserRole1
      --GRANT SELECT ON Split TO UserRole2
      --GO

    The syntax is basically as follows:

      SELECT <fields>
      FROM Table 1
      JOIN Table 2 ON ...
      JOIN Table 3 ON ...
      WHERE LOGICAL CONDITION A
      AND LOGICAL CONDITION B
      AND LOGICAL CONDITION C
      AND TABLE2.Id IN (SELECT * FROM Split(@IdList, ','))

    @IdList is a parameter passed into the stored procedure, and the comma (',') is the delimiter you have chosen to split the parameter list on. You can also use it like this:

      SELECT <fields>
      FROM Table 1
      JOIN Table 2 ON ...
      JOIN Table 3 ON ...
      WHERE LOGICAL CONDITION A
      AND LOGICAL CONDITION B
      AND LOGICAL CONDITION C
      HAVING COUNT(SELECT * FROM Split(@IdList, ','))

    Similarly, it can be used in other aggregate functions at run-time:

      SELECT MIN(SELECT * FROM Split(@IdList, ',')), <fields>
      FROM Table 1
      JOIN Table 2 ON ...
      JOIN Table 3 ON ...
      WHERE LOGICAL CONDITION A
      AND LOGICAL CONDITION B
      AND LOGICAL CONDITION C
      GROUP BY <fields>

    Now that I've (hopefully effectively) explained the benefits of using this function and implementing it in one or more of your database objects, let me warn you of a caveat that you are likely to encounter. You may have a team member who waits until the right moment to ask you a pointed question: "Doesn't this function just do the same thing as using the IN function? Why didn't you just use that instead? In other words, why bother with this function?" What's happening is, one or more team members has failed to understand the reason for implementing this kind of function in the first place. (Note: this is THE MOST IMPORTANT ASPECT OF THIS POST). Allow me to outline a few pros to implementing this function, so you may effectively parry this question. Touche.

    1) Code consolidation. You don't have to maintain what is basically the same code and logic, but with varying numbers of the same parameter, in several SQL objects. I'm not going to go into the cons related to using this function, because the aforementioned team member is probably more than adept at pointing these out. Remember, the real positive contribution is you are decreasing the likelihood that your team fails to update all (x) duplicate copies of what are basically the same stored procedure, and so on... This is the classic downside to duplicate code. It is a virus, and you should kill it. You might be better off rejecting your team member's question, and responding with your own: "Would you rather maintain the same logic in multiple different stored procedures, and hope that the team doesn't forget to always update all of them at the same time?". In his head, he might be thinking "yes, I would like to maintain several different copies of the same stored procedure", although you probably will not get such a direct response.

    2) Added flexibility - you can use the Split function elsewhere, and for splitting your data in different ways. Plus, you can use any kind of delimiter you wish. How can you know today the ways in which you might want to examine your data tomorrow? Segue to my next point.

    3) Because the function takes a delimiter parameter, you can split the data in any number of ways. This greatly increases the utility of such a function and enables your team to work with the data in a variety of different ways in the future. You can split on a single char, symbol, word, or group of words. You can split on spaces. (The list goes on... test it out).

    Finally, you can dynamically define the behavior of a stored procedure (or other SQL object) at run time, through the use of this function. Rather than have several objects that accomplish almost the same thing, why not have only one instead?

    Read the article

  • External USB Drive

    - by ErocM
    I have a server to which I hooked up an external USB drive. It was formatted in Windows and has files on it already. I'm new to Ubuntu so please be patient... I have two questions: Will Ubuntu see the drive since it was formatted in Windows? How do I mount this drive, or rather, how do I know it's seen by Ubuntu? Thanks! I did fdisk -l and this is what I have, but I don't see it. It's a 1TB drive: Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0001eb47 Device Boot Start End Blocks Id System /dev/sda1 * 2048 499711 248832 83 Linux /dev/sda2 501758 625141759 312320001 5 Extended /dev/sda5 501760 625141759 312320000 8e Linux LVM Disk /dev/sdc: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/sdc doesn't contain a valid partition table Disk /dev/sdb: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/sdb doesn't contain a valid partition table Disk /dev/mapper/ubuntu-root: 316.6 GB, 316577677312 bytes 255 heads, 63 sectors/track, 38488 cylinders, total 618315776 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/ubuntu-root doesn't contain a valid partition table Disk /dev/mapper/ubuntu-swap_1: 3217 MB, 3217031168 bytes 255 heads, 63 sectors/track, 391 cylinders, total 6283264 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 This is an external USB hard drive, not a thumb drive :)

    Read the article

  • format/build raid 5 with one 4k drive, three 512b

    - by skidawgz
    I have 4 WD 1TB drives which I want to build into a 4x1TB RAID 5 array. I am not sure what course of action to take next. How do I configure my 4th drive (sde) to align with the rest? Will this affect performance? I received this message (which brings me here to ask these questions): The device presents a logical sector size that is smaller than the physical sector size. Aligning to a physical sector (or optimal I/O) size boundary is recommended, or performance may be impacted. fdisk -l shows: Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xf324ba09 Device Boot Start End Blocks Id System /dev/sdb1 2048 1953525167 976761560 fd Linux raid autodetect Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes 81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x38bcc1f0 Device Boot Start End Blocks Id System /dev/sdc1 2048 1953525167 976761560 fd Linux raid autodetect Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes 81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x570f77e7 Device Boot Start End Blocks Id System /dev/sdd1 2048 1953525167 976761560 fd Linux raid autodetect Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0xeb665e7b Device Boot Start End Blocks Id System

    Read the article

  • How do you kill a process tree in linux?

    - by itsadok
    Sometimes, sending a SIGTERM to a process will cause it to send SIGTERM to all its child processes. However, sometimes this doesn't work. Is there a command or a utility that will allow me to kill a process and all its child processes at the same time? I usually resort to manually collecting all the pids into one kill command, but it feels stupid. This SO question asks how to do this with perl, but anything that gets the job done would be great.

    Read the article

  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune.  That’s enough data to start to look at the results in aggregate.  The first thing I want to look at is logical reads.  This is the easiest metric to identify and fix. When you upload a trace, I rank each statement based on the total number of logical reads.  I also calculate each statement’s percentage of the total logical reads.  I do the same thing for CPU, duration and logical writes.  When you view a statement you can see all the details like this: This single statement consumed 61.4% of the total logical reads on the system while we were tracing it.  I also wanted to see the distribution of reads across statements.  That graph looks like this: On average, the highest ranked statement consumed just under 50% of the reads on the system.  When I tune a system, I’m usually starting in one of two modes: this “piece” is slow or the whole system is slow.  If a given piece (screen, report, query, etc.) is slow you can usually find the specific statements behind it and tune them.  You can make that individual piece faster but you may not affect the whole system. When you’re trying to speed up an entire server you need to identify those queries that are using the most disk resources in aggregate.  Fixing those will make them faster and it will leave more disk throughput for the rest of the queries. Here are some of the things I’ve learned querying this data: The highest ranked query averages just under 50% of the total reads on the system. The top 3 ranked queries average 73% of the total reads on the system. The top 10 ranked queries average 91% of the total reads on the system. Remember these are averages across all the traces that have been uploaded.  And I’m guessing that people mainly upload traces where there are performance problems so your mileage may vary. I also learned that slow queries aren’t the problem.  Before I wrote ClearTrace I used to identify queries by filtering on high logical reads using Profiler.  That picked out individual queries but those rarely ran often enough to put a large load on the system. If you look at the execution count by rank you’d see that the highest ranked queries also have the highest execution counts.  The graph would look very similar to the one above but flatter.  These queries don’t look that bad individually but run so often that they hog the disk capacity. The takeaway from all this is that you really should be tuning the top 10 queries if you want to make your system faster.  Tuning individually slow queries will help those specific queries but won’t have much impact on the system as a whole.
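
    A hypothetical C# sketch of the kind of aggregation described above (TraceTune's actual implementation isn't shown in the post): group trace rows by statement, rank them by total logical reads, and compute each statement's share of the total. The TraceRow type and the sample numbers are illustrative.

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public class TraceRow
      {
          public string StatementText { get; set; }
          public long LogicalReads { get; set; }
      }

      public static class ReadRanking
      {
          public static void Main()
          {
              var rows = new List<TraceRow>
              {
                  new TraceRow { StatementText = "SELECT ... FROM Orders",    LogicalReads = 614000 },
                  new TraceRow { StatementText = "SELECT ... FROM Customers", LogicalReads = 250000 },
                  new TraceRow { StatementText = "UPDATE Inventory ...",      LogicalReads = 136000 }
              };

              long totalReads = rows.Sum(r => r.LogicalReads);

              // Rank statements by total logical reads and compute each one's percentage of the total.
              var ranked = rows
                  .GroupBy(r => r.StatementText)
                  .Select(g => new { Statement = g.Key, Reads = g.Sum(r => r.LogicalReads) })
                  .OrderByDescending(x => x.Reads)
                  .Select((x, i) => new { Rank = i + 1, x.Statement, x.Reads, Percent = 100.0 * x.Reads / totalReads });

              foreach (var r in ranked)
                  Console.WriteLine("{0}. {1:F1}%  {2}", r.Rank, r.Percent, r.Statement);
              // Output starts with: 1. 61.4%  SELECT ... FROM Orders
          }
      }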

    Read the article

  • Can't mount data DVD using 12.04

    - by dash2
    I cannot mount a data DVD using Ubuntu 12.04. I can read the DVD using a different OS, so the DVD is not the problem. I can read other CDs using Ubuntu. The error I get is: $ sudo mount /dev/dvd /cdrom mount: no medium found on /dev/sr0 The relevant output from lshw is: *-cdrom description: DVD reader product: DVD/CDRW UJDA775 vendor: MATSHITA physical id: 0.0.0 bus info: scsi@4:0.0.0 logical name: /dev/cdrom logical name: /dev/cdrw logical name: /dev/dvd logical name: /dev/sr0 version: CB03 capabilities: removable audio cd-r cd-rw dvd configuration: ansiversion=5 status=nodisc I have installed ubuntu-restricted-extras and run /usr/share/doc/libdvdread4/install-css.sh.

    Read the article

  • How to change the install path of my Linux Source tree?

    - by Sen
    I was trying to bring up my custom kernel. I did the following: make menuconfig && make modules && make modules_install && make install I would like to change the install PATH. How can I do that? I tried doing export INSTALL_PATH=<my custom path> But then it is only creating the vmlinux.bin (it is not creating the ramdisk image!!) But if I am not doing that, make install will automatically create the ramdisk image in the default /boot folder. How can I change that? Thanks, Sen

    Read the article

  • Can I make Puppet's module-to-file mapping to start searching at the top of the modules tree?

    - by John Siracusa
    Consider these two Puppet module files: # File modules/a/manifests/b/c.pp class a::b::c { include b::c } # File modules/b/manifests/c.pp class b::c { notify { "In b::c": } } It seems that when Puppet hits the include b::c directive in class a::b::c, it searches for the corresponding *.pp file by looking backwards from the current class and decides that it has found the correct file at ../../b/c.pp. In other words, it resolves b::c to the same *.pp file that the include b::c statement appears in: modules/a/manifests/b/c.pp I expected it (and would like it) to instead find and load the file modules/b/manifests/c.pp. Is there a way to make Puppet do this? If not, it seems to me that module names may not contain any other module names anywhere within them, which is a pretty surprising restriction.

    Read the article

  • How can I automatically synchronize a directory tree on multiple machines?

    - by Blacklight Shining
    I have two Mac laptops and a Debian server, each with a directory that I would like to keep in sync between the three. The solution should meet the following criteria (in rough order of importance): It must not use any third-party service (e.g. Dropbox, SugarSync, Google whatever). This does not include installing additional software (as long as it's free). It must not require me to use specific directories or change my way of storing things. (Dropbox does this IIRC) It must work in all directions (changes made on /any/ machine should be pushed to the others) All data sent must be encrypted (I have ssh keypairs set up already) It must work even when not all machines are available (changes should be pushed to a machine when it comes back online) It must work even when the /directories/ on some machines are not available (they may be stored on disk images which will not always be mounted) This can be solved for Macs by using launchd to automatically launch and kill (or in some way change the behavior of) whatever daemon is used for syncing when the images are mounted and unmounted. It must be immediate (using an event-based system, not a periodic one like cron) It must be flexible (if more machines are added, I should be able to incorporate them easily) I also have some preferences that I would like to be fulfilled, but do not have to be: It should notify me somehow if there are conflicts or other errors. It should recognize symbolic and hard links and create corresponding ones. It should allow me to create a list of exceptions (subdirectories which will not be synced at all). It should not require me to set up port forwarding or otherwise reconfigure a network. This can be solved by using an ssh tunnel with reverse port forwarding. If you have a solution that meets some, but not all of the criteria, please contribute it in the comments as it might be useful in some way, and it might be possible to meet some of the criteria separately. What I tried, and why it didn't work: rsync and lsyncd do not support bidirectional synchronization csync2 is designed for server clusters and does not appear to work with machines with dynamic IPs DRBD (suggested by amotzg) involves installing a kernel module and does not appear to work on systems running OS X

    Read the article

  • "The volume filesystem root has only..."

    - by jcslzr
    I am having this problem in Ubuntu 12.04, but I find it strange that when I go to /tmp it won't allow me to delete some files, with messages "Operation not permitted" or "this file could not be handled because you don't have permissions to read it". It is only a PC and I have the root password. I was trying to get at least 2000 MB of free space on the root file system to upgrade to 12.10 and see if that resolved the problem. Currently free space on root file system is 190 MB. This is my output: root@jcsalazar-Vostro-3550:~# df Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda6 7688360 7112824 184984 98% / udev 2009288 4 2009284 1% /dev tmpfs 806636 1024 805612 1% /run none 5120 0 5120 0% /run/lock none 2016584 5316 2011268 1% /run/shm /dev/sda5 472036 255920 191745 58% /boot /dev/sda7 30758848 7085480 22110900 25% /home root@jcsalazar-Vostro-3550:~# sudo parted -l Model: ATA TOSHIBA MK3261GS (scsi) Disk /dev/sda: 320GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 106MB 105MB primary fat16 2 106MB 15.8GB 15.7GB primary ntfs boot 3 15.8GB 278GB 262GB primary ntfs 4 278GB 320GB 41.9GB extended 5 278GB 279GB 499MB logical ext4 6 279GB 287GB 7999MB logical ext4 7 287GB 319GB 32.0GB logical ext4 8 319GB 320GB 1443MB logical linux-swap(v1) I appreciate any new ideas that can help me. Thanks, Carlos

    Read the article

  • Partition does not start on physical sector boundary?

    - by jasmines
    I have one HD on my laptop, with two partitions (one ext3 with Ubuntu 12.04 installed and one swap). fdisk is giving me a Partition 1 does not start on physical sector boundary warning. What is the cause and do I need to fix it? If so, how? This is sudo fdisk -l (output translated from an Italian locale): Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x5a25087f Device Boot Start End Blocks Id System /dev/sda1 * 63 1448577023 724288480+ 83 Linux Partition 1 does not start on physical sector boundary. /dev/sda2 1448577024 1465147391 8285184 82 Linux swap / Solaris This is the related sudo lshw output: *-disk description: ATA Disk product: WDC WD7500BPKT-0 vendor: Western Digital physical id: 0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: 01.0 serial: WD-WX21CC1T0847 size: 698GiB (750GB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=5a25087f *-volume:0 description: EXT3 volume vendor: Linux physical id: 1 bus info: scsi@0:0.0.0,1 logical name: /dev/sda1 logical name: / version: 1.0 serial: cc5c562a-bc59-4a37-b589-805b27b2cbd7 size: 690GiB capacity: 690GiB capabilities: primary bootable journaled extended_attributes large_files recover ext3 ext2 initialized configuration: created=2010-02-27 09:18:28 filesystem=ext3 modified=2012-06-23 18:33:59 mount.fstype=ext3 mount.options=rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered mounted=2012-06-28 00:20:47 state=mounted *-volume:1 description: Linux swap volume physical id: 2 bus info: scsi@0:0.0.0,2 logical name: /dev/sda2 version: 1 serial: 16a7fee0-be9e-4e34-9dc3-28f4eeb61bf6 size: 8091MiB capacity: 8091MiB capabilities: primary nofs swap initialized configuration: filesystem=swap pagesize=4096 These are the related /etc/fstab lines: UUID=cc5c562a-bc59-4a37-b589-805b27b2cbd7 / ext3 errors=remount-ro,user_xattr 0 1 UUID=16a7fee0-be9e-4e34-9dc3-28f4eeb61bf6 none swap sw 0 0
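
    A small worked example of what the warning means, using the numbers from the fdisk output above: /dev/sda1 starts at logical sector 63, and 63 × 512 bytes is not a multiple of the 4096-byte physical sector, so filesystem blocks can straddle two physical sectors and writes may be slower. The C# sketch below just does the arithmetic; actually fixing it means moving or recreating the partition so it starts on a sector divisible by 8 (e.g. 2048), which is not something to do casually on a live root filesystem.

      using System;

      public static class AlignmentCheck
      {
          public static void Main()
          {
              const long logicalSectorSize = 512;      // bytes, from the fdisk output
              const long physicalSectorSize = 4096;    // bytes, from the fdisk output
              const long sda1StartSector = 63;         // /dev/sda1 start sector
              const long sda2StartSector = 1448577024; // /dev/sda2 start sector

              Console.WriteLine(IsAligned(sda1StartSector, logicalSectorSize, physicalSectorSize)); // False
              Console.WriteLine(IsAligned(sda2StartSector, logicalSectorSize, physicalSectorSize)); // True
          }

          // A partition is aligned when its byte offset is a multiple of the physical sector size.
          private static bool IsAligned(long startSector, long logicalSize, long physicalSize)
          {
              return (startSector * logicalSize) % physicalSize == 0;
          }
      }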

    Read the article

  • Is there any professional way to illustrate difference between 2 diagrams?

    - by lamwaiman1988
    I made a document which describes the difference between the same logical structure from different versions (e.g. Structure A from version 8 and Structure A from version 9). Luckily I've got the logical structure diagram from the 2 functional specifications. I've managed to copy the image of each logical structure, paste them in MS Word and compare the 2 versions side by side. I don't know if there is a standard way to illustrate the difference. I simply draw a cross over the removed logical member from the previous version and draw a rectangle around the new logical member of the next version. I know my way is kinda childish. I am wondering how to present them professionally. In addition, you won't believe this, but MS Word doesn't have a CROSS shape, so I am actually using a multiplication sign that looks like a giant monster: This is why I hate myself. Unlike 2 separate lines, this shape is easy to use, draw and resize. I am wondering why MS Word doesn't care to include a normal cross.

    Read the article

  • Beaglebone Black running Debian, does device tree overlay act as an api?

    - by user3953989
    This may be more of a Linux-specific question, but... I've been reading many tutorials and it seems that you can use JavaScript, Python, and C++ to write code for the BeagleBone Black (BBB). It looks like the way C++ interfaces with the BBB hardware is via reading/writing text files on the OS, while Python has its own library. All the C++ examples out there control the GPIO and PWM via reading/writing to text files. Is this the only way to access the hardware, or just how Linux does drivers?

    Read the article

< Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >