Search Results

Search found 12603 results on 505 pages for 'shadow copy'.

Page 157/505 | < Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >

  • empirical studies about the benefit of q&a sites on programming [on hold]

    - by nico1510
    I'm looking for empirical papers that investigate whether users benefit from Q&A sites like Stack Overflow. I welcome any papers related to this topic, e.g. an experiment investigating whether a specific task can be completed faster, an analysis of whether users understand the solutions on Q&A sites or just copy and paste them without thinking about it, or a comparative analysis of the code quality of users with access to Q&A sites versus users without internet access (only offline documentation of APIs).

    Read the article

  • Is there a CDN for backbone.marionette?

    - by Thunder Rabbit
    Getting started with Backbone and Marionette, I was about to copy the file at https://github.com/marionettejs/backbone.marionette/blob/master/lib/backbone.marionette.js to my local server, but wondered if there was a CDN version of it. For Underscore and Backbone dev, I'm including these two files, respectively: http://documentcloud.github.com/underscore/underscore-min.js and http://documentcloud.github.com/backbone/backbone-min.js Is there a similar URL for backbone.marionette.js?

    Read the article

  • Visual Studio 2010 Professional now on Dreamspark!

    - by Stacy Vicknair
    If you are a student and you were looking for your VS2010 fix today, be sure to check out Dreamspark.com and get your own copy! Dreamspark is simple; it’s about giving students Microsoft professional tools at no charge. Visit Dreamspark right now to sign up and get VS2010!

    Read the article

  • Can't see my iPod

    - by Tom Brito
    When I plug in my 32GB iPod Touch 4G, it mounts a 1GB drive. Rhythmbox does not react, neither does Banshee. Any ideas how to copy my music? The output of df is:

      Filesystem   1K-blocks     Used  Available Use% Mounted on
      /dev/sda1     28834716  4347480   23022512  16% /
      udev           1026788      288    1026500   1% /dev
      none           1026788     1496    1025292   1% /dev/shm
      none           1026788      204    1026584   1% /var/run
      none           1026788        0    1026788   0% /var/lock
      none           1026788        0    1026788   0% /lib/init/rw
      /dev/sda6     96124904 62709456   28532496  69% /home

    Read the article

  • How can I make a universal construction more efficient?

    - by VF1
    A "universal construction" is a wrapper class for a sequential object that enables it to be linearized (a strong consistency condition for concurrent objects). For instance, here's an adapted wait-free construction, in Java, from [1], which presumes the existence of a wait-free queue that satisfies the interface WFQ (which only requires one-time consensus between threads) and assumes a Sequential interface: public interface WFQ<T> // "FIFO" iteration { int enqueue(T t); // returns the sequence number of t Iterable<T> iterateUntil(int max); // iterates until sequence max } public interface Sequential { // Apply an invocation (method + arguments) // and get a response (return value + state) Response apply(Invocation i); } public interface Factory<T> { T generate(); } // generate new default object public interface Universal extends Sequential {} public class SlowUniversal implements Universal { Factory<? extends Sequential> generator; WFQ<Invocation> wfq = new WFQ<Invocation>(); Universal(Factory<? extends Sequential> g) { generator = g; } public Response apply(Invocation i) { int max = wfq.enqueue(i); Sequential s = generator.generate(); for(Invocation invoc : wfq.iterateUntil(max)) s.apply(invoc); return s.apply(i); } } This implementation isn't very satisfying, however, since it presumes determinism of a Sequential and is really slow. I attempted to add memory recycling: public interface WFQD<T> extends WFQ<T> { T dequeue(int n); } // dequeues only when n is the tail, else assists other threads public interface CopyableSequential extends Sequential { CopyableSequential copy(); } public class RecyclingUniversal implements Universal { WFQD<CopyableSequential> wfqd = new WFQD<CopyableSequential>(); Universal(CopyableSequential init) { wfqd.enqueue(init); } public Response apply(Invocation i) { int max = wfqd.enqueue(i); CopyableSequential cs = null; int ctr = max; for(CopyableSequential csq : wfq.iterateUntil(max)) if(--max == 0) cs = csq.copy(); wfqd.dequeue(max); return cs.apply(i); } } Here are my specific questions regarding the extension: Does my implementation create a linearizable multi-threaded version of a CopyableSequential? Is it possible extend memory recycling without extending the interface (perhaps my new methods trivialize the problem)? My implementation only reduces memory when a thread returns, so can this be strengthened? [1] provided an implementation for WFQ<T>, not WFQD<T> - one does exist, though, correct? [1] Herlihy and Shavit, The Art of Multiprocessor Programming.

    Read the article

  • Access files on Samsung Galaxy S3 external sd card using ubuntu 12.04

    - by nense
    I have a Samsung Galaxy S3 running the stock Samsung ROM, and I'm trying to transfer files (videos, photos, music and downloads) from my handset to my Ubuntu 12.04 system via USB. I have followed the links suggested in How to connect Samsung Galaxy S3 via USB?, but it all goes over my head. Can anyone help me with a simple GUI program or a link so I can simply copy and paste selected files from my phone onto my system?

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a higher-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

    MySQL before the InnoDB Plugin

    Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example:

      CREATE TABLE t(a INT);
      INSERT INTO t VALUES (1),(2),(3);
      CREATE INDEX a ON t(a);
      DROP TABLE t;

    The CREATE INDEX statement would be executed roughly as follows:

      CREATE TABLE temp(a INT, INDEX(a));
      INSERT INTO temp SELECT * FROM t;
      RENAME TABLE t TO temp2;
      RENAME TABLE temp TO t;
      DROP TABLE temp2;

    You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

    Fast Index Creation in the InnoDB Plugin of MySQL 5.1

    MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor.

    The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine.

    The ALTER TABLE interface in MySQL 5.6

    The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes.

    In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE. Also LOCK=SHARED and LOCK=EXCLUSIVE are available. The old-style table copying can be requested by ALGORITHM=COPY. That one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED.

    Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

    handler::check_if_supported_inplace_alter: Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed.

    handler::prepare_inplace_alter_table: InnoDB uses this method to set up the data dictionary cache for the upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.

    handler::inplace_alter_table: In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table).

    handler::commit_inplace_alter_table: This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild.

    The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine.

    Online Secondary Index Creation

    It may occur that an index needs to be created on a new column to speed up queries. But, it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases: set up a stub for the index, for logging changes; scan the table for index records; sort the index records; bulk load the index records; apply the logged changes; and replace the stub with the actual index.

    Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion.

    Native ALTER TABLE

    Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild.

    Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed, because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements.

    Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of the table. Threads that modify the table will also write the changes to the log.

    When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
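
    As a quick illustration of the clauses described above, here is a minimal sketch reusing the table t(a INT) from the earlier example; it is not from the original post, and the exact algorithm/lock combinations allowed depend on the operation:

      ALTER TABLE t ADD INDEX a (a), ALGORITHM=INPLACE, LOCK=NONE;   -- online secondary index creation
      ALTER TABLE t DROP INDEX a, ALGORITHM=INPLACE, LOCK=NONE;      -- quick drop, no table copy
      ALTER TABLE t ADD INDEX a (a), ALGORITHM=COPY, LOCK=SHARED;    -- force the old table-copying path

    If the requested combination is not supported, MySQL 5.6 refuses the statement with an error rather than silently downgrading it.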

    Read the article

  • Automating release management and CI on python projects under mercurial VCS

    - by ms4py
    I have a set of Python projects under the Mercurial VCS. I would like to automate the following tasks: run the test suite for every commit (CI); make a source distribution for every commit that has a tag in Mercurial, which is regarded as a new release; and copy that distribution to a special repository. Jenkins has been proposed in answers to similar questions, but I'm not sure whether it can handle the release management as intended.

    Read the article

  • My Automated NuGet Workflow

    - by Wes McClure
    When we develop libraries (whether internal or public), it helps to have a rapid ability to make changes and test them in a consuming application.

    Building

    Set up the library with automatic versioning and a nuspec:
    - Set the library assembly version to auto-increment build and revision: AssemblyInfo -> [assembly: AssemblyVersion("1.0.*")]. This auto-increments build and revision based on the time of the build.
    - Major & Minor: Major should be changed when you have breaking changes; Minor should be changed once you have a solid new release. During development I don’t increment these.
    - Create a nuspec and version it with the code: set the version to <version>$version$</version>. This uses the assembly’s version, which is auto-incrementing.

    Make changes to code, then run the automated build (ruby/rake): run “rake nuget”. The nuget task builds the nuget package and copies it to a local nuget feed. I use an environment variable to point at this so I can change it on a machine level! The nuget command below assumes a nuspec is checked in called Library.nuspec next to the csproj file.

      $projectSolution = 'src\\Library.sln'
      $nugetFeedPath = ENV["NuGetDevFeed"]

      msbuild :build => [:clean] do |msb|
        msb.properties :configuration => :Release
        msb.targets :Build
        msb.solution = $projectSolution
      end

      task :nuget => [:build] do
        sh "nuget pack src\\Library\\Library.csproj /OutputDirectory " + $nugetFeedPath
      end

    Set up the local nuget feed as a nuget package source (this is only required once per machine). Then go to the consuming project and update the package: Update-Package Library or Install-Package.

    TLDR: change library code; run “rake nuget”; run “Update-Package Library” in the consuming application; build/test! If you manually execute any of this process, especially copying files, you will find it a burden to develop the library and will find yourself dreading it, and even worse, making changes downstream instead of updating the shared library for everyone's sake.

    Publishing

    Once you have a set of changes that you want to release, consider versioning and possibly increment the minor version if needed. Pick the package out of your local feed, and copy it to a public / shared feed! I have a script to do this where I can drop the package on a batch file. Replace apikey with your nuget feed's apikey, and take out the confirm(s) if you don't want them.

      @ECHO off
      echo Upload %1?
      set /P anykey="Hit enter to continue "
      nuget push %1 apikey
      set /P anykey="Done "

    Note: it helps to prune all the unnecessary versions created during testing from your local feed once you are done and ready to publish.

    TLDR: consider the version number; run the command to copy to the public feed.
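
    For reference, here is a minimal sketch of what the Library.nuspec that the “rake nuget” task packs might look like; the id, authors and description values are placeholders I've assumed, and only the $version$ replacement token is the piece the workflow above relies on:

      <?xml version="1.0"?>
      <package>
        <metadata>
          <id>Library</id>
          <version>$version$</version>
          <authors>Your Name</authors>
          <description>Shared library packaged by the rake nuget task.</description>
        </metadata>
      </package>

    When nuget pack is run against the csproj, $version$ is substituted with the auto-incremented assembly version described above.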

    Read the article

  • Ubuntu 12.04 VPS doesn't boot with mysql in nsswitch.conf

    - by chrisv
    1and1 VPS ("dynamic cloud server") does not boot any more as soon as mysql lookup is enabled in nsswitch.conf - any suggestions appreciated. Minimal setup to reproduce the problem: install Ubuntu 12.04 / LTS minimal server image install mysql-server, libnss-mysql-bg, nscd configure /etc/libnss-mysql.cfg and /etc/libnss-mysql-root.cfg set up appropriate database tables configure nss lookups through mysql in nsswitch.conf passwd: compat mysql group: compat mysql shadow: compat mysql Now, when I try to reboot the server it just hangs. No logs (maybe due to /var not yet being mounted), and I can't see console output (since this is a VPS). Booting into recovery image and removing "mysql" from /etc/nsswitch.conf makes the system bootable again, so this is definitely related to nsswitch/libnss-mysql-bg. There's a thread on gentoo-users which seems to describe a similar problem, unfortunately there's no real solution described, also the thread is rather old (from 2006) so I'm not sure whether this applies to me at all.

    Read the article

  • How to install packages without internet connection

    - by user114874
    I'm a beginner with the Linux operating system and have the following question. I am using Ubuntu 10.10 and I don't have an internet connection at home, so how can I install packages manually? For example, if my friend and I have the same version and the same hardware configuration, and he has installed all the packages on his laptop, can I install them by copying the packages from his laptop to mine? If there is a way, how do I do it? Thanks in advance.

    Read the article

  • Apress Deal of the Day - 5Mar/2011 - Crafting Digital Media: Audacity, Blender, Drupal, GIMP, Scribus, and other Open Source Tools

    - by TATWORTH
    Today's Apress $10 deal of the day at http://www.apress.com/info/dailydeal has been on before. I have a copy and it is a useful read on open source applications for Windows. Crafting Digital Media: Audacity, Blender, Drupal, GIMP, Scribus, and other Open Source Tools Open source software, also known as free software, now offers a creative platform with world-class programs. Crafting Digital Media is your foundation course in photographic manipulation, illustration, animation, making music, video editing, and more using open source software.

    Read the article

  • Why are the man pages for gvfs-commands not in Ubuntu?

    - by Pili Garcia
    These are the man pages for gvfs-cat, gvfs-less, gvfs-monitor-dir, gvfs-move, gvfs-rm, gvfs-trash, gvfs-copy, gvfs-ls, gvfs-monitor-file, gvfs-open, gvfs-save, gvfs-tree, gvfs-info, gvfs-mkdir, gvfs-mount, gvfs-rename and gvfs-set-attribute: http://www.unix.com/man-pages.php?query=gvfs-info&apropos=0&section=1&os=opensolaris Every year I see more commands without a man page or even an apropos description, nothing! Is that the trend?

    Read the article

  • Posting over at LIV Interactive

    - by D'Arcy Lussier
    First, no no no, I’m not leaving GWB! What I am going to be doing is contributing my business-focussed posts to a professional community that fellow Winnipegger Coree Francisco created called LIVInteractive! LIVInteractive publishes articles on business, design, development, content (marketing, copy, etc.), and community…and has some fantastic contributors providing new content regularly! Head on over and check the site out…lots of great info to be had! D

    Read the article

  • can't run this shell script

    - by user2413
    So I'm trying to install this script:
    - I copy the folder to ~/Documents/icambridge-get-shit-done-1222b6b
    - change .bashrc (the one in the user directory, is that the right one?) by adding the line PATH=:~/Documents/icambridge-get-shit-done-1222b6b"${PATH}"
    - set the files in icambridge-get-shit-done-1222b6b as executable using sudo chmod +x
    - type sudo ./get-shit-done

    and I get: /usr/bin/env: php: No such file or directory. What is the problem?

    Read the article

  • Using e-mail address as user name for SMTP and POP3

    - by PeterMmm
    I have an exim4 setup as SMTP. My user naming scheme is to name all mail users for this server m001, m002, m003, ... and then redirect to a real e-mail address with virtual domains. How can I allow my users to authenticate with exim to send mail using either their system user name (m001) or the email address ([email protected])? Login information for m001 is stored in the Linux system files (passwd, shadow). The users are linked through entries in a virtual address table for each domain that this server can serve:

      # /etc/exim4/virtual/example.com
      m001: [email protected]
      m002: [email protected]
      m003: [email protected]

    Can the same be applied to qpopper?

    Read the article

  • udev complaining about deprecated rules

    - by Kerrek SB
    I have Ubuntu 10.10, upgraded several times from older versions over many years, and during startup, I get a long list of warning messages from udevd that certain syntax rules are deprecated. (I don't actually have a copy of the messages, since they don't appear to be logged in the ring buffer or any log files.) Does anyone know how I can summarily prune or upgrade the troublesome rule files to the current format?

    Read the article

  • Adding a watermark to MP4 with FFmpeg, compatible with Flash and HTML5 players

    - by ?????? ?????
    How can I add watermark to my MP4 (compatible with Flash player and HTML5 player)? ffmpeg -y -i video.mp4 -acodec copy -b 400k -vf "movie=logo.png [watermark]; [in][watermark] overlay=main_w/2-overlay_w/2:main_h/2-overlay_h/2 [out]" /var/www/videos/out.mp4 The above command works fine in VLC, Windows Media Player, but the HTML5 player can't play out.mp4, Flash player plays only sound. ffmpeg was installed with this command: sudo apt-get install ffmpeg. What am I doing wrong?

    Read the article

  • USB sector 0 not found Kingston USB DT100 G2

    - by java
    Windows constantly asks me to "Format disk". When I go to the command prompt and type format H: /fs:ntfs or format H: /fs:fat32, the response is: Cannot determine the number of sectors on this volume. In DISKPART I get the following:

      DISKPART> detail disk

      Kingston DT 100 G2 USB Device
      Disk ID: 00000000
      Type : USB
      Status : Online
      Path : 0
      Target : 0
      LUN ID : 0
      Location Path : UNAVAILABLE
      Current Read-only State : No
      Read-only : No
      Boot Disk : No
      Pagefile Disk : No
      Hibernation File Disk : No
      Crashdump Disk : No
      Clustered Disk : No

      DISKPART> detail volume

      Read-only : No
      Hidden : No
      No Default Drive Letter: No
      Shadow Copy : No
      Offline : No
      BitLocker Encrypted : No
      Installable : No
      Volume Capacity : 0 B
      Volume Free Space : 0 B

    What is the problem?

    Read the article

  • Evolution not downloading messages for offline use

    - by RandomX678
    Even though I've selected "Copy folder content locally for offline operation" on my inbox and "Automatically synchronize account locally" in my account options, Evolution still only downloads headers for my inbox. When I click an email, it's downloaded from the server. I want all my emails to be available to read offline; as far as I know this should be possible, so is it my configuration that's wrong, or is it a bug?

    Read the article

  • C# 5: At last, async without the pain

    - by Alex.Davies
    For me, the best feature in Visual Studio 11 is the async and await keywords that come with C# 5. I am a big fan of asynchronous programming: it frees up resources, in particular the thread that a piece of code needs to run in. That lets that thread run something else while waiting for your long-running operation to complete. That's really important if that thread is the UI thread, or if it's holding a lock because it accesses some data structure. Before C# 5, I think I was about the only person in the world who really cared about asynchronous programming. The trouble was that you had to go to extreme lengths to make code asynchronous. I would forever be writing methods that, instead of returning a value, accepted an extra argument that is a "continuation". Then, when calling the method, I'd have to pass a lambda into it, which contained all the stuff that needed to happen after the method finished. Here is a real snippet of code that is in .NET Demon:

      m_BuildControl.FilterEnabledForBuilding(
          projects,
          enabledProjects => m_OutOfDateProjectFinder.FilterNeedsBuilding(
              enabledProjects,
              newDirtyProjects =>
              {
                  // Mark any currently broken projects as dirty
                  newDirtyProjects.UnionWith(m_BrokenProjects);

                  // Copy what we found into the set of dirty things
                  m_DirtyProjects = newDirtyProjects;
                  RunSomeBuilds();
              }));

    It's just obtuse. Who puts a lambda inside a lambda like that? Well, me obviously. But surely enabledProjects should just be the return value of FilterEnabledForBuilding? And newDirtyProjects should just be the return value of FilterNeedsBuilding? C# 5 async/await lets you write asynchronous code without it looking so stupid. Here's what I plan to change that code to, once we upgrade to VS 11:

      var enabledProjects = await m_BuildControl.FilterEnabledForBuilding(projects);
      var newDirtyProjects = await m_OutOfDateProjectFinder.FilterNeedsBuilding(enabledProjects);

      // Mark any currently broken projects as dirty
      newDirtyProjects.UnionWith(m_BrokenProjects);

      // Copy what we found into the set of dirty things
      m_DirtyProjects = newDirtyProjects;
      RunSomeBuilds();

    Much easier to read! But how is this the same code? If we were on the UI thread, doesn't the UI thread have to block while FilterEnabledForBuilding runs? No, it doesn't, and that's the magic of the await keyword! It cuts your method up into its constituent pieces, much like I did manually with lambdas before. When you run it, only the piece up to the first await actually runs. The rest is passed to FilterEnabledForBuilding as a continuation, which will get called back whenever that method is finished. In the meantime, our thread returns, and can go back to making the UI responsive, or whatever else threads do in their spare time. This is actually a massive simplification, and if you're interested in all the gory details and speed hacks that the await keyword actually does for you, I recommend Jon Skeet's blog posts about it.
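
    To make the transformation concrete, here is a small self-contained sketch (the LoadValuesAsync and SumAsync names are invented for illustration; this is not the .NET Demon code) showing an await-based method next to the hand-written ContinueWith continuation it replaces:

      using System;
      using System.Linq;
      using System.Threading.Tasks;

      class AwaitSketch
      {
          // Hypothetical asynchronous data source, standing in for a real long-running call.
          static Task<int[]> LoadValuesAsync()
          {
              return Task.FromResult(new[] { 1, 2, 3 });
          }

          // C# 5: reads top to bottom, but the calling thread is released at the await.
          static async Task<int> SumAsync()
          {
              var values = await LoadValuesAsync(); // control returns to the caller here
              return values.Sum();                  // runs later, as a compiler-generated continuation
          }

          // Roughly the continuation you had to wire up by hand before C# 5.
          static Task<int> SumWithContinuation()
          {
              return LoadValuesAsync().ContinueWith(t => t.Result.Sum());
          }

          static void Main()
          {
              Console.WriteLine(SumAsync().Result);            // 6
              Console.WriteLine(SumWithContinuation().Result); // 6
          }
      }

    Both calls print 6; the difference is that SumAsync hands the tail of the method to the task as a continuation instead of blocking a thread, which is exactly the rewrite described above.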

    Read the article

  • Using Alt-select in SSMS, Word, and elsewhere

    - by John Paul Cook
    A surprising number of database people, and Windows users in general, don't know about Alt-select. This is a Windows technique, not unique to SSMS, that allows a user to select an arbitrary rectangular region of text and delete it, cut it, or copy it. Where I find Alt-select particularly useful in SSMS is when I have a bunch of inline comments that are too far to the right: I want to delete much of the whitespace in front of them to move them to the left without disturbing the rest of the T-SQL.

    Read the article

  • Why won't this application start when I log in?

    - by George Edison
    I have a file in ~/.config/autostart/ that looks like this:

      [Desktop Entry]
      Type=Application
      Exec=python ~/Documents/StackApplet/stackapplet.py
      Icon=/usr/share/pixmaps/stackapplet.png
      Terminal=false
      Comment=a panel indicator for monitoring StackExchange sites
      Name=StackApplet
      Categories=Utility;

    Unfortunately, it isn't working - the application is not starting when I log in. If I open a terminal and copy-and-paste the command listed in Exec above, then the application runs just fine. What am I doing wrong?

    Read the article

  • Grub does not show a Windows 8 option after dual boot

    - by skytreader
    So, I've successfully dual-booted my Windows 8 machine with Ubuntu 12.04. However, I still don't have a convenient method of choosing what OS to load at boot time. After installing Ubuntu, my computer still loads Windows 8 directly. I then added grubx64.efi to the white list of my boot loader. But after that, my machine loads Ubuntu directly without even a shadow of GRUB showing up! I used boot-repair and I got this paste.ubuntu URL: paste.ubuntu.com/1326074. After running boot-repair (and re-white listing the grubx64.efi file), GRUB now shows up but without any Windows 8 option! Lastly, I ran sudo fdisk -l and it gave me this:

      WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

      Disk /dev/sda: 750.2 GB, 750156374016 bytes
      255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x6396389f

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1                1  1465149167   732574583+  ee  GPT

      Partition 1 does not start on physical sector boundary.

    I'm guessing my problem has something to do with the warning from fdisk above but I don't know what to do with it. How do I proceed now?

    Read the article

  • Google Analytics Views - Why Use Them?

    - by pee2pee
    I've been reading about Google Analytics views but still not sure why I would use them. I'm the only person in the company who understands and uses Google Analytics. We have no subdomains. Is there any reason why I would want to use views? Google Analytics has been going for some years now and I just created a copy of the original view but this has zero data, so I can't see how it would benefit me.

    Read the article
