Search Results

Search found 5685 results on 228 pages for 'encrypted partition'.

Page 188/228

  • Deleting Windows 8 install

    - by Yann4
    I have a Windows 7 PC, and I had installed Windows 8 on an SSD, where it was the only partition on that SSD. I've now realised that I never use it, so I went to uninstall it and followed these steps. The PC now boots into 7 as expected; however, I can't delete or format the drive that has Windows 8 on it. The format dropdown selection is greyed out, and when I tried to simply delete all of the files and use the disk as is, I apparently no longer have the rights to delete them now that Windows 8 isn't there. The drive in question has 3 partitions: one that I made for the Windows 8 system, and two smaller system-created partitions (one 300 MB recovery and one 100 MB EFI). How do I go about formatting the drive?

    Read the article

  • Emacs doesn't use ~/.ssh/config when accessing files on a remote machine

    - by Yotam
    I have a fresh install of Arch Linux. I've installed Emacs from the repos, and my home directory is mounted from a separate partition. I have old settings in my ~/.ssh/config, along with authentication keys I've regularly used before. Now, when I try to connect to a remote machine using Emacs, Emacs asks for my password and uses the wrong username. Clearly, Emacs doesn't access my config file. When I ssh or scp directly to the machine, things work fine. What do I need to update?

    Read the article

  • How to reduce the size (disk space) of Windows 8?

    - by humanityANDpeace
    This question is about what things I can do to reduce the size that Windows 8 uses. Background For example: at present, with only one program installed (MS Access 2007), about 15 GB of my hard disk space is used. I have little space (it's a 17 GB partition on an SSD). I would like solutions such as: removing files not really needed (drivers not actually needed by the system); help files not really needed (documentation); pagefile.sys (assuming I have 4 GB of RAM and no real need for swapping); hiberfil.sys (used for hibernate and sleep... I need that, though I would regain about 4 GB of space). Ideally I would like to delete mostly files that I would most likely not need, though I have no good idea where to start. Since my setup (hardware) will not change, I would be willing to delete all the drivers that Windows 8 ships for hardware I do not have. The question is about ways to reduce the space that Windows 8 uses.

    Read the article

  • PPTP VPN on Server 2008 Enterprise

    - by Mike K
    I asked this question on Server Fault and was told that was not allowed, so I'm moving it here. I am running Windows Server 2008 Enterprise in my HOME network inside of VMware Workstation. I am running this on my home network to set up a PPTP VPN connection at home. I have correctly set up everything I needed to make it work, including opening all the ports, 1723 and GRE (protocol 47). I am able to connect just fine, but when I connect I don't have internet unless I uncheck "use remote gateway". The thing is, I want to use the remote gateway to route all my traffic through that connection. Can someone tell me why this isn't working and how to get it to work? When I have remote gateway checked and I do an ipconfig, I don't get a remote gateway for the VPN connection; it's 0.0.0.0, when I'd assume it should be 192.168.1.254 (my AT&T home router) if connected properly. Also, if I can't get the remote gateway issue to work and I have to uncheck that box to get internet, does this mean my VPN session is no longer encrypted? I am fully aware that PPTP is the weakest VPN encryption out there, but still having that extra layer of security when I'm on an unsecured Wi-Fi connection makes me feel a bit better. Thank you for all your help in advance. Someone told me I need to set up a gateway or router configured on the server. If that's the case, how do I go about telling the remote co

    Read the article

  • RAID striping on a desktop machine

    - by Blazemore
    I currently have a 120 GiB SSD which is pretty fast for things like game loading times and video editing. However, I was wondering about getting another identical drive and hooking the pair up as a striped RAID array in hardware (I boot multiple operating systems). This would have the dual benefits of providing a larger logical drive, while also providing greater performance. However, I have a few questions: What kind of performance increase can I expect to see with a pair of good quality SSDs? How expensive is a quality desktop RAID controller? Will the controller present the OS with a single logical drive? Does this mean I can still partition it and multi-boot? Basically, can I treat the RAID controller as "a hard drive" at the OS level?

    Read the article

  • Windows always logs in to temporary profile (thinks it is in D while it is in C)

    - by asdf
    I have Windows on C: (Disk 0, Partition 1). When I start, it works fine until the login screen. When I log in, it starts to display "Preparing your desktop..." and logs in to a temporary profile. I then have to run explorer.exe manually using Task Manager. If I execute %SystemRoot%, it tells me that Windows could not find D:\Windows (while Windows is in C:). I have no such drive as D:, so why does Windows think it is on D:? I've tried this "Bootmanager is missing" fix but it did not work. Bootrec /ScanOS from Windows Setup gives me "Total identified Windows installations: 0". Also note that Windows Setup correctly thinks Windows is installed on C:, but Windows itself thinks it is on D:.

    Read the article

  • Windows 7 Extend C Volume to Unallocated Space

    - by user327777
    A while back I installed Ubuntu and then later uninstalled it by (I think) deleting its partitions and recovering the Windows 7 boot loader. I am not that experienced with partitioning yet. As you can see here, there are two partitions that are now unallocated. The 9 GB one is a recovery partition or something that came with the computer. How can I extend my C: partition to use both of those? I do not want to have that much storage just wasted, sitting there. Currently, when I right-click on C: and hit Extend, the wizard pops up but there is no available space to extend into. http://i.imgur.com/VxEkdyR.png http://i.imgur.com/DdFZWX9.png Thanks everyone!

    Read the article

  • Boot a custom Linux by pressing the Lenovo OKR button?

    - by Semmu
    I have a Lenovo Y510p laptop and I'm a Linux user; I use Windows only for gaming. The device had no OS when I bought it, and I also installed an SSD alongside the 1 TB hard drive. I would like to "hack" the One-Key-Recovery button, because I have no interest in its default behaviour (I don't need Windows recovery), but if I could boot up a hidden, fail-safe Linux with it, that would be great. How could I achieve this? I tried to search for what the button does, but I only found some installers for Windows that could magically create a partition for the recovery. I would like to override this behaviour completely to boot up something else.

    Read the article

  • How to turn a DSL wireless modem into a Wi-Fi hub

    - by my_question
    I used to use DSL for my home internet with a Qwest Q1000 wireless modem. Now I have switched to cable and use a wireless router to cover the home. One problem: I just bought a desktop and I'd like to put it in a place far away from the router. The desktop only has a wired Ethernet interface; it does not receive Wi-Fi. The obvious solution is to go buy one of those little USB dongles that can receive Wi-Fi and plug it into the desktop. But before doing that, I am wondering if I can somehow re-use the Q1000 modem. The modem has 4 LAN ports and it has a Wi-Fi antenna. I tried connecting the desktop to the Q1000's LAN port; the system shows the wired connection is in place, but I cannot access the internet. It seems to me the Q1000's Wi-Fi function is to broadcast a Wi-Fi signal out instead of receiving one. I went to the Q1000 configuration page at 192.168.0.1, but it is not clear how to set this up. I also wonder about one thing: my home Wi-Fi is encrypted, so if I want the Q1000 to join the Wi-Fi, I need to somehow type in the password, and I am not sure how to do that either. Anyway, maybe this thing cannot be used in this fashion. If you have any suggestion, please shed some light. Thanks.

    Read the article

  • How to fix a Corrupted USB

    - by Help
    My USB stick has suddenly stopped working. It's a Busbi 4GB. The stick used to be G:, and as soon as I plugged it in I used to get a pop-up box showing that it was plugged in. Now, when I plug it in, it shows as I: and no pop-up box appears. It shows in My Computer as I:, and when I click to open it, it says "I:\ is not accessible. The disk structure is corrupted and unreadable." I have tried to change the drive letter back to G: but nothing happened (this was under Disk Management). Disk Management shows: Volume I:, Layout Simple, Type Basic, File system RAW, Status Healthy (Active, Primary Partition), Capacity 3.42 GB. I've tried right-clicking, Properties, then the Tools tab, and clicking Error checking (this option will check the volume for errors). When I click "Check now" it comes up with "The disk check could not be performed because Windows cannot access the disk."

    Read the article

  • Windows 8: 100% disk active time, no actual data transferred

    - by fingerbangpalateclick
    Occasionally, like several times an hour, my hard drive will appear to lock up: Task Manager will show 100% active time with read and write speeds of 0. I can still switch between open windows, but anything that requires a disk access will stall for around a minute until the hard disk starts working properly again. It happens at apparently random intervals, and only happens in Windows 8, not 7, nor Linux. It is probably not a problem with the disk itself: this is a relatively new hard drive, and S.M.A.R.T. is showing no errors. It only happens in Windows 8, not in any other OS that has used the same partition, or different partitions. So, what is going on? How can I fix this? Note: this is a different problem than this one: Extremely high disk activity without any real usage. My Task Manager would look similar, but Average Response Time, Read Speed, and Write Speed would all be 0.

    Read the article

  • Rebuild mdadm RAID5 array with fewer disks

    - by drjeep
    I have a 4-disk RAID5 array, one disk of which is starting to fail according to smartd. However, since I'm using less than half the space on /dev/md0, I'd like to rebuild the array without the failing disk. The closest scenario I've been able to find online has been this post; however, it contains bits that don't apply to me (LVM volumes) and also doesn't explain how I go about resizing the partition after I'm done. Please note I have backups of the important data, but I'd like to avoid rebuilding the array from scratch if possible.

    Read the article

  • Backing up to USB drives in 2008 R2

    - by jbbarnes
    I set up backups in 2008 R2 to back up to a USB drive, and the backup program formatted the drive so it doesn't appear with a drive letter. Rotating to the next of four backup drives breaks the backups because that drive still has a plain NTFS partition on it. I don't see a way to prepare each of my USB drives to be used for backups. What kind of formatting is the backup program expecting to find and write to, and how do I ensure my other USB drives can be rotated? Thanks.

    Read the article

  • Can I move files from a laptop hard drive with a corrupt sector to a USB hard drive?

    - by Corey
    I have a hard drive that is on its way out and won't boot to Windows 7. The Windows partition takes up the whole disk. I thought I would try to recover some recent files that hadn't been backed up. Assuming the files are recoverable, how can I explore the drive that has the corrupt sector and transfer files to a USB hard drive? If it helps, the laptop is able to see the USB drive when choosing a boot order. Some searching led me to WinPE 3.0, part of the Windows Automated Installation Kit. Is that a method?

    Read the article

  • Is there a way to measure wifi traffic on a network from a client?

    - by millimoose
    Is there some way (preferably one that comes with an existing tool) to measure the traffic going through the whole Wi-Fi network from a computer connected to it? (That is, not from the AP or something between the modem and AP.) My situation is this: a few months back, the internet connection at my parents' place got really sluggish and laggy. (Lag spikes that cause page loads to time out, connections plain getting lost and dropping packets forever, etc.) It's impossible to get mom's husband to do anything about this because he brushes it off with something like "just tell your sister to turn off torrents". Unfortunately the Wi-Fi router's firmware doesn't do traffic logging. I'm not going to risk bricking it to put WRT on it; nor am I keen on rewiring the network to add a proxy to analyse the traffic. (I'm one of those people that make computers break just by looking at them, except machines I own.) I'd like to be able to find out roughly how much data is going over the air here while all the LAN wires are out of the router, all the computers accused of torrenting are off, etc. The idea is to show one of two things. Either: even if everything but my MacBook is turned off, something is congesting the network (the husband is a systems developer and has a whole lot of mysterious hardware around that's not to be touched; one of those might be the culprit). Or: there is barely any traffic on the network, but the internet is still sluggish, meaning this is likely a problem the ISP should solve (some hardware of theirs being glitchy, someone on an aggregated line hogging it constantly...). The network is encrypted, but I can temporarily set it to open for the sake of finding this out. So, in conclusion: can this be done? Or is there some alternative way I could try to diagnose the problem?

    Read the article

  • Using Windows 7 and Fedora

    - by vedant1811
    I need to partition my hard disk for Windows and Fedora (root, swap, users). I thought of creating three primary partitions: two small ones (~5 GB each) for Windows 7 and Fedora, and a large one (~700 GB) for common files (pictures, videos, documents, etc.). Please tell me which file systems to use in each case and the set-up of primary and extended partitions. Also, I want to know where and on which file system the GRUB boot loader (my choice of OS chooser) should be installed. I have just bought a new Asus K53S and am using the Fedora installer's partitioner (Anaconda). Your help is greatly appreciated.

    Read the article

  • Recover snap server data

    - by Ugg
    Hi, I have a Snap Server 110. The machine powers on OK and the health check passes, but I am unable to connect: no response on the assigned IP, nor any ability to reach the device via the Snap Server Manager. I believe the device is powering on but not loading the OS. I tried pulling the disk and hooking it up to a Windows PC via USB, and using DiskInternals Linux Reader I am unable to access two of the partitions (one of which is the large data partition). There are three partitions on the drive; only one is accessible via Linux Reader. I am looking to recover the data off the drive; can anyone suggest a DIY option please?

    Read the article

  • How to ensure the local file is up-to-date or ahead (Dropbox sync) before TrueCrypt auto-mounts it?

    - by user620965
    There are a lot of tutorials out there stating that Dropbox's built-in encryption is not secure enough. Those tutorials recommend syncing a TrueCrypt container file so that all files in it are securely encrypted. This setup is known to be limited: you can NOT have that TrueCrypt container file mounted at the same time in more than one location. If you make changes to the contents of the container in more than one location at a time, this setup produces a conflict on the container file in the Dropbox system, resulting in one container file for each location. In my case that issue is not relevant; I do not use my data in more than one location at a time. I want to use the auto-mount feature of TrueCrypt on startup of Windows 7 to have a zero-configuration environment and start working right away. But I want to ensure that the local TrueCrypt container file is up-to-date before TrueCrypt mounts it automatically. Imagine you updated the contents of the container at your primary location and your secondary location was off for a long time; in that case it can take "a long time" until the Dropbox sync is complete (depending on your internet connection and the size of the container file). There is an option in TrueCrypt that ensures TrueCrypt does not update the timestamp of the container file, which speeds up the sync because the Dropbox client then does a differential sync instead of a time-consuming full sync. That is an improvement to the setup, but it does not fix my issue. The question is: how do I make the auto-mount function wait for the container file to be up-to-date (updated by Dropbox)? In contrast, if the file was changed locally but the remote file (in the Dropbox cloud) is still old (not yet updated by the sync process, or the sync is in progress), that should not make TrueCrypt wait for the sync. Suggestions?
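    One way to approach this (it is not a built-in TrueCrypt feature) is to replace the auto-mount shortcut with a small launcher that waits until the container file has stopped changing before handing it to TrueCrypt. Below is a minimal C# sketch of that idea; the container path, the TrueCrypt install path, the 30-second quiet period, and the drive letter are all placeholders you would adapt, and the TrueCrypt command-line switches should be checked against the TrueCrypt documentation for your version:

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Threading;

        class WaitThenMount
        {
            static void Main()
            {
                // Hypothetical paths: point these at your own Dropbox folder and TrueCrypt install.
                var container = @"C:\Users\me\Dropbox\secure.tc";
                var trueCrypt = @"C:\Program Files\TrueCrypt\TrueCrypt.exe";

                // Wait until the container has not changed (size or timestamp) for 30 seconds,
                // i.e. the Dropbox client appears to have finished writing to it.
                DateTime lastWrite;
                long lastSize;
                do
                {
                    lastWrite = File.GetLastWriteTimeUtc(container);
                    lastSize = new FileInfo(container).Length;
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                }
                while (File.GetLastWriteTimeUtc(container) != lastWrite
                       || new FileInfo(container).Length != lastSize);

                // Once the file looks stable, mount it (TrueCrypt will prompt for the password).
                Process.Start(trueCrypt, "/a /q /v \"" + container + "\" /l x");
            }
        }

    Note that this only detects that the local copy has stopped changing; it cannot tell whether a newer version still has to come down from the Dropbox servers, so it is a heuristic rather than a guarantee.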

    Read the article

  • Ubuntu 12.04LTS mountall: Disconnected from Plymouth

    - by user169954
    I have Ubuntu 12.04 LTS 64-bit running on a dual-core i5 with 8 GB RAM. On startup I get the message "mountall: Disconnected from Plymouth [OK]" and the system looks stuck. However, if I go to tty1, I can log in and startx, and everything seems to be fine except for being a bit sluggish. I can verify that my NFS mounts are OK, and that my swap is OK. Every time I reboot the system there is a _gdm_gdm_crash file in my /var/crash, which makes me think my problem is rooted in gdm, the X configs and/or the nvidia drivers. A bit of background in case it's relevant: 3 hours ago my desktop crashed. Following various 'tips' on the web I made a complete mess of my X server and X configuration files, and at one point I even had to recreate my swap partition. Anyway, after much struggle I managed to get to the state I mentioned above: I have a working system provided I always log in through tty1. What is this Plymouth anyway? Would it make a difference if I used gnome-wm instead of gdm or lightdm? (I mean to the startup, not to me :-) What bit of config do I change to tell startx to use gnome-wm, not gdm or lightdm? Thank you in advance.

    Read the article

  • SQL SERVER – Generate Report for Index Physical Statistics – SSMS

    - by pinaldave
    A few days ago, I wrote about SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS (Link). A user asked me whether we can use similar reports to get details about indexes. Yes, it is possible to do the same. Similar reports are available at the database level, just like those available at the server instance level. You can right-click on a database name and click Reports. Under Standard Reports, you will find the following reports: Disk Usage, Disk Usage by Top Tables, Disk Usage by Table, Disk Usage by Partition, Backup and Restore Events, All Transactions, All Blocking Transactions, Top Transactions by Age, Top Transactions by Blocked Transactions Count, Top Transactions by Locks Count, Resource Locking Statistics by Objects, Object Execute Statistics, Database Consistency History, Index Usage Statistics, Index Physical Statistics, Schema Change History, User Statistics. Select the report named Index Physical Statistics. Once clicked, a report containing all the index names along with other index-related information will be visible, e.g. index type and number of partitions. One column that caught my interest was Operation Recommended. In some places, it suggested that an index needs to be rebuilt. It is also possible to click and expand the partitions column and see additional details about the index as well. DBAs and developers who just want to get an idea of how their indexes are doing and their physical statistics can use this tool. Note: Please note that I will not rebuild my indexes just because this report is recommending it. There are many other parameters you need to consider before rebuilding indexes. However, this tool gives you accurate stats for your indexes, and they can be exported right away to Excel or PDF by clicking on the report. Reference: Pinal Dave (http://blog.SQLAuthority.com)
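    For readers who would rather pull the same kind of numbers programmatically than through the SSMS report, the same information is exposed through the sys.dm_db_index_physical_stats DMV. A minimal C# sketch follows; the connection string and database are placeholders, and the query is a simplified approximation of what the report shows rather than its exact definition:

        using System;
        using System.Data.SqlClient;

        class IndexPhysicalStats
        {
            static void Main()
            {
                // Placeholder connection string: point it at your own server and database.
                var connectionString = "Server=localhost;Database=MyDatabase;Integrated Security=true";

                // Per-index, per-partition physical statistics, most fragmented first.
                const string query = @"
                    SELECT OBJECT_NAME(ips.object_id) AS table_name,
                           i.name                     AS index_name,
                           ips.index_type_desc,
                           ips.partition_number,
                           ips.avg_fragmentation_in_percent,
                           ips.page_count
                    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
                    JOIN sys.indexes AS i
                      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
                    ORDER BY ips.avg_fragmentation_in_percent DESC;";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}.{1} (partition {2}, {3}): {4:F1}% fragmented over {5} pages",
                                reader["table_name"], reader["index_name"],
                                reader["partition_number"], reader["index_type_desc"],
                                reader["avg_fragmentation_in_percent"], reader["page_count"]);
                        }
                    }
                }
            }
        }

    As with the report itself, treat the fragmentation numbers as one input among several before deciding to rebuild anything.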

    Read the article

  • Installation stuck on "Installation Type" screen

    - by Andrew Latham
    I am trying to install Ubuntu 11.10 with Windows 7 from a CD. I am using an HP Pavilion dm4. I've never used Ubuntu (or any Linux) before. Everything goes alright until I get to the "Installation Type" screen. Instead of giving me options, it just has a blank menu, and all the buttons are disabled. When I click "Continue", it gives me an error saying that it can't find the root or something like that. The trial version works fine, but I can't actually install it. Everything on the trial version is really slow, presumably because everything is on the CD or the Windows partition. I did some research, but the only post I could find was http://ubuntuforums.org/showthread.php?t=1870478 Where the only advice is to format the entire drive, which I'm not willing to do. Any suggestions? I'm downloading 10.04 right now and I'm going to try with that instead. EDIT: 10.04 didn't work either. I got to the partitioning screen and got the same problem. I read some more forums, loaded up 11.10 trial from the disk, opened the Terminal and typed sudo apt-get remove dmraid and then y. Then I was actually able to see something on the "Installation type" page: "Erase disk and install Ubuntu" or "Something else". Which is weird, since Windows 7 should be installed. When I click Something Else, I get: /dev/sda /dev/sdb /dev/sdb1 (ntfs) (208 MB) (69 MB used) /dev/sdb2 (ntfs) (477542 MB) (unknown used) /dev/sdb3 (ntfs) (18085 MB) (16094 MB used) /dev/sdb4 (fat32) (4265 MB) (3084 MB used) I have no idea what any of this means. Also, my device for boot loader installation changed from /dev/sda to /dev/sda ATA SAMSUNG MZMPA032 (32.0 GB)

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion. For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively "walking" the tree. Some algorithms work this way on flat data structures, such as arrays, as well. This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, "dividing" the total work in each recursive step, and "conquering" the work when the remaining work is small enough to be solved easily. Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization. This is apparent from a common sense standpoint. Since we're dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme. Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data. Implementing this type of algorithm is fairly simple. The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke. This method works by taking any number of delegates defined as an Action, and operating them all in parallel. The method returns when every delegate has completed:

        Parallel.Invoke(
            () => { Console.WriteLine("Action 1 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); },
            () => { Console.WriteLine("Action 2 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); },
            () => { Console.WriteLine("Action 3 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }
        );

    Running this simple example demonstrates the ease of using this method. For example, on my system, I get three separate thread IDs when running the above code. By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer. We can divide our work in each step, and execute each task in parallel, recursively. For example, suppose we wanted to implement our own quicksort routine. The quicksort algorithm can be designed based on divide and conquer. In each iteration, we pick a pivot point, and use that to partition the total array. We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
    For example, let's look at this simple, sequential implementation of quicksort:

        public static void QuickSort<T>(T[] array) where T : IComparable<T>
        {
            QuickSortInternal(array, 0, array.Length - 1);
        }

        private static void QuickSortInternal<T>(T[] array, int left, int right) where T : IComparable<T>
        {
            if (left >= right)
            {
                return;
            }
            SwapElements(array, left, (left + right) / 2);
            int last = left;
            for (int current = left + 1; current <= right; ++current)
            {
                if (array[current].CompareTo(array[left]) < 0)
                {
                    ++last;
                    SwapElements(array, last, current);
                }
            }
            SwapElements(array, left, last);
            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }

        static void SwapElements<T>(T[] array, int i, int j)
        {
            T temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }

    Here, we implement the quicksort algorithm in a very common, divide and conquer approach. Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework's sort routine is slightly faster). On my system, for example, I can use the framework's sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average. Looking at this routine, though, there is a clear opportunity to parallelize. At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen. This can be rewritten to use Parallel.Invoke by simply changing it to:

        // Code above is unchanged...
        SwapElements(array, left, last);
        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1),
            () => QuickSortInternal(array, last + 1, right)
        );
        }

    This routine will now run in parallel. When executing, we now see the CPU usage across all cores spike while it executes. However, there is a significant problem here: by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds! We're using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time. This occurs because parallelization adds overhead. Each time we split this array, we spawn two new tasks to parallelize this algorithm! This is far, far too many tasks for our cores to operate upon at a single time. In effect, we're "over-parallelizing" this routine. This is a common problem when working with divide and conquer algorithms, and leads to an important observation: When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system. This can be done with a few different approaches, in this case. Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach. Since the first few recursions will all still be parallelized, our "deeper" recursive tasks will be running in parallel, and can take full advantage of the machine. This also dramatically reduces the overhead added by parallelizing, since we're only adding overhead for the first few recursive calls. There are two basic approaches we can take here. The first approach would be to look at the total work size, and if it's smaller than a specific threshold, revert to our serial implementation. In this case, we could just check right - left, and if it's under a threshold, call the methods directly instead of using Parallel.Invoke.
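    As an illustration of that first, size-based approach (this fragment is my own sketch, not code from the original article), the parallel calls at the end of QuickSortInternal could be guarded by a threshold along these lines; the 4096-element cutoff is an arbitrary placeholder you would tune for your data:

        // Code above is unchanged...
        SwapElements(array, left, last);

        const int sequentialThreshold = 4096; // placeholder cutoff; tune for your workload

        if (right - left < sequentialThreshold)
        {
            // Small partitions: recurse serially to avoid task overhead.
            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }
        else
        {
            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1),
                () => QuickSortInternal(array, last + 1, right));
        }
        }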
    The second approach is to track how "deep" in the "tree" we are currently at, and if we are below some number of levels, stop parallelizing. This approach is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly. This can be written very easily. If we pass a maxDepth parameter into our internal routine, we can restrict the amount of times we parallelize by changing the recursive call to:

        // Code above is unchanged...
        SwapElements(array, left, last);

        if (maxDepth < 1)
        {
            QuickSortInternal(array, left, last - 1, maxDepth);
            QuickSortInternal(array, last + 1, right, maxDepth);
        }
        else
        {
            --maxDepth;
            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1, maxDepth),
                () => QuickSortInternal(array, last + 1, right, maxDepth));
        }

    We no longer allow this to parallelize indefinitely, only to a specific depth, at which time we revert to a serial implementation. By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core. With this final change, my timings are much better. On average, I get the following timings:

        Framework via Array.Sort: 7.3 seconds
        Serial Quicksort Implementation: 9.3 seconds
        Naive Parallel Implementation: 14 seconds
        Parallel Implementation Restricting Depth: 4.7 seconds

    Finally, we are now faster than the framework's Array.Sort implementation.
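    The article seeds maxDepth with Environment.ProcessorCount but does not show the updated public entry point; a short sketch of what that wrapper might look like (my reconstruction and naming, not Reed's code) is:

        public static void ParallelQuickSort<T>(T[] array) where T : IComparable<T>
        {
            // Allow task-splitting for the first ProcessorCount levels of recursion;
            // anything deeper falls back to the serial path shown above.
            QuickSortInternal(array, 0, array.Length - 1, Environment.ProcessorCount);
        }

        // Usage:
        // var data = new double[10000000];
        // var rng = new Random();
        // for (int i = 0; i < data.Length; i++) data[i] = rng.NextDouble();
        // ParallelQuickSort(data);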

    Read the article

  • I can't finish installation of 12.10 on Windows 8

    - by Janna Zhou
    When I followed the instructions to install Ubuntu 12.10 with Wubi, the window that came up asked me to reboot the computer to finish the installation. I did so and got stuck with the message I have copied below. Could you please tell me how to finish the installation? I'm worried that if I follow the instruction below to remove_hiberfile in Windows it would result in a failure to boot my Windows 8 system: Completing the Ubuntu installation. For more installation boot options, press 'ESC' now... 0 BusyBox v1.19.3 (Ubuntu 1:1.19.3-7ubuntu1) built-in shell (ash) Enter 'help' for a list of built-in commands. (initramfs) Windows is hibernated, refused to mount. Failed to mount '/dev/sda2': Operation not permitted. The NTFS partition is hibernated. Please resume and shutdown Windows properly, or mount the volume read-only with the 'ro' mount option, or mount the volume read-write with the 'remove_hiberfile' mount option. For example type on the command line: mount -t ntfs-3g -o remove_hiberfile /dev/sda2 /isodevice mount: mounting /dev/sda2 on /isodevice failed: No such device Could not find the ISO /ubuntu/install/installation.iso This could also happen if the file system is not clean because of an operating system crash, an interrupted boot process, an improper shutdown, or unplugging of a removable device without first unmounting or ejecting it. To fix this, simply reboot into Windows, let it fully start, log in, run 'chkdsk /r', then gracefully shut down and reboot back into Windows. After this you should be able to reboot again and resume the installation.

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
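    To go with the Service Bus section above: partitioned queues can also be created from code rather than through the portal checkbox, by setting EnablePartitioning on a QueueDescription with the Microsoft.ServiceBus client library. A short sketch (the connection string and queue name are placeholders):

        using Microsoft.ServiceBus;
        using Microsoft.ServiceBus.Messaging;

        class CreatePartitionedQueue
        {
            static void Main()
            {
                // Placeholder: copy the real value from the portal's connection information dialog.
                var connectionString =
                    "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>";

                var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

                var description = new QueueDescription("orders")
                {
                    EnablePartitioning = true   // spread the queue across multiple brokers and message stores
                };

                if (!namespaceManager.QueueExists(description.Path))
                {
                    namespaceManager.CreateQueue(description);
                }
            }
        }

    The same flag exists on TopicDescription for partitioned topics; see the article linked in the post above for details and trade-offs.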

    Read the article

  • Random touchpad and keyboard freezes on new installation

    - by ancaleth
    My touchpad and keyboard freeze up on my newly installed Ubuntu 10.10. They remain frozen until I shut down manually. No keys work, the cursor doesn't move; it's like a screenshot. I was using Ubuntu 10.04 via Wubi before on this laptop, where this problem never occurred. (I did not migrate Wubi or upgrade to 10.10; it's a fresh start. 64-bit on a Dell Studio, plenty of RAM, plenty of free space on the partition, etc.) I can't say there is a pattern yet: once it happened during the download of packages with the Update Manager, once it was just Firefox running, no other program. In between these crashes the laptop was booted once, updates were installed, Firefox was used, and there weren't any problems. Both crashes should be in the attached kern.log, and I noticed there were some errors before the last crash (at the end, obviously). It seems the wireless was experiencing problems. This wasn't noticed on the user end, since the touchpad and keyboard were already frozen. kern.log: http://paste.ubuntu.com/552617/ How can the freezes be fixed? Edit: I will try Ctrl+Alt+F1 and then Ctrl+Alt+F7 when the next freeze occurs, to see if it works again after this, as suggested here. But the keyboard seemed pretty frozen to me.

    Read the article
