Search Results

Search found 657 results on 27 pages for 'revert'.

Page 6/27 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Ubuntu 12.04 to 14.04 LTS upgraded but crashed soon after restarting the system

    - by LEELA MANOJ N
    In Ubuntu 12.04 the Update Manager offered to upgrade to 14.04, so I left it upgrading overnight. After all the downloading, fetching and removing of packages it asked for a restart. Since that restart there is no login screen; all I get is a black screen with a BusyBox (initramfs) prompt. How can I solve this issue, or is there a way to revert back to 12.04? Please help.
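
    A common first step at a BusyBox (initramfs) prompt is to run a filesystem check on the root partition and reboot; a minimal sketch, assuming the root filesystem is on /dev/sda1 (the device name is only an example and may differ on your machine):

        (initramfs) fsck /dev/sda1 -y
        (initramfs) reboot

    If the prompt keeps coming back after a clean fsck, the upgrade itself likely failed part-way and restoring from a backup or reinstalling may be the safer path.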

    Read the article

  • xmodmap reverting periodically

    - by JediBrooker
    I'm using xmodmap to swap the Control and Command keys on my MacBook Pro. However, the keys periodically revert back to their original state, which is becoming quite annoying. I'm on Ubuntu 13.10, and as far as I can remember this started happening after the system got a keyboard-settings update (in System Settings). Any ideas how to either (1) delete the keyboard settings, or (2) stop the keyboard settings from reverting my keys? Cheers!
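
    A minimal sketch of one common workaround, assuming the swap is already expressed with xmodmap: dump the working keymap once, then reapply it whenever the settings daemon resets it (for example from a keyboard shortcut or a startup script).

        # capture the current (working) keymap once
        xmodmap -pke > ~/.Xmodmap
        # reapply it whenever the keys revert
        xmodmap ~/.Xmodmap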

    Read the article

  • SQL Azure - Creating backups and copies of your databases

    As a DBA, you have always followed the practice of backing up your database (or taking a snapshot of it) before making any changes, so that you can revert to the old database state if something goes wrong. You also use a backup of your database, restored into the respective environment, to set up development or test environments. If you are moving to SQL Azure, what would you do in these cases, given that backup/restore and database snapshots are not supported as of now?
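
    A minimal sketch of the database-copy facility the article most likely leans on (the database and server names are placeholders):

        -- run against the master database of the destination server
        CREATE DATABASE MyDb_Copy AS COPY OF MyServer.MyDb;

        -- the copy runs asynchronously; check its state
        SELECT name, state_desc FROM sys.databases WHERE name = 'MyDb_Copy';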

    Read the article

  • How to set Fn+F2 to show the battery's status through OSD and not power statistics?

    - by papukaija
    In Natty, pressing Fn+F2 on my Samsung NC10 opened a notification showing the remaining battery power. After upgrading to Oneiric, it opens the power statistics instead. Is there a way to revert this change? Checking the battery status via the notification is much faster than digging it out of the power statistics. I know the remaining battery time can be shown on the panel, but I'm used to Fn+F2.

    Read the article

  • Cannot access Windows via grub after new dual-boot Ubuntu install

    - by user287474
    I previously installed Ubuntu on a computer that had Windows XP on it and got the two to run alongside each other (I could boot into either OS). Now I have installed Ubuntu again, on my MSI notebook with Windows 8.1, and I cannot reach GRUB without hitting Escape on startup; even then, I can no longer boot Windows. Before all of this I created a recovery point and File History, and saved a backup on an external hard drive in case anything went wrong. How can I revert my computer back to its state before the installation?
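
    Before reverting anything, one thing worth trying from the Ubuntu side is regenerating the GRUB menu so it picks the Windows Boot Manager back up; a minimal sketch, assuming a standard Ubuntu install with os-prober available:

        sudo os-prober
        sudo update-grub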

    Read the article

  • Primary IDE Channel: Ultra DMA Mode 5 >> PIO Mode

    - by Wesley
    Hi, my netbook was having huge audio lag and abnormally slow processing. After doing some searching on the internet, I found out that I needed to uninstall/reinstall the Primary IDE Channel found under the IDE controller section in Device Manager, then set the Transfer Mode to DMA if available, and everything would be great. For a period of time I would see that "Ultra DMA Mode 5" was the current transfer mode, but every so often it reverts back to "PIO Mode", which is when it gets really laggy. What can I do to prevent the Primary IDE Channel from reverting from Ultra DMA Mode to PIO Mode? Also, my netbook has BSODed a few times while in PIO Mode, without any real explanation. I have a Samsung N120. Specs are as follows: http://www.samsung.com/ca/consumer/office/mobile-computing/netbook/NP-N120-KA01CA/index.idx?pagetype=prd_detail&tab=spec&fullspec=F. The only difference is that I have upgraded to 2.0 GB of DDR2 RAM. EDIT: For anyone looking for an answer to this problem, click the link in Kythos's answer and look at number 6 (Re-enable DMA using the Registry Editor). This always works for me now. If on reboot you seem to only have a black screen after XP starts loading, just wait... it is still loading and will show signs of life after 2-3 minutes.

    Read the article

  • Does the mysql Client API Library version have to match the installed MySQL/Percona server version?

    - by William Jamieson
    I'm running Scientific Linux 6.3 (binary compatible with RedHat/CentOS/etc.) as a LAMP stack. I've installed Percona server and client v5.5 from the Percona yum repository. However, when I run phpinfo() I notice that under the MySQL and mysqli sections it lists the Client API Library version as 5.1.66, not 5.5.x. I'm guessing these need to match, at least to the major version, and I have no idea what the possible consequences of such a mismatch could be. Do I need to revert to Percona server and client v5.1? This is for a production environment, so it needs to be right. I'd appreciate any input or experience people could offer. (Note: I will also be cross-posting this on the Percona forums.)

    Read the article

  • Serve a specific set of error pages for different subdirectories

    - by navitronic
    I am currently trying to set up two different sets of error documents for separate folders within a website. I have two folders within the root of the site: demo/ and live/. Any requests that return 404s or 403s within the demo folder need to load one set of pages for the Apache ErrorDocuments, e.g. ErrorDocument 404 /statuses/demo-404.html and ErrorDocument 403 /statuses/demo-403.html, and the live folder needs to go to similarly named files: ErrorDocument 404 /statuses/live-404.html and ErrorDocument 403 /statuses/live-403.html. So far I have tried placing an .htaccess file in both directories with the ErrorDocument directives pointing to the specific files. The 404 works fine and references the correct page; however, the 403s do not work and revert to the server default when trying to access forbidden folders within the demo directory. The logs show the following: [Wed Jun 16 04:47:44 2010] [crit] [client 115.64.131.144] (13)Permission denied: /home/abstract/public_html/demo/xxx/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable. Is this correct? Would Apache revert to the default because it is trying to look for the .htaccess in a folder it doesn't have permission in? Why wouldn't it work its way back through the folder tree? Can I make it do this?
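
    One way around the 403 case, sketched here rather than taken from the eventual answer: since Apache cannot even read the .htaccess inside the forbidden folder, per-directory overrides never apply there, so the ErrorDocument directives can instead live in the server or virtual-host configuration, where they hold regardless (paths taken from the question):

        <Directory /home/abstract/public_html/demo>
            ErrorDocument 404 /statuses/demo-404.html
            ErrorDocument 403 /statuses/demo-403.html
        </Directory>

        <Directory /home/abstract/public_html/live>
            ErrorDocument 404 /statuses/live-404.html
            ErrorDocument 403 /statuses/live-403.html
        </Directory>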

    Read the article

  • How to handle files that don't need version control in mercurial

    - by richardh
    I am new to Mercurial, and for the most part do LaTeX reports and statistical calculations in R using .csv and/or .sqlite files. Re LaTeX, all I really care about is the .tex file. Re R, I don't need version control on the .csv or .sqlite files because they are static. When I do 'hg add' for a repo with a .csv and/or .sqlite file, I get a warning like: rev2.sqlite: up to 3070 MB of RAM may be required to manage this file (use 'hg revert rev2.sqlite' to cancel pending addition). So I revert and subsequently use adds like hg add -X *.sqlite. I guess I really have two questions: (1) Should I ignore these warnings? Because these large files are static, can I just add them to the repo knowing that the diffs will always be empty and not worry about wasted resources? (2) If I should keep excluding these files from the repo, is there a way to make that the default? I.e., add something to my .hgrc file that always appends options like -I *.tex -I *.R to my 'hg add' commands? Thanks!
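
    A minimal sketch of the usual way to keep such files out of the repository without extra command-line options: list them in an .hgignore file at the repository root, and hg add / hg status will then skip them automatically.

        syntax: glob
        *.csv
        *.sqlite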

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively "walking" the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, "dividing" the total work in each recursive step, and "conquering" the work when the remaining work is small enough to be solved easily.

    Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization.  This is apparent from a common sense standpoint.  Since we're dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data.

    Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed:

        Parallel.Invoke(
            () => { Console.WriteLine("Action 1 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); },
            () => { Console.WriteLine("Action 2 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); },
            () => { Console.WriteLine("Action 3 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }
        );

    Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively.

    For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.  For example, let's look at this simple, sequential implementation of quicksort:

        public static void QuickSort<T>(T[] array) where T : IComparable<T>
        {
            QuickSortInternal(array, 0, array.Length - 1);
        }

        private static void QuickSortInternal<T>(T[] array, int left, int right) where T : IComparable<T>
        {
            if (left >= right)
            {
                return;
            }

            SwapElements(array, left, (left + right) / 2);

            int last = left;
            for (int current = left + 1; current <= right; ++current)
            {
                if (array[current].CompareTo(array[left]) < 0)
                {
                    ++last;
                    SwapElements(array, last, current);
                }
            }

            SwapElements(array, left, last);

            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }

        static void SwapElements<T>(T[] array, int i, int j)
        {
            T temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }

    Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework's sort routine is slightly faster).  On my system, for example, I can use the framework's sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

    Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to:

        // Code above is unchanged...
            SwapElements(array, left, last);

            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1),
                () => QuickSortInternal(array, last + 1, right)
            );
        }

    This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We're using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time.

    This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we're "over-parallelizing" this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation: When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

    This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our "deeper" recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we're only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it's smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right - left, and if it's under a threshold, call the methods directly instead of using Parallel.Invoke.

    The second approach is to track how "deep" in the "tree" we are currently at, and if we are below some number of levels, stop parallelizing.  This approach is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly.

    This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the amount of times we parallelize by changing the recursive call to:

        // Code above is unchanged...
            SwapElements(array, left, last);

            if (maxDepth < 1)
            {
                QuickSortInternal(array, left, last - 1, maxDepth);
                QuickSortInternal(array, last + 1, right, maxDepth);
            }
            else
            {
                --maxDepth;
                Parallel.Invoke(
                    () => QuickSortInternal(array, left, last - 1, maxDepth),
                    () => QuickSortInternal(array, last + 1, right, maxDepth));
            }

    We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core.

    With this final change, my timings are much better.  On average, I get the following timings:

      • Framework via Array.Sort: 7.3 seconds
      • Serial Quicksort Implementation: 9.3 seconds
      • Naive Parallel Implementation: 14 seconds
      • Parallel Implementation Restricting Depth: 4.7 seconds

    Finally, we are now faster than the framework's Array.Sort implementation.

    Read the article

  • Increase the size of Taskbar Preview Thumbnails in Windows 7

    - by Matthew Guay
    Taskbar thumbnail previews are incredibly useful in Windows 7, but for some users they may be too small.  Here's a tool to help you make your taskbar thumbnail previews just like you want them.

    A few years ago we featured a tool to increase the size of your thumbnail previews in Windows Vista, but unfortunately that application doesn't work correctly in Windows 7.  However, there is a new tool for Windows 7 that lets you customize your taskbar thumbnail previews even more.  With it, you can change almost anything about your taskbar thumbnail previews.  The default taskbar thumbnails are nice, but may be too small for users with vision problems or with very high resolution monitors.  Whatever your need, this is a great tool to make the thumbnails look and work just like you want.

    Let's get started.  Download the Windows 7 Taskbar Thumbnail Customizer (link below), and unzip the files.  Run the Windows 7 Taskbar Thumbnail Customizer when you're done; simply double-click on it, you don't need to run it as administrator.  Now you can change the size, spacing, margin, and delay time of your taskbar thumbnails.  The Delay Time setting is very handy; to speed things up, we set it to 0 so there's no delay between when you mouse over a taskbar icon and when you see the thumbnail.  Simply drag the slider to the size (or time, for the delay setting) you want, and click Apply settings.  Windows Explorer will automatically restart, and your new taskbar thumbnails will be ready to use.

    Here is the default Windows 7 thumbnail preview of a video playing in Media Player, and here's the taskbar thumbnail enlarged to 380px.  Now you can really watch a video from your taskbar thumbnail.  The larger taskbar thumbnails show up a little differently in Internet Explorer: it shows a larger preview of your active tab, and smaller previews of your other tabs.  Notice also that Aero Peek shows the tab you're hovering over in Internet Explorer, but the tab name in IE's toolbar doesn't change to the one you're previewing.  Here we increased the width between the thumbnails, while keeping the thumbnails at their default size.  This could be useful if you have trouble selecting the correct preview, and we can imagine it would be a very useful modification on touch screens.

    And, if you ever take your changes too far and want to revert to your default Windows 7 taskbar thumbnail previews, simply run the Customizer again and select Restore Defaults.  Windows Explorer will restart again, and your taskbar thumbnails will be back to their default settings.

    Conclusion: This tool makes it safe and easy to change the size, spacing, and more of your taskbar thumbnail previews.  And since you can always revert to the default settings, you can experiment without fear of messing up your computer.  If you'd prefer to change the settings manually without using a dedicated application, here's a list of the registry changes you can make to accomplish this by hand.

    Link: Download the Windows 7 Taskbar Thumbnail Customizer from The Windows Club.  Vista users: Increase Size of Windows Vista Taskbar Previews.

    Read the article

  • Best way to "un-promote" files in Accurev?

    - by Luke Rinard
    My company uses Accurev for source control, and for all its benefits, there's one simple action that I just can't figure out how to accomplish. Often we have someone accidentally push a file up too far in our stream structure -- from the "Development" stream to the "Release" stream, for example. What is the best way to "un-promote" this file? That is to say, to get the old version of the file back into the "Release" stream, and keep the new version of the file in the "Development" stream, where it belongs? Just doing a "Revert to Backed" or other Revert action on the file in the Release stream will either cause an old version of the file to propagate down into Development, or will make the file disappear entirely. In the above case, the developer will have to jump through hoops with setting basis times on streams, or use the command line tool to do a checkout of an old transaction, to get the file back. Sometimes the people in question are non-technical, so this is not a good solution. I have also considered moving the files to a "higher ground" stream, reverting, and then cross-promoting them to the lower stream again. This seems really kludgy. It seems like Accurev is obscure enough that Google is no help, so I turn to the good folks of StackOverflow for help -- has anybody figured out the "Accurevy" way to accomplish this?

    Read the article

  • Forcing file redirection on x64 for a 32-bit application

    - by Paul Alexander
    The silent redirection of 64-bit system files to their 32-bit equivalents can be turned off and restored with Wow64DisableWow64FsRedirection and Wow64RevertWow64FsRedirection. We use this for certain file identity checks in our application. The problem is that in performing some of these tasks, we might call a framework or Windows API in a DLL that has not yet been loaded. If redirection is enabled at that time, the wrong version of the DLL may be loaded, resulting in an "XXX is not a valid Win32 application" error. I've identified the few API calls in question, and what I'd like to do is force redirection on for the duration of those calls and then revert it back, just the opposite of the provided Win32 APIs. Unfortunately these calls do not provide any sort of WOW64 compatibility flag like some of the registry methods do. The obvious alternative is to use Wow64EnableWow64FsRedirection, passing TRUE for Wow64FsEnableRedirection. However, there are a variety of warnings about the use of this method, and a note that it is not compatible with the Disable/Revert combo methods that have replaced it. Is there a safe way to force redirection on for a given Win32 call? The docs state the redirection is thread specific, so I've considered spinning up a new thread for the specific call with appropriate locks and waits, but I was hoping for a simpler solution.

    Read the article

  • Javascript conversion, from Prototype to jQuery

    - by moshimoshi
    Hi, I would like to convert the following JavaScript code, written against the Prototype framework, to jQuery: Event.observe(window, 'load', function() { $$('.piece').each(function(item) { new Draggable(item, { revert: true } ); }); $$('.cell').each(function(item) { Droppables.add(item, { accept: 'piece', onDrop: function(piece, cell) { cell.descendants().each(function(item) { item.remove(); } ); piece.remove(); piece.setStyle({ 'top': null, 'left': null }); new Draggable(piece, { revert: true }); cell.appendChild(piece); } }); }); }); The first part of the script is easy to convert: $(function() { $('.piece').draggable( { revert: true } ); $('.cell').droppable( { /* But here, it's more difficult. Right? ;) ... */ } ); }); Have you got an idea? Any piece of code is welcome. Thanks a lot.
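
    A minimal sketch of the missing droppable part using jQuery UI, assuming the same markup and behaviour as the Prototype version (class names taken from the question):

        $(function() {
          $('.piece').draggable({ revert: true });

          $('.cell').droppable({
            accept: '.piece',   // Prototype's accept: 'piece' names a class; jQuery UI takes a selector
            drop: function(event, ui) {
              var $cell = $(this);
              var $piece = ui.draggable;
              $cell.empty();                      // remove whatever piece was already in the cell
              $piece.css({ top: 0, left: 0 })     // clear the offset left over from the drag
                    .appendTo($cell);             // move the piece into the cell
            }
          });
        });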

    Read the article

  • Cheap cloning/local branching in Mercurial

    - by Zack
    Hi, I just started working with Mercurial a few days ago and there's something I don't understand. I have an experimental thing I want to do, so the normal approach would be to clone my repository, work on the clone, and if I eventually want to keep those changes, push them to my main repository. The problem is that cloning my repository takes a lot of time (we have a lot of code), and just compiling the cloned copy would take up to an hour. So I need to somehow work on a different repository but still in my original working copy. Enter local branches. Problem is, just creating a local branch takes forever, and working with them isn't all that fun either: because moving between local branches doesn't "revert" the working copy to the target branch's state, I have to issue hg purge (to remove files that were added in the branch I'm moving from) and then hg update -C (to revert files modified in that branch). (Note: I did try PK11's fork of the local branch extension; even a simple local branch creation crashes with an exception.) At the end of the day, this is just too complex. What are my options?
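
    One option worth knowing about, offered as a hedged suggestion rather than a definitive answer: Mercurial's bundled share extension creates a second working directory backed by the same repository store, so no history is copied (the paths are placeholders).

        # enable the extension once in ~/.hgrc
        [extensions]
        share =

        # create a lightweight second working copy of an existing local repo
        hg share main-repo experiment-repo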

    Read the article

  • jQuery: Preventing an event from being attached more than once?

    - by Evan Carroll
    Essentially, I have an element FOO that I want when clicked to attach a click event to a completely separate set of elements BAR, so that when they're clicked they can revert FOO to its previous content. I only want this event attached once. When FOO is clicked, its content is cached away in $back_up, and a trigger is added on the BAR set so that when clicked they can revert FOO back to its previous state. Is there a clever way to do this? Like to only .bind() if the event doesn't already exist? $('<div class="noprint little check" />').click( function () { var $warranty_explaination = $(this).closest('.page').children('.warranty_explaination'); var $back_up = $warranty_explaination.clone(true); $(this).closest('.page').find('.warranties .check:not(.noprint)').click( function () { /* This is the code I don't want to fire more than once */ /*, I just want it to be set to whatever is in the $back_up */ alert('reset'); $warranty_explaination.replaceWith( $back_up ) } ); $warranty_explaination.html('asdf') } ) Currently, the best way I can think to do this is to attach a class, and select where that class doesn't exist.
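
    A minimal sketch of one common guard against stacking handlers: bind with a namespaced event and unbind that namespace first, so each click of FOO replaces the previous handler instead of adding another (the namespace name is arbitrary):

        $(this).closest('.page').find('.warranties .check:not(.noprint)')
            .unbind('click.revert')                 // drop the handler attached by an earlier click, if any
            .bind('click.revert', function () {     // namespaced so only our handler is affected
                alert('reset');
                $warranty_explaination.replaceWith($back_up);
            });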

    Read the article

  • Bind two images together to be dragged

    - by Ryan Beaulieu
    I'm looking for some help with a script to drag two images together at once. I currently have a script that allows me to drag thumbnail images into a collection bin to be saved later. However, some of my thumbnails have an image positioned over the top of them to represent these thumbnail images as "unknown" plants. I was wondering if someone could point me in the right direction as to how I would go about binding these two images together to be dragged. Here is my code: $(document).ready(function() { var limit = 16; var counter = 0; $("#mainBin1, #mainBin2, #mainBin3, #mainBin4, #mainBin5, #bin_One_Hd, #bin_Two_Hd, #bin_Three_Hd, #bin_Four_Hd, #bin_Five_Hd").droppable({ accept: ".selector, .plant_Unknown", drop: function(event, ui) { counter++; if (counter == limit) { $(this).droppable("disable"); } $(this).append($(ui.draggable).clone()); $("#cbOptions").show(); $(".item").draggable({ containment: "parent", grid: [72,72], }); } }); $(".selector").draggable({ helper: "clone", revert: "invalid", revertDuration: 700, opacity: 0.75, }); $(".plant_Unknown").draggable({ helper: "clone", revert: "invalid", revertDuration: 700, opacity: 0.75, }); }); Any help would be greatly appreciated. Thanks. EDIT: Website
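
    One possible direction, sketched under the assumption that each thumbnail and its "unknown" overlay can share a wrapper element (the .plant_wrap class is made up for this example): make the wrapper draggable instead of the two images, so they always move together.

        // markup assumption: <div class="plant_wrap"><img class="selector" ...><img class="plant_Unknown" ...></div>
        $('.plant_wrap').draggable({
            helper: 'clone',
            revert: 'invalid',
            revertDuration: 700,
            opacity: 0.75
        });
        // the droppable's "accept" option would then need to include .plant_wrap as well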

    Read the article

  • Hide usernames shown on Windows Server 2008 Remote Desktop login screen

    - by user38553
    When I remote desktop to my Windows Server 2008 (a hosted virtual server) I see a login screen showing an icon for each user in the system. I can click on a user then enter a password and login. This is a terrible security oversight in my opinion as it gives anyone that might want to compromise my server a full list of valid usernames. Is there a way to revert to the old style of login screen requiring both username and password? Thanks
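
    One setting commonly suggested for this, offered here as a hedged pointer rather than a verified fix for this exact screen: the "Interactive logon: Do not display last user name" policy, which can also be set in the registry and makes the logon screen prompt for both username and password.

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
        "dontdisplaylastusername"=dword:00000001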

    Read the article

  • Restore sqlite3 on Mac OS X for Google Chrome

    - by gaearon
    I was stupid enough to compile sqlite3 from source and install it to /usr, overriding the default library. Since then, Google Chrome doesn't launch anymore, crashing with this output:

        Dyld Error Message:
          Library not loaded: /usr/lib/libsqlite3.dylib
          Referenced from: /System/Library/Frameworks/Security.framework/Versions/A/Security
          Reason: no suitable image found.  Did find:
            /usr/lib/libsqlite3.dylib: mach-o, but wrong architecture
            /usr/local/lib/libsqlite3.dylib: mach-o, but wrong architecture
            /usr/lib/libsqlite3.dylib: mach-o, but wrong architecture

    Can I somehow revert sqlite3 to the original version I had, or fix the issue some other way?

    Read the article

  • Does DFSR replicate shadow copies?

    - by Jeff Sacksteder
    There are a few questions related to using DFSR and Shadow Copies together, but none that indicates whether Shadow Copies themselves replicate. Meaning: if I have a pair of DFS replicas with Shadow Copies enabled on Server-A, can I revert a file to a previous version on Server-B? If so, will that reversion be replicated back to Server-A? I suspect not, since VSS is a local NTFS feature and outside the scope of replication, but I cannot verify that myself at the moment.

    Read the article

  • Multiple mouse settings on laptop?

    - by Jonas
    I'm wondering if this is possible. I use a MacBook running Windows 7. When I don't have an external mouse connected, I leave the left/right button settings at the default, but when I plug in a mouse I change that setting (since I use the mouse left-handed). When I unplug the mouse, I would like it to revert to the "unplugged" setting. Is there a way to get Windows to remember mouse settings based on which device it's using?

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >