Search Results

Search found 64522 results on 2581 pages for 'windows features'.


  • Plugged in, not charging – Windows 7

    - by Kelly Jones
    Just a quick post on something I ran into lately with my Dell Precision M4500 laptop (monster laptop!). I noticed the little icon in the system tray for the power options was stating that it was "plugged in, not charging". I don't know why it was stating this, but I quickly found a fix for it on the net, in a forum on CNET. Here's the fix. In order to correct problems with the battery's power management software, follow the steps below:
    1. Click Start and type device in the search field, then select Device Manager.
    2. Expand the Batteries category.
    3. Under the Batteries category, right-click the Microsoft ACPI-Compliant Control Method Battery listing, and select Uninstall. WARNING: Do not remove the Microsoft AC Adapter driver or any other ACPI-compliant driver.
    4. On the Device Manager toolbar, click Scan for hardware changes. Alternately, select Action > Scan for hardware changes.
    Windows will scan your computer for hardware that doesn't have drivers installed, and will install the drivers needed to manage your battery's power. The notebook should now indicate that the battery is charging. And it did work.
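    The same uninstall-and-rescan cycle can also be scripted. A minimal sketch, assuming the devcon.exe utility from the Windows Driver Kit is on the PATH and that the battery uses the standard ACPI\PNP0C0A hardware ID for a Control Method Battery (verify yours first with "devcon hwids =Battery"):

        :: run from an elevated command prompt
        :: uninstall the Control Method Battery device, then rescan so Windows reinstalls it
        devcon remove "ACPI\PNP0C0A"
        devcon rescan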

    Read the article

  • 12.10 install overwrote my windows partition

    - by Niall C
    Recently decided to switch back to Ubuntu. I have a 3TB drive which was running Win7. I had 3 partitions: C: for Windows, D: for data, E: for data. I have installed Ubuntu before, so I 'thought' I knew what I was doing. Using UNetbootin, I installed from a USB stick. I didn't choose the default options, but I didn't choose the 'manual install' either. I can't remember what option I took, but I figured at some stage it would tell me how it was going to partition the disk, and at that stage I would see if it had recognised the NTFS partitions and would be able to abort if it didn't. Unfortunately, it didn't, and it just went ahead and installed Ubuntu, making up its own mind on how it was going to partition the disk. Usual story: the two NTFS data partitions weren't backed up. Is there anything I can do to retrieve the NTFS data? I'm currently trying out TestDisk, and I know I can use PhotoRec to retrieve certain file types, but all the filenames will be lost and it's going to take a hell of a lot of time to rename them all. Any help or assistance would be more than gratefully accepted. Thanks in advance, Niall
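    For reference, a minimal TestDisk session from a live USB looks something like the sketch below (assuming the 3TB drive shows up as /dev/sda; a 3TB disk will normally carry a GPT partition table, so choose "EFI GPT" rather than "Intel" when TestDisk asks):

        sudo apt-get install testdisk
        sudo testdisk /dev/sda    # then: Analyse -> Quick Search, and look for the lost NTFS partitions
        sudo photorec /dev/sda    # last resort: carves files by type, losing names and folder structure

    If Quick Search finds the old NTFS partitions intact, TestDisk can rewrite the partition table and restore them in place, which avoids the renaming problem entirely.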

    Read the article

  • Unable to detect windows hard drive while running Ubuntu 12.04 from USB

    - by eapen jacob
    I am completely new to Ubuntu. I experimented with Ubuntu 12.04 by running it from a USB drive, in order to recover files from my hard disc. History: my laptop is an IBM R60 running Windows 7. Suddenly it gave me an error stating "error 2100 - Hard drive initialization error". I have read all the forums, and most of them suggested that I remove and replace my HDD; that did not work. One site suggested trying Ubuntu to recover the files. I booted my system from USB, and once Ubuntu came up, I chose "Try Ubuntu". It came up fine and I was able to surf and do other things, etc. However, I was unable to access my files which are on the hard disc, and "Attached Devices" is grayed out. 1. Is there any way to gain access to my hard disc to recover the files? How do I navigate to search for my files? 2. Is it simply not possible if the hard disc itself is not working? Is that why I'm unable to find the drive? I know it's a very novice question, but I'm hoping someone would help me out. Thank you, Eapen
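    A couple of terminal commands from the live session will show whether Ubuntu can see the disk at all (a sketch; the internal drive is usually /dev/sda, but the name may differ):

        sudo fdisk -l                  # list every disk and partition the kernel detected
        lsblk                          # the same information in tree form
        sudo smartctl -a /dev/sda      # SMART health report (install the smartmontools package first)

    If the drive does not appear in the fdisk output at all, the "error 2100" initialization failure is happening below the OS, and no live session will be able to read files from it.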

    Read the article

  • Java Program Compilation on Windows [closed]

    - by Mc Elroy
    I am trying to compile my program on the command line on Windows using the java command and it says: Error: Could not find or load main class addition.class. It is a program for adding two integers. I don't understand how to resolve the problem, since I defined the static main method in my source code. Here it is:

        //Filename: addition.java
        //Usage: this program adds two numbers and displays their sum.
        //Author: Nyah Check, Developer @ Ink Corp.
        //Licence: No warranty following the GNU Public Licence
        import java.util.Scanner; //this imports the Scanner class

        public class addition
        {
            public static void main(String[] args)
            {
                Scanner input = new Scanner(System.in); //this creates a Scanner instance to take input from the keyboard
                int input1, input2, sum;
                System.out.printf("\nEnter First Integer: ");
                input1 = input.nextInt();
                System.out.printf("\nEnter Second Integer: ");
                input2 = input.nextInt();
                sum = input1 + input2;
                System.out.printf("\nThe Sum is: %d", sum);
            }
        } //This ends the class definition
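    The class itself compiles; that error almost always means the launcher was invoked incorrectly. A sketch of the expected session, assuming you run it from the directory containing addition.java (note that java takes the bare class name - passing "addition.class" makes it look for a class named "class" in a package named "addition" and fail with exactly the error quoted above):

        javac addition.java
        java addition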

    Read the article

  • Slight stuttering when moving windows in fresh 12.04 install

    - by Konsolkongen
    Installed Ubuntu 12.04 today, and my problem is that when I'm moving windows around my screen it doesn't feel smooth at all. Usually I can fix this by changing the refresh rate to 60Hz, but this time it doesn't help. My graphics card is an Nvidia GTX 560 Ti and I've tried the 295.40, 295.45 and 304.43 drivers (the last of which I'm currently using), but none has resolved my problem. I searched around a bit and tried changing the refresh rate using compizconfig-settings-manager and xrandr. No change using CCSM, but when I tried xrandr I got this reply:

        konsolkongen@konsolkongen-desktop:~$ xrandr -r 60
        Rate 60.0 Hz not available for this size

    - which is nonsense of course. This is what my xorg.conf file looks like:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 295.33 (buildd@allspice) Fri Mar 30 15:25:24 UTC 2012

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "Samsung SyncMaster"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 75.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTX 560 Ti"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "TwinViewXineramaInfoOrder" "DFP-0"
            Option "metamodes" "DFP-0: 1680x1050_60 +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    Any help would be greatly appreciated; my obsession with video quality can't stand stuttering like this. For what it's worth though, I don't have any screen tearing, so at least V-sync is on. Thanks.

    Read the article

  • Epson OPOS ADK for .NET drivers for windows 7

    - by Xience
    Has anyone used the Epson OPOS ADK for .NET on Windows 7? I tried to install the Windows Vista drivers on Windows 7, since there are none available for Windows 7, but it did not work. Please share any suggestions or ideas that might have worked for you. I am using a TM-88IV receipt printer.

    Read the article

  • Run SQL scripts on windows mobile installer

    - by Guillermo Vasconcelos
    Hi, we are working on a Windows Mobile 6.5 application. The application is already installed on some devices, and we need to distribute a new version with changes in the database schema (we added a few tables). Is there a way to make a "patch" Windows Mobile installer that will replace the application and update the embedded SQL database with some scripts? In a normal Windows installer we would create a custom action in the installation process to apply the changes to the database, but I'm not sure how to do that for Windows Mobile. Thanks.
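    For what it's worth, Windows Mobile CAB files do support a native custom-action DLL: a SETUP.DLL that exports Install_Init/Install_Exit hooks. Below is a rough, untested C sketch of the idea only; DbUpgrade.exe is a hypothetical helper shipped in the CAB that would open the .sdf database and apply the schema scripts:

        #include <windows.h>
        #include <ce_setup.h>   // codeINSTALL_EXIT and friends

        // Called by the CAB installer after the files have been copied.
        codeINSTALL_EXIT Install_Exit(HWND hwndParent, LPCTSTR pszInstallDir,
            WORD cFailedDirs, WORD cFailedFiles, WORD cFailedRegKeys,
            WORD cFailedRegVals, WORD cFailedShortcuts)
        {
            TCHAR szExe[MAX_PATH];
            PROCESS_INFORMATION pi = { 0 };

            // Run the (hypothetical) upgrade utility and wait for it to finish.
            wsprintf(szExe, TEXT("%s\\DbUpgrade.exe"), pszInstallDir);
            if (CreateProcess(szExe, NULL, NULL, NULL, FALSE, 0, NULL, NULL, NULL, &pi))
            {
                WaitForSingleObject(pi.hProcess, INFINITE);
                CloseHandle(pi.hThread);
                CloseHandle(pi.hProcess);
            }
            return codeINSTALL_EXIT_DONE;
        }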

    Read the article

  • Windows Journal Eraser functionality

    - by Nitz
    Hey guys, I want to add eraser functionality to my software. I looked through the Windows Tablet PC SDK but I can't find any solution for that. There is one eraser sample in that SDK, but it is not similar to the eraser feature in Windows Journal. Does anyone know how to add that eraser feature to my C# application (I want the same as in Windows Journal)? By the way, I am taking reference from the Microsoft Windows XP Tablet PC Edition Software Development Kit 1.7.
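    If it helps, the InkOverlay class in that SDK's Microsoft.Ink assembly does expose a Journal-style point eraser. A minimal C# sketch, assuming an InkOverlay already attached to your drawing control:

        using Microsoft.Ink;

        // Switch the pen from inking to erasing.
        inkOverlay.EditingMode = InkOverlayEditingMode.Delete;

        // PointErase removes only the ink the cursor passes over (Journal-style);
        // the default StrokeErase deletes whole strokes at once.
        inkOverlay.EraserMode = InkOverlayEraserMode.PointErase;

        // Optionally widen the eraser (the width is in HIMETRIC units).
        inkOverlay.EraserWidth = 212;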

    Read the article

  • Passing windows credentials through web application, to WCF

    - by IP
    I've checked other questions, but I can't find a working answer. I have a .NET web application which successfully takes on the caller's Windows credentials (Thread.CurrentPrincipal is my Windows user). Within that app, I call to a WCF service, but my Windows identity isn't passed up, regardless of what I put in the binding:

        NetTcpBinding binding = new NetTcpBinding();
        binding.Security.Mode = SecurityMode.Transport;
        binding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;
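    Two settings beyond the binding usually matter in this situation. A sketch, with IMyService and the endpoint address as placeholders: the client must permit impersonation, and the service operation must request it.

        // Client side: allow the service to impersonate the caller.
        var factory = new ChannelFactory<IMyService>(binding,
            new EndpointAddress("net.tcp://server:8000/MyService"));
        factory.Credentials.Windows.AllowedImpersonationLevel =
            System.Security.Principal.TokenImpersonationLevel.Impersonation;

        // Service side: execute the operation under the caller's identity.
        [OperationBehavior(Impersonation = ImpersonationOption.Required)]
        public void DoWork()
        {
            // WindowsIdentity.GetCurrent() is now the original caller.
        }

    Keep in mind that forwarding the browser user's identity from the web app to WCF is a double hop, so the web application's account must also be trusted for (Kerberos) delegation; otherwise the service sees the app-pool identity or nothing at all.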

    Read the article

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    - by Jeff
    I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever. When you run your Web app on one box, HttpRuntime.Cache is a sweet and stupid-simple solution. Somewhere in the data fetching pieces of your app, you can see if an object is available in cache, and return that instead of hitting the data store. I did this quite a bit in POP Forums, and it dramatically cuts down on the database chatter. The problem is that it falls apart if you run the app on many servers, in a Web farm, where one server may initiate a change to that data, and the others will have no knowledge of the change, making it stale. Of course, if you have the infrastructure to do so, you can use something like memcached or AppFabric to do a distributed cache, and achieve the caching flavor you desire. You could do the same thing in Azure before, but it would cost more because you’d need to pay for another role or VM or something to host the cache. Now, you can use a portion of the memory from each instance of a Web role to act as that cache, with no additional cost. That’s huge. So if you’re using a percentage of memory that comes out to 100 MB, and you have three instances running, that’s 300 MB available for caching. For the uninitiated, a Web role in Azure is essentially a VM that runs a Web app (worker roles are the same idea, only without the IIS part). You can spin up many instances of the role, and traffic is load balanced to the various instances. It’s like adding or removing servers to a Web farm all willy-nilly and at your discretion, and it’s what the cloud is all about. I’d say it’s my favorite thing about Windows Azure. The slightly annoying thing about developing for a Web role in Azure is that the local emulator that’s launched by Visual Studio is a little on the slow side. If you’re used to using the built-in Web server, you’re used to building and then alt-tabbing to your browser and refreshing a page. If you’re just changing an MVC view, you’re not even doing the building part. Spinning up the simulated Azure environment is too slow for this, but ideally you want to code your app to use this fantastic distributed cache mechanism. So first off, here’s the link to the page showing how to code using the caching feature. If you’re used to using HttpRuntime.Cache, this should be pretty familiar to you. Let’s say that you want to use the Azure cache preview when you’re running in Azure, but HttpRuntime.Cache if you’re running local, or in a regular IIS server environment. Through the magic of dependency injection, we can get there pretty quickly. First, design an interface to handle the cache insertion, fetching and removal. 
Mine looks like this:

    public interface ICacheProvider
    {
        void Add(string key, object item, int duration);
        T Get<T>(string key) where T : class;
        void Remove(string key);
    }

Now we’ll create two implementations of this interface… one for Azure cache, one for HttpRuntime:

    public class AzureCacheProvider : ICacheProvider
    {
        public AzureCacheProvider()
        {
            _cache = new DataCache("default"); // in Microsoft.ApplicationServer.Caching, see how-to
        }

        private readonly DataCache _cache;

        public void Add(string key, object item, int duration)
        {
            _cache.Add(key, item, new TimeSpan(0, 0, 0, 0, duration));
        }

        public T Get<T>(string key) where T : class
        {
            return _cache.Get(key) as T;
        }

        public void Remove(string key)
        {
            _cache.Remove(key);
        }
    }

    public class LocalCacheProvider : ICacheProvider
    {
        public LocalCacheProvider()
        {
            _cache = HttpRuntime.Cache;
        }

        private readonly System.Web.Caching.Cache _cache;

        public void Add(string key, object item, int duration)
        {
            _cache.Insert(key, item, null, DateTime.UtcNow.AddMilliseconds(duration), System.Web.Caching.Cache.NoSlidingExpiration);
        }

        public T Get<T>(string key) where T : class
        {
            return _cache[key] as T;
        }

        public void Remove(string key)
        {
            _cache.Remove(key);
        }
    }

Feel free to expand these to use whatever cache features you want. I’m not going to go over dependency injection here, but I assume that if you’re using ASP.NET MVC, you’re using it. Somewhere in your app, you set up the DI container that resolves interfaces to concrete implementations (Ninject calls it a “kernel” instead of a container). For this example, I’ll show you how StructureMap does it. It uses a convention based scheme, where if you need to get an instance of IFoo, it looks for a class named Foo. You can also do this mapping explicitly. The initialization of the container looks something like this:

    ObjectFactory.Initialize(x =>
    {
        x.Scan(scan =>
        {
            scan.AssembliesFromApplicationBaseDirectory();
            scan.WithDefaultConventions();
        });
        if (Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.IsAvailable)
            x.For<ICacheProvider>().Use<AzureCacheProvider>();
        else
            x.For<ICacheProvider>().Use<LocalCacheProvider>();
    });

If you use Ninject or Windsor or something else, that’s OK. Conceptually they’re all about the same. The important part is the conditional statement that checks to see if the app is running in Azure. If it is, it maps ICacheProvider to AzureCacheProvider, otherwise it maps to LocalCacheProvider. Now when a request comes into your MVC app, and the chain of dependency resolution occurs, you can see to it that the right caching code is called. A typical design may have a call stack that goes: Controller –> BusinessLogicClass –> Repository.
Let’s say your repository class looks like this:

    public class MyRepo : IMyRepo
    {
        public MyRepo(ICacheProvider cacheProvider)
        {
            _context = new MyDataContext();
            _cache = cacheProvider;
        }

        private readonly MyDataContext _context;
        private readonly ICacheProvider _cache;

        public SomeType Get(int someTypeID)
        {
            var key = "somename-" + someTypeID;
            var cachedObject = _cache.Get<SomeType>(key);
            if (cachedObject != null)
            {
                _context.SomeTypes.Attach(cachedObject);
                return cachedObject;
            }
            var someType = _context.SomeTypes.SingleOrDefault(p => p.SomeTypeID == someTypeID);
            _cache.Add(key, someType, 60000);
            return someType;
        }

        // ... more stuff to update, delete or whatever, being sure to remove
        // from cache when you do so
    }

When the DI container gets an instance of the repo, it passes an instance of ICacheProvider to the constructor, which in this case will be whatever implementation was specified when the container was initialized. The Get method first tries to hit the cache, and of course doesn’t care what the underlying implementation is, Azure, HttpRuntime, or otherwise. If it finds the object, it returns it right then. If not, it hits the database (this example is using Entity Framework), and inserts the object into the cache before returning it. The important thing not pictured here is that other methods in the repo class will construct the key for the cached object, in this case “somename-“ plus the ID of the object, and then remove it from cache, in any method that alters or deletes the object. That way, no matter what instance of the role is processing the request, it won’t find the object if it has been made stale, that is, updated or outright deleted, forcing it to attempt to hit the database. So is this good technique? Well, sort of. It depends on how you use it, and what your testing looks like around it. Because of differences in behavior and execution of the two caching providers, for example, you could see some strange errors. For example, I immediately got an error indicating there was no parameterless constructor for an MVC controller, because the DI resolver failed to create instances for the dependencies it had. In reality, the NuGet packaged DI resolver for StructureMap was eating an exception thrown by the Azure components that said my configuration, outlined in that how-to article, was wrong. That error wouldn’t occur when using the HttpRuntime. That’s something a lot of people debate about using different components like that, and how you configure them. I kinda hate XML config files, and like the idea of the code-based approach above, but you should be darn sure that your unit and integration testing can account for the differences.

    Read the article

  • Error 1053: the service did not respond to the start or control request in a timely fashion

    - by deejjaayy
    I know this is very much a "how long is a piece of string" type of question; however, I have recently inherited a couple of applications that run as Windows services, and I am having problems providing a GUI (accessible from a context menu in the system tray) with both of them. Before you ask, the reason why we need a GUI for a Windows service is in order to be able to re-configure the behaviour of the service(s) without resorting to stopping/re-starting. My code works fine in debug mode: I get the context menu come up, and everything behaves correctly. When I install the service via "installutil" using a named account (i.e., not the Local System account), the service runs fine, but doesn't display the icon in the system tray (I know this is normal behaviour because I don't have the "interact with desktop" option). Here is the problem though - when I choose the "Local System account" option, and check the "interact with desktop" option, the service takes ages to start up for no obvious reason, and I just keep getting "Could not start the ... service on Local Computer. Error 1053: the service did not respond to the start or control request in a timely fashion". Incidentally, I increased the Windows service timeout from the default 30 seconds to 2 minutes via a registry hack (see http://support.microsoft.com/kb/824344, search for TimeoutPeriod in section 3); however, the service start up still times out. My first question is - why might the "Local System account" login take SOOOOO MUCH LONGER than when the service logs in with the named account, causing the Windows service time-out? What could the difference be between these two to cause such different behaviour at start up? Secondly - taking a step back, all I'm trying to achieve is simply a Windows service that provides a GUI for configuration. I'd be quite happy to run using the named user/pwd account, if I could get the service to interact with the desktop (that is, have a context menu available from the system tray). Is this possible, and if so how? Any pointers to the above questions would be very much appreciated! Thanks in advance for your help.
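    For reference, the timeout registry hack is usually written against the ServicesPipeTimeout value (the KB article cited above may use a different name for it); a sketch of the .reg file for a 2-minute timeout - a reboot is required before it takes effect:

        Windows Registry Editor Version 5.00

        ; 0x0001d4c0 = 120000 milliseconds (2 minutes)
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control]
        "ServicesPipeTimeout"=dword:0001d4c0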

    Read the article

  • What are the useful UNIX functions that MS doesn't implement? And why? [closed]

    - by prosseek
    When programming with Python, I came across some functions that are not implemented on Windows; os.fork() may be one of them. UNIX came before WinNT, so the WinNT developers (most notably Dave Cutler) must have known about the features and functions of UNIX. But, to me, it seems that MS didn't like UNIX so much that they mistakenly/intentionally skipped or distorted some of the useful UNIX functions/features; e.g. /abc/def in UNIX versus \abc\def in Windows, as an easy example. And when I read the Windows System Programming book, I felt uncomfortable, as the Windows system functions seem nothing more than a tweak of UNIX. (I might be wrong.) What are those functions/features that MS OSes don't have, but UNIX originated? Is there any reason for this? Do they just want to differentiate themselves from the UNIX world? Or do they think some of the UNIX functions are unnecessary? Is Windows a tweak of UNIX? Or, are there any great OS features that were invented at MS to make Windows better than UNIX?
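    A quick illustration of the os.fork() point (a sketch; on Windows the call below raises AttributeError because the function simply does not exist there, which is one reason the portable multiprocessing module exists):

        import os

        pid = os.fork()  # POSIX only: clones the current process
        if pid == 0:
            print("child:", os.getpid())
            os._exit(0)              # child exits immediately
        else:
            os.waitpid(pid, 0)       # parent waits for the child
            print("parent:", os.getpid())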

    Read the article

  • Need a way to determine if a file is done being written to.

    - by Khorkrak
    The situation I'm in is this - there's a process that's writing to a file, and sometimes the file is rather large, say 400-500MB. I need to know when it's done writing. How can I determine this? If I look in the directory I'll see the file there, but it might not be done being written. Plus, this needs to be done remotely - as in on the same internal LAN, but not on the same computer; typically the process that wants to know when the file writing is done is running on a Linux box, with the process that's writing the file (and the file itself) on a Windows box. No, Samba isn't an option. XML-RPC communication to a service on that Windows box is an option, as is using SNMP to check, if that's viable. Ideally: works on either Linux or Windows - meaning the solution is OS independent - and works for any type of file. Good enough: works just on Windows, but can be done through some library or whatever that can be accessed with Python, and works only for PDF files. The current best idea is to periodically open the file in question from some process on the Windows box and look at the last bytes, checking for the PDF end tag and accounting for the EOL differences, because the file may have been created on Linux or Windows.
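    That last idea fits in a few lines of Python - a heuristic sketch only, which assumes the writer produces the PDF sequentially, so the trailing "%%EOF" marker only appears once the file is complete (the path is a placeholder):

        def pdf_looks_complete(path, tail_bytes=1024):
            """Report whether the PDF end-of-file marker appears near the end of the file."""
            with open(path, "rb") as f:
                f.seek(0, 2)                        # seek to the end of the file
                size = f.tell()
                f.seek(max(0, size - tail_bytes))   # back up and read the tail
                return b"%%EOF" in f.read()         # tolerates trailing \n or \r\n

        if pdf_looks_complete(r"C:\drop\report.pdf"):
            print("writer appears to be finished")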

    Read the article

  • Roaming user profile issues on Server 2008

    - by Alicia White
    I thought I cleared a user's profile from 2008, but it keeps coming back. So, I was looking for the best way to clear a roaming profile in Server 2008, but I have been unable to find anything. But, I did see the post here: http://serverfault.com/questions/18724/user-profile-keeps-loading-temp-profile

    I wanted to add a comment to that post, but it was closed as not being related to sysadmin. But, I think it IS related, because I dealt with precisely this same problem on our Windows 2008 terminal server. Here was the issue: we have a user who was getting an "unable to load your roaming profile" type of error at logon in Windows 2008. Looking at the server, we could see her temp profile listed in the profile list while she was logged on (listed as a "temporary" and not a "roaming" profile). While she was logged on, a folder called C:\Users\Temp.DOMAIN existed in the Users folder, but that disappeared as soon as she logged out. When this happened in 2003, we would clear the contents of the roaming profile folder & delete the temp folder in C:\Documents and Settings. The thing is, 2008 behaves a bit differently: Server 2008 created a new roaming profile folder in the roaming profile folder share, \\SERVER\ProfileShare\UserName.V2. The local profile disappears from the profile list in System Properties, so there is no profile to clear. Also, the local profile folder C:\Users\Temp.DOMAIN doesn't stay on the server when the user logs out, so we can't delete that as we would normally do when this sort of thing happens in Windows 2003. Despite all of this, every time the user logs back on, the frickin' Temp profile always comes back.

    One of my team-mates, who is much more experienced with 2008, said I should check the registry for the user's profile in this key (the users are listed by SID): HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList. I saw the user's SID listed there, but it ended in .BAK. I checked several other servers where she is having the same profile errors: in all cases, her SID ended with .BAK. For example (xxx replacing the long SID): HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxxx-xxxx.bak. On the server she was logged on to, there were two keys for her profile in the registry: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxxx-xxxx and HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxxx-xxxx.bak

    So, here is how I cleared up the issue: I had the user log off. I deleted the apparently bad profile keys ending in .BAK from the ProfileList key on each server where they appeared. I made sure her roaming profile folder was empty. I made sure that all the temp profile folders were gone. The user logged back on: no more profile errors!

    Anyway, I wanted to make a comment on that closed question, but I didn't see any way to re-open it so I could add this. But, I also would like to know: is this the best practice to clear out a bad roaming profile for Server 2008? I'm having a hard time finding any instructions online on how best to do this, but the method I used seemed to work. I'd like to find some documentation to give to our Level 1 support staff so they will know how to clear user profiles on 2008, since this seems to be more involved than clearing user profiles in Server 2003. Thanks, Alicia
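    For anyone scripting the check, the leftover .bak profile keys can be spotted from a command prompt with the stock reg.exe (a sketch):

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /f .bak /k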

    Read the article

  • Windows 7 remains powered on when restarting

    - by BombDefused
    I'm running Windows 7 x64 on an MSI P67A-GD53 motherboard, in an Antec P280 Super Midi tower case with a Corsair 650W PSU. I've just installed a second instance of Windows 7 x64 on a separate disk (this is to keep my games separate from my work OS). The problem is that it now appears I cannot restart from either instance of Windows 7. The shut down and sleep commands work as expected. When I try to restart, the shutdown happens but the system never reboots: everything remains powered on, until I hold down the power button to force the power off. I think (but am not 100% sure) this has only started since I installed the second OS, and am assuming this has something to do with the motherboard needing to know which OS to run up again? Some other forums I've read suggest that the PSU plays a major role in restart and could be at fault. Changing the boot order of the disks in the BIOS does not change anything. Any suggestions gratefully received!

    Update: I now have a reproducible issue: I think the secondary OS install may have been a red herring; it was when Windows tried to reboot during the install that I noticed the issue. After playing around with installing drivers, and rebooting many, many times, I have found that it is the OC Genie setting on the MSI motherboard that seems to trigger the problem. This makes sense, as I only started using the OC Genie feature a couple of weeks ago, and probably hadn't used restart in that time. However... simply turning off OC Genie does not make the issue go away. I have to turn off OC Genie, shut down, start, enter the BIOS, go to the "Save and Exit" menu, "Restore Defaults", yes to "Load optimized defaults", which resets things and clears the problem. Now when the PC boots into Windows, I can restart as normal (and from the OS on either HDD). I only know how to control the issue, and still don't know the root cause. I'd like to be able to use the OC Genie function, if anyone can suggest why I'm seeing this problem. Could it be that I'm drawing too much power when using the OC feature?

    Read the article

  • BSOD & System Failure after trying to install new RAM

    - by Praveen Kumar
    I have updated the question with sections, so that people won't find it difficult to read.
    Basic System Information: Let me give a basic introduction of my system. I have a system of the following configuration - Processor: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz; RAM: Corsair Vengeance 4GB Single Module DDR3 Memory Kit (CMZ4GX3M1A1600C9) x 2; OS: Windows 7 Ultimate, SP1 Build 7601; HDD: 1 TB Seagate 7200 RPM.
    The Problem: It was working fine for about a year. Yesterday I planned to increase my RAM to 16 GB by putting in another set of two Corsair Vengeance 4GB modules (CMZ4GX3M1A1600C9). I got them from an authorized reseller, and the RAM was fitted by a service engineer. After the RAM was fitted (all four modules), the system failed to start, with an error code of 0x000000f4. The complete information is: Problem Event Name: BlueScreen; OS Version: 6.1.7601.2.1.0.256.1; Locale ID: 16393; BCCode: f4; BCP1: 0000000000000003; BCP2: FFFFFA8008A39060; BCP3: FFFFFA8008A39340; BCP4: FFFFF800037C8510; OS Version: 6_1_7601; Service Pack: 1_0; Product: 256_1. Files that help describe the problem: C:\Windows\Minidump\093012-13041-01.dmp; C:\Users\Praveen Kumar\AppData\Local\Temp\WER-30716-0.sysdata.xml.
    Another Problem: We first thought that it was the RAM which caused the issue, so I returned the RAM, and now my computer configuration is exactly how it was the previous day. But following the removal of the RAM, I also had several crashes. One suspicious thing was an error code c0000134: "STOP: c0000135 The program can't start because %hs is missing from your computer. Try reinstalling the program to fix this problem." After reading contents from this, this and this (which were never my case), they didn't help me. I didn't receive any more STOP c0000134 messages, but this 0x000000f4 keeps on coming. I am writing from the same system, and it allows me to work for, say, half an hour max. Then I hear a device-disconnect sound - the one you hear in Windows 7 when a USB mass storage device is plugged out. Immediately following that, my screen goes blank and I get the 0x000000f4 blue screen. Okay, now I am really concerned about my hard disk data, but I have no clue if there is a problem with the HDD.
    My Question: What files do I need to submit for your reference? Can this issue be fixed? I get more uptime if I remove my RAM, clean it and then put it back. Weird! Hope I have given the necessary information to help you guys. Thanks in advance.
    Minidumps: I have uploaded all the minidump DMP files from the C:\Windows\Minidump folder here: http://www.praveen-kumar.com/Minidumps.zip - let me know if you face any issues in accessing it; I will be able to share them elsewhere.
    Updates: 30-Sep-2012 10:15 AM IST: When I keep the system cover open and press the HDD cable in firmly, it allows me to stay on for about half an hour. Also, I feel that the CPU fan speed is kind of slow: it rotates at around 900 RPM, but the CPU temperature is not more than 70° C. 30-Sep-2012 10:30 AM IST: My modem (Beetel 220BX ADSL2+ router) failed. I have no idea how it is related to this issue, but I thought I should document it too. I am really having a bad day here.
    30-Sep-2012 11:00 AM IST: System still running fine, with the cabinet cover open, now for about an hour. 30-Sep-2012 12:00 PM IST: I shut down the system and closed the cabinet. I started the system, and it hung after I gave the password. After a few minutes, I got the same 0x000000f4 error. So, while the case was in the upright position, I reseated the hard disk cable, and now it is booting fine. Waiting for more observations and answers.

    Read the article

  • BSOD Code 16, artifacts all over the place (gtx 260)

    - by belinea
    I have the following, quite dated rig: E8400 Core 2 Duo CPU; Intel Dragontail Peak DP35DP motherboard on the Intel Bearlake P35 chipset; 4GB RAM; GeForce GTX 260; Corsair 650W PSU; Windows 7 64-bit. The following things have happened in the last few days. I first decided to update my Nvidia drivers to the latest version. That was 4 days ago. The PC worked fine for 2 days and I was able to play a few games as well without any problems. Then 2 days ago the first crash happened while playing the new XCOM game: BSOD code 16. Just the blue screen, no artifacts. The PC rebooted and worked well again; I continued playing this game for another 2 hours and went to sleep. The next evening I tried to play some BF3 multiplayer (I play on LOW settings). Approx. 10 minutes into the game, red/pink-ish artifacts appeared on the screen and the game quit to desktop. I restarted the game, and another 3-4 minutes afterwards there was another crash to desktop, but this time followed shortly by a BSOD code 16. From that moment I started seeing artifacts on random startups, including the Windows loading screen and the BIOS itself. Windows would still load, but soon enough it would BSOD on a simple task like opening an Internet browser. Today I get tons of artifacts (little red dashes all over the screen) in the BIOS, on the loading screen, and in normal Windows mode as well as safe mode. I suspected it wasn't the drivers, but I tried removing and sweeping them entirely in safe mode anyway. The PC would still start with artifacts all over the place, but would load normal mode, just without the driver, in the default lowest resolution. As soon as the proper Nvidia drivers are installed, though, and the PC rebooted, Windows doesn't load at all, as a BSOD now appears on the loading screen. However, again, if I go to safe mode and remove the drivers, normal mode launches fine. So obviously the crash happens only at high resolution. I opened my machine this morning and gave it a proper cleaning, even though it wasn't heavily dusted. It didn't help, and the number of artifacts seems to increase with every PC restart. I write these words in safe mode, which works, but I have to look through all the red dots and dashes. I don't have a built-in GPU on the chipset, so I can't try removing my GeForce card, nor can I borrow a GPU from anyone else. What are my options? I was looking into getting a completely new rig around Christmas, so I'm not freaking out about this. If everything points to a hardware issue, I may simply decide to get the new machine earlier and not bother with fixing this one. However, it would be great to learn more if this is indeed a situation that has slim chances of getting sorted. I realize BSOD code 16 is a rather popular topic online, but every story seems a bit different and there can be a number of issues behind it. Hence a new thread.

    Read the article

  • Installing XP through USB-flash disc

    - by Crazy Buddy
    I don't know whether this could be asked here... so pardon me for this. Probably, this is based on my laptop and a contradiction to this question asked already here... I tried to format my "government-provided" laptop (no CD drive). I thought those IT guys were proving that they're too smart..! I have the Windows XP CD right now. I didn't like to stick with some home-made OS from our government, so I used another laptop to format the govt. thing and tried to install XP (as I didn't have enough bills to invest in Windows 7 or 8).
    Case 1: First, I allowed WinSetupFromUSB 1.0 beta 8 to deal with the flash disk. I wondered for the first time when the XP text screen appeared. Using the first part, I formatted my laptop. It started to copy files, entered into the next part, and completed the installation. I started my PC for the first time. The XP splash screen appeared. Suddenly, a blue screen flashed and disappeared (I can't even read what it says). It rebooted and arrived at the screen, "Start Windows Normally". It happens and happens still - like an infinite loop :-)
    Case 2: Next, I used Rufus 1.2.0 to transfer files to my flash drive, and it screwed everything up. Even if I used the flash drive to boot, it arrives at the same "Start Windows normally" screen. It doesn't show any response to the flash drive being inserted. Then I recognized that it simply copies everything to the flash disk.
    Case 3: Then, I started with Novicorp WinToFlash (giving utmost priority to this site). I booted with the disk. I entered into the first part - "Text mode". Some lines started running, like "Press F6 if you..." and so on. The last thing I saw was "Setup is starting Windows..." Suddenly a blue screen appeared, like this captured one. I have a suspicion that the same screen appears again & again in the first case. Man, I'm dead.
    Case 4: For the sake of my last hope, I used WinSetupFromUSB 0.1.1. I was shocked on arriving at a screen which says something like "GRUB4DOS", with some commands {command line, reboot, halt, \find menu.lst}, and when I go inside those "find" options, I see "Error: 15 - File not found". Googling provided some commands to mount the SETUPLDR.BIN file in the "grub" thing, which also proved unsuccessful...
    Some sites say that factory reset uses only some function keys. A guy said that it's F11 for Lenovo. Screw him. It's all a waste of time. But, I think SE would help me out. Are our government IT guys doing this to me? Are they soooo smart as to spark some blue screen in front of me to freak me out? Any suggestions or new (useful) USB transferring tools would be appreciated. It's very urgent, so it'd be better if you guys pay some attention to debugging and help me out. Thanks for your time, guys :-)

    Read the article

  • Book Review: Inside Windows Communication Foundation by Justin Smith

    - by Sam Abraham
    In gearing up for a new major project, I have taken it upon myself to research and review various aspects of our Microsoft stack of choice, seeking new creative ways for us to leverage in our upcoming state-of-the-art solution, projected to position us ahead of the competition. While I am a big supporter of search engines and online articles as a quick and usually reliable source of information, I have opted in my investigative quest to actually "hit the books". I have also made it a habit to provide quick reviews for material I go over, hoping this can be of help to someone who may be looking for items others have had success using for reference. I started a few months ago by investigating better ways of implementing, profiling and troubleshooting SQL Server 2008. My reference of choice was Itzik Ben-Gan et al's "Inside Microsoft SQL Server 2008" series. While it has been a month since my last book review, this by no means meant that I have been sitting idle. It has been pretty challenging to balance research with the continuous flow of projects and deadlines, all while balancing that with my family duties which, of course, always come first. In this post, I will be providing a quick review of my latest reading: Inside Windows Communication Foundation by Justin Smith. This book has been on my reading list for a very long time and I am proud to have finally tackled it. Justin's book presents great coverage of WCF internals. His simple, concise and well-worded style has simplified the relatively complex internals of WCF and made them comprehensible. Justin opted to organize the book into three parts: an introduction to WCF, coverage of the Channel layer, and a look at WCF internals at the ServiceModel layer. Part I introduces the concepts and makes the case for WCF while covering a simplified version of WCF's message patterns, endpoints and contracts. In Part II, Justin provides thorough coverage of the internals of Messages, Channels and Channel Managers. Part III concludes this nice read with coverage of Bindings, Contracts, Dispatchers and Clients. While one would not likely need to extend WCF at that low level of the API, an understanding of the inner workings of WCF is a must to avoid pitfalls mainly caused by misinformation or erroneous assumptions. Problems can quickly arise in high-traffic hosted solutions, but most can be easily avoided with some minimal time investment and education. My next goal is to take a closer look at WCF from the programmer's API perspective, now that I have acquired a better understanding of its inner workings. Many thanks to the O'Reilly User Group Program and its support of our West Palm Beach Developers' Group. Stay tuned for more… All the best, --Sam

    Read the article

  • Windows CE: Changing Static IP Address

    - by Bruce Eitman
    A customer contacted me recently and asked me how to change a static IP address at runtime. Of course this is not something that I knew how to do off-hand, but with a little bit of research I figured out how to do it. It turns out that the challenge is to request that the adapter update itself with the new IP address. Otherwise, the change in IP address is a matter of changing the address in the registry for the adapter. The registry entry is something like:

        [HKEY_LOCAL_MACHINE\Comm\LAN90001\Parms\TcpIp]
            "EnableDHCP"=dword:0
            "IpAddress"="192.168.0.100"
            "DefaultGateway"="192.168.0.1"
            "Subnetmask"="255.255.255.0"

    Where LAN90001 would be replaced with your adapter name. I have written quite a few articles about how to modify the registry, including a registry editor that you could use. Requesting that the adapter update itself is a matter of getting a handle to the NDIS driver, and then asking it to refresh the adapter. The code is:

        #include <windows.h>
        #include "winioctl.h"
        #include "ntddndis.h"

        void RebindAdapter( TCHAR *adaptername )
        {
            HANDLE hNdis;
            BOOL fResult = FALSE;
            DWORD count;   // DeviceIoControl expects a DWORD for the returned byte count

            // Make this function easier to use - hide the need to have two null characters.
            int length = wcslen(adaptername);
            int AdapterSize = (length + 2) * sizeof( TCHAR );
            TCHAR *Adapter = (TCHAR *)malloc(AdapterSize);
            if( Adapter == NULL )
                return;
            wcscpy( Adapter, adaptername );
            Adapter[ length ] = '\0';
            Adapter[ length + 1 ] = '\0';

            hNdis = CreateFile(DD_NDIS_DEVICE_NAME,
                        GENERIC_READ | GENERIC_WRITE,
                        FILE_SHARE_READ | FILE_SHARE_WRITE,
                        NULL,
                        OPEN_ALWAYS,
                        0,
                        NULL);

            if (INVALID_HANDLE_VALUE != hNdis)
            {
                fResult = DeviceIoControl(hNdis,
                            IOCTL_NDIS_REBIND_ADAPTER,
                            Adapter,
                            AdapterSize,
                            NULL,
                            0,
                            &count,
                            NULL);
                if( !fResult )
                {
                    RETAILMSG( 1, (TEXT("DeviceIoControl failed %d\n"), GetLastError() ));
                }
                CloseHandle(hNdis);
            }
            else
            {
                RETAILMSG( 1, (TEXT("Failed to open NDIS Handle\n")));
            }

            free( Adapter );   // don't leak the doubly-terminated adapter name buffer
        }

        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPWSTR lpCmdLine, int nCmdShow)
        {
            RebindAdapter( TEXT("LAN90001") );
            return 0;
        }

    If you don't want to write any code, but instead plan to use a registry editor to change the IP address, there is a command line utility to do the same thing. NDISConfig.exe can be used:

        ndisconfig adapter rebind LAN90001

    Copyright © 2012 – Bruce Eitman All Rights Reserved

    Read the article

  • Steps to deploying on Windows Azure

    - by Vincent Grondin
    Alright, these steps might be a little detailed and a few might not be necessary, but it's still a pretty accurate road map to deploying on Azure...
    1) Open your solution.
    2) Rebuild ALL.
    3) Right click on your Azure project and click "Publish".
    4) It should open a Windows Explorer window with your package to be uploaded (.cspkg) and its associated configuration (.cscfg) to be uploaded too. Keep it open, you'll need that path later on...
    5) It should also open a browser asking you to login to your passport account, please do so.
    6) After this you will be redirected to the Azure Portal, where you will see your Azure project name below the « Project Name » section. Click on it.
    7) Then you should be redirected to a detailed view of your account on Azure, where you will create a new service by clicking the hyperlink on the top right corner.
    8) Choose the right service type for you, most likely the "Hosted Service" type.
    9) Choose a « Label » name and click « Next ».
    10) Choose a name for your service and validate that the name is available in the cloud by clicking the "Check Availability" button.
    11) At the bottom of this same page, you can choose to create a group for your service, use no group, or join an existing group. Creating a group means that all applications that belong to the same group will see no cost for exchanging data between other applications of the same group. Most of the time, when you create a single application, creating a group is not necessary. You should choose a region that's close to your own region.
    12) On the next window, you should see a "Production" environment and a "Staging" environment. Beware, because "Staging" and "Production" are two different environments in the cloud, and applications in "Staging", even when not running, do continue to rack up charges... Choose an environment and click "Deploy".
    13) In the following window, browse to the path where your .cspkg resides and then do the same thing with your .cscfg file. Choose a name for your label, and click "Deploy"...
    14) From now on, the clock is ticking, and unless you have free Azure hours, your credit card is being billed…
    15) Click on the « Run » button to start your application.
    16) Be patient.... be very patient…
    17) Once your application has finished starting, you should see a GREEN circle on the left side of the screen indicating that your application is READY. Click the URL to test your application, and remember that if your application is a service, you have to hit the "svc" class behind the link you see there. Something in the likes of http://testvince2.cloudapp.net/service1.svc (this is a fictional link)
    18) Hopefully your application will show up or, in the case of a service, you will see your service's WSDL, meaning that everything is working fine.
    Happy cloud computing all!

    Read the article

  • 10 tape technology features that make you go hmm.

    - by Karoly Vegh
    A week ago an Oracle/StorageTek tape specialist, Christian Vanden Balck, visited Vienna, and agreed to visit customers to do tech talks and update them about the technology boom going on around tape. I had the privilege to attend some of his sessions and noted the information and features that took the customers by surprise and made them think. Allow me to share the top 10:
    I. StorageTek as a brand: StorageTek is one of the strongest names in the tape field. The brand itself was valued so much by customers that even after Sun Microsystems acquired StorageTek, and Oracle acquired Sun, the brand lives on: all the Oracle tape libraries are officially branded StorageTek. See http://www.oracle.com/us/products/servers-storage/storage/tape-storage/overview/index.html
    II. Disk information density limitations: Disk technology struggles with information density. You haven't seen disk sizes exploding lately, have you? That's partly because there are physical limits on a disk platter. The size is given, the number of platters is limited; they just can't grow, and are running out of physical area to write to. Now, in a T10000C tape cartridge we have over 1000m of tape. There you go: you have your physical space and don't need to cram all that data together. You can write in a reliable pattern, and have space to grow too.
    III. Oracle has a worldwide market share of 62% in recording-head manufacturing. That's right: if you are running LTO drives, there is a good chance you rely on StorageTek production. That's two out of three LTO recording heads produced worldwide.
    IV. You can store 1 exabyte of data in a single tape library. Yes, an exabyte. That is 1000 petabytes. Or, a million terabytes. A thousand million gigabytes. You can store that in a stacked StorageTek SL8500 tape library. In one SL8500 you can put 10,000 T10000C cartridges, each storing 10TB of data (compressed). You can stack 10 of these SL8500s together. Boom: 1,000,000 TB. (N.b.: stacking means interconnecting the libraries. Yes, cartridges are moved between the stacked libraries automatically.)
    V. EMC: 'Tape doesn't suck after all. We moved on.': Do you remember the infamous 'Tape sucks, move on' Data Domain slogan? Of course they had to put it that way, having only disk products. But here's a fun fact: at EMC World 2012 there was a major presence of a tape-tech company - EMC, in a sudden burst of sanity, is embracing tape again.
    VI. The miraculous T10000C: Oracle StorageTek has developed an enterprise-grade tape drive and cartridge, the T10000C, with awesome numbers. The cartridge: native 5TB capacity, 10TB with compression; over a kilometer of tape within the cartridge, which is locked when unmounted - no rattling of your data; the metal-particle data layer replaced with BaFe (barium ferrite) - metal particles lose around 7% of their magnetism within 30 days, BaFe does not (yes, we employ solid-state physicists doing R&D on demagnetisation in our labs); and it can be partitioned - storage tiering within the cartridge! The drive: 2GB cache; encryption implemented in hardware, with no performance hit; 252 MB/s native sustained data rate, which beats disk technology by far, not to mention peak throughput; leading the tape while never touching the data side of it, protecting your data physically too; data integrity checking (CRC recalculation) on tape within the drive, without having to read it back to the server; reordering data from tape order and delivering it back in application order; and writing 32 tracks at once, reading them back for a CRC check at once.
    VII. You only use 20% of your data on a regular basis. The remaining 80% just lies around for years - on continuously spinning disks, doubly consuming energy (power + cooling) and blocking disk storage capacity. There is a solution called SAM (Storage Archive Manager) that provides a filesystem unifying disk and tape, moving data on demand and, for clients, transparently between the different storage tiers. You can share these filesystems with NFS or CIFS for clients, and enjoy the low TCO of tape. Tapes don't spin; they sit quietly in their slots, storing 10TB of data, using no energy, producing no heat, automounted when a client accesses their data. See: http://www.oracle.com/us/products/servers-storage/storage/storage-software/storage-archive-manager/overview/index.html
    VIII. Hardware supported for three decades: Did you know that the original PowderHorn library was released in '87 and was only discontinued in 2010? That is over two decades of supported operation. Tape libraries are - just like the data-carrying tape cartridges - built for longevity. Oh, and the T10000C cartridge has a 30-year archival life for long-term retention.
    IX. Tape is easy to manage: Have you heard of Tape Storage Analytics? It is a central graphical tool to summarize, monitor and analyze the dataflow, health and performance of drives and libraries; see http://www.oracle.com/us/products/servers-storage/storage/tape-storage/tape-analytics/overview/index.html
    X. The next generation: The T10000B drives were able to reuse T10000A cartridges and write even more data on them - on the same cartridges. We call this investment protection, and this is very important to Oracle for the future too. We usually support two generations of cartridges together. The current drive is the T10000C.
    (...I know I promised to list 10, but I still have two more I really want to mention. Allow me to work around the problem:)
    X++. The TallBots, the robots moving the cartridges around in a StorageTek library from tape slots to drives, are cableless. Cables, belts and chains running to moving parts in a library cause maintenance downtime, so StorageTek eliminated them. The TallBots get power, commands, even firmware upgrades through the rails they run on. Also, the TallBots don't just hook and pull the tapes out of their slots; they actually grip and lift them out. No friction, no scratches, no zillion little plastic particles floating around in the library, in the drives, or on your data.
    (X++)++: Tape beats SSDs and disks. In terms of throughput (252 MB/s); in terms of TCO, where disks cause around 290x more power and cooling; and in terms of capacity, with 10TB on a single medium and soon more.
    So... do you need to store large amounts of data? Are you legally bound to archive it for dozens of years? Would you benefit from automatic storage tiering? Have you got large media chunks to be streamed at times? Have you got power and cooling issues in growing datacenters? Do you find EMC's 180° turn in tape attitude interesting, and appreciate it at the same time? With all that, you aren't alone. Most of the data on this planet is stored on tape. Tape is coming. Big time.

    Read the article

  • Puppet: Getting Started On Windows

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-getting-started-on-windows.aspx
    Now that we've talked a little about Puppet, let's see how easy it is to get started.
    Install Puppet
    Let's get Puppet installed. There are two ways to do that:
    With Chocolatey: open an administrative/elevated command shell and type: choco install puppet
    Download and install Puppet manually - http://puppetlabs.com/misc/download-options
    Run Puppet
    Let's make pasting into a console window work with Control + V (like it should): choco install wincommandpaste
    If you have a cmd.exe command shell open (and Chocolatey installed), type: RefreshEnv
    The previous command will refresh your environment variables, a la Chocolatey v0.9.8.24+. If you were running PowerShell, there isn't yet a refreshenv for you (one is coming though!). If you have to restart your CLI (command line interface) session, or you installed Puppet manually, open an administrative/elevated command shell and type: puppet resource user
    Output should look similar to a few of these:

        user { 'Administrator':
          ensure => 'present',
          comment => 'Built-in account for administering the computer/domain',
          groups => ['Administrators'],
          uid => 'S-1-5-21-some-numbers-yo-500',
        }

    Let's create a user: puppet apply -e "user {'bobbytables_123': ensure => present, groups => ['Users'], }"
    Relevant output should look like: Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: created
    Run the 'puppet resource user' command again. Note the user we created is there!
    Let's clean up after ourselves and remove that user we just created: puppet apply -e "user {'bobbytables_123': ensure => absent, }"
    Relevant output should look like: Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: removed
    Run the 'puppet resource user' command one last time. Note we just removed a user!
    Conclusion
    You just did some configuration management / system administration. Welcome to the new world of awesome! Puppet is super easy to get started with. This is a taste so you can start seeing the power of automation and where you can go with it. We haven't talked about resources, manifests (scripts), best practices and all of that yet. Next we are going to start to get into more extensive things with Puppet. Next time we'll walk through getting a Vagrant environment up and running. That way we can do some crazier stuff, and when we are done, we can just clean it up quickly.

    Read the article

  • OAuth with RestSharp in Windows Phone

    - by midoBB
    Nearly every major API provider uses OAuth for user authentication, and while it is easy to understand the concept, using it in a Windows Phone app isn't exactly straightforward. For this quick tutorial we will be using RestSharp for WP7 and the API from getglue.com (an entertainment site) to authorize the user. The first step is to get the OAuth request token, and then we redirect our browser control to the authorization URL:

        private void StartLogin()
        {
            var client = new RestClient("https://api.getglue.com/");
            client.Authenticator = OAuth1Authenticator.ForRequestToken("ConsumerKey", "ConsumerSecret");
            var request = new RestRequest("oauth/request_token");
            client.ExecuteAsync(request, response =>
            {
                _oAuthToken = GetQueryParameter(response.Content, "oauth_token");
                _oAuthTokenSecret = GetQueryParameter(response.Content, "oauth_token_secret");
                string authorizeUrl = "http://getglue.com/oauth/authorize" + "?oauth_token=" + _oAuthToken + "&style=mobile";
                Dispatcher.BeginInvoke(() => { browserControl.Navigate(new Uri(authorizeUrl)); });
            });
        }

        private static string GetQueryParameter(string input, string parameterName)
        {
            foreach (string item in input.Split('&'))
            {
                var parts = item.Split('=');
                if (parts[0] == parameterName)
                {
                    return parts[1];
                }
            }
            return String.Empty;
        }

    Then we listen to the browser's Navigating event:

        private void Navigating(Microsoft.Phone.Controls.NavigatingEventArgs e)
        {
            if (e.Uri.AbsoluteUri.Contains("oauth_callback"))
            {
                var arguments = e.Uri.AbsoluteUri.Split('?');
                if (arguments.Length < 2) return; // need a query string after the '?'
                GetAccessToken(arguments[1]);
            }
        }

        private void GetAccessToken(string uri)
        {
            var requestToken = GetQueryParameter(uri, "oauth_token");
            var client = new RestClient("https://api.getglue.com/");
            client.Authenticator = OAuth1Authenticator.ForAccessToken(ConsumerKey, ConsumerSecret, _oAuthToken, _oAuthTokenSecret);
            var request = new RestRequest("oauth/access_token");
            client.ExecuteAsync(request, response =>
            {
                AccessToken = GetQueryParameter(response.Content, "oauth_token");
                AccessTokenSecret = GetQueryParameter(response.Content, "oauth_token_secret");
                UserId = GetQueryParameter(response.Content, "glue_userId");
            });
        }

    Now to test it we can access the user's friends list:

        var client = new RestClient("http://api.getglue.com/v2");
        client.Authenticator = OAuth1Authenticator.ForProtectedResource(ConsumerKey, ConsumerSecret, AccessToken, AccessTokenSecret);
        var request = new RestRequest("/user/friends");
        request.AddParameter("userId", UserId, ParameterType.GetOrPost);
        // request.AddParameter("category", "all", ParameterType.GetOrPost);
        client.ExecuteAsync(request, response =>
        {
            TreatFriendsList();
        });

    And that's it - now we can access all OAuth methods using RestSharp.

    Read the article

  • How do you turn on the customizable gnome-panel features (like gnome-applets) in Precise?

    - by chriv
    I resurrected a broken laptop today. I took out the HDD, put it in a USB 3.0 enclosure, and created a VM that would use it. It was running Lucid. I took a screenshot of the desktop before I started "do-release-upgrade", because from experience, I will never have my GUI back the way I want it again. I know how to install gnome-panel to get back the "GNOME Classic" session option. I know how to put my minimize, maximize, and close buttons back in the upper-right hand corner of windows (where they belong). I know how to use gdm instead of lightdm. Unity gets worse in every version (and the other desktop OS is going to be even worse with Metro). Here's what I don't know (in order of importance):
    1. How do you make the panels in GNOME (gnome-panel, to be precise) customizable again (like they were in older versions of Ubuntu)?
    2. How do you install applets in the panels now (right-click is now ignored)?
    3. How can you customize all of the window elements (like you could in older versions of Ubuntu)?
    I can't remember much about Maverick, Natty, or Oneiric (except their names), so I don't know exactly when I lost these capabilities.
    Edit: (no screenshot) - my StackExchange reputation (on other StackExchange sites) doesn't carry over to this site, so I can't post the screenshot. Take a look at the panels in the screenshot. They are nice, compact, and VERY functional (disk mounter applet, frequently used shortcuts, workspaces, show desktop, kill window, and trash icons, etc.). Notice how small the fonts are (and how little real estate they waste). You can't see the compact title bars, fonts, and window icons in this screenshot (since I redacted the rest of the desktop), but it's the same story there. Please help. I don't want to learn another distro, but Ubuntu gets less customizable with every "upgrade." Screenshot (not an inline image, since I don't have the reputation yet)... i.stack.imgur.com/puoUT.png

    Read the article
