Search Results

Search found 10841 results on 434 pages for 'air native extension'.


  • Easily Close All Tabs in Google Chrome

    - by Asian Angel
    Do you find yourself with a lot of tabs open but dread closing all but one manually? Now you can close all of your tabs with a single click, and have just one ready to go, using the Close all Tabs extension.

    Before: We all find ourselves with a lot of tabs open sooner or later. That is not so bad until we realize that we need to close all of them and get back to work. You could open a new tab and manually close the rest, or close the entire window and restart Chrome, but a single-click solution would be a lot more convenient.

    After: There it is… the single-click solution. Just click the toolbar button and BOOM! One fresh window with a single new tab page showing. Now if you could only take the rest of the day off…

    Conclusion: The Close all Tabs extension may not be something that everyone would use, but if you are tired of manually closing all of those tabs, then you will definitely like it.

    Links: Download the Close all Tabs extension (Google Chrome Extensions)

    Read the article

  • The Internet of Things & Commerce: Part 3 -- Interview with Kristen J. Flanagan, Commerce Product Management

    - by Katrina Gosek, Director | Commerce Product Strategy-Oracle
    Internet of Things & Commerce Series: Part 3 (of 3)

    And now for the final installment of my three-part series on the Internet of Things & Commerce. Post one, “The Next 7,000 Days”, introduced the idea of the Internet of Things, followed by a second post interviewing one of our chief commerce innovation strategists, Brian Celenza. This final post in the series is an interview with Kristen J. Flanagan, lead product manager for Oracle Commerce omnichannel strategy. She takes us through the past, present, and future of how our Commerce Solution is re-imagining the way physical and digital shopping come together.

    QUESTION: It’s your job to stay on top of what our customers need, not only to run their online businesses effectively, but also to make sure they have product capabilities they can innovate and grow on. What key trend has been top-of-mind for you and our customers around this collision of physical and digital shopping?

    Kristen: I’ll agree with Brian Celenza that, hands down, mobile has forced a major disruption in shopping and selling behavior. A few years ago, mobile exploded at a pace I don't think anyone was expecting. Early on, we saw our customers scrambling to establish a mobile presence, mostly through "screen scraping" technologies. As smartphones continued to advance (at lightning speed!), our customers started to investigate ways to truly tap into their eCommerce capabilities to deliver the mobile experience. They started looking to us for a means of using the eCommerce services and capabilities to deliver a mobile experience that is tailored for mobile, rather than the desktop experience on a smaller screen. In the future, I think we'll see customers starting to really understand what their shoppers need and expect from a mobile offering, and how they can adapt their content and the delivery of that content to meet those needs. And mobile shopping doesn’t stop at the consumer/buyer. Because the in-store experience is compelling and has advantages that digital just can't offer, we're also starting to see the eCommerce services being leveraged for mobile for in-store sales associates. Brick-and-mortar retailers are interested in putting the omnichannel product catalog, promotions, and cart into the hands of knowledgeable associates. Retailers are now looking to connect and harness the eCommerce data in-store so that shoppers have a reason to walk in. I think we'll be seeing a lot more customers thinking about melding the in-store and digital experiences to present a richer offering for shoppers.

    QUESTION: What are some examples of what our customers are doing currently to bring these concepts to reality?

    Kristen: Well, without question, connecting the digital and brick-and-mortar worlds is becoming table stakes for selling experiences. If a brand has a foot in both worlds (i.e., isn’t a pure-play online retailer), they have to connect the dots, because shoppers, whether consumers or B2B buyers, don't think in clearly defined channels anymore. The expectation is connectedness: for on- and offline experiences, promotions, products, and customer data. What does this mean practically for businesses selling goods on- and offline? It touches a lot of systems: inventory info on the eCommerce site, fulfillment options across channels (buy online/pick up in store), order information (representing various channels for a cohesive view of shopper order history), promotions across digital and store, etc.

    A few years ago, the main link between store and digital was the smartphone. We all remember when “apps” became a thing and many of our customers were scrambling to get a native app out there. Now we're seeing more strategic thinking around the benefits of mobile web vs. native, and how that ties in to the purpose and role of mobile within the digital channel. Put more broadly, it's about how these pieces fit together in the overall brand puzzle. The same could be said for “showrooming.” Where it was once a major concern (i.e., shoppers using stores to look at merchandise and then order online from Amazon), in recent months it’s emerged that the inverse is becoming a reality as well. "Webrooming" (using digital sites to do research before making a purchase in the store) is a new behavior pure-play retailers are challenged with. There are many technologies, behaviors, and pieces of information that need to tie together to offer a holistic omnichannel shopping experience. As a result, brands are looking for ways to connect the digital and in-store experiences to bridge the gaps: shared assortments across channels, assisted selling apps that arm associates with information about shoppers, shared promotions, inventory, etc.

    QUESTION: How has Oracle Commerce been built to help brands make the link between in-store and digital over the last few years?

    Kristen: Over the last seven years, the product has been in step with the changes in industry needs. Here is a brief history of the evolution. Prior to Oracle’s acquisition of ATG and Endeca, key investments were made in cross-channel functionality that we are still building on today.

    Commerce Service Center (v2007.1): ATG introduced the Commerce Service Center in 2007.1, marking the first entry into what was then called “cross-channel.” The Commerce Service Center is a call-center-agent-facing application that enables agents to see shopper orders, the online catalog, promotions, and pricing. It is tightly integrated with the eCommerce capabilities of the platform and commerce engine, and provides a means of connecting data from the call center and online channels.

    REST services framework (v9.1): In v9.1 we introduced the REST services framework and interface in the Platform, which enabled customers to use ATG web services in other applications. This framework has become the basis for our subsequent omnichannel features and functionality.

    Multisite Architecture (v10): With the v10 release, we introduced the Multisite Architecture, which enabled customers to manage multiple sites (and channels) within a single instance of the BCC. Customers could create site- and channel-specific catalogs, promotions, targeters, and scenarios.

    Endeca Page Builder (2.x) / Experience Manager (3.x): With the introduction of Endeca for Mobile (now part of the core platform, available through the reference store; see below) on top of Page Builder (and then eventually Experience Manager), Endeca gave business users the tools to create and manage native and mobile web applications. And since the acquisition of both ATG (2011) and Endeca (2012), Oracle Commerce has leveraged the best of each leading technology’s capabilities for omnichannel commerce to continue to drive innovation for our customers.

    Service enablement of core Oracle Commerce capabilities (v10.1.1, 10.2, & 11): After the establishment of the REST services framework and interface, we followed up in subsequent releases with service enablement of core Oracle Commerce capabilities through the iOS native app and the enablement of the core Commerce Service Center features. The result is that customers can leverage these services for their integrations with other systems, as well as for their omnichannel initiatives.

    Mobile web reference application (v10.1): In 10.1 we introduced the shopper-facing mobile reference application that showed how to use Oracle Commerce to deliver a mobile web experience for shoppers. This included the use of Experience Manager and cartridges to drive those experiences on select pages.

    Native (iOS) reference application (v10.1.1): We came out with the 10.1.1 shopper-facing native iOS reference app that illustrated how to use the Commerce REST services to deliver an iOS app. It also included Experience Manager-driven pages.

    Assisted Selling reference application (v10.2.1): The Assisted Selling reference application is our first reference application designed for the in-store associate. This iOS app shows customers how they can use Oracle Commerce data and information to provide a high-touch, consultative sales environment, as well as to put the endless aisle into the hands of their associates. Shoppers can start a cart online, and in-store associates can access that cart via the application to provide more information or add products, and then transact using the ATG engine.

    Support for Retail promotions (v11): As part of the v11 release, we worked with teams in the Oracle Retail Global Business Unit (RGBU) to assess which promotion types and capabilities are supported across our products. Those products included Oracle Commerce, Oracle Point of Service (ORPOS), and Oracle Retail Price Management (RPM). The result is that customers can now more easily support omnichannel use cases between the store and digital.

    Making sure Oracle Commerce can help support the omnichannel needs of our customers is core to our product strategy. With 89% of consumers now using two or more channels to make a single purchase, ensuring that cross-channel interactions are linked is critical to a great customer experience, and to sales. As Oracle Commerce evolves, we want to make it simple for organizations to create, deliver, and scale experiences across touchpoints with our "create once, deploy commerce anywhere" framework. We have a flexible, services-oriented architecture that allows data, content, catalogs, cart, experiences, personalization, and merchandising to be shared across touchpoints and easily extended into new environments like mobile, social, in-store, call center, and new websites. [For the latest downloads and Oracle Commerce documentation, please visit the Oracle Technology Network.]

    Thank you to both Brian and Kristen for their contributions to this blog series and their continued thought leadership for Oracle Commerce. We are all looking forward to the coming months and years of new shopping behaviors and opportunities to innovate. Because, if the digital fabric of our everyday lives continues to change at the same pace, the next five years (that's just under 2,000 days) will be dramatic.

    THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT

    Read the article

  • Moodle 2 pages loading up to 2000% faster

    - by TJ
    On average our Moodle 2 pages were loading in 2.8 seconds; now they load in as little as 0.12 seconds, which is roughly 23 times faster!

    What was it, I hear you say? Well, it was the database connection, or more correctly the database library. I was using FreeTDS (http://docs.moodle.org/22/en/Installing_MSSQL_for_PHP), but now I'm using the new Microsoft Drivers 3.0 for PHP for SQL Server (http://www.microsoft.com/en-us/download/details.aspx?id=20098).

    I'm in a Windows Server IT department, and in both our live and development environments we have Moodle 2.2.3, IIS 7.5, and PHP 5.3.10 running on two Windows Server 2008 R2 servers using MS Network Load Balancing. Since moving to Moodle 2, the pages have always loaded much more slowly than they did in Moodle 1.9, and I've been chasing this issue for quite a while. I had previously tried the Microsoft Drivers for PHP for SQL Server 2.0, but my testing showed it was slower than the FreeTDS driver. Then yesterday I found Microsoft had released the new version, Microsoft Drivers 3.0 for PHP for SQL Server, so I thought I'd give it a run, and wow, what a difference it made. I have more testing to do, but so far it's looking good; I have scheduled multi-user load testing for early next week (fingers crossed).

    To make the change, all I needed to do was:
    - download the drivers
    - copy the relevant files to PHP\ext (for us they were php_pdo_sqlsrv_53_nts.dll and php_sqlsrv_53_nts.dll)
    - install the Microsoft SQL Server 2012 Native Client x64 (http://www.microsoft.com/en-us/download/details.aspx?id=29065)
    - add to PHP.ini: extension=php_pdo_sqlsrv_53_nts.dll and extension=php_sqlsrv_53_nts.dll
    - remove from PHP.ini: extension=php_dblib.dll
    - change in PHP.ini: mssql.textlimit = 20971520 and mssql.textsize = 20971520
    - change Moodle config.php: $CFG->dbtype = 'sqlsrv'; and 'dbpersist' => true
    - and then reboot and test…

    I've browsed courses, backed up/restored some really large and complicated courses, deleted courses, etc. All good. Still more testing to do, but hey, this is a good start... Hope this helps anyone experiencing the same slowness…

    Read the article

  • Which approach would lead to an API that is easier to use?

    - by Clem
    I'm writing a JavaScript API and, for a particular case, I'm wondering which approach is the sexiest. Let's take an example: writing a VideoPlayer, I add a getCurrentTime method which gives the elapsed time since the start.

    The first approach simply declares getCurrentTime as follows:

        getCurrentTime():number

    where number is the native number type. This approach includes a CURRENT_TIME_CHANGED event so that API users can add callbacks to be aware of time changes. Listening to this event would look like the following:

        myVideoPlayer.addEventListener(CURRENT_TIME_CHANGED, function(evt){
            console.log("current time = " + evt.getDispatcher().getCurrentTime());
        });

    The second approach declares getCurrentTime differently:

        getCurrentTime():CustomNumber

    where CustomNumber is a custom number object, not the native one. This custom object dispatches a VALUE_CHANGED event when its value changes, so there is no need for the CURRENT_TIME_CHANGED event! Just listen to the returned object for value changes! Listening to this event would look like the following:

        myVideoPlayer.getCurrentTime().addEventListener(VALUE_CHANGED, function(evt){
            console.log("current time = " + evt.getDispatcher().valueOf());
        });

    Note that CustomNumber has a valueOf method which returns a native number, letting the returned CustomNumber object be used as a number, so:

        var result = myVideoPlayer.getCurrentTime() + 5;

    will work!

    So in the first approach, we listen to an object for a change in its property's value. In the second one, we directly listen to the property for a change in its value. There are multiple pros and cons for each approach; I just want to know which one developers would prefer to use!

    Read the article

  • Windows 7+ desktop apps - what's the best UI toolkit for a new project?

    - by Chris Adams
    I'm trying to make a decision for a new Windows desktop app: what to use for the UI. (This is a desktop app that needs to have compatibility with Windows 7. It won't be distributed on the Windows Store.)

    This application is going to be cross-platform. I intend on writing the core in C++ and using each platform's native UI toolkit. I feel this is preferable to using a cross-platform toolkit like Qt, as it allows me to keep the native look and feel of each platform.

    On the Windows side, the UI situation isn't exactly clear. I'm getting the feeling that Microsoft is slowly abandoning .NET, particularly as their preferred toolkit for desktop apps. Indeed, the Getting Started chapter for Windows 7, as well as the rest of Microsoft's documentation, seems to be more suited for C++. I have a few options here:

    - C# with WPF - This seems like it might be the best Microsoft has to offer for Windows 7 desktop apps, even if it isn't their "preferred" toolkit. I'd need to use P/Invoke to call my C++ code.
    - C++ with Direct2D - This is what Microsoft used in one of their examples. This feels like it's too low-level. Part of the appeal of a higher-level UI toolkit is consistency with the native look and feel of the platform, so doing this would just feel strange.
    - C++ with a third-party UI toolkit, like Qt

    There might be some other options I'm missing, which I'd love to hear about. So, if you were starting a new Windows 7+ desktop app today, what would you use?

    Read the article

  • Issue while uploading an image to a SharePoint 2010 picture library

    - by Gino Abraham
    I was trying to upload an image to my picture library using the SharePoint client object model. I used the code from the blog post below to upload a file to my picture library: http://blogs.msdn.com/b/sridhara/archive/2010/03/12/uploading-files-using-client-object-model-in-sharepoint-2010.aspx

    The image got uploaded successfully. But when we took the relative URL to use in a different list, we were getting an empty image symbol. After a lot of analysis we figured out that the issue was with the file we uploaded: an image file containing JPEG data had been uploaded with a .gif extension.

    Try this. Copy a JPG file from the net and save it to your file system. Change the extension of the file from .jpg to .gif. When you change the file extension, the image data remains the same and it will still open in picture viewers. Upload the file to your picture library. Once uploaded, you will get the file listed as a thumbnail in your picture library. Click on the thumbnail image and it will open a page showing a larger image with file details. Now click either on the image or on the file name hyperlink: it will open an empty page with the default "no image" symbol.

    I wasted a lot of time figuring out this issue, so I thought I'd share it here. Hope this helps someone.
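    For reference, a minimal sketch of the upload along the lines of the post linked above. The site URL, library title, and file path are placeholders, and note the moral of the story: keep the target extension matched to the actual image format.

        using Microsoft.SharePoint.Client;

        class PictureUploader
        {
            static void Main()
            {
                // Placeholder site URL and library title for illustration.
                using (var context = new ClientContext("http://yoursite"))
                {
                    List library = context.Web.Lists.GetByTitle("My Pictures");

                    var fileInfo = new FileCreationInformation
                    {
                        // JPEG data, so the target name keeps a .jpg extension.
                        Url = "photo.jpg",
                        Content = System.IO.File.ReadAllBytes(@"C:\temp\photo.jpg"),
                        Overwrite = true
                    };

                    library.RootFolder.Files.Add(fileInfo);
                    context.ExecuteQuery();
                }
            }
        }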

    Read the article

  • How can I "diff" two files with Nautilus?

    - by bioShark
    I have installed Meld and found out it's a great comparison tool. Unfortunately there is no integration with Nautilus 3.2. This means I can't right-click on files and select an option to open them in Meld for comparison. I have seen in the tool's comments that it needs the diff-ext package to be installed. This package has been removed from the Ubuntu universe, I am guessing because of GTK 3.0. Even when I manually downloaded the diff-ext package from SourceForge, the configure check fails with the message:

        checking for DIFF_EXT... configure: error: Package requirements
        (libnautilus-extension >= 2.14.0 gconf-2.0 >= 2.14.0 gnome-vfs-module-2.0 >= 2.14) were not met:
        No package 'libnautilus-extension' found
        No package 'gconf-2.0' found
        No package 'gnome-vfs-module-2.0' found

    OK, so from this output I gather that GTK 2 is indeed required to install the diff extension for Nautilus. Now, my question is: is there a possibility to integrate Meld into Nautilus? Or are there any other diff-based tools which integrate with the current, GTK 3-based Nautilus? I am using Ubuntu 11.10, if there was any doubt so far. Cheers, and thanks in advance.

    Read the article

  • Would it be possible to create an open source software library, entirely developed and moderated by an open community?

    - by Steven Jeuris
    Call it democratic software development, or open source on steroids if you will. I'm not just talking about the possibility of providing a patch which can be approved by the library owner. Think more along the lines of how Stack Exchange works: anyone can post code, and through community moderation it is cleaned up, and eventually valid code ends up in the final library. For complex libraries an elaborate system should probably be created, but for a simple library it is my belief this is already possible even within the Stack Exchange platform.

    Take a library of extension methods for .NET, for example. Everybody goes their own way and implements their own subset of what they feel is important, open-source library or not. People want to share their code, but there is no suitable platform for it. extensionmethod.net is the result of answering this call for extension methods, but the framework hopelessly falls short; there is no order or structure at all.

    You don't know whether an idea is any good until you try it, so I decided to create an Extension Methods proposal on Area51. I believe that with proper moderation, it could be possible for the site to be more than a Q&A site, and that an actual library (or subsets of it) could be extracted from it. Has anything like this been attempted before? Are there platforms better suited for this?

    Read the article

  • Do you leverage the benefits of the open-closed principle?

    - by Kaleb Pederson
    The open-closed principle (OCP) states that an object should be open for extension but closed for modification. I believe I understand it and use it in conjunction with SRP to create classes that do only one thing. And I try to create many small methods that make it possible to extract all the behavior controls into methods that may be extended or overridden in some subclass. Thus, I end up with classes that have many extension points, be it through dependency injection and composition, events, delegation, etc.

    Consider the following simple, extendable class:

        class PaycheckCalculator {
            // ...
            protected decimal GetOvertimeFactor() { return 2.0M; }
        }

    Now say, for example, that the OvertimeFactor changes to 1.5. Since the above class was designed to be extended, I can easily subclass and return a different OvertimeFactor. But... despite the class being designed for extension and adhering to OCP, I'll modify the single method in question rather than subclassing, overriding the method in question, and then re-wiring my objects in my IoC container. As a result I've violated part of what OCP attempts to accomplish. It feels like I'm just being lazy, because the above is a bit easier. Am I misunderstanding OCP? Should I really be doing something different? Do you leverage the benefits of OCP differently?

    Update: based on the answers, it looks like this contrived example is a poor one for a number of different reasons. Its main intent was to demonstrate that the class was designed to be extended by providing methods that, when overridden, would alter the behavior of public methods without the need for changing internal or private code. Still, I definitely misunderstood OCP.
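    For what it's worth, the extension-without-modification route described above would look roughly like this in C#. Note that the method needs to be marked virtual to be overridable; the subclass name is invented for illustration:

        class PaycheckCalculator {
            // ...
            protected virtual decimal GetOvertimeFactor() { return 2.0M; }
        }

        // Extension rather than modification: override the factor in a subclass,
        // then re-wire the IoC container to resolve this type instead.
        class ReducedOvertimePaycheckCalculator : PaycheckCalculator {
            protected override decimal GetOvertimeFactor() { return 1.5M; }
        }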

    Read the article

  • JRuby and JVM Languages at JavaOne!

    - by Yolande Poirier
    "My goal with my talks at JavaOne is to teach what is happening at the JVM level and below so people understand better where we are going" explains Charles Nutter, Jruby project lead. In this interview, Charles shared the JRuby features he presented at the JVM Language Summit. They include foreign function interface (FFI), IO layer, character transcoding, regular expressions, compilers, coroutines, and more.  At JavaOne, he will be presenting:  Going Native: Bringing FFI to the JVM The Java Native Runtime (JNR) is a high-speed foreign function interface (FFI) for calling native code from Java without ever writing a line of C. Based on the success of JNR, JDK Enhancement Proposal (JEP) 191 will bring FFI to OpenJDK as an internal API.  The Emerging Languages Bowl: The Big League Challenge In this panel discussion, these emerging languages are portrayed by their respective champions, who explain how they may help your everyday life as a Java developer. Script Bowl 2014: The Battle Rages On In this contest, languages that run on the JVM, represented by their respective language experts, battle for most popular language status by showing off their new features. Audience members will also vote on a language that should not return in 2015. Returning from 2013 are language gurus representing Clojure, Groovy, JRuby, and Scala.

    Read the article

  • UNESCO, J-ISIS, and the JavaFX 2.2 WebView

    - by Geertjan
    J-ISIS, the newly developed Java version of the UNESCO generalized information storage and retrieval system for bibliographic information, continues to be under heavy development and code refactoring in its open source repository. Read more about J-ISIS and its NetBeans Platform basis here. Soon a new version will be available for testing, and it would be cool to see the application in action at that time. Currently it looks as follows, though note that the menu bar is under development and many menus you see there will be replaced or removed soon.

    About one aspect of the application, the browser, which you can see above, Jean-Claude Dauphin, its project lead, wrote me the following:

    "The DJ-Native Swing JWebBrowser has been a nice solution for getting a Java Web Browser for most popular platforms. But the Java integration has always produced, from time to time, some strange behavior (like losing the focus on the other components after clicking on the Browser window, overlapping of windows, etc.), most probably because of mixing heavyweight and lightweight components and also because of our incompetency in solving the issues. Thus, recently we changed to the JavaFX 2.2 WebView. The integration with Java is fine and we have got rid of all the DJ-Native Swing problems. However, we have lost some features which were given for free with the native browsers, such as downloading resources in different formats and opening them in the right application."

    This is a pretty cool step forward, i.e., the JavaFX integration. It also confirms for me something I've heard other people saying too: the JavaFX WebView component is a perfect low-threshold entry point for Swing developers feeling their way into the world of JavaFX.

    Read the article

  • Sandcastle Help File Builder - October 2010 release

    - by TATWORTH
    The latest Sandcastle Help File Builder has been released at http://shfb.codeplex.com/releases/view/92191. I am pleased to say that it incorporates the generic version of a fix I originated that allows projects including Crystal Reports to be documented. Here is the relevant passage from the help file:

    "The default configuration for MRefBuilder has been updated to ignore the Crystal Reports licensing assembly (BusinessObjects.Licensing.KeycodeDecoder) if it fails to get resolved. This assembly does not appear to be available, and ignoring it prevents projects that include Crystal Reports assemblies from failing and being unbuildable."

    There are many other fixes. Here are the release notes:

    IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right-click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog.

    This release supports the Sandcastle October 2012 Release (v2.7.1.0). It includes full support for generating, installing, and removing MS Help Viewer files. This new release supports Visual Studio 2010 and 2012 solutions and projects as documentation sources, and adds support for projects targeting the .NET 4.5 Framework, .NET Portable Library 4.5, and .NET for Windows Store Apps. See the Sandcastle 2.7.1.0 Release Notes for details on all of the changes made to the underlying Sandcastle tools and presentation styles.

    This release uses the Sandcastle Guided Installation package. Download and extract it to a folder and then run SandcastleInstaller.exe to run the guided installation of Sandcastle, the various extra items, the Sandcastle Help File Builder core components, and the Visual Studio extension package.

    What's included:
    - Help 1 compiler check and instructions on where to download it and how to install it if needed
    - Help 2 compiler check and instructions on where to download it and how to install it if needed
    - Sandcastle October 2012 2.7.1.0
    - An option to install the MAML schemas in Visual Studio to provide IntelliSense for MAML topics
    - Sandcastle Help File Builder 1.9.5.0
    - SHFB Visual Studio Extension Package

    For more information about the Visual Studio extension package, see the Visual Studio Package help file topic.

    Read the article

  • Nvidia GeForce GT 520M-CN on Intel DH61WW, Ubuntu 12.04

    - by j goseeped
    Hi people, I hope you can help a little bit; I appreciate your time. Look: I have this desktop: i7 2600, 8 GB DDR3 RAM, Intel DH61WW board, Nvidia GeForce GT 520-CN 2 GB DDR3. I just installed Ubuntu 64-bit 12.04, kernel 3.2.0-23-generic. I want to set up two Samsung 22" LED monitors and get my video card started.

    1) I downloaded and installed Nvidia driver 295.59 (and also tried 302.17):

        apt-get update && apt-get upgrade
        apt-get install build-essential linux-headers-$(uname -r)
        apt-get remove --purge nvidia*
        apt-get remove --purge xserver-xorg-video-nouveau

    Then added these lines to /etc/modprobe.d/blacklist.conf:

        blacklist vga16fb
        blacklist nouveau
        blacklist rivafb
        blacklist nvidiafb
        blacklist rivatv

    And then ran sh NVIDIA.run, sudo service lightdm start, reboot, nvidia-xconfig.

    2) After the reboot I get 800x600, and nvidia-settings says this: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run nvidia-xconfig as root), and restart the X server."

    3) I changed xorg.conf a little bit to set up a resolution that works properly.

    4) I don't have any image on the monitor and I don't have any options in Nvidia X Server Settings.

        lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 520] (rev a1)

        egrep -i 'glx|nvidia' /var/log/Xorg.0.log
        [ 12.005] (II) LoadModule: "glx"
        [ 12.005] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
        [ 12.575] (II) Module glx: vendor="NVIDIA Corporation"
        [ 12.585] (II) NVIDIA GLX Module 302.17 Tue Jun 12 16:22:45 PDT 2012
        [ 12.585] (II) Loading extension GLX
        [ 13.037] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)
        [ 13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event10)
        [ 13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event9)

        glxinfo | grep direct
        Xlib: extension "GLX" missing on display ":0.0".
        Error: couldn't find RGB GLX visual or fbconfig

    Sorry, my English is not very good. Thanks, guys.

    Read the article

  • Where to place PHP libraries/extensions?

    - by gdaniel
    I am new to a lot of server configurations and options. I want to add extra PHP libraries/extensions to my server. Where do I add them? (I am on a CentOS 6.5 VPS.)

    For example, I want to add phpseclib. Their website instructions are minimal:

    Usage: This library is written using the same conventions that libraries in the PHP Extension and Application Repository (PEAR) used to be written in (current requirements break PHP4 compatibility). In particular, this library needs to be in your include_path:

        <?php
        set_include_path(get_include_path() . PATH_SEPARATOR . 'phpseclib');
        include('Net/SSH2.php');
        ?>

    It tells me how to use it, but it doesn't tell me where to add the actual library files. Should I add them under:

    - /usr/local/lib ?
    - /usr/local/lib/php ?
    - /usr/local/lib/php/pear ?

    Or can I add them under public_html? Also, my VPS has several users under /home/... Is there a way to make the library available to only one user?

    Read the article

  • Making WatiN Wait for JQuery document.Ready() Functions to Complete

    - by Steve Wilkes
    WatiN's DomContainer.WaitForComplete() method pauses test execution until the DOM has finished loading, but if your page has functions registered with JQuery's ready() function, you'll probably want to wait for those to finish executing before testing it. Here's a WatiN extension method which pauses test execution until that happens.

    JQuery (as far as I can see) doesn't provide an event or other way of being notified of when it's finished running your ready() functions, so you have to get around it another way. Luckily, because ready() executes the functions it's given in the order they're registered, you can simply register another one to add a 'marker' div to the page, and tell WatiN to wait for that div to exist.

    Here's the code; I added the extension method to Browser rather than DomContainer (Browser derives from DomContainer) because it's the sort of thing you only execute once for each of the pages your test loads, so Browser seemed like a good place to put it.

        public static void WaitForJQueryDocumentReadyFunctionsToComplete(this Browser browser)
        {
            // Don't try this if JQuery isn't defined on the page:
            if (bool.Parse(browser.Eval("typeof $ == 'function'")))
            {
                const string jqueryCompleteId = "jquery-document-ready-functions-complete";

                // Register a ready() function which adds a marker div to the body:
                browser.Eval(
                    "$(document).ready(function() { " +
                        "$('body').append('<div id=\"" + jqueryCompleteId + "\" />'); " +
                    "});");

                // Wait for the marker div to exist or make the test fail:
                browser.Div(Find.ById(jqueryCompleteId))
                    .WaitUntilExistsOrFail(10, "JQuery document ready functions did not complete.");
            }
        }

    The code uses the Eval() method to send JavaScript to the browser to be executed; first to check that JQuery actually exists on the page, then to add the new ready() function. WaitUntilExistsOrFail() is another WatiN extension method I've written (I've ended up writing really quite a lot of them) which waits for the element on which it is invoked to exist, and uses Assert.Fail() to fail the test with the given message if it doesn't exist within the specified number of seconds. Here it is:

        public static void WaitUntilExistsOrFail(this Element element, int timeoutInSeconds, string failureMessage)
        {
            try
            {
                element.WaitUntilExists(timeoutInSeconds);
            }
            catch (WatinTimeoutException)
            {
                Assert.Fail(failureMessage);
            }
        }
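    As a usage illustration, a call site might look like the sketch below. This is not from the original post; the test-framework wiring, page URL, and assertion text are invented for the example:

        using NUnit.Framework;
        using WatiN.Core;

        [TestFixture]
        public class HomePageTests
        {
            [Test]
            public void Page_content_appears_after_ready_functions_run()
            {
                // Hypothetical page URL:
                using (var browser = new IE("http://localhost/home"))
                {
                    browser.WaitForComplete();                               // DOM loaded
                    browser.WaitForJQueryDocumentReadyFunctionsToComplete(); // ready() handlers done
                    Assert.IsTrue(browser.ContainsText("Welcome"));          // hypothetical assertion
                }
            }
        }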

    Read the article

  • Can't get Unity 3D to work in 11.10

    - by pmoseph
    I recently upgraded to 11.10 on my Lenovo ThinkPad T520, and I'm not able to load Unity 3D (I'm not selecting 2D at the login menu either).

        me@mycomp:~$ echo $DESKTOP_SESSION
        ubuntu-2d

    I ran the Unity support test as well:

        me@mycomp:~$ /usr/lib/nux/unity_support_test -p
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Error: unable to create the OpenGL context

    And it looks like I only have one graphics card:

        me@mycomp:~$ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)

    Also, Ubuntu lists nothing under the "Additional Drivers" window. Any help would be extremely appreciated as I'm somewhat of a noob. Thanks!

    Edit 1: Here is the output of lshw -C display:

        me@mycomp:~$ sudo lshw -C display
          *-display
               description: VGA compatible controller
               product: 2nd Generation Core Processor Family Integrated Graphics Controller
               vendor: Intel Corporation
               physical id: 2
               bus info: pci@0000:00:02.0
               version: 09
               width: 64 bits
               clock: 33MHz
               capabilities: msi pm vga_controller bus_master cap_list rom
               configuration: driver=i915 latency=0
               resources: irq:43 memory:f0000000-f03fffff memory:e0000000-efffffff ioport:5000(size=64)

    Read the article

  • Static / Shared Helper Functions vs Built-In Methods

    - by Nathan
    This is a simple question, but a design consideration that I often run across in my day-to-day development work. Let's say that you have a class that represents some kind of collection:

        Public Class ModifiedCustomerOrders
            Public Property Orders As List(Of ModifiedOrders)
        End Class

    Within this class you do all kinds of important work, such as combining many different information sources and, eventually, building the Modified Customer Orders.

    Now, you have different processes that consume this class, each of which needs a slightly different slice of the ModifiedCustomerOrders items. To enable this, you want to add filtering functionality. How do you go about this? Do you:

    1. Add filtering calls to the ModifiedCustomerOrders class so that you can say: MyOrdersClass.RemoveCanceledOrders()
    2. Create a Static / Shared "tooling" class that allows you to call: OrdersFilters.RemoveCanceledOrders(MyOrders)
    3. Create an extension method to accomplish the same feat as #2 but with less typing: MyOrders.RemoveCanceledOrders() (a sketch of this option follows below)
    4. Create a "Service" method that handles the getting of Orders as appropriate to the calling function, while using one of the previous approaches "under the hood": OrdersService.GetOrdersForProcessA()
    5. Others?

    I tend to prefer the tooling / extension method approaches, as they make testing a little bit simpler. Although I dependency-inject all my sourcing data into the ModifiedCustomerOrders, having the filtering as part of the class makes it a little bit more complicated to test. Typically, I choose to use extension methods where I am doing parameterless transformations / filters. As they get more complex, I will move the logic into a static class instead. Thoughts on this approach? How would you approach it?
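    For concreteness, here is a minimal C# sketch of the extension-method option. The ModifiedOrders stub and its IsCanceled flag are invented for illustration; substitute your real status check:

        using System.Collections.Generic;
        using System.Linq;

        // Stub type for the example; the real class holds order data.
        public class ModifiedOrders
        {
            public bool IsCanceled { get; set; }
        }

        public static class OrdersFilters
        {
            // Callable as myOrders.RemoveCanceledOrders() thanks to 'this'.
            public static List<ModifiedOrders> RemoveCanceledOrders(this List<ModifiedOrders> orders)
            {
                return orders.Where(o => !o.IsCanceled).ToList();
            }
        }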

    Read the article

  • Community TFS Build Manager available for Visual Studio 2012 RC

    - by Jakob Ehn
    I finally got around to pushing out a version of the Community TFS Build Manager that is compatible with Visual Studio 2012 RC. Unfortunately I had to do this as a separate extension; it references different versions of the TFS assemblies, and some properties and methods that the 2010 version uses are now obsolete in the TFS 2012 API.

    To download it, just open the Extension Manager, select Online, and search for "TFS Build". You can also download it from this link: http://visualstudiogallery.msdn.microsoft.com/cfdb84b4-285e-4eeb-9fa9-dad9bfe2cd10

    The functionality is identical to the 2010 version; the only difference is that you can't start it from the Team Explorer Builds node (since the Team Explorer has been completely rewritten and the extension APIs are not yet published). So, to start it you must use the Tools menu. We will continue shipping updates to both versions in the future, as long as the functionality is compatible with both TFS 2010 and TFS 2012. You might also note that the color scheme used for the Build Manager doesn't look as good with the VS2012 theme…

    Hope you will enjoy the tool in Visual Studio 2012 as well. I want to thank all the people who have downloaded and used the 2010 version! For feedback, feature requests, and bug reports, please post to the CodePlex site: http://tfsbuildextensions.codeplex.com

    Read the article

  • Mercurial extensions not working in Windows 7 x64?

    - by Samuel Meacham
    We are test-driving Mercurial at work. We don't want to have to enter our user/pass each time we interact with a repository, so we set up the mercurial_keyring extension. We:

    - Installed Python 2.6.5 (32 or 64 bit, depending on the system)
    - Installed setuptools (for easy_install.exe)
    - easy_install keyring
    - easy_install mercurial_keyring

    And then made the appropriate changes to %userprofile%/mercurial.ini in the [auth] section. It works fine on my colleague's computer (32-bit XP SP3), but it does not work on my machine (Windows 7 Ultimate x64). Also noteworthy, setuptools had to be built from source on Win 7 x64 (python setup.py bdist_wininst, then run the resulting setuptools-0.6c11.win-amd64.exe).

    Using just hg.exe from the Mercurial 1.5 binary installation (the .msi), I get this error when I run hg.exe:

        * failed to import extension mercurial_keyring: No module named mercurial_keyring

    I tried to change my mercurial.ini to specify the path to the mercurial_keyring.py file, instead of having Mercurial find it (since it's in the PYTHONPATH).

    Old:

        [extensions]
        mercurial_keyring =

    New:

        [extensions]
        mercurial_keyring = c:/mercurial/extensions/mercurial_keyring.py

    The error changes to:

        abort: could not import module keyring!

    So while providing the path to the mercurial_keyring extension works, the dependent keyring module still cannot be found. After further investigation, it appears that NO extensions work. They all produce the error:

        * failed to import extension [extension name]: No module named [module name]

    It appears that when running hg.exe, it is not aware of PYTHONPATH. I have tried:

    - Python 2.6.5 32 bit
    - Python 2.6.5 64 bit
    - Building Mercurial 1.5 from source with MinGW
    - Building Mercurial 1.5 from source with MSVC9
    - Using hg.exe from the 1.5 binary dist (.msi)
    - Using the hg.py in c:\python26\scripts when building from source
    - Various configurations in %userprofile%/mercurial.ini
    - Using setuptools (easy_install.exe) to install keyring and mercurial_keyring
    - Building keyring and mercurial_keyring from source (python setup.py bdist_wininst)

    Nothing works. The closest I've got is using hg.py when building from source. It at least doesn't give me errors, and actually creates %userprofile%/wincrypto_pass.cfg when I enter my credentials. But on subsequent requests, it doesn't enter the credentials automatically. It prompts me for them again.

    Interestingly, TortoiseHG is using the keyring. I just can't get it to work on the command line. I think something is going on with Win 7 x64 that is preventing Mercurial (hg.exe) from seeing the PYTHONPATH, so it can't find any of the installed modules. Does anyone have extensions working on Win 7 x64? Specifically with the binary installation of Mercurial (not hg.py)?

    Read the article

  • What Is The Best Database For Delphi Desktop Applications That Supports Stored Procedures?

    - by Cape Cod Gunny
    I started with Turbo Pascal 3, went to TP5, bought TP6, called Borland the next day, and downgraded to TP5.5. Bought Delphi 3, and now have Delphi 5 Enterprise. I sort of lost interest in writing code about 4-5 years ago for two reasons: I spent all day writing ASP & SQL for someone else, and PC Techniques magazine went away.

    I've got a few programs in the shareware market that are solid performers but are in need of serious updating. I love Delphi, or did when it was Borland (before Borland bought DBase and all the other crap). I'd like to salvage as much of my D5E code as possible, but I doubt I can. I plan on upgrading to Delphi 2010. My next software release needs to interact with a database. I'm very proficient with MS SQL and like to put all of the database code in stored procedures. What is the best database choice that interacts well with Delphi, allows stored procedures, and is so easy to deploy that even the Geico gecko could deploy it?

    10/25/2009 18:53 EST: Re-opened after reading the install docs for Delphi 2010

    I downloaded a trial version of Delphi 2010 and unzipped the install. I've been reading the install docs included in the package. I started with the install.htm inside the zip package. install.htm wisely tells you to see the following two articles:

    - Installation Notes: http://edn.embarcadero.com/article/39754
    - Release Notes: http://edn.embarcadero.com/article/39758

    The release notes state the following:

        The MSSQL driver requires the installation of the SQL Native Client.
        SQL Native Client 2008 is required for dbxmss.dll.
        SQL Native Client 2005 is required for dbxmss9.dll.

    I checked my machine to see if SQL Native Client is installed. Nope. I wasn't done reading the docs, so I made a note to install SQL Native Client. I googled dbxmss.dll and dbxmss9.dll and found a very interesting thread on the Embarcadero forums (read the thread here). After reading this thread and some careful thought, I don't think I will be using Microsoft SQL Express. I can't rely on my customers having the right drivers installed. So, I'm back to looking for a different solution. If I'm selling a $40 product to the general masses, I need a bulletproof solution that doesn't require my brand new customer to update their machine before my software will work.

    Read the article

  • What is the best way to return two values from a method?

    - by Edward Tanguay
    When I have to write methods which return two values, I usually go about it as in the following code, which returns a List<string>. Or, if I have to return e.g. an id and a string, then I return a List<object> and then pick the values out by index and recast them. This recasting and referencing by index seems inelegant, so I want to develop a new habit for methods that return two values. What is the best pattern for this?

        using System;
        using System.Collections.Generic;
        using System.Linq;

        namespace MultipleReturns
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string extension = "txt";
                    {
                        List<string> entries = GetIdCodeAndFileName("first.txt", extension);
                        Console.WriteLine("{0}, {1}", entries[0], entries[1]);
                    }
                    {
                        List<string> entries = GetIdCodeAndFileName("first", extension);
                        Console.WriteLine("{0}, {1}", entries[0], entries[1]);
                    }
                    Console.ReadLine();
                }

                /// <summary>
                /// gets "first.txt", "txt" and returns "first", "first.txt"
                /// gets "first", "txt" and returns "first", "first.txt"
                /// it is assumed that extensions will always match
                /// </summary>
                public static List<string> GetIdCodeAndFileName(string line, string extension)
                {
                    if (line.Contains("."))
                    {
                        List<string> parts = line.BreakIntoParts(".");
                        List<string> returnItems = new List<string>();
                        returnItems.Add(parts[0]);
                        returnItems.Add(line);
                        return returnItems;
                    }
                    else
                    {
                        List<string> returnItems = new List<string>();
                        returnItems.Add(line);
                        returnItems.Add(line + "." + extension);
                        return returnItems;
                    }
                }
            }

            public static class StringHelpers
            {
                public static List<string> BreakIntoParts(this string line, string separator)
                {
                    if (String.IsNullOrEmpty(line))
                        return null;
                    else
                        return line.Split(new string[] { separator }, StringSplitOptions.None).Select(p => p.Trim()).ToList();
                }
            }
        }
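    As one possible answer, not from the original question: a small dedicated result type (or, on .NET 4, a Tuple<string, string>) keeps both values named and strongly typed, so no index lookups or recasts are needed. A sketch, with the type and helper names invented for illustration:

        // A tiny result type: both values are named, no index-based access.
        public class IdCodeAndFileName
        {
            public string IdCode { get; set; }
            public string FileName { get; set; }
        }

        public static class FileNameParser
        {
            public static IdCodeAndFileName GetIdCodeAndFileName(string line, string extension)
            {
                int dotIndex = line.IndexOf('.');
                string idCode = dotIndex >= 0 ? line.Substring(0, dotIndex) : line;
                string fileName = dotIndex >= 0 ? line : line + "." + extension;
                return new IdCodeAndFileName { IdCode = idCode, FileName = fileName };
            }
        }

    Calling code then reads naturally: var result = FileNameParser.GetIdCodeAndFileName("first", "txt"); gives result.IdCode == "first" and result.FileName == "first.txt".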

    Read the article

  • MSMQ messages using HTTP just won't get delivered

    - by John Breakwell
    I'm starting off the blog with a discussion of an unusual problem that has hit a couple of my customers this month. It's not a problem you'd expect to bump into, and the solution is potentially painful.

    Scenario: You want to make use of the HTTP protocol to send MSMQ messages from one machine to another. You have installed HTTP support for MSMQ and have addressed your messages correctly, but they will not leave the outgoing queue. There is no configuration for HTTP support - setup has already done all that for you (although you may want to check the most recent "Installation of the MSMQ HTTP Support Subcomponent" section of MSMQINST.LOG to see if anything DID go wrong) - so you can't tweak anything. Restarting services and servers makes no difference - the messages just will not get delivered.

    The problem is documented and resolved by Knowledgebase article 916699, "The message may not be delivered when you use the HTTP protocol to send a message to a server that is running Message Queuing 3.0". It is unlikely that you would be able to resolve the problem without the assistance of PSS, because there are no messages that can be seen to assist you, and only access to the source code exposes the root cause.

    As this communication is over HTTP, the IIS logs would be a good place to start. POST entries are logged which show that connectivity is working and message delivery is being attempted:

        #Software: Microsoft Internet Information Services 6.0
        #Version: 1.0
        #Date: 2006-09-12 12:11:29
        #Fields: date time s-sitename s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status
        2006-09-12 12:12:12 W3SVC1 10.1.17.219 POST /msmq/private$/test - 80 - 10.2.200.3 - 200 0 0

    If you capture the traffic with Network Monitor, you can see the POST being sent to the server, but you also see a response being returned to the client:

        HTTP: Response to Client; HTTP/1.1; Status Code = 500 - Internal Server Error

    "Internal Server Error" means we can probably stop looking at IIS and instead focus on the Message Queuing ISAPI extension (Mqise.dll). MSMQ 3.0 (Windows XP and Windows Server 2003) comes with error logging enabled by default, but the log files are in binary format - MSMQ 2.0 generated logging in plain text. The symbolic information needed for formatting the files is not currently publicly available, so log files have to be sent in to Microsoft PSS. Although this does mean raising a support case, formatting the log files to text and returning them to the customer shouldn't take long. Obviously the engineer analyses them for you - I just want to point out that you can see the logging output in text format if you want it. The important entries in the log for this problem are:

        [7]b48.928 09/12/2006-13:20:44.552 [mqise GetNetBiosNameFromIPAddr] ERROR:Failed to get the NetBios name from the DNS name, error = 0xea
        [7]b48.928 09/12/2006-13:20:44.552 [mqise RPCToServer] ERROR:RPC call R_ProcessHTTPRequest failed, error code = 1702(RPC_S_INVALID_BINDING)

    These allow a Microsoft escalation engineer to check the MQISE source code to see what is going wrong. This problem, according to the article, occurs when the extension tries to bind to the local MSMQ service after the extension receives a POST request that contains an MSMQ message.
MSMQ resolves the server name by using the DNS host name but the extension cannot bind to the service because the buffer that MSMQ uses to resolve the server name is too small - server names that are exactly 15 characters long will not fit. RPC exception 0x6a6 (RPC_S_INVALID_BINDING) occurs in the W3wp.exe process but the exception is handled and so you do not receive an error message. The workaround is to rename the MSMQ server to something less than 15 characters. If the problem has only just been noticed in a production environment - an application may have been modified to get through a newly-implemented firewall, for example - then renaming is going to be an issue. Other applications may need to be reinstalled or modified if server names are hard-coded or stored in the registry. The renaming may also break a company naming convention where the name is built up from something like location+department+number. If you want to learn more about MSMQ logging then check out Chapter 15 of the MSMQ FAQ. In fact, even if you DON'T want to learn anything about MSMQ logging you should read the FAQ anyway as there is a huge amount of useful information on known issues and the like.
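    For orientation, sending a message over the HTTP transport from .NET looks roughly like the sketch below (System.Messaging wraps MSMQ 3.0). The host and queue names are placeholders; the direct format name matches the /msmq/private$/test path seen in the IIS log above:

        using System.Messaging;

        class HttpSender
        {
            static void Main()
            {
                // Direct format name addressing a private queue over HTTP.
                // "myserver" is a placeholder - and per KB 916699, delivery fails
                // when the resolved server name is exactly 15 characters long.
                const string formatName = "FormatName:DIRECT=HTTP://myserver/msmq/private$/test";

                using (var queue = new MessageQueue(formatName))
                {
                    queue.Send("Hello over HTTP");
                }
            }
        }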

    Read the article

  • Tomcat running, catalina throwing exception

    - by Mark Steudel
    So I have to preface this by saying that I'm not familiar with Tomcat/Catalina, but I'm trying to troubleshoot this anyway. In /var/log/tomcat5/catalina.out I'm seeing these errors:

        Using CATALINA_BASE:   /usr/share/tomcat5
        Using CATALINA_HOME:   /usr/share/tomcat5
        Using CATALINA_TMPDIR: /usr/share/tomcat5/temp
        Using JRE_HOME:
        java.lang.ClassNotFoundException: org.apache.catalina.startup.Catalina
            at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
            at org.apache.catalina.startup.Bootstrap.init(Bootstrap.java:223)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:410)

    I'm not really sure what this means. This installation was working a week ago... did something get corrupted? How would I figure out if it did? What other information would be valuable here? Tomcat seems to be running and starting up fine...

    UPDATE: this might be related:

        Jun 19, 2011 11:00:25 PM org.apache.coyote.http11.Http11BaseProtocol pause
        INFO: Pausing Coyote HTTP/1.1 on http-9080
        Jun 19, 2011 11:00:26 PM org.apache.catalina.core.StandardService stop
        INFO: Stopping service Catalina
        log4j:ERROR LogMananger.repositorySelector was null likely due to error in class reloading, using NOPLoggerRepository.

    Some more stuff in the logs:

        2011-06-12 23:04:45,223 INFO [main] [com.atlassian.confluence.lifecycle] contextInitialized Starting Confluence 3.1.1 (build #1724)
        2011-06-12 23:04:45,663 INFO [main] [beans.factory.xml.XmlBeanDefinitionReader] loadBeanDefinitions Loading XML bean definitions from class path resource [bootstrapContext.xml]
        2011-06-12 23:04:46,134 INFO [main] [beans.factory.xml.XmlBeanDefinitionReader] loadBeanDefinitions Loading XML bean definitions from class path resource [setupContext.xml]
        2011-06-12 23:04:46,236 INFO [main] [beans.factory.xml.XmlBeanDefinitionReader] loadBeanDefinitions Loading XML bean definitions from class path resource [bootstrapCacheContext.xml]
        2011-06-12 23:04:47,571 INFO [main] [atlassian.plugin.manager.DefaultPluginManager] init Initialising the plugin system
        2011-06-12 23:04:48,338 INFO [main] [atlassian.plugin.manager.DefaultPluginManager] init Plugin system started in 0:00:00.748
        Jun 12, 2011 11:05:05 PM org.apache.catalina.startup.Catalina stopServer
        SEVERE: Catalina.stop:
        java.net.ConnectException: Connection refused
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
            at java.net.Socket.connect(Socket.java:525)
            at java.net.Socket.connect(Socket.java:475)
            at java.net.Socket.<init>(Socket.java:372)
            at java.net.Socket.<init>(Socket.java:186)
            at org.apache.catalina.startup.Catalina.stopServer(Catalina.java:395)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.catalina.startup.Bootstrap.stopServer(Bootstrap.java:344)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:435)
        Jun 12, 2011 11:05:44 PM org.apache.catalina.core.AprLifecycleListener lifecycleEvent
        INFO: The Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/jdk1.6.0_18/jre/lib/i386/client:/usr/java/jdk1.6.0_18/jre/lib/i386:/usr/java/jdk1.6.0_18/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib

    Clean log output from starting Tomcat:

        Using CATALINA_BASE:   /usr/share/tomcat5
        Using CATALINA_HOME:   /usr/share/tomcat5
        Using CATALINA_TMPDIR: /usr/share/tomcat5/temp
        Using JRE_HOME:
        java.lang.ClassNotFoundException: org.apache.catalina.startup.Catalina
            at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
            at org.apache.catalina.startup.Bootstrap.init(Bootstrap.java:223)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:410)

    So I did /etc/rc.d/init.d/tomcat5 status and I get this:

        [wqadm1n@ip-72-167-51-178 proc]$ sudo /etc/rc.d/init.d/tomcat5 status
        /etc/rc.d/init.d/tomcat5 is stopped
        [wqadm1n@ip-72-167-51-178 proc]$ sudo /etc/rc.d/init.d/tomcat5 start
        Starting tomcat5: [ OK ]
        [wqadm1n@ip-72-167-51-178 proc]$ sudo /etc/rc.d/init.d/tomcat5 status
        lock file found but no process running for pid 30774

    Read the article

  • View AccuWeather Forecasts in Google Chrome

    - by Asian Angel
    Being able to keep an eye on the weather while at work or browsing the Internet is definitely helpful. If you like detailed forecasts, join us as we take a look at the Forecastfox Weather extension for Google Chrome.

    Getting Started

    As soon as the Forecastfox Weather extension has finished installing, you will automatically be presented with the “Customize Forecastfox Page”. The default setting is New York with English measurement units. Enter your location into the blank and hit “Enter” to display the listing for your city/area. If you are presented with multiple options to choose from, simply click on the appropriate listing.

    Once you have your city/area displayed, you will notice that you can keep forecasts for multiple locations. You can easily remove any unneeded listings with the “Remove Link”. For our example we removed the New York listing. Note: Click on desired locations and measurement units to automatically set them as defaults (no save button required).

    Forecastfox Weather in Action

    You can hover your mouse over the “Toolbar Button” to see the current weather conditions. Clicking on the “Toolbar Button” opens a popup window with the current conditions, a 7 day forecast, and a static satellite image.

    If desired, you can access additional details for the current weather conditions. Clicking on “details” opens a new tab with a nice bit of information such as UV Index, Moon Phases, Cloud Ceiling, etc. Note: AccuWeather.com webpages will have some ads displayed.

    Perhaps you need the Hourly Forecast… Once again a new tab will be opened, with the predicted hourly weather conditions for the current day.

    Going back to the popup window, you may also select a specific day from the 7 day forecast. You will be presented with a “Day & Night” forecast for the chosen day, with links to view “Additional Details & Hourly” information.

    Interested in the satellite image instead? You can click on either of the available links for larger images. Once the new tab is open, you can choose from a variety of different satellite images.

    Conclusion

    If you have been wanting a solid weather forecast extension for your Chrome browser, then Forecastfox Weather is definitely a recommended install.

    Links

    Download the Forecastfox Weather extension (Google Chrome Extensions)

    Read the article

  • Visual Studio Extensions

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2013/10/18/visual-studio-extensions.aspx

    As a product, Visual Studio has been around for a long time. In fact, it’s been 18 years since the first Visual Studio product was launched. In that time there have been some major changes, but perhaps the most important (or at least most influential) changes for the course of the product have come in the last few years. While we can argue over what was and wasn’t an important change, I want to talk about what I think is the single most important change Microsoft has made to Visual Studio: the Visual Studio Gallery (first introduced with Visual Studio 2010) and the ability for third parties to easily write extensions which add new functionality to Visual Studio or even change existing functionality.

    I know Visual Studio had this ability before the Gallery existed, but it was expensive (from both a financial and a development-resource perspective) for a company or individual to write such an extension. The Visual Studio Gallery changed all of that. As of today, there are over 4000 items in the Gallery. Microsoft itself has over 100 items in the Gallery, and more are added all the time.

    Why is this such an important feature? Simply put, it allows third parties (companies such as JetBrains, Telerik, Red Gate, Devart, and DevExpress, just to name a few) to provide enhanced developer-productivity experiences directly within the product by adding new functionality or changing existing functionality. It serves an even more important function, though: it also allows Microsoft to do the same. By shipping extensions which add new functionality or change existing functionality, Microsoft is able not only to rapidly innovate on new features but also to get those changes into the hands of developers worldwide for feedback. The end result is that these extensions become very robust and often end up becoming part of a later product release.

    An excellent example of this is the new CodeLens feature of Visual Studio 2013. This is perhaps the single most important developer-productivity enhancement released in the last decade, and it already has huge potential. Out of the box, CodeLens shows you information about references, unit tests, and TFS history. Fortunately, CodeLens is also accessible to Visual Studio extensions, and Microsoft DevLabs has already written such an extension to show code “health,” displaying different code metrics to help make sure your code is maintainable.

    At this point, you may have already asked yourself, “With over 4000 extensions, how do I find ones that are good?” That’s a really good question. Fortunately, the Visual Studio Gallery has a ratings system in place, which definitely helps, but that’s still a lot of extensions to look through. To that end, here is my personal list of favorite extensions. This is something I started back when Visual Studio 2010 was first released, but so much has changed since then that I thought it would be good to provide an updated list for Visual Studio 2013. These are extensions that I have installed, use on a regular basis, and find indispensable as a developer. The list is in no particular order:
    - NuGet Package Manager for Visual Studio 2013
    - Microsoft CodeLens Code Health Indicator
    - Visual Studio Spell Checker
    - Indent Guides
    - Web Essentials 2013
    - VSCommands for Visual Studio 2013
    - Productivity Power Tools (right now this is only for Visual Studio 2012, but it should be updated to support Visual Studio 2013)

    Everyone has their own set of favorites, so mine is probably not going to match yours. If there is an extension that you really like, feel free to leave me a comment!

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >