Search Results

Search found 8335 results on 334 pages for 'msbuild extension pack'.

Page 81/334 | < Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >

  • Should I create a new extension for an XML file?

    - by macleojw
    I'm working with a data model stored in XML files. I want to create some metadata for the model and store it alongside, but would like to be able to distinguish between the two. The data model is imported into some software from time to time and we don't want it to try to import the metadata files. To get around this, I've been thinking of creating a new extension for the metadata XML files (say .mdml). Is this good practice?

    Read the article

  • Drupal: png is not an allowed extension (but it is!)

    - by Patrick
    Hi, I'm running Drupal on an IIS 6 server. I'm having an interesting error when uploading images into my node: I get "png" is not a known file. See image: http://dl.dropbox.com/u/72686/pngError.png But I've added png (default settings) to the allowed image files list, and indeed you can see in the same image that png is an allowed extension. Thanks

    Read the article

  • How to get Tkinter to input text and submit with button

    - by Rob
    Hi, I was wondering if anybody could help me submit the text from the text fields to the login fields.

        from Tkinter import *
        import tkMessageBox

        if __name__ == "__main__":
            import resources.lib.mechanize as mechanize
            # Start Browser
            br = mechanize.Browser(factory=mechanize.RobustFactory())
            # User-Agent (Firefox)
            br.addheaders = [('User-agent', 'Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.6')]
            br.open('http://razetheworld.com/wp-login.php?redirect_to=http%3A%2F%2Frazetheworld.com')
            br.select_form(name="loginform")
            br['log'] = 'entryWidget_U must enter here'
            br['pwd'] = 'entryWidget_P must enter here'
            br.submit(name="wp-submit")
            print br.geturl()

        def displayText():
            """ Display the Entry text value. """
            global entryWidget_U
            global entryWidget_P
            if entryWidget_U.get().strip() == "":
                tkMessageBox.showerror("Tkinter Entry Widget", "Enter a Username")
            else:
                tkMessageBox.showinfo("Tkinter Entry Widget", "Text value =" + entryWidget_U.get().strip())
            if entryWidget_P.get().strip() == "":
                tkMessageBox.showerror("Tkinter Entry Widget", "Enter a Password")
            else:
                tkMessageBox.showinfo("Tkinter Entry Widget", "Text value =" + entryWidget_P.get().strip())

        if __name__ == "__main__":
            root = Tk()
            root.title("Tkinter Entry Widget")
            root["padx"] = 40
            root["pady"] = 20
            # Create a text frame to hold the text Label and the Entry widget
            textFrame_U = Frame(root)
            textFrame_P = Frame(root)
            # Create a Label in textFrame
            entryLabel = Label(textFrame_U)
            entryLabel["text"] = "Enter Username:"
            entryLabel.pack(side=LEFT)
            entryLabel = Label(textFrame_P)
            entryLabel["text"] = "Enter Password:"
            entryLabel.pack(side=LEFT)
            # Create an Entry Widget in textFrame
            entryWidget_U = Entry(textFrame_U)
            entryWidget_U["width"] = 50
            entryWidget_U.pack(side=LEFT)
            entryWidget_P = Entry(textFrame_P)
            entryWidget_P["width"] = 50
            entryWidget_P.pack(side=LEFT)
            textFrame_U.pack()
            textFrame_P.pack()
            button = Button(root, text="Login", command=#Run br.submit(name="wp-submit"))
            button.pack()
            root.mainloop()
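
    The command option expects a callable, so one way to wire this up - sketched here under the assumption that the mechanize login code is moved into a function and reuses the form and field names from the question - is:

        def do_login():
            username = entryWidget_U.get().strip()
            password = entryWidget_P.get().strip()
            if username == "" or password == "":
                tkMessageBox.showerror("Login", "Enter a username and a password")
                return
            br = mechanize.Browser(factory=mechanize.RobustFactory())
            br.open('http://razetheworld.com/wp-login.php?redirect_to=http%3A%2F%2Frazetheworld.com')
            br.select_form(name="loginform")
            br['log'] = username
            br['pwd'] = password
            br.submit(name="wp-submit")
            tkMessageBox.showinfo("Login", "Landed on: " + br.geturl())

        # pass the function itself (no parentheses, no statement) to the button
        button = Button(root, text="Login", command=do_login)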

    Read the article

  • Output Caching with IIS7 - How To for a dynamic aspx page?

    - by Lieven Cardoen
    I have a RetrieveBlob.aspx that gets some query string variables and returns an asset. Each URL corresponds to a unique asset. In RetrieveBlob.aspx a cache profile is set. In Web.config the profile looks like this (under the system.web tag):

        <caching>
          <outputCache enableOutputCache="true" />
          <outputCacheSettings>
            <outputCacheProfiles>
              <add duration="14800" enabled="true" varyByParam="*" name="AssetCacheProfile" />
            </outputCacheProfiles>
          </outputCacheSettings>
        </caching>

    OK, this works fine. When I put a breakpoint in the code-behind of RetrieveBlob.aspx, it gets hit the first time and not on subsequent requests. Now, I throw away the cache profile and instead I have this in my Web.config under system.webServer:

        <caching>
          <profiles>
            <add extension=".swf" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".flv" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".gif" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".png" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".mp3" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".jpeg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
            <add extension=".jpg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" />
          </profiles>
        </caching>

    Now the caching doesn't work anymore. What am I doing wrong? Is it possible to configure a caching profile for a dynamic aspx page under the caching tag of system.webServer?
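
    One direction worth checking, sketched here as an assumption rather than a verified fix: system.webServer caching profiles match requests by extension, so the list above never applies to the .aspx page itself, and the kernel cache cannot vary by query string. A hypothetical entry for the dynamic page might look like this:

        <caching>
          <profiles>
            <!-- hypothetical: cache .aspx output in the user-mode cache for 8 minutes, varying on the query string -->
            <add extension=".aspx" policy="CacheForTimePeriod" kernelCachePolicy="DontCache"
                 duration="00:08:00" varyByQueryString="*" />
          </profiles>
        </caching>

    Alternatively, the original system.web profile can be kept and referenced from the page with an OutputCache directive (CacheProfile="AssetCacheProfile").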

    Read the article

  • Unable to use OpenGL or install nVidia driver on openSUSE 12.2

    - by djechelon
    I have an ASUS N76VZ laptop with openSUSE 12.2 and a GeForce GT650M card. I found that KDE doesn't allow me to use OpenGL rendering. I tried to install nVidia's driver from the script, but once it writes the xorg.conf file I'm unable to boot the desktop. I have the following errors in the system log:

        Oct 30 08:28:13 RAYNOR kdm[2727]: X server died during startup
        Oct 30 08:28:13 RAYNOR kdm[2727]: X server for display :0 cannot be started, session disabled

    I noticed that the /etc/X11/xorg.conf backup file was empty, so I renamed the new xorg.conf and left none: the desktop booted!!! How can I fix OpenGL rendering with or without driver installation?

    [Update]: Xorg.0.log says

        [ 1434.207] compiled for 4.0.2, module version = 1.0.0
        [ 1434.207] Module class: X.Org Server Extension
        [ 1434.207] (II) NVIDIA GLX Module 304.60 Sun Oct 14 20:44:54 PDT 2012
        [ 1434.207] (II) Loading extension GLX
        [ 1434.207] (II) LoadModule: "record"
        [ 1434.207] (II) Loading /usr/lib64/xorg/modules/extensions/librecord.so
        [ 1434.207] (II) Module record: vendor="X.Org Foundation"
        [ 1434.207] compiled for 1.12.3, module version = 1.13.0
        [ 1434.207] Module class: X.Org Server Extension
        [ 1434.207] ABI class: X.Org Server Extension, version 6.0
        [ 1434.207] (II) Loading extension RECORD
        [ 1434.207] (II) LoadModule: "dri"
        [ 1434.207] (II) Loading /usr/lib64/xorg/modules/extensions/libdri.so
        [ 1434.207] (II) Module dri: vendor="X.Org Foundation"
        [ 1434.207] compiled for 1.12.3, module version = 1.0.0
        [ 1434.207] ABI class: X.Org Server Extension, version 6.0
        [ 1434.207] (II) Loading extension XFree86-DRI
        [ 1434.207] (II) LoadModule: "nvidia"
        [ 1434.208] (II) Loading /usr/lib64/xorg/modules/drivers/nvidia_drv.so
        [ 1434.208] (II) Module nvidia: vendor="NVIDIA Corporation"
        [ 1434.208] compiled for 4.0.2, module version = 1.0.0
        [ 1434.208] Module class: X.Org Video Driver
        [ 1434.208] (II) NVIDIA dlloader X Driver 304.60 Sun Oct 14 20:24:42 PDT 2012
        [ 1434.208] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs
        [ 1434.208] (++) using VT number 8
        [ 1434.320] (EE) No devices detected.
        [ 1434.320] Fatal server error:
        [ 1434.320] no screens found
        [ 1434.320] Please consult the The X.Org Foundation support at http://wiki.x.org for help.
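
    For context, the GT650M in this laptop is normally part of an Optimus (hybrid Intel/NVIDIA) setup, where the plain NVIDIA driver often reports "No devices detected" and Bumblebee was the usual route at the time. If you do want to point X at the discrete GPU directly, a minimal Device section is sometimes tried - the BusID below is purely a placeholder, check yours with lspci:

        # sketch of /etc/X11/xorg.conf.d/50-device.conf -- BusID is hypothetical
        Section "Device"
            Identifier "nvidia"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"
        EndSection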

    Read the article

  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (ie School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain:

    1. Host the .crx on a server and click "Specify a Custom App" in the management console.
    2. Create a Domain App by uploading a zip to the Chrome Web Store.
    3. Upload the extension from my developer account to the Chrome Web Store and publish to a single "trusted tester," or make it unlisted.

    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the crx file, which has the right ID and MIME type, still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them!

    Option (2), having the school create a domain app, seems a bit inefficient because it requires all schools to upload their own zip. So essentially I would have to email a zip file to the school, and have them publish it. All updates to the extension will also require a similar process, so this doesn't seem ideal.

    I doubt that option (3) would work. If I published to the admin as a "trusted tester", I don't think that the other people in the domain would be able to access it. If it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity.

    Has anyone had success with hosting the extension and using the "Specify a Custom App" feature? Any other suggestions for getting a Custom Extension pushed out by the management console? Thanks so much!
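
    On option (1): the Chrome force-install mechanism generally expects an extension ID plus the URL of an update manifest rather than a direct link to the .crx, so that is worth double-checking. A minimal sketch of such a manifest (the app ID, version and codebase URL are placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <gupdate xmlns="http://www.google.com/update2/response" protocol="2.0">
          <!-- placeholder extension ID and hosting URL -->
          <app appid="aaaabbbbccccddddeeeeffffgggghhhh">
            <updatecheck codebase="https://example.org/school-a/extension.crx" version="1.0" />
          </app>
        </gupdate>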

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.

    What is an OLAP data mart?
    In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.

    Why build OLAP data marts?
    Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever-popular Microsoft Excel.

    Extension Attributes: The Challenge
    One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube; so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of a future blog entry.

    What is involved in building an OLAP data mart?
    There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.

    The only permanent database is the metadata database (shown in orange) which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart, which is a relational database, and the OLAP Cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together depending on their reporting requirements. So, in broad terms the steps required to fulfil a client order are as follows:

    Step 1: Prepare metadata
    - Create a set of database names unique to the client's order
    - Modify all package connection strings to be used by SSIS to point to the new databases and file locations

    Step 2: Create relational databases
    - Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything
    - Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases

    Step 3: Load staging database
    - Use SSIS to load all data files into the staging database in a parallel operation
    - Load extension files containing name/value pairs; these will provide client-specific attributes in the OLAP cube

    Step 4: Load data mart relational database
    - Load the data from staging into the data mart relational database, again in parallel where possible
    - Allocate surrogate keys and use SSIS to perform surrogate key lookups during the load of fact tables

    Step 5: Load extension tables & attributes
    - Pivot the extension attributes from their native name/value pairs into proper relational tables
    - Add the extension attributes to the views used by the OLAP cube

    Step 6: Deploy & process OLAP cube
    - Deploy the OLAP database directly to the server using a C# script task in SSIS
    - Modify the connection string used by the OLAP cube to point to the data mart relational database
    - Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions
    - Remove any standard attributes that are not required
    - Process the OLAP cube

    Step 7: Backup and drop databases
    - Drop the staging database as it is no longer required
    - Back up the data mart relational and OLAP databases and ship these to the client's infrastructure
    - Drop the data mart relational and OLAP databases from the build server
    - Mark the order complete
    - Start processing the next order, ad infinitum

    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
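
    As an illustration of the name/value pivot described above, a minimal T-SQL sketch - the staging table ExtensionStaging(CustomerKey, AttributeName, AttributeValue) and the attribute names are hypothetical, and in the real process the IN list would itself be generated with dynamic SQL from the metadata database:

        -- pivot name/value rows into one column per extension attribute
        SELECT CustomerKey, [Region], [Segment], [LoyaltyTier]
        FROM (
            SELECT CustomerKey, AttributeName, AttributeValue
            FROM dbo.ExtensionStaging
        ) AS src
        PIVOT (
            MAX(AttributeValue)
            FOR AttributeName IN ([Region], [Segment], [LoyaltyTier])
        ) AS pvt;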

    Read the article

  • Maven antrun with sequential ant-contrib fails to run

    - by codevour
    We have a special routine to explode files in a subfolder into extensions, which will be copied and jarred into single extension files. For this I wanted to use the maven-antrun-plugin; for the sequential iteration and jar packaging through the dirset, we need the ant-contrib library. The plugin configuration below fails with an error. What did I misconfigure? Thank you.

    Plugin configuration:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>1.6</version>
          <executions>
            <execution>
              <phase>validate</phase>
              <goals>
                <goal>run</goal>
              </goals>
              <configuration>
                <target>
                  <for param="extension">
                    <path>
                      <dirset dir="${basedir}/src/main/webapp/WEB-INF/resources/extensions/">
                        <include name="*" />
                      </dirset>
                    </path>
                    <sequential>
                      <basename property="extension.name" file="${extension}" />
                      <echo message="Creating JAR for extension '${extension.name}'." />
                      <jar destfile="${basedir}/target/extension-${extension.name}-1.0.0.jar">
                        <zipfileset dir="${extension}" prefix="WEB-INF/resources/extensions/${extension.name}/">
                          <include name="**/*" />
                        </zipfileset>
                      </jar>
                    </sequential>
                  </for>
                </target>
              </configuration>
            </execution>
          </executions>
          <dependencies>
            <dependency>
              <groupId>ant-contrib</groupId>
              <artifactId>ant-contrib</artifactId>
              <version>1.0b3</version>
              <exclusions>
                <exclusion>
                  <groupId>ant</groupId>
                  <artifactId>ant</artifactId>
                </exclusion>
              </exclusions>
            </dependency>
            <dependency>
              <groupId>org.apache.ant</groupId>
              <artifactId>ant-nodeps</artifactId>
              <version>1.8.1</version>
            </dependency>
          </dependencies>
        </plugin>

    Error:

        [ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project extension-platform: An Ant BuildException has occured: Problem: failed to create task or type for
        [ERROR] Cause: The name is undefined.
        [ERROR] Action: Check the spelling.
        [ERROR] Action: Check that any custom tasks/types have been declared.
        [ERROR] Action: Check that any <presetdef>/<macrodef> declarations have taken place.
        [ERROR] -> [Help 1]
        [ERROR]
        [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
        [ERROR] Re-run Maven using the -X switch to enable full debug logging.
        [ERROR]
        [ERROR] For more information about the errors and possible solutions, please read the following articles:
        [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
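
    The usual cause of "failed to create task or type" here is that the ant-contrib tasks (such as <for>) are on the plugin's classpath but never declared to Ant. A sketch of the declaration commonly added at the top of the target, using the classpath reference that maven-antrun-plugin exposes:

        <target>
          <!-- declare the ant-contrib tasks (for, if, ...) from the plugin dependencies -->
          <taskdef resource="net/sf/antcontrib/antlib.xml"
                   classpathref="maven.plugin.classpath" />
          <for param="extension">
            ...
          </for>
        </target>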

    Read the article

  • Best way to import a pack or "system" of new classes??

    - by Joe Blow
    Here's an Advanced question for Advanced developers. So I've written a largish "subsystem". It is essentially a UIViewController called CleverViewController which is a UIViewController. Now, there are a large number of supporting classes (about ten) that do the hard work: perform math functions, image processing, purely logical functions, build images or what have you with thousands of lines of code. (To do this, I simply started a new XCode project / app "Scratchpad" which does little other than load and launch the CleverViewController. So currently it works as an app, which launches CleverViewController. The ten or so classes I mention that are part of the "subsystem" simply sit there in that project/app.) So now, we will use CleverViewController, the new technology generally, in various apps. (Or perhaps friends would want to use it, etc.) What's the best way to "do" this? Have I screwed everything up, and really it should just be ONE (pretty big) class rather than a dozen classes? (I could understand that then as I would simply add that new (big) class where needed, like adding any other class.) Do I have to make a "framework" like the Apple frameworks? (If so, what the hell are they, how do you do it, etc?!?) In fact, do you just have to lamely include all of the dozen classes and that's that (obviously perhaps putting them in a grouped subfolder). What about all the headers and so on? (Currently I just have the dozen includes in the pch file of the scratchpad project.) Shouldn't it be easy to "maintain" this "subsystem" separately and so on? I'm afraid I know nothing about this: if the answer is obvious, hit me over the head and let me know. Thank you for any info on this !

    Read the article

  • Task scheduled to wake laptop - only works when lid is open

    - by JD Pack
    I am running Windows 7 Starter on an Acer Aspire One laptop. I want my laptop to automatically run a task (backup the HDD to a network drive) once a week in the middle of the night. I scheduled the task in "Task Scheduler" and checked the box to wake the computer to run the task. I also changed the advanced power settings to allow wake timers. This was half of the solution. It now works flawlessly when the lid is open... the computer can wake itself up from either sleep or hibernate mode to perform the backup. When the lid is closed, however, it's sleeping beauty. Any ideas? I don't want to have to remember to open the lid once a week. It sort of defeats the purpose of an "automatic" backup.

    Update: I discovered that it can wake from sleep (or hybrid sleep), but not from hibernate when the lid is closed. This is good news. I'd still be curious about how to get it to work from hibernate, but I'm pretty happy about waking from sleep at least.
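
    A few standard powercfg checks can help narrow down what is allowed to wake the machine; these are diagnostic commands, not a fix:

        REM devices currently allowed to wake the system
        powercfg -devicequery wake_armed

        REM active wake timers (the scheduled backup task should appear here)
        powercfg /waketimers

        REM after the next wake, show what actually triggered it
        powercfg /lastwake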

    Read the article

  • iPhone: how do I obtain a list of all files with a given extension in the app's Documents folder?

    - by Mike
    I have saved all my files using something like this:

        NSString *fileName = [NSString stringWithFormat:@"%@.plist", myName];
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];
        NSString *appFile = [documentsDirectory stringByAppendingPathComponent:fileName];
        [myDict writeToFile:appFile atomically:YES]; // where myDict is a dictionary containing the values of all parameters I would like to save

    All these files were saved with the .plist extension. The same directory contains several JPG and PNG files. Now, how do I retrieve the list of all .plist files in that directory? Thanks for any help.
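
    A minimal sketch of one way to do this with NSFileManager, filtering the directory listing by file extension (variable names are illustrative):

        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *documentsDirectory = [paths objectAtIndex:0];

        NSError *error = nil;
        NSArray *allFiles = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:documentsDirectory
                                                                                error:&error];

        // keep only the .plist files
        NSPredicate *plistFilter = [NSPredicate predicateWithFormat:@"self ENDSWITH '.plist'"];
        NSArray *plistFiles = [allFiles filteredArrayUsingPredicate:plistFilter];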

    Read the article

  • Why do I have to specify the -i switch with a backup extension when using ActivePerl?

    - by Zaid
    I cannot get in-place editing Perl one-liners running under ActivePerl to work unless I specify them with a backup extension: C:\> perl -i -ape "splice (@F, 2, 0, q(inserted text)); $_ = qq(@F\n);" file1.txt Can't do inplace edit without backup. The same command with -i.bak or -i.orig works a treat, but also creates an unwanted backup file in the process. Has anyone else faced similar issues? Is there a way around this?
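
    This is a Windows limitation rather than an ActivePerl quirk: editing in place with no backup requires deleting a file that is still open for reading, which Windows does not allow, so perl demands a backup extension there (see the "Can't do inplace edit without backup" entry in perldiag). A common workaround, sketched below, is to let the backup be written and delete it afterwards:

        C:\> perl -i.bak -ape "splice (@F, 2, 0, q(inserted text)); $_ = qq(@F\n);" file1.txt && del file1.txt.bak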

    Read the article

  • Block all content on a web page for people using an Adblock-type browser add-on/extension?

    - by Rudiger
    I wish to block ALL my content from any users using an ad-blocking browser extension (i.e. Adblock Plus for Firefox, Adthwart for Chrome). How can I achieve this? Is there a server-side solution? Client-side?

    Edit 1: This question covers the detection of ad-blocking browser extensions: http://stackoverflow.com/questions/1185067/detecting-adblocking-software - I'm concerned with post-detection action.

    Edit 2: A duplicate question was asked after mine, so I thought I'd link to it here: http://stackoverflow.com/questions/2002403/prevent-adblock-users-from-accessing-website
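
    For the post-detection action, the usual client-side pattern - sketched here, not tied to any particular blocker - is to insert a "bait" element with ad-like class names, check whether it got hidden, and then replace the page content. There is no reliable server-side equivalent, since the blocking happens entirely in the client.

        <script>
        window.addEventListener('load', function () {
          // bait element carrying class names that ad blockers typically hide
          var bait = document.createElement('div');
          bait.className = 'ad adsbox ad-banner';
          bait.style.height = '1px';
          document.body.appendChild(bait);

          setTimeout(function () {
            var blocked = (bait.offsetHeight === 0);
            document.body.removeChild(bait);
            if (blocked) {
              // swap out the content; the message is up to you
              document.body.innerHTML = '<p>Please disable your ad blocker to view this content.</p>';
            }
          }, 100);
        });
        </script>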

    Read the article

  • How to create a custom exception handler for custom controls or extension methods?

    - by Shantanu Gupta
    I am creating an extension method in C# to retrieve a value from a DataGridView. If a user gives a column name that does not exist, I want this function to throw an exception that can be handled at the place where the function is called. How can I achieve this?

        public static T Value<T>(this DataGridView dgv, int RowNo, string ColName)
        {
            if (!dgv.Columns.Contains(ColName))
                throw new ArgumentException("Column Name " + ColName + " does not exist in DataGridView.");

            return (T)Convert.ChangeType(dgv.Rows[RowNo].Cells[ColName].Value, typeof(T));
        }
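
    The exception thrown above already propagates to the caller like any other, so it can simply be caught at the call site. A short usage sketch (the grid and column names are illustrative):

        try
        {
            string city = myGrid.Value<string>(0, "City");
        }
        catch (ArgumentException ex)
        {
            MessageBox.Show(ex.Message);   // handle the bad column name where the call was made
        }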

    Read the article

  • How to write an extension method to check whether a string value is null or not?

    - by kumar
    Hi all,

        (LocalVariable)ABC.string(Name) = (IDataReader)datareader.GetString(0);

    This name value is coming from the database. What is happening is that if the value is null while reading, it throws an exception. I am manually adding an if condition here, but I don't want to write a manual check for all my variables. I am doing something like this now:

        string abc = (IDataReader)datareader.GetValue(0);
        if (abc == null)
            // assign null
        else
            // assign abc's value

    Is there something like an extension method we can write for this? Thanks
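
    A minimal sketch of such a helper, written as an extension method on IDataRecord (the method name is made up for illustration):

        using System.Data;

        public static class DataRecordExtensions
        {
            // returns the column value as a string, or null when the column is DBNull
            public static string GetStringOrNull(this IDataRecord reader, int ordinal)
            {
                return reader.IsDBNull(ordinal) ? null : reader.GetString(ordinal);
            }
        }

        // usage:
        // string name = datareader.GetStringOrNull(0);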

    Read the article

  • How to code a C# Extension method to turn a Domain Model object into an Interface object?

    - by Dr. Zim
    When you have a domain object that needs to display as an interface control, like a drop-down list, ifwdev suggested creating an extension method to add a .ToSelectList(). The originating object is a List of objects that have properties identical to the .Text and .Value properties of the drop-down list. Basically, it's a List of SelectList objects, just not of the same class name. I imagine you could use reflection to turn the domain object into an interface object. Anyone have any suggestions for C# code that could do this? The SelectList is an MVC drop-down list of SelectListItem. The idea of course is to do something like this in the view:

        <%= Html.DropDownList("City", (IEnumerable<SelectListItem>) ViewData["Cities"].ToSelectList()) %>
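
    One way to write it without reflection, sketched as a generic extension that takes value/text selectors (the names here are illustrative, not from the original post):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        public static class SelectListExtensions
        {
            // project any sequence into SelectListItems using the supplied selectors
            public static IEnumerable<SelectListItem> ToSelectList<T>(
                this IEnumerable<T> items,
                Func<T, string> value,
                Func<T, string> text)
            {
                return items.Select(x => new SelectListItem { Value = value(x), Text = text(x) });
            }
        }

        // e.g. in the controller (assuming the domain type has Value/Text-like properties):
        // ViewData["Cities"] = cities.ToSelectList(c => c.Value, c => c.Text);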

    Read the article

  • NetLogo BehaviorSpace: how to export each experiment's plots to a separate directory with a unique name? (Answer: use the pathdir extension)

    - by Marzy
    I have a save-plots function which I call in the final command of BehaviorSpace to export selected plots. I can create unique names for the plots so they don't overwrite each other; for example, my output file name for one of the plots looks like this: DeathRates, PS true, HCHA false 100,true.csv

        let FileNameDeathRates (word "D:/Results/" "DeathRates, " "PS " PS? ", HCHA " HCHA? MPS "," CV? ".csv")

    So far so good, but what I want is a directory named after the experiment, with all of that experiment's plots exported into it, for example:

        let DirName (word "D:/Results/" "PS " PS? ", HCHA " HCHA? MPS "," CV?)

    NetLogo already exports plots with proper names; I just want the directory name to be different. Of course, prior to the experiment the directory does not exist, so it should be created first, and I could not find a command to create a directory in NetLogo without the pathdir extension. Any idea on how I can do this? Thanks in advance ;)
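
    Since the title already points at the pathdir extension, here is a rough NetLogo sketch of that route - the primitive name pathdir:create is quoted from memory and should be checked against the extension's documentation:

        extensions [pathdir]

        to export-experiment-plots
          ;; build a per-experiment directory name, create it, then export into it
          let DirName (word "D:/Results/" "PS " PS? ", HCHA " HCHA? MPS "," CV?)
          pathdir:create DirName
          export-plot "DeathRates" (word DirName "/DeathRates.csv")
        end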

    Read the article

  • Clone LINQ to SQL object extension method throws an object-disposed exception

    - by gtas
    Hello all, I have this extension method for cloning my LINQ to SQL objects:

        public static T CloneObjectGraph<T>(this T obj) where T : class
        {
            var serializer = new DataContractSerializer(typeof(T), null, int.MaxValue, false, true, null);
            using (var ms = new System.IO.MemoryStream())
            {
                serializer.WriteObject(ms, obj);
                ms.Position = 0;
                return (T)serializer.ReadObject(ms);
            }
        }

    But when I clone objects that do not have all references loaded (querying with DataLoadOptions), it sometimes throws an object-disposed exception, even though I don't touch the references that are not loaded (null). E.g. I have a Customer with many references and I just need to carry the Address reference (EntityRef<>) in memory, and I don't load anything else. But when cloning the object, this exception forces me to load all the EntitySet<> references along with the Customer object, which might be too much and slow the application down. Any suggestions?

    Read the article

  • How can I detect if the Solution is initializing using the DTE in a Visual Studio extension?

    - by justin.m.chase
    I am using the DTE to track when projects are loaded and removed from the solution so that I can update a custom Test Explorer extension. I then trigger a container discovery process. But when the solution is first loaded it does an asynchronous load of some projects and fires a lot of Project Added events. What I would really like to do is to ignore all of these events until the solution is done loading. I can't quite figure out the order of events such that I know for sure that this initialization process has completed. It would be really nice to be able to just query the solution object and ask it. Does anyone know if there is a property or interface or event that I can use to determine this?

    Read the article

  • Global search box extension - how to make the browser start when a suggestion is picked?

    - by Ramps
    Hi, I'm implementing a global search box extension (something like the SearchableDictionary sample in the Android SDK). Everything works fine - suggestions are displayed properly. The problem is that I want the browser to start when the user picks a suggestion (each suggestion is a link). The columns of my cursor contain SearchManager.SUGGEST_COLUMN_INTENT_DATA, and I use that to pass the http link. My searchable XML sets the default intent action to android:searchSuggestIntentAction="android.intent.action.VIEW". But when the user hits the suggestion, my application is started instead of the browser. What am I missing? Regards!

    Read the article

  • How to make a file with a .pt extension use xml syntax highlighting and have Vim's snipmate plugin load pt.snippets?

    - by Somebody still uses you MS-DOS
    I have the following in my .vimrc:

        au BufNewFile,BufRead *.pt set filetype=xml

    This is needed because although I'm editing a file with a *.pt extension, it is indeed a valid XML file: by setting the filetype like this I get syntax highlighting. I'm using Vim's snipmate plugin and tried to create pt.snippets for specific needs, since these files are Zope Page Templates (ZPT with TAL). Now I have a problem: I don't want to put these snippets in xml.snippets, since they aren't really generic XML snippets, but my *.pt files have their filetype set to xml, so my pt snippets aren't loaded unless I run :set filetype=pt on the pt file in Vim - but then I lose syntax highlighting. I would like to have a pt file with XML syntax highlighting that can also load a pt.snippets file from snipmate. How can I do it? (I would like to avoid putting my snippets in a generic snippet file; they would be easier to maintain if they lived only in pt.snippets.)
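
    One approach that is commonly suggested for this (worth verifying against your snipMate version) is a dotted filetype, which keeps the xml syntax while adding a second component that snipMate can use to find pt.snippets:

        " in .vimrc: xml drives syntax highlighting, pt is only there for snippet lookup
        au BufNewFile,BufRead *.pt set filetype=xml.pt

    With filetype=xml.pt, Vim still loads syntax/xml.vim (there is no syntax/pt.vim to conflict), and snipMate looks up snippets for each dot-separated component, so both xml.snippets and pt.snippets apply.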

    Read the article
