Search Results

Search found 28760 results on 1151 pages for 'search folder'.


  • Monitor file selection in explorer (like clipboard monitoring) in C#

    - by Christian
    Hi, I am trying to create a little helper application; one scenario is a "file duplication finder". What I want to do is this: I start my C# .NET app and it gives me an empty list. I start the normal Windows Explorer and select a file in some folder, and the C# app tells me stuff about this file (e.g. duplicates). How can I monitor the currently selected file in the "normal" Windows Explorer instance? Do I have to start the instance using .NET to have a handle of the process? Do I need a handle, or is there some "global hook" I can monitor inside C#? It's a little bit like monitoring the clipboard, but not exactly the same... Any help is appreciated (if you don't have code, just point me to the right interops, dlls or help pages :-) Thanks, Chris

    EDIT 1 (current source, thanks to Mattias):

        using SHDocVw;
        using Shell32;

        public static void ListExplorerWindows()
        {
            foreach (InternetExplorer ie in new ShellWindowsClass())
                DebugExplorerInstance(ie);
        }

        public static void DebugExplorerInstance(InternetExplorer instance)
        {
            Debug.WriteLine("DebugExplorerInstance ".PadRight(30, '='));
            Debug.WriteLine("FullName " + instance.FullName);
            Debug.WriteLine("AdressBar " + instance.AddressBar);
            var doc = instance.Document as IShellFolderViewDual;
            if (doc != null)
            {
                Debug.WriteLine(doc.Folder.Title);
                foreach (FolderItem item in doc.SelectedItems())
                {
                    Debug.WriteLine(item.Path);
                }
            }
        }
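    A rough way to drive the snippet above continuously is to poll it on a timer. The sketch below is only an illustration of that idea (the 500 ms interval, class name and change detection are assumptions, not part of the original question, and COM apartment/threading details are glossed over):

        using System;
        using System.Diagnostics;
        using SHDocVw;
        using Shell32;

        class SelectionMonitor
        {
            // Hypothetical polling loop: inspect the Explorer selection every 500 ms.
            private readonly System.Timers.Timer timer = new System.Timers.Timer(500);
            private string lastPath;

            public void Start()
            {
                timer.Elapsed += (s, e) => CheckSelection();
                timer.Start();
            }

            private void CheckSelection()
            {
                foreach (InternetExplorer ie in new ShellWindowsClass())
                {
                    var doc = ie.Document as IShellFolderViewDual;
                    if (doc == null) continue;                 // not a file-system Explorer window
                    foreach (FolderItem item in doc.SelectedItems())
                    {
                        if (item.Path != lastPath)             // selection changed since the last poll
                        {
                            lastPath = item.Path;
                            Debug.WriteLine("Selected: " + lastPath);
                        }
                    }
                }
            }
        }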

    Read the article

  • fitnesse test framework, arbitrary properties for test and queries/test runs based on them?

    - by Marcel
    Hi, our testers have the requirement to store multiple properties for a test that are not present in the "properties", e.g. they want to store priority, a description (not in the wiki page itself) and so on. They don't want to use the tagging mechanism. Is there a way to store any kind of new XML node in the properties.xml for a test? These properties should then be used to:

    - query the fields via the search screen
    - run tests based on the "SuiteResponder": ?suite=xxx&TAGx=abc&TAGy=cde
    - be returned by the "?properties" responder
    - appear in the test history of the test run

    In essence they want to store any kind of "meta" information in the properties.xml and work with it in all kinds of ways: search, run, etc. Does anybody here know if there is already something available in that direction? If not, I think we have to "pimp" these features into FitNesse to make our testers happy. Thanks a lot, any help appreciated. Marcel

    PS: I've also posted the question in the Yahoo FitNesse group.

    Read the article

  • NSFetchedResultsController: changing predicate not working?

    - by icerelic
    Hi, I'm writing an app with two tables on one screen. The left table is a list of folders and the right table shows a list of files. When a row is tapped on the left, the right table displays the files belonging to that folder. I'm using Core Data for storage. When the selection of folder changes, the fetch predicate of the right table's NSFetchedResultsController is changed, a new fetch is performed, and the table data is reloaded. I used the following code snippet:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"list = %@", self.list];
        [fetchedResultsController.fetchRequest setPredicate:predicate];
        NSError *error = nil;
        if (![[self fetchedResultsController] performFetch:&error]) {
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }
        [table reloadData];

    However the fetch results are still the same. I've NSLog'ed "predicate" before and after the fetch, and it was correct with updated information. The fetch results stay the same as the initial fetch (when the view is loaded). I'm not very familiar with the way Core Data fetches objects (is there a caching system?), but I've done similar things before (changing predicates, re-fetching data, and refreshing the table) with single table views and everything went well. If someone could give me a hint I would be very appreciative. Thanks in advance.

    Read the article

  • GoDaddy Subdomain Hosting Issue/Question with Disk Access (C#/ASP.NET 3.5)

    - by Vogel
    This isn't a very complicated scenario really, but as I start to type out the problem I'm realizing how convoluted it can become textually. Let me try and be very clear.

    First, the set up: I have a C#/ASP.NET web application that is publicly facing on my main domain (www), let's call it www.mysite.com. Nothing fancy, just a front-end that connects to SQL to display records. Then, I have a second C#/ASP.NET web application that is secured using forms authentication, running on a subdomain, let's call it admin.mysite.com. This is a very light-weight CMS system to administer the public site.

    Now, the problem: both of these sites run fine for basic tasks; however, my problem arises when I try to gain access to the file system for uploading. GoDaddy requires subdomains to run as virtual directories under the main application in IIS (so the subdomains actually resolve/redirect to www.mysite.com/admin when you type in admin.mysite.com), but because of this I am unable to write to my website root from the subfolder.

    Let me explain a little more: the CMS system (running as a virtual directory) gives the admin the ability to upload photos for display on the main site, the target folder of which is www.mysite.com/images. When attempting disk access from the root app, I am able to write to the virtual directory, but cannot do the opposite -- that is, write to the root from the virtual directory; I get security violations. If I can only upload to the /admin/ virtual directory, the entire point is moot because it's a secured folder that the public can't see!

    The only solution I can think of is to upload the files to the /admin/ virtual directory, then call a URL in the root that moves files from /admin/ back to the root, but that is entirely ghetto. I hope this post makes sense. Anyone else experience anything like this? The bottom line is that it seems virtual directories ONLY have access to themselves, and not their parent directories, no matter what credentials are used. Thanks!

    Read the article

  • Deliver large volume of automatic notification emails without being throttled

    - by jack
    I think most websites have certain needs to deliver emails to their users, e.g. account activation emails, private message notifications, comment notifications, etc. Take my site as an example: among 5,000 registered users, about 1,500 signed up using a gmail.com box, 1,000 using yahoo.com and another 1,000 using hotmail.com. Every now and then I receive complaints from users that they never receive the account activation email; sometimes it goes to the junk folder, sometimes it just does not show up in any folder. Maybe it's kind of being "throttled" when the maximum number of messages sent from the same IP address to gmail.com/yahoo.com/hotmail.com during a certain period of time is exceeded? I'm using Postfix and there seems to be no problem with the configuration, since 90% of emails can be delivered to gmail.com/yahoo.com/hotmail.com boxes successfully. I noticed Twitter is delivering millions of such automatic notifications to its users but I never missed a message from them. How do they achieve this? Is there a permanent whitelist on gmail.com, yahoo.com or hotmail.com? Thanks in advance.

    Read the article

  • Simple ASP.NET MVC views without writing a controller

    - by Jake Stevenson
    We're building a site that will have very minimal code, it's mostly just going to be a bunch of static pages served up. I know over time that will change and we'll want to swap in more dynamic information, so I've decided to go ahead and build a web application using ASP.NET MVC2 and the Spark view engine. There will be a couple of controllers that will have to do actual work (like in the /products area), but most of it will be static. I want my designer to be able to build and modify the site without having to ask me to write a new controller or route every time they decide to add or move a page. So if he wants to add a "http://mysite.com/News" page he can just create a "News" folder under Views and put an index.spark page within it. Then later if he decides he wants a /News/Community page, he can drop a community.spark file within that folder and have it work. I'm able to have a view without a specific action by making my controllers override HandleUnknownAction, but I still have to create a controller for each of these folders. It seems silly to have to add an empty controller and recompile every time they decide to add an area to the site. Is there any way to make this easier, so I only have to write a controller and recompile if there's actual logic to be done? Some sort of "master" controller that will handle any requests where there was no specific controller defined?
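    One common way to get this (a hedged sketch, not from the question: the route, controller name and view-path convention below are assumptions, and it presumes the view engine accepts app-relative view paths) is to register explicit routes for the "real" controllers and let a single catch-all route map every other URL straight onto a Spark view:

        using System.Web.Mvc;
        using System.Web.Routing;

        // One generic controller that maps the URL path onto a view file on disk.
        public class StaticPageController : Controller
        {
            public ActionResult Show(string path)
            {
                // "/News/Community" -> "~/Views/News/Community.spark" (assumed naming convention;
                // no existence check is shown, so a missing view surfaces as an error).
                return View("~/Views/" + path.Trim('/') + ".spark");
            }
        }

        // Inside RegisterRoutes in Global.asax.cs: concrete routes first, catch-all last.
        routes.MapRoute(
            "Products",
            "products/{action}/{id}",
            new { controller = "Products", action = "Index", id = UrlParameter.Optional });

        routes.MapRoute(
            "StaticPages",
            "{*path}",                                   // matches /News, /News/Community, ...
            new { controller = "StaticPage", action = "Show", path = "Home/Index" });

    With something like this in place, the designer can add or move .spark files under Views without a recompile; only controllers with actual logic (like Products) need code.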

    Read the article

  • Accessing Virtual Host from outside LAN

    - by Ray
    I'm setting up a web development platform that makes it as easy as possible to write and test all code on my local machine, and sync this with my web server. I set up several virtual hosts so that I can access my projects by typing in "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network. I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now when I try to access my projects by typing in my DynDNS URL, I get the 403 Forbidden error message, "You don't have permission to access / on this server."

    To set up my virtual hosts, I edited two files: hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server name to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www/ladybug"
            ServerName ladybug
            ErrorLog "logs/your_own-error.log"
            CustomLog "logs/your_own-access.log" common
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" common
        </VirtualHost>

    Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type in http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!

    Read the article

  • How to write a custom solution using a python package, modules etc

    - by morpheous
    I am writing a package foobar which consists of the modules alice, bob, charles and david. From my understanding of Python packages and modules, this means I will create a folder foobar, with the following subdirectories and files (please correct me if I am wrong):

        foobar/
            __init__.py
            alice/alice.py
            bob/bob.py
            charles/charles.py
            david/david.py

    The package should be executable, so that in addition to making the modules alice, bob etc. available as 'libraries', I should also be able to use foobar in a script like this:

        python foobar --args=someargs

    Question 1: Can a package be made executable and used in a script like I described above?

    Question 2: The various modules will use code that I want to refactor into a common library. Does that mean creating a new subdirectory 'foobar/common' and placing common.py in that folder?

    Question 3: How will the modules import the common module? Is it 'from foobar import common', or can I not use this since these modules are part of the package?

    Question 4: I want to add logic for when the foobar package is being used in a script (assuming this can be done - I have only seen it done for modules). The code used is something like:

        if __name__ == "__main__":
            dosomething()

    Where (in which file) would I put this logic?

    Read the article

  • Execute javascript on IIS server

    - by James Westgate
    I have the following situation. A customer uses JavaScript with jQuery to create a complex website. We would like to use JavaScript and jQuery on the server (IIS) for the following reasons:

    - Skills transfer: we would like to use JavaScript and jQuery on the server and not have to use e.g. VBScript / classic ASP. The .NET framework/Java etc. is ruled out because of this.
    - Improved options for search/accessibility: we would like to be able to use jQuery as a templating system, but this isn't viable for search engines and users with JS turned off, unless we can selectively run this code on the server.
    - There is significant investment in IIS and Windows Server, so changing that is not an option.

    I know you can run JScript on IIS using Windows Script Host, but am unsure of the scalability and the process surrounding this. I am also unsure whether this would have access to the DOM. Here is a diagram that hopefully explains the situation. I was wondering if anyone has done anything similar?

    Read the article

  • What does the subversion error "Could not read status line" mean?

    - by Jergason
    Exact duplicate: SVN: Could not read status line: connection was closed by server. This is not an exact duplicate: the other question was asking about getting the error in a specific situation, and the answer was vague at best.

    This is a fairly basic question, but it is driving me nuts. I have set up a brand new repository at beanstalk.com. They give me the URL, http://.svn.beanstalkapp.com/blog. They also automatically create the tag, trunk and branches folders in the repository. I have checked out the trunk folder and used svn add to add the new file. I am trying to do my first commit, but I get this error:

        Commit failed (details follow):
        CHECKOUT of '/foo/!svn/bln/1': Could not read status line: connection was closed by server.
        (http://user_name@my_name.svn.beanstalkapp.com)

    What does this mean, and what causes it? I have googled for a definition of what "Could not read status line" means, but was unable to find anything explaining it.

    Edit: I was getting this error while trying to manipulate my repository from behind a firewall. I still don't know what was causing it, but I don't have this problem at home. Strangeness.

    Read the article

  • Reinstalling assemblies for new user after reboot, why?

    - by Marko Benko
    Hi - I have an InstallShield InstallScript MSI aka "Full" setup and an InstallShield Basic MSI aka "Patch" setup. The full setup copies some files to the GAC, some to a folder, etc. The patch setup replaces some files in the GAC and some in the installation folder. How ingenious, isn't it? :) Also, the patch setup is designed so that none of its actions are visible after installation. I'm changing some properties in sequences for that (damn, can't remember which ones, will look it up). When the patch is applied, the application works well (administrator user), but when rebooting the computer and logging in as a different (just domain, not admin) user, the application doesn't work. In the trace I have found an error line stating that installation of one of the components (to be precise, the component which puts files into the GAC) failed. It says that there is no installation source for it... Why is this so? The setup is set to install for everyone, the patch just replaces some files, so why does it need to "install" something when a new user logs in? Thanks, Marko

    Read the article

  • 550 Error When I try to get the size of a file on an FTP

    - by Eric
    I'm trying to use an FtpWebRequest to get the size of a file on a company FTP. Yet whenever I try to get the response, an exception is thrown. See the error details in the catch block in the code below.

        string uri = "ftp://ftp.domain.com/folder/folder/file.xxx";
        FtpWebRequest sizeReq = (FtpWebRequest)WebRequest.Create(uri);
        sizeReq.Method = WebRequestMethods.Ftp.GetFileSize;
        sizeReq.Credentials = cred;
        sizeReq.UsePassive = proj.ServerConfig.UsePassive; //true
        sizeReq.UseBinary = proj.ServerConfig.UseBinary;   //true
        sizeReq.KeepAlive = proj.ServerConfig.KeepAlive;   //false

        long size;
        try
        {
            // Exception thrown here when I try to get the response
            using (FtpWebResponse fileSizeResponse = (FtpWebResponse)sizeReq.GetResponse())
            {
                size = fileSizeResponse.ContentLength;
            }
        }
        catch (WebException exp)
        {
            FtpWebResponse resp = (FtpWebResponse)exp.Response;
            MessageBox.Show(exp.Message);                        // "The remote server returned an error: (550) File unavailable (e.g., file not found, no access)."
            MessageBox.Show(exp.Status.ToString());              // ProtocolError
            MessageBox.Show(resp.StatusCode.ToString());         // ActionNotTakenFileUnavailable
            MessageBox.Show(resp.StatusDescription.ToString());  // "550 SIZE: Operation not permitted\r\n"
        }

    This code does work, however, when connected to my personal FTP. The StatusDescription of the response says that the operation is "not permitted". Could it be that my office FTP just won't allow querying a file size? I've also tried listing the directory details, which will return the size, and have noticed that my office FTP reports the directory details in a different format than my personal FTP. Maybe this is the problem?

        //work ftp ListDirectoryDetails
        -rw-r--r-- 1 (?) user 12345 Nov 16 20:28 some file name.xxx

        //personal ftp ListDirectoryDetails
        -rw-r--r-- 1 user user 12345 Mar 13 some file name.xxx

    From reading this blog post I think that my personal FTP is returning a Unix-formatted response, but my work one is returning a Windows-formatted response. Maybe this is unrelated, but I thought I'd mention it.
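    If the server genuinely refuses SIZE, one possible fallback (an illustrative sketch only: it assumes the server returns a single Unix-style "ls -l" line for the file, which is not the Windows-style listing mentioned above, and the helper name is made up) is to request ListDirectoryDetails for the file and parse the size column out of the listing:

        using System;
        using System.IO;
        using System.Net;

        static class FtpSizeFallback
        {
            // Sketch: list the single file and read the fifth whitespace-separated column,
            // which is the size in a typical Unix "ls -l" style listing.
            public static long GetFileSize(string uri, ICredentials cred)
            {
                var req = (FtpWebRequest)WebRequest.Create(uri);
                req.Method = WebRequestMethods.Ftp.ListDirectoryDetails;
                req.Credentials = cred;

                using (var resp = (FtpWebResponse)req.GetResponse())
                using (var reader = new StreamReader(resp.GetResponseStream()))
                {
                    // e.g. "-rw-r--r-- 1 user user 12345 Mar 13 some file name.xxx"
                    string line = reader.ReadLine();
                    string[] parts = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                    return long.Parse(parts[4]);
                }
            }
        }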

    Read the article

  • How can I extend this SQL query to find the k nearest neighbors?

    - by Smigs
    I have a database full of two-dimensional data - points on a map. Each record has a field of the geometry type. What I need to be able to do is pass a point to a stored procedure which returns the k nearest points (k would also be passed to the sproc, but that's easy). I've found a query at http://blogs.msdn.com/isaac/archive/2008/10/23/nearest-neighbors.aspx which gets the single nearest neighbour, but I can't figure out how to extend it to find the k nearest neighbours. This is the current query - T is the table, g is the geometry field, @x is the point to search around, Numbers is a table with integers 1 to n:

        DECLARE @start FLOAT = 1000;

        WITH NearestPoints AS
        (
            SELECT TOP(1) WITH TIES *, T.g.STDistance(@x) AS dist
            FROM Numbers JOIN T WITH(INDEX(spatial_index))
            ON T.g.STDistance(@x) < @start*POWER(2,Numbers.n)
            ORDER BY n
        )
        SELECT TOP(1) * FROM NearestPoints
        ORDER BY n, dist

    The inner query selects the nearest non-empty region and the outer query then selects the top result from that region; the outer query can easily be changed to (e.g.) SELECT TOP(20), but if the nearest region only contains one result, you're stuck with that. I figure I probably need to recursively search for the first region containing k records, but without using a table variable (which would cause maintenance problems, as you have to create the table structure and it's liable to change - there are lots of fields), I can't see how.

    Read the article

  • Linker flags for one library break loading of another

    - by trevrosen
    I'm trying to use FMOD and HTTPriot in the same app. FMOD works fine until I add in linker flags for HTTPriot, at which point I get a bunch of linking errors wherein FMOD is complaining about undefined symbols. In other words, adding in linker flags for HTTPriot seems to break the loading of FMOD's library. These are the kinds of errors I'm getting, all coming during the linking phase of my build:

        Undefined symbols:
          "_FMOD_Sound_Lock", referenced from:
              -[FMODEngine recordedSoundAsNSData] in FMODEngine.o
              -[FMODEngine writeRecordingToDiskWithName:] in FMODEngine.o
          "_FMOD_MusicSystem_PrepareCue", referenced from:
              -[FMODEngine addCue:] in FMODEngine.o

    These are the linker flags for HTTPriot:

        -lhttpriot -lxml2 -ObjC -all_load

    I added those as well as a path to the HTTPriot SDK per the instructions here: http://labratrevenge.com/httpriot/docs/iphone-setup.html

    I was hoping someone could enlighten me on why adding linker flags for one library might cause a failure of another to load. If I DON'T have these flags in, HTTPriot and FMOD both work fine on the simulator, but HTTPriot has runtime errors on the device, I assume because its libraries are not linked. FMOD works fine on the device though. I placed header search paths and library search paths in my build settings in order for Xcode to find FMOD. That seemed to be OK until I tried adding these HTTPriot linker flags. I also tried adding a linker flag for the FMOD library (-lfmodex), but I get the same errors as I do without it.

    Read the article

  • Query notation for the sitecore 'source' field in template builder

    - by M.R.
    I am trying to set the source field of a template using the query notation (or XPath - whichever works), but neither of them seems to be working. My content tree is a multisite content tree:

        France
        --Page 1
        ----Page1A
        -------Page1AA
        --Page 2
        --Page 3
        --METADATA
        ----Regions
        US
        --Page 1
        ----Page1A
        -------Page1AA
        --Page 2
        --Page 3
        --METADATA
        ----Regions

    Each site has its own METADATA folder, and I want it so that when adding a page inside each of the main country nodes, the values reflect whatever is in the METADATA of that site. I have two different fields for now - a droplink and a treelistex field. So I thought I can just get the parent item that is a country site, and get the metadata folder for that. When I put the following query in both fields, I get different results:

        query:./ancestor::*[@@templatename='CountryHome']/METADATA/Regions/*

    For the droplink field, I get only the first Region (one item). For the treelistex field, I get the entire content tree. I then tried to modify the query a little bit and took the 'query' notation out:

        ./ancestor::*[@@templatename='CountryHome']/METADATA/Regions/*

    If I go to the developer center/XPath builder and set the context node to any item underneath the main country site, it returns exactly what I need, but when I put this in the source, I get the entire content tree in both cases. Help!

    Read the article

  • How to combine twill and python into one code that could be run on "Google App Engine"?

    - by brilliant
    Hello everybody! I have installed twill on my computer (having previously installed Python 2.5) and have been using it recently. Python is installed on disk C on my computer:

        C:\Python25

    And the twill folder ("twill-0.9") is located here:

        E:\tmp\twill-0.9

    Here is a script that I've been using in twill:

        go "some website's sign-in page URL"
        formvalue 2 userid "my login"
        formvalue 2 pass "my password"
        submit
        go "URL of some other page from that website"
        save_html result.txt

    This script helps me to log in to one website, in which I have an account, record the HTML code of some other page of that website (that I can access only after logging in), and store it in a file named "result.txt". (Of course, before using this script I first need to replace "my login" with my real login, "my password" with my real password, "some website's sign-in page URL" and "URL of some other page from that website" with real URLs of that website, and the number 2 with the number of the form that is used as the sign-in form on that website's log-in page.)

    I store this script in a "test.twill" file that is located in my "twill-0.9" folder:

        E:\tmp\twill-0.9\test.twill

    I run this file from my command prompt:

        python twill-sh test.twill

    Now, I have also installed the "Google App Engine SDK" from "Google App Engine" and have been using it for a while too. For example, I've been using this code:

        import hashlib

        m = hashlib.md5()
        m.update("Nobody inspects")
        m.update(" the spammish repetition ")
        print m.hexdigest()

    This code transforms the phrase "Nobody inspects the spammish repetition" into an md5 digest. Now, how can I put these two pieces of code together into one Python script that I could run on "Google App Engine"? Let's say I want my code to log in to a website from "Google App Engine", go to another page on that website, record its HTML code (that's what my twill script does) and then transform this HTML code into its md5 digest (that's what my second code does). So, how can I combine those two codes into one Python program? I guess it should be done somehow by importing twill, but how can it be done? Can Python code - the code that is being run by "Google App Engine" - import twill from somewhere on the internet? Or, perhaps, twill is already installed on "Google App Engine"?

    Read the article

  • WPF XAML references not resolved via myAssembly.GetReferencedAssemblies()

    - by WPF-it
    I have a WPF container application (with a ContentControl host) and a containee application (UserControl). Both are oblivious of each other. A single XML config file holds the string dllpath of the containee's DLL and the full namespace name of the ViewModel class inside the containee. Generic code in the container resolves the containee's assembly (Assembly.LoadFrom(dllpath)) and creates the viewmodel's instance using Activator.CreateInstance(vmType). This viewmodel is hosted inside the ContentControl of the container, and the relevant viewmodel-specific ResourceDictionary is added to ContentControl.Resources.MergedDictionaries of the container's content control, so the view loads fine.

    Now my containee has to host the WPF DataGrid using an assembly reference to WPFToolkit.dll from my local C:\Lib folder. The Copy Local reference to WPFToolkit.dll is added to the .csproj file of the containee's project and it is only referred to in the UserControl XAML using its XAML namespace. This way the bin\debug folder in my containee application gets WPFToolkit.dll copied in.

    XAML:

        xmlns:Controls="clr-namespace:Microsoft.Windows.Controls;assembly=WPFToolkit"

        <Controls:DataGrid ItemsSource="{Binding AssetList}" ... />

    Issue: the moment the ViewModel (i.e. the containee's UserControl) tries to load itself I get this error:

        "Cannot find type 'Microsoft.Windows.Controls.DataGrid'. The assembly used when compiling
        might be different than that used when loading and the type is missing."

    Hence I tried to load the referenced assemblies of the containee's assembly (myAssembly.GetReferencedAssemblies()) before the viewmodel is hosted. But WPFToolkit isn't in that list of assemblies! The strange thing is I have another DLL referred to, called Logger.dll, in the containee codebase, but that one is used from C# code-behind, so I get its reference correctly resolved in myAssembly.GetReferencedAssemblies(). So does that mean BAML references of assemblies are never resolvable by GetReferencedAssemblies?
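    One workaround pattern worth noting here (a sketch under assumptions, not the container's actual code: the helper name and probing logic are invented) is to hook AppDomain.AssemblyResolve in the container before instantiating the viewmodel, so that when the containee's BAML asks for WPFToolkit at load time it can be found next to the containee's DLL:

        using System;
        using System.IO;
        using System.Reflection;

        static class ContaineeLoader
        {
            // Hypothetical helper: load the containee and make its private references resolvable.
            public static object CreateViewModel(string dllPath, string vmTypeName)
            {
                string containeeDir = Path.GetDirectoryName(dllPath);

                // When BAML inside the containee requests an assembly the container itself
                // does not reference (e.g. WPFToolkit), probe the containee's folder for it.
                AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
                {
                    string name = new AssemblyName(args.Name).Name;
                    string candidate = Path.Combine(containeeDir, name + ".dll");
                    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
                };

                Assembly containee = Assembly.LoadFrom(dllPath);
                Type vmType = containee.GetType(vmTypeName, true);
                return Activator.CreateInstance(vmType);
            }
        }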

    Read the article

  • problem with MANIFEST.MF in jar

    - by dhananjay
    I have created my jar file in the following folder:

        /usr/local/bin/niidle.jar

    And I have one jar file which is in the following folder:

        /Projects/EnwelibDatedOct13/Niidle/lib/hector-0.6.0-17.jar

    This file 'hector-0.6.0-17.jar' has to be included in the MANIFEST.MF of the jar. When I specify the class path in MANIFEST.MF as follows:

        Manifest-Version: 1.0
        Main-Class: com.ensarm.niidle.web.scraper.NiidleScrapeManager
        Class-Path: /Projects/EnwelibDatedOct13/Niidle/lib/hector-0.6.0-17.jar

    and run it using the command:

        java -jar /usr/local/bin/niidle.jar

    it works properly. But I don't want to give the full Class-Path; I want to give the Class-Path as follows:

        Manifest-Version: 1.0
        Main-Class: com.ensarm.niidle.web.scraper.NiidleScrapeManager
        Class-Path: lib/hector-0.6.0-17.jar

    And when I run this using the command:

        java -jar /usr/local/bin/niidle.jar

    it shows this error message:

        Exception in thread "main" java.lang.NoClassDefFoundError: me/prettyprint/hector/api/Serializer
            at com.ensarm.niidle.web.scraper.NiidleScrapeManager.main(NiidleScrapeManager.java:21)
        Caused by: java.lang.ClassNotFoundException: me.prettyprint.hector.api.Serializer
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
            ... 1 more

    Please tell me the solution for that...

    Read the article

  • Will rel=canonical break site: queries ?

    - by Justin Grant
    Our company publishes our software product's documentation using a custom-built content management system with a dynamic URL namespace like this:

        http://ourproduct.com/documentation/version/pageid

    where "version" is the version number to which the documentation applies, and "pageid" is a unique string which identifies that page in our back-end content management system. For example, if content (e.g. a page about configuration best practices) is unchanged from version 3.0 to 4.0 of our product, it'd be reachable by two different URLs:

        http://ourproduct.com/documentation/3.0/configuration-best-practices
        http://ourproduct.com/documentation/4.0/configuration-best-practices

    This URL scheme allows us to scope Google search results to see only documentation for a particular product version, like this:

        configuration site:ourproduct.com/documentation/4.0

    But when the user is searching across all versions, we don't want Google to arbitrarily choose one of the URLs to show in results. Instead, we always want the latest version to show up. Hence our planned use of rel=canonical, so we can prescriptively tell Google which URL we want to show up if multiple versions are being searched. (Users who do oddball things like searching 2 versions but not all of them are a corner case, so we don't care which version(s) show up in that case -- the primary use-cases we care about are searching one version or searching all versions.)

    But what will happen to scoped searches if we do this? If my rel=canonical URL points to version 4.0, but my search is scoped to 3.0, will Google return a result? Even if you don't know the answer offhand, do you know a site which uses rel=canonical to redirect across folders in a URL namespace? If so, I could run a few Google searches and figure out the answer.

    Read the article

  • aspnet_compiler -fixednames does not work?

    - by Terrence
    I am unable to get the -fixednames switch to create DLLs for the .cs code-behind files. The files in the bin folder are compiled aspx pages, but the code-behind files are all compiled into one large websitename.dll file. Here is my command with switches:

        aspnet_compiler -v / -p E:\Source\DotNet4\mysolution\website -f -d -fixednames E:\Source\DotNet4\CompiledWebSite

    This produces many files in the bin folder:

        website.dll and website.pdb (contains code-behind)
        myform1.aspx.643c7876.dll (compiled aspx layout ui)

    I have tested this over and over to make sure I am not missing anything. The test is: place a label on myform1.aspx, and in the code-behind populate the label with some text. Compile the website with the above switches and deploy it. Make a change to the myform1 code-behind and change the label text. Compile and deploy only the myform1.aspx.643c7876.dll to the website. Result: the label is still the same. Now deploy website.dll and the .pdb, and the label changes. Can anyone tell me how to get -fixednames to create single DLLs for the code-behind?

    Read the article

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store a BLOB (which contains a JSON object) and an ID (for this JSON object). The JSON object contains a lot of different information, say, "city:Los Angeles" and "state:California". There are about 500k such records for now, but they are growing. And each JSON object is quite big.

    My goal is to do searches (real-time) in the MySQL database. Say, I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco". I want to utilize Hadoop for the task. My idea is that there will be a "job" which takes chunks of, say, 100 records (rows) from MySQL, verifies them according to the given search criteria, and returns those IDs which qualify. Pros/cons?

    I understand that one might think I should just use plain SQL for this, but the thing is that the JSON object structure is pretty "heavy"; if I put it into SQL schemas, there will be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to make sure it utilizes the indexes, otherwise a full scan literally is a pain. And with such a structure the only way "up" is vertical scaling. But I am not sure that's the best option for me, as I see how the JSON objects will grow (the data structure), and I see that their number will grow too. :-)

    Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.

    Read the article

  • J2ME cache issue

    - by kiennt
    I have to write a J2ME app to retrieve images from a server and display them on a mobile phone. I have seen and tested that Snaptu has a mechanism to cache images, even with 100 images (both normal size and zoom size). I wonder how they can do that? I thought that those guys use RMS to save the image stream as data. But when I check the working folder of the emulator (I use Windows XP and the Sun Wireless Toolkit 3.0; the emulator device I use to run my program is CLDC Device 1, and my working folder is C:\Document And Settings\Administrator\javame-sdk\3.0\work\6\appdb), I see some .db files. When I delete these files, I can still view the cached images in my emulator???? I also thought that those guys use heap memory to save images. But that is not correct, because when I set the device memory limit to 2MB (like some mobile phones) and load and view 100 images in zoom size, it didn't cause an OutOfMemory error. It's so weird. Can anyone help me? Thanks

    Read the article

  • MVC.NET custom validator is not working

    - by IvanMushketyk
    I want to write a custom validator for the MVC.NET framework that checks whether the entered date is in the future. To do this, I wrote the following class:

        [AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = true)]
        public sealed class InTheFutureAttribute : ValidationAttribute, IClientValidatable
        {
            private const string DefaultErrorMessage = "{0} should be date in the future";

            public InTheFutureAttribute() : base(DefaultErrorMessage) { }

            public override string FormatErrorMessage(string name)
            {
                return string.Format(ErrorMessageString, name);
            }

            public override bool IsValid(object value)
            {
                DateTime time = (DateTime)value;
                if (time < DateTime.Now)
                {
                    return false;
                }
                return true;
            }

            public IEnumerable<ModelClientValidationRule> GetClientValidationRules(ModelMetadata metadata, ControllerContext context)
            {
                var clientValidationRule = new ModelClientValidationRule()
                {
                    ErrorMessage = FormatErrorMessage(metadata.GetDisplayName()),
                    ValidationType = "wrongvalue"
                };

                return new[] { clientValidationRule };
            }
        }

    and added the attribute to the field that I want to check. On the view page I create the input field in the following way:

        <div class="editor-label-search">
            @Html.LabelFor(model => model.checkIn)
        </div>
        <div class="editor-field-search-date">
            @Html.EditorFor(model => model.checkIn)
            <script type="text/javascript">
                $(document).ready(function () {
                    $('#checkIn').datepicker({
                        showOn: 'button',
                        buttonImage: '/Content/images/calendar.gif',
                        duration: 0,
                        dateFormat: 'dd/mm/yy'
                    });
                });
            </script>
            @Html.ValidationMessageFor(model => model.checkIn)
        </div>

    When I submit the form to the controller action that takes the model with the checked attribute, the code in my validator is called and it returns false, but instead of displaying an error it just calls my controller's action and passes the invalid model to it. Am I doing something wrong? How can I fix it? Thank you in advance.
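    Note that server-side validation in MVC fills ModelState but still invokes the action, so the action has to check it and redisplay the view for the message to appear. The snippet below is a hedged sketch of that pattern (the action, model type and view names are assumptions, not from the question):

        // Hypothetical POST action: when InTheFutureAttribute (or any validator) fails,
        // re-render the form so ValidationMessageFor can display the error.
        [HttpPost]
        public ActionResult Search(SearchModel model)
        {
            if (!ModelState.IsValid)
            {
                return View(model);                   // invalid: show the form again with messages
            }

            return RedirectToAction("Results");       // valid: continue with the normal flow
        }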

    Read the article

  • Model login constraints based on time

    - by DaDaDom
    Good morning, for an existing web application I need to implement "time based login constraints". This means that for each user, later maybe each group, I can define timeslots when they are (not) allowed to log in to the system. As all data for the application is stored in database tables, I need to model this idea that way. My first approach, which I will try to explain here:

    - Create a tree of login constraints (called "timeslots") with the main "categories", like "workday", "weekend", "public holiday", etc. on the top level, which are in a "sorted" order (meaning "public holiday" has a higher priority than "weekday").
    - For each top-level node create subnodes, which have a finer timespan, like "monday", "tuesday", ... Below that, create an "hour" level: 0, 1, 2, ..., 23. No further details are necessary.
    - Set every member to "allowed" by default.
    - For every member of the system create a 1:n relationship member:timeslots which defines constraints, e.g. a member A may have A:monday-forbidden and A:tuesday-forbidden.
    - Do a depth-first search at every login and check if the member has a constraint.

    Why a depth-first search? Well, I thought that it may be that a member has the rules:

        A:monday -> forbidden, A:monday-10 -> allowed, A:monday-11 -> allowed

    So a login on Monday at 12:30 would fail, but one at 10:30 would succeed. For performance reasons I could break the relational database paradigm and set a flag for every entry in the member-to-timeslots table which is set to true if the member has information set for "finer" timeslots, but that's a second step. Is this model in principle a good idea? Are there existing models? Thanks.
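    To make the precedence idea concrete, here is a rough C# sketch of resolving the most specific matching rule for a login attempt (purely illustrative assumptions: the rule shape, class and property names are invented, the category level is ignored, and no database access is shown):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical shape of one constraint row: the more fields that are filled in,
        // the more specific (and therefore higher-priority) the rule is.
        class TimeslotRule
        {
            public DayOfWeek? Day { get; set; }      // null = applies to any day
            public int? Hour { get; set; }           // null = applies to the whole day
            public bool Allowed { get; set; }

            public int Specificity
            {
                get { return (Day.HasValue ? 1 : 0) + (Hour.HasValue ? 1 : 0); }
            }

            public bool Matches(DateTime t)
            {
                return (!Day.HasValue || Day == t.DayOfWeek)
                    && (!Hour.HasValue || Hour == t.Hour);
            }
        }

        static class LoginPolicy
        {
            // Allowed by default; otherwise the most specific matching rule wins, mirroring
            // the "A:monday forbidden, A:monday-10 allowed" example above.
            public static bool MayLogIn(IEnumerable<TimeslotRule> rulesForMember, DateTime loginTime)
            {
                var match = rulesForMember
                    .Where(r => r.Matches(loginTime))
                    .OrderByDescending(r => r.Specificity)
                    .FirstOrDefault();

                return match == null || match.Allowed;
            }
        }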

    Read the article

  • I can't set a border around my StackPanel. Any help?

    - by Sergio Tapia
    Here's my XAML code:

        <Window x:Class="CarFinder.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Search for cars in TuMomo" Height="480" Width="600">
            <DockPanel Margin="8">
                <Border CornerRadius="6" BorderBrush="Gray" Background="LightGray" BorderThickness="2" Padding="8">
                    <StackPanel Orientation="Horizontal" DockPanel.Dock="Top" Height="25">
                        <TextBlock FontSize="14" Padding="0 0 8 0">
                            Search:
                        </TextBlock>
                        <TextBox x:Name="txtSearchTerm" Width="400" />
                        <Image Source="/CarFinder;component/Images/Chrysanthemum.jpg" />
                    </StackPanel>
                </Border>
                <StackPanel Orientation="Horizontal" DockPanel.Dock="Top" Height="25">
                </StackPanel>
            </DockPanel>
        </Window>

    The border is set around the entire window. And also, when I create another StackPanel it's added to the right of my previous StackPanel instead of being added under it. What's the reason for this?

    Read the article
