Search Results

Search found 20883 results on 836 pages for 'wont say'.


  • How can I get at the raw bytes of the request in WCF?

    - by Gregory Higley
    For logging purposes, I want to get at the raw request sent to my RESTful web service implemented in WCF. I have already implemented IDispatchMessageInspector. In my implementation of AfterReceiveRequest, I want to spit out the raw bytes of the message even (and especially) if the content of the message is invalid. This is for debugging purposes. My service works perfectly already, but it is often helpful when working through problems with clients who are trying to call the service to know what it was they sent, i.e., the raw bytes. For example, let's say that instead of sending a well-formed XML document, they post the string "your mama" to my service endpoint. I want to see that that's what they did. Unfortunately using MessageBuffer::CreateBufferedCopy() won't work unless the contents of the message are already well-formed XML. Here's (roughly) what I already have in my implementation of AfterReceiveRequest: // The immediately following line raises an exception if the message // does not contain valid XML. This is uncool because I want // the raw bytes regardless of whether they are valid or not. using (MessageBuffer buffer = request.CreateBufferedCopy(Int32.MaxValue)) { using (MemoryStream stream = new MemoryStream()) using (StreamReader reader = new StreamReader(stream)) { buffer.WriteMessage(stream); stream.Position = 0; Trace.TraceInformation(reader.ReadToEnd()); } request = buffer.CreateMessage(); } My guess here is that I need to get at the raw request before it becomes a Message. This will most likely have to be done at a lower level in the WCF stack than an IDispatchMessageInspector. Anyone know how to do this?
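
    One place below IDispatchMessageInspector where the bytes are still untouched is the message encoder. The following is a minimal sketch of that idea, not a drop-in solution: it wraps whatever encoder the binding already uses and logs the buffer before the real encoder tries to parse it. The class name is hypothetical, the MessageEncoderFactory/MessageEncodingBindingElement plumbing needed to plug it into the binding is omitted, and UTF-8 is assumed for the trace output.

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.ServiceModel.Channels;
        using System.Text;

        // Hypothetical wrapper: delegates everything to the binding's real encoder,
        // but copies the raw request bytes to the trace log first.
        public class RawLoggingEncoder : MessageEncoder
        {
            private readonly MessageEncoder inner;

            public RawLoggingEncoder(MessageEncoder inner) { this.inner = inner; }

            public override string ContentType { get { return inner.ContentType; } }
            public override string MediaType { get { return inner.MediaType; } }
            public override MessageVersion MessageVersion { get { return inner.MessageVersion; } }

            public override Message ReadMessage(ArraySegment<byte> buffer, BufferManager bufferManager, string contentType)
            {
                // The bytes are available here even if they are not well-formed XML.
                Trace.TraceInformation(Encoding.UTF8.GetString(buffer.Array, buffer.Offset, buffer.Count));
                return inner.ReadMessage(buffer, bufferManager, contentType);
            }

            public override Message ReadMessage(Stream stream, int maxSizeOfHeaders, string contentType)
            {
                return inner.ReadMessage(stream, maxSizeOfHeaders, contentType);
            }

            public override ArraySegment<byte> WriteMessage(Message message, int maxMessageSize, BufferManager bufferManager, int messageOffset)
            {
                return inner.WriteMessage(message, maxMessageSize, bufferManager, messageOffset);
            }

            public override void WriteMessage(Message message, Stream stream)
            {
                inner.WriteMessage(message, stream);
            }
        }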

    Read the article

  • Media recommendation engine - Single user system - How to start

    - by Microkernel
    Hi guys, I want to implement a media recommendation engine. I saw similar posts on this, but I think my requirements are a bit different from those, so I'm posting here. Here is the deal. I want to implement a recommendation engine for media players like VLC: an engine that has to care for only a single user. It would be embedded in a media player on a PC that is typically used by one person, and it would gradually learn that user's likes and dislikes. It will not be able to find similar users and use their data for recommendations, since it is a single-user system. So how do I go about this? You can also think of it as a recommendation engine to be put in, say, an iPod, which has to learn about a single user and recommend music/movies from the collection it has. I thought of collecting the genre of the music/movies (maybe even the artist name) the user watches and recommending titles from the most-watched genre, but that looks very crude, doesn't it? So are there any algorithms I can use or any resources I can refer to? Regards, MicroKernel :)

    Read the article

  • Merging datasets based on 2 variables in SAS.

    - by John
    Hey guys, my question is the following: I'm working with different databases that all contain information about 1000+ companies. A company is defined by its ticker code (the short version of the name, e.g. Ford as F, usually seen on stock quotation boards). Aside from the ticker code, I also have to merge on time; I used month as a count variable throughout my time series. The final purpose is to run a regression of the kind Y(jt) = c + X(jt) + X1(jt) etc., with j = company (ticker) and t = time (month). So imagine I have 2 databases: a base database with variables such as ticker, month, and a company's beta (a risk measure), and a second database with an extra variable (let's say market capitalisation). What I want to do is merge these 2 databases on both ticker and month. Example - base database: Ticker __ Month __ Betas | AA __ 4 __ 1.2 | BB __ 8 __ 1.18. Second database: Ticker __ Month __ MCAP | AA __ 4 __ 8542 | BB __ 6 __ 1245. After the merge I would like to have something like this: Ticker __ Month __ Betas __ MCAP | AA __ 4 __ 1.2 __ 8542. So all observations that do not match on BOTH date and ticker have to be dropped. I'm sure this is possible, I just can't find the right type of code. Thanks! PS: I'm guessing the underscores have something to do with font layout, but both the bold and the italic text are supposed to be normal :)

    Read the article

  • TeamCity with TFS - workspace problems

    - by Tom
    Hi, We have been using CC.NET as our CI server for a month or so now, and it has worked OK with TFS. In the config we were able to specify the TFS server, username, password, project and workspace, which is all good. Now we are moving over to TeamCity, mainly because it just seems more solid and is much nicer to use. The problem is getting it to work with TFS. For the purposes of this question, both the workspace and machine name are "BuildMachine", the username is "BuildUser", and the TFS project is "$/Project/Dev/Website". I seem to have set it up correctly, I think, as the connection test succeeds. When I run a build I get a TFS error: "RunBuildException when running build stage UpdateSourcesFromServer." It goes on to say: "No matched workspaces were found. Will recreate workspace and perform clean checkout." It then tries to create a new workspace, something like this: TeamCity-S-sqa9qe2aulx22gz4rzkogl5kr/BuildUser. It tries to set up some mappings and then fails because: "The working folder C:\ is already in use by the workspace BuildMachine;BuildUser on computer BuildMachine". This seems right, as this is the workspace that CC.NET was using, and c:\project\dev\website is the path to the project. The problem is, why didn't TeamCity pick this up and use this workspace? Why does it try to create its own new one? Any idea how I can fix this? Thanks

    Read the article

  • Entity Framework 4 / POCO - Where to start?

    - by Basiclife
    Hi, I've been programming for a while and have used LINQ to SQL and LINQ to Entities before (although when using Entities it has been on an entity/table 1-1 relationship - i.e. not much different from L2SQL). I've been doing a lot of reading about Inversion of Control, Unit of Work, POCO and repository patterns and would like to use this methodology in my new applications. Where I'm struggling is finding a clear, concise beginner's guide for EF4 which doesn't assume knowledge of EF1. The specific questions I need answered are: Code first or model first? Pros/cons with regard to EF4 (i.e. what happens if I go code first, change the code at a later date and need to regenerate my DB model - does the data get preserved and transformed, or dropped?). Assuming I'm going code first (I'd like to see how EF4 converts that to a DB schema), how do I actually get started? Quite often I've seen articles with entity diagrams stating "So this is my entity model, now I'm going to ..." - unfortunately, I'm unclear whether they created the model in the designer, saved it to generate code and then stopped any further auto code generation, or whether they coded (POCO?) classes and then somehow imported them into the designer view. I suppose what I really need is an understanding of where the "magic" comes from and how to add it myself if I'm not just generating an EF model directly from a DB (a minimal sketch is shown below). I'm aware the question is a little vague, but I don't know what I don't know - so any input / correction / clarification is appreciated. Needless to say, I don't expect anyone to sit here and teach me EF - I'd just like some good tutorials/forums/blogs/etc. for complete Entity newbies. Many thanks in advance
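
    For the "where does the magic come from" part, a minimal code-first sketch may help. It assumes the code-first add-on to EF4 (the DbContext/DbSet API, which later shipped as EF 4.1); the class and property names are illustrative only. The "magic" is convention: DbContext maps plain POCOs to tables by name, treats Id as the primary key, and can create the database from the model.

        using System.Collections.Generic;
        using System.Data.Entity;   // code-first add-on for EF4 (later rolled into EF 4.1)

        // Plain POCOs -- no EF base class, no attributes required.
        public class Blog
        {
            public int Id { get; set; }
            public string Title { get; set; }
            public virtual ICollection<Post> Posts { get; set; }  // virtual enables lazy-loading proxies
        }

        public class Post
        {
            public int Id { get; set; }
            public string Body { get; set; }
            public virtual Blog Blog { get; set; }
        }

        // The context maps the POCOs to tables by convention and exposes them as sets.
        public class BlogContext : DbContext
        {
            public DbSet<Blog> Blogs { get; set; }
            public DbSet<Post> Posts { get; set; }
        }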

    Read the article

  • Why can't the 'NonSerialized' attribute be used at the class level? How to prevent serialization of

    - by ck
    I have a data object that is deep-cloned using binary serialization. This data object supports property-changed events, for example PriceChanged. Let's say I attached a handler to PriceChanged. When the code attempts to serialize PriceChanged, it throws an exception because the handler isn't marked as serializable. My alternatives: I can't easily remove all handlers from the event before serialization; I don't want to mark the handler as serializable because I'd have to recursively mark all of the handler's dependencies as well; and I don't want to mark PriceChanged as NonSerialized - there are tens of events like this that could potentially have handlers. Ideally, I'd like .NET to just stop going down the object graph at that point and treat it as a 'leaf'. So why can't I just mark the handler class as 'NonSerialized'? -- I finally worked around this problem by making the handler implement ISerializable and doing nothing in the serialization constructor / GetObjectData method. But the handler is still serialized, just with all its dependencies set to null - so I had to account for that as well. Is there a better way to prevent serialization of an entire class?
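
    One alternative to the ISerializable workaround described above is a serialization surrogate, which lets the BinaryFormatter skip (or substitute) a type without touching the type itself - unlike [field: NonSerialized], which has to be applied event by event. This is only a sketch under that assumption; the handler type name is hypothetical, and the deserialized graph will still contain null where the handler used to be, much like the null dependencies mentioned above.

        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Formatters.Binary;

        // Hypothetical handler type that should never be serialized.
        public class PriceChangedHandler { /* ... */ }

        // Surrogate that writes nothing and rebuilds nothing, so the handler
        // is effectively dropped from the object graph.
        public class SkipSurrogate : ISerializationSurrogate
        {
            public void GetObjectData(object obj, SerializationInfo info, StreamingContext context)
            {
                // intentionally empty: no state is written
            }

            public object SetObjectData(object obj, SerializationInfo info, StreamingContext context, ISurrogateSelector selector)
            {
                return null; // deserialize to null
            }
        }

        public static class CloneHelper
        {
            // Formatter used by the deep-clone: the surrogate is registered for the handler type.
            public static BinaryFormatter CreateFormatter()
            {
                var selector = new SurrogateSelector();
                selector.AddSurrogate(typeof(PriceChangedHandler),
                                      new StreamingContext(StreamingContextStates.All),
                                      new SkipSurrogate());
                return new BinaryFormatter { SurrogateSelector = selector };
            }
        }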

    Read the article

  • Need events to execute on timer events, metronome precision.

    - by user295734
    I set up a timer to call an event in my application. The problem is that the event execution is being skewed by other Windows operations, e.g. opening a window or loading a web page. I need the event to execute exactly on time, every time. When I first set up the app, I used a sound file, like a metronome, to listen to the event firing; in a steady state it fires right on time, but as soon as I do something in the Windows environment the sound fires slower, then sort of speeds up a bit to catch up. So I added a logging method to the event to catch the timer ticks. From that data, it appears that the timer is not being affected by the Windows activity, but my application's event calls are. I figured this out by checking DateTime.Now in the event; if I set the timer to 250 milliseconds, which is 4 ticks per second, you get data something like below. (sec):(ms) 1:000 1:250 1:500 1:750 2:000 2:250 2:500 2:750 3:000 3:250 3:500 3:750 (let's say I execute some Windows event - the times skew) 4:122 4:388 4:600 4:876 (stop doing what I was doing in Windows; I'm shortening the data for simplicity, my list was 30 sec long) 5:124 5:268 5:500 5:750 (you would see the times go back to the same milliseconds they were at the beginning) 6:000 6:250 6:500 6:750 7:000 7:250 7:500 7:750 So I'm thinking the timer continues to fire on the same millisecond every time, but it's my event handling that is being skewed by other Windows operations. It's not a huge skew, but for what I need to accomplish it's unacceptable. Is there anything I can do in .NET (I'm hoping to use a XAML/WPF application) that will correct the skewing of events? Thanks.
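
    The behaviour described is typical of UI-thread timers (Windows Forms Timer, WPF DispatcherTimer), which fire on the message loop and get delayed by whatever else the UI thread is doing. Below is only a sketch of one mitigation, with hypothetical names: run the ticks on a System.Threading.Timer and schedule each tick against a Stopwatch measured from the start, so a late tick does not push all later ticks back. Anything that touches the UI still has to be marshalled back via the Dispatcher, and truly sample-accurate audio would still need a dedicated audio/multimedia timer; this only keeps the average rate from drifting.

        using System;
        using System.Diagnostics;
        using System.Threading;

        // Sketch of a drift-free "metronome": each tick is scheduled against the
        // absolute elapsed time, so scheduling error does not accumulate.
        public class Metronome : IDisposable
        {
            private readonly Stopwatch clock = Stopwatch.StartNew();
            private readonly long intervalMs;
            private readonly Timer timer;
            private long tickCount;

            public event Action Tick;   // raised on a thread-pool thread, not the UI thread

            public Metronome(long intervalMs)
            {
                this.intervalMs = intervalMs;
                timer = new Timer(OnTimer, null, intervalMs, Timeout.Infinite);
            }

            private void OnTimer(object state)
            {
                var handler = Tick;
                if (handler != null) handler();

                tickCount++;
                // The next due time is measured from the start, not from "now".
                long nextDue = (tickCount + 1) * intervalMs;
                long delay = Math.Max(0, nextDue - clock.ElapsedMilliseconds);
                timer.Change(delay, Timeout.Infinite);
            }

            public void Dispose() { timer.Dispose(); }
        }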

    Read the article

  • How do I determine if a terminal is color-capable?

    - by asjo
    I would like to change a program to automatically detect whether a terminal is color-capable or not, so when I run said program from within a non-color capable terminal (say M-x shell in (X)Emacs), color is automatically turned off. I don't want to hardcode the program to detect TERM={emacs,dumb}. I am thinking that termcap/terminfo should be able to help with this, but so far I've only managed to cobble together this (n)curses-using snippet of code, which fails badly when it can't find the terminal: #include <stdlib.h> #include <curses.h> int main(void) { int colors=0; initscr(); start_color(); colors=has_colors() ? 1 : 0; endwin(); printf(colors ? "YES\n" : "NO\n"); exit(0); } I.e. I get this: $ gcc -Wall -lncurses -o hep hep.c $ echo $TERM xterm $ ./hep YES $ export TERM=dumb $ ./hep NO $ export TERM=emacs $ ./hep Error opening terminal: emacs. $ which is... suboptimal.

    Read the article

  • How to join a table in symfony (Propel) and retrieve object from both table with one query

    - by Jean-Philippe
    Hi, I'm trying to find an easy way to fetch data from two joined MySQL tables using Propel (inside symfony), but in one query. Let's say I do this simple thing: $comment = CommentPeer::retrieveByPK(1); print $comment->getArticle()->getTitle(); //Assuming the Article table is joined to the Comment table. Symfony will issue 2 queries to get that done: the first one to get the Comment row, and the next one to get the Article row linked to that Comment. Now I am trying to find a way to do all of that in one query. I've tried to join them using $c = new Criteria(); $c->addJoin(CommentPeer::ARTICLE_ID, ArticlePeer::ID); $c->add(CommentPeer::ID, 1); $comment = CommentPeer::doSelectOne($c); But when I try to get the Article object using $comment->getArticle() it will still issue the query to get the Article row. I could easily clear all the selected columns and select only the columns I need, but that would not give me the Propel objects I'd like, just an array with the query's raw result. So how can I get populated Propel objects for two (or more) joined tables with only one query? Thanks, JP

    Read the article

  • How can I work around SQL Server - Inline Table Value Function execution plan variation based on par

    - by Ovidiu Pacurar
    Here is the situation: I have a table-valued function with a datetime parameter, let's say tdf(p_date), that filters about two million rows, selecting those whose date column is smaller than p_date, and computes some aggregate values on other columns. It works great, but if p_date is a custom scalar-valued function (returning the end of day, in my case) the execution plan is altered and the query goes from 1 second to 1 minute of execution time. A proof-of-concept table - 1K products, 2M rows: CREATE TABLE [dbo].[POC]( [Date] [datetime] NOT NULL, [idProduct] [int] NOT NULL, [Quantity] [int] NOT NULL ) ON [PRIMARY] The inline table-valued function: CREATE FUNCTION tdf (@p_date datetime) RETURNS TABLE AS RETURN ( SELECT idProduct, SUM(Quantity) AS TotalQuantity, max(Date) as LastDate FROM POC WHERE (Date < @p_date) GROUP BY idProduct ) The scalar-valued function: CREATE FUNCTION [dbo].[EndOfDay] (@date datetime) RETURNS datetime AS BEGIN DECLARE @res datetime SET @res=dateadd(second, -1, dateadd(day, 1, dateadd(ms, -datepart(ms, @date), dateadd(ss, -datepart(ss, @date), dateadd(mi,- datepart(mi,@date), dateadd(hh, -datepart(hh, @date), @date)))))) RETURN @res END Query 1 - works great: SELECT * FROM [dbo].[tdf] (getdate()) The end of the execution plan: Stream Aggregate Cost 13% <--- Clustered Index Scan Cost 86% Query 2 - not so great: SELECT * FROM [dbo].[tdf] (dbo.EndOfDay(getdate())) The end of the execution plan: Stream Aggregate Cost 4% <--- Filter Cost 12% <--- Clustered Index Scan Cost 86%

    Read the article

  • Performing text processing on flatpage content to include handling of custom tag

    - by Dzejkob
    Hi. I'm using the flatpages app in my project to manage some HTML content. That content will include images, so I've made a ContentImage model allowing the user to upload images through the admin panel. The user should then be able to include those images in the content of the flatpages. He can of course do that by manually typing the image URL into an <img> tag, but that's not what I'm looking for. To make including images more convenient, I'm thinking about something like this: the user edits an additional, let's say pre_content, field of a CustomFlatPage model (I'm using a custom flatpage model already); instead of defining <img> tags directly, he uses a custom tag, something like [img=...], where ... is the name of the ContentImage instance. Now the hardest part: before a CustomFlatPage is saved, the pre_content field is checked for all [img=...] occurrences and they are processed like this: the ContentImage model is searched for an image instance with the given name and, if one exists, [img=...] is replaced with the proper <img> tag; the flatpage's actual content is filled with the processed pre_content and then the flatpage is saved (pre_content is left unchanged, as edited by the user). The part that I can't cope with is the text processing. Should I use regular expressions? Apparently they can be slow for large strings. And how should I organize the logic? I assume it's rather an algorithmic question, but I'm not familiar enough with text processing in Python to do it myself. Can somebody give me any clues?

    Read the article

  • How to combine twill and python into one code that could be run on "Google App Engine"?

    - by brilliant
    Hello everybody! I have installed twill on my computer (having previously installed Python 2.5) and have been using it recently. Python is installed on disk C on my computer: C:\Python25. And the twill folder ("twill-0.9") is located here: E:\tmp\twill-0.9. Here is a script that I've been using in twill: go "some website's sign-in page URL" formvalue 2 userid "my login" formvalue 2 pass "my password" submit go "URL of some other page from that website" save_html result.txt This script helps me log in to one website, in which I have an account, record the HTML code of some other page of that website (that I can access only after logging in), and store it in a file named "result.txt". (Of course, before using this script I first need to replace "my login" with my real login, "my password" with my real password, "some website's sign-in page URL" and "URL of some other page from that website" with real URLs of that website, and the number 2 with the number of the form that is used as the sign-in form on that website's log-in page.) I store this script in a "test.twill" file located in my "twill-0.9" folder: E:\tmp\twill-0.9\test.twill and run it from my command prompt: python twill-sh test.twill Now, I have also installed the "Google App Engine SDK" from "Google App Engine" and have been using it for a while as well. For example, I've been using this code: import hashlib m = hashlib.md5() m.update("Nobody inspects") m.update(" the spammish repetition ") print m.hexdigest() This code transforms the phrase "Nobody inspects the spammish repetition" into an MD5 digest. Now, how can I put these two pieces of code together into one Python script that I could run on "Google App Engine"? Let's say I want my code to log in to a website from "Google App Engine", go to another page on that website, record its HTML code (that's what my twill script does) and then transform this HTML code into its MD5 digest (that's what my second snippet does). So, how can I combine those two pieces of code into one Python script? I guess it should be done somehow by importing twill, but how? Can Python code that is being run by "Google App Engine" import twill from somewhere on the internet? Or, perhaps, is twill already installed on "Google App Engine"?

    Read the article

  • Converting Makefile to Visual Studio Terminology Questions (First time using VS)

    - by Ukko
    I am an old Unix guy who is converting a makefile-based project over to Microsoft Visual Studio. I got tasked with this because I understand the Makefile, which chokes VS's automatic import tools. I am sure there is a better way than what I am doing, but we are making things fit into the customer's environment and that is driving my choices. So gmake is not a valid answer, even if it is the right one ;-) I just have a couple of questions on terminology that I think will be easy for an experienced (or junior) user to answer. Presently, a "make all" generates several executables and a shared library. How should I structure this? Is it one "solution" with multiple projects? There is a body of common code (say 50%) that is shared between the various executable targets but is not in a formal library, if that matters. I thought I could just set up the first executable and then add targets for the others, but that does not seem to work. I know I am working against the tool, so what is the right way? I am also using Visual C++ 2010 Express to try to do this, so that may also be a problem if multiple targets are not supported without the full Visual C++ 2010 (insert superlative). Thanks - this is really one of those questions that should be answerable by a quick chat with the resident Windows developer at the water cooler. So I am asking at the virtual water cooler, and I'll also spring for a virtual frosty beverage after work.

    Read the article

  • Business entity: private instance VS single instance

    - by taoufik
    Suppose my WinForms application has a business entity Order. The entity is used in multiple views, each of which handles a different domain or use case in the application - as an example, one managing orders and another digging into one order and displaying additional data. If I used NHibernate (or any other ORM) with one session/data context per view (or per DB action), I'd end up getting two different instances for the same Order (let's say orderId = 1). Although functionally the same entity, they are technically two different instances. Yes, I could implement Equals/GetHashCode to make them "seem" the same (see the sketch below). Why would you go for a single instance per entity versus private instances per view or per use case? Having single instances has the advantage of sharing INotifyPropertyChanged events and sharing additional (non-persistent) data. Having a private instance in each view would give you the flexibility of undo functionality at the view level. In the example above, I'd allow the user to change order details and give them the flexibility not to save the change. Here, synchronisation between the views/use cases happens at the data-persistence level. What would your argument be?
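
    Whichever way you go, the "functionally the same entity" point usually comes down to identifier-based equality. A minimal sketch of the Equals/GetHashCode override mentioned above (class and property names are illustrative; note that identity-by-id only works once the id has been assigned, so transient/unsaved entities need separate handling):

        public class Order
        {
            public int OrderId { get; set; }

            // Two Order instances are "the same entity" when their identifiers match,
            // regardless of which session loaded them.
            public override bool Equals(object obj)
            {
                var other = obj as Order;
                return other != null && other.OrderId == OrderId;
            }

            public override int GetHashCode()
            {
                return OrderId.GetHashCode();
            }
        }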

    Read the article

  • Altering lazy-loaded object's private variables

    - by Kevin Pang
    I'm running into an issue with private setters when using NHibernate and lazy loading. Let's say I have a class that looks like this: public class User { public int Foo {get; private set;} public IList<User> Friends {get; set;} public void SetFirstFriendsFoo() { // This line works in a unit test but does nothing during a live run with // a lazy-loaded Friends list Friends[0].Foo = 1; } } The SetFirstFriendsFoo call works perfectly inside a unit test (as it should, since objects of the same type can access each other's private properties). However, when running live with a lazy-loaded Friends list, the SetFirstFriendsFoo call silently fails. I'm guessing the reason for this is that at run time Friends[0] is no longer of type User, but of a proxy class that inherits from User, since the Friends list was lazy-loaded. My question is this: shouldn't this generate a run-time exception? You get compile-time errors if you try to access another class's private properties, but when you run into a situation like this it looks like the app just ignores you and continues along its way.

    Read the article

  • MySQL nested set hierarchy with foreign table

    - by Björn
    Hi! I'm using a nested set in a MySQL table to describe a hierarchy of categories, plus an additional table describing products. Category table: id, name, left, right. Products table: id, categoryId, name. How can I retrieve the full path, containing all parent categories, of a product? I.e.: RootCategory > SubCategory 1 > SubCategory 2 > ... > SubCategory n > Product. Say for example that I want to list all products from SubCategory1 and its subcategories, and with each given product I want the full tree path to that product - is this possible? This is as far as I've got - but the structure is not quite right... select parent.`name` as name, parent.`id` as id, group_concat(parent.`name` separator '/') as path from categories as node, categories as parent, (select inode.`id` as id, inode.`name` as name from categories as inode, categories as iparent where inode.`lft` between iparent.`lft` and iparent.`rgt` and iparent.`id`=4 /* The category from which to list products */ order by inode.`lft`) as sub where node.`lft` between parent.`lft` and parent.`rgt` and node.`id`=sub.`id` group by sub.`id` order by node.`lft`

    Read the article

  • Extension methods conflict

    - by Yochai Timmer
    Let's say I have 2 extension methods on string, in 2 different namespaces: namespace test1 { public static class MyExtensions { public static int TestMethod(this String str) { return 1; } } } namespace test2 { public static class MyExtensions2 { public static int TestMethod(this String str) { return 2; } } } These methods are just an example; they don't really do anything. Now let's consider this piece of code: using System; using test1; using test2; namespace blah { public static class Blah { public static void Run() { string a = "test"; int i = a.TestMethod(); // Which one is chosen? } } } I know that only one of the extension methods will be chosen. Which one will it be, and why? How can I choose a certain method from a certain namespace? Edit: Usually I'd use Namespace.ClassName.Method() ... but that just defeats the whole idea of extension methods. And I don't think you can use Variable.Namespace.Method()
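
    Two notes on this, building on the test1/test2 classes above. First, when two imported extension methods match a call equally well, the C# compiler normally reports an ambiguity error rather than silently picking one. Second, an extension method is just a static method, so it can always be called explicitly through its declaring class when the extension syntax is ambiguous - that is the Namespace.ClassName.Method() form mentioned in the edit:

        using System;
        using test1;
        using test2;

        namespace blah
        {
            public static class Blah
            {
                public static void Run()
                {
                    string a = "test";

                    // Explicit static calls disambiguate between the two namespaces:
                    int one = test1.MyExtensions.TestMethod(a);   // returns 1
                    int two = test2.MyExtensions2.TestMethod(a);  // returns 2

                    Console.WriteLine(one + " " + two);
                }
            }
        }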

    Read the article

  • nested include in php

    - by aeonsleo
    The directory structure: C:/wamp/www/application/model/data_access/data_object.php C:/wamp/www/application/model/users/user.class.php C:/wamp/www/application/controller/projects.php C:/wamp/www/application/controller/links/links.php I have 2 PHP files, data_object.php and user.class.php. Now, user.class.php has an include statement for data_object.php which is relative to user.class.php. These two files are under different directory hierarchies. I have to include user.class.php in various files (like projects.php and links.php, which themselves are under different hierarchies) whenever I want to create a User() object. The problem is that the relative path for the inclusion of data_object.php works for, say, projects.php, but if I open links.php the error message says it could not open data_object.php in user.class.php. What I think is that for the relative inclusion of data_object.php, PHP is considering the path of the file in which user.class.php is included. I am facing such problems in more than one scenario. I have to keep my directory structure the way it is, but I have to find a way to work with nested includes. I used the document root; it gives the root path as C:/wamp/www/ and I appended the path for the data_object.php include to it, but this is not working. (Note: the forward slash is present after www.) I am currently running on WAMP server's localhost, but after completion I have to host the solution on a domain. Please help.

    Read the article

  • Why does autoboxing in Java allow me to have 3 possible values for a boolean?

    - by John
    Reference: http://java.sun.com/j2se/1.5.0/docs/guide/language/autoboxing.html If your program tries to auto-unbox null, it will throw a NullPointerException. javac will give you a compile-time error if you try to assign null to a boolean. Makes sense. Assigning null to a Boolean is fine, though. That also makes sense, I guess. But let's think about the fact that you'll get an NPE when trying to auto-unbox null. What this means is that you can't safely perform boolean operations on Booleans without null-checking or exception handling. The same goes for doing math operations on an Integer. For a long time I was a fan of autoboxing in Java 1.5+ because I thought it got Java closer to being truly object-oriented. But after running into this problem last night, I have to say that I think this sucks. The compiler giving me an error when I'm trying to do stuff with an uninitialized primitive is a good thing. I think I may be misunderstanding the point of autoboxing, but at the same time I will never accept that a boolean should be able to have 3 values. Can anyone explain this? What am I not getting?

    Read the article

  • Calling a method on an uninitialized object (null pointer)

    - by Florin
    What is the normal behavior in Objective-C if you call a method on an object (pointer) that is nil (maybe because someone forgot to initialize it)? Shouldn't it generate some kind of an error (segmentation fault, null pointer exception...)? If this is normal behavior, is there a way of changing this behavior (by configuring the compiler) so that the program raises some kind of error / exception at runtime? To make it more clear what I am talking about, here's an example. Having this class: @interface Person : NSObject { NSString *name; } @property (nonatomic, retain) NSString *name; - (void)sayHi; @end with this implementation: @implementation Person @synthesize name; - (void)dealloc { [name release]; [super dealloc]; } - (void)sayHi { NSLog(@"Hello"); NSLog(@"My name is %@.", name); } @end Somewhere in the program I do this: Person *person = nil; //person = [[Person alloc] init]; // let's say I comment this line person.name = @"Mike"; // shouldn't I get an error here? [person sayHi]; // and here [person release]; // and here

    Read the article

  • Java JSP/Servlet: controller servlet throwing the famous stack overflow

    - by NoozNooz42
    I've read several docs and I don't get it: I know I'm doing something wrong, but I don't understand what. I've got a website that is entirely dynamically generated; there's hardly any static content at all. So, trying to understand JSP/servlets, I've written my own "front controller" intercepting every single request. It looks like this: <servlet-mapping> <servlet-name>defaultservlet</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> Basically I want any user request, like example.org, example.org/bar or example.org/foo.html, to go through a default servlet which I've written. The servlet then examines the URI, finds which .jsp the request must be dispatched to, and then, after having set all the attributes correctly, does a: RequestDispatcher dispatcher = getServletContext().getRequestDispatcher("/WEB-INF/jsp/index.jsp"); dispatcher.forward(req, resp); When I'm using a url-pattern (in web.xml) like, say, *.html, everything works fine. But when I change it to /* (to really intercept everything), I enter an endless loop and it ends up with a... StackOverflow :) When the request is dispatched, is the URI ".../WEB-INF/jsp/index.jsp" itself matched by the /* mapping that I set? What should I do if I want to intercept everything using a /* url-pattern and yet still be able to dispatch/forward? I'm not asking about specs/Javadocs here: I'm really confused about the bigger picture and I need some explanation as to what could be going on. Am I not supposed to intercept really everything? If I can intercept everything, what should I be aware of regarding forwarding/dispatching?

    Read the article

  • how do i filter my lucene search results?

    - by Andrew Bullock
    Say my requirement is "search for all users by name, who are over 18". If I were using SQL, I might write something like: Select * from [Users] Where ([firstname] like '%' + @searchTerm + '%' OR [lastname] like '%' + @searchTerm + '%') AND [age] >= 18 However, I'm having difficulty translating this into Lucene.Net. This is what I have so far: var parser = new MultiFieldQueryParser(new[] { "firstname", "lastname" }, new StandardAnalyzer()); var luceneQuery = parser.Parse(searchTerm); var query = FullTextSession.CreateFullTextQuery(luceneQuery, typeof(User)); var results = query.List<User>(); How do I add in the "where age >= 18" bit? I've heard about .SetFilter(), but it only accepts Lucene queries, not IQueries. If SetFilter is the right thing to use, how do I make the appropriate filter? If not, what do I use and how do I do it? Thanks! P.S. This is a vastly simplified version of what I'm trying to do, for clarity; my WHERE clause is actually a lot more complicated than shown here. In reality I need to check whether IDs exist in subqueries and check a number of unindexed properties. Any solutions given need to support this. Thanks
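
    A heavily hedged sketch of one common approach, assuming the age can be indexed as a numeric field on the User documents and a Lucene.Net 2.9-style API (newer versions move the Occur enum out of BooleanClause; the NHibernate.Search wiring is not shown): AND a numeric range clause onto the parsed name query and pass the combined query to CreateFullTextQuery as before. Properties that are not indexed at all, as in the P.S., cannot be filtered inside Lucene and would have to be either indexed or filtered after the results come back.

        using Lucene.Net.Search;

        public static class SearchSketch
        {
            // Combines the user's parsed name query with an "age >= 18" range clause.
            public static Query OverEighteen(Query nameQuery)
            {
                var combined = new BooleanQuery();
                combined.Add(nameQuery, BooleanClause.Occur.MUST);
                combined.Add(NumericRangeQuery.NewIntRange("age", 18, null, true, true),
                             BooleanClause.Occur.MUST);
                return combined;
            }
        }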

    Read the article

  • The algorithm used to generate recommendations in Google News?

    - by Siddhant
    Hi everyone. I'm studying recommendation engines, and I went through the paper that describes how Google News generates recommendations for news items which might be of interest to users, based on collaborative filtering. One interesting technique they mention is minhashing. I went through what it does, but I'm pretty sure what I have is a fuzzy idea and there is a strong chance that I'm wrong. This is what I could make of it: Collect the set of all news items. Define a hash function for a user; this hash function returns the index, in the list of all news items, of the first item that this user viewed. Collect, say, n such values and represent a user with this list of values. Based on the similarity between these lists, we can calculate the similarity between users as the number of common values. This reduces the number of comparisons a lot. Based on these similarity measures, group users into different clusters. This is just what I think it might be. In step 2, instead of defining a constant hash function, it might be possible to vary the hash function so that it returns the index of a different element - so one hash function could return the index of the first element from the user's list, another the index of the second element, and so on. Given that the hash functions must satisfy the min-wise independent permutations condition, this does sound like a possible approach. Could anyone please confirm whether what I think is correct, or does the minhashing portion of Google News recommendations work in some other way? I'm new to the internal implementation of recommenders. Any help is appreciated a lot. Thanks!
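
    To make steps 2-4 concrete, here is a small sketch of the usual minhash construction (not necessarily what the Google News paper does internally). Instead of literal random permutations of the item list, it uses the standard equivalent trick of n random hash functions of the form (a*x + b) mod p and keeps the minimum hash over the user's items for each function; the fraction of matching signature positions then estimates the Jaccard similarity of two users' item sets. The class name and the simple hash family are illustrative only.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Illustrative minhash signatures built from n random (a*x + b) mod p hash functions.
        public class MinHasher
        {
            private const long P = 4294967311L;          // a prime larger than any 32-bit item id
            private readonly long[] a, b;

            public MinHasher(int numHashes, int seed = 42)
            {
                var rng = new Random(seed);
                a = new long[numHashes];
                b = new long[numHashes];
                for (int i = 0; i < numHashes; i++)
                {
                    a[i] = rng.Next(1, int.MaxValue);
                    b[i] = rng.Next(0, int.MaxValue);
                }
            }

            // One signature value per hash function: the minimum hash over the user's items.
            // itemIds must be non-empty.
            public long[] Signature(IEnumerable<int> itemIds)
            {
                var items = itemIds.ToList();
                var sig = new long[a.Length];
                for (int i = 0; i < a.Length; i++)
                    sig[i] = items.Min(x => (a[i] * x + b[i]) % P);
                return sig;
            }

            // Fraction of matching positions approximates the Jaccard similarity of the two item sets.
            public static double EstimatedSimilarity(long[] s1, long[] s2)
            {
                int matches = 0;
                for (int i = 0; i < s1.Length; i++)
                    if (s1[i] == s2[i]) matches++;
                return (double)matches / s1.Length;
            }
        }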

    Read the article

  • I write barely functional scripts that tend to not be reusable and make the baby jesus cry. Please h

    - by maxxpower
    I received a request to add around 100 users to a Linux box. The users are already in LDAP, so I can't just use newusers and point it at a text file. Another admin is taking care of the LDAP piece, so all I have to do is create all the home directories and chown them to the correct user once he adds the users to the box. Creating the directories isn't a problem, but I'd like a more elegant script for chowning them to the correct user. What I have currently basically looks like chown -R testuser1:testgroup1 /home/testuser1; chown -R testuser2:testgroup2 /home/testuser2; chown -R testuser3:testgroup1 /home/testuser3 Basically I took the request, with the user name and group name, popped it into Excel, added a column of "chown -R" at the front, then a column of "/", copied and pasted the username column after it, added a column of ";", and dragged it all down to the second-to-last row. Popped it into Notepad, ran some quick find-and-replaces, and in less than a minute I had a completed request and a sad, empty feeling. I know this was a really ghetto method, and I'm trying to stop using Excel as a way of avoiding learning new scripting techniques, so here's my real question. tl;dr I made 100 home directories and chowned them to the correct users, but it was ugly. Actual question below. You have a file named idlist that looks like this (only with, say, 1000 users and real usernames and groups): testuser1 testgroup1 testuser2 testgroup2 testuser3 testgroup1 Write a script that creates home directories for all the users and chowns the created directories to the correct user and group. To make the directories I used the following (feel free to flame/correct me on this as well): var=`cut -f1 -d" " idlist` (I used backticks, not apostrophes, around the cut command) mkdir $var

    Read the article

  • ANTLR Tree Grammar and StringTemplate Code Translation

    - by Behrooz Nobakht
    I am working on a code translation project with a sample ANTLR tree grammar like this: start: ^(PROGRAM declaration+) -> program_decl_tmpl(); declaration: class_decl | interface_decl; class_decl: ^(CLASS ^(ID CLASS_IDENTIFIER)) -> class_decl_tmpl(cid={$CLASS_IDENTIFIER.text}); The group template file for it looks like: group My; program_decl_tmpl() ::= << *WHAT?* >> class_decl_tmpl(cid) ::= << public class <cid> {} >> Based on this, I have these questions: Everything works fine apart from one thing - what should I express in WHAT? to say that a program is simply a list of class declarations, so that I get the final generated output? Is this approach reasonably suitable for a not-so-high-level language? I have also studied ANTLR Code Translation with StringTemplates, but it seems that that approach relies heavily on interleaving code in the tree grammar. Is it possible to do as much as possible just in StringTemplates?

    Read the article
