Search Results

Search found 2724 results on 109 pages for 'spam filtering'.

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog post on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover and navigate through a wide variety of big data, including structured, unstructured and semi-structured data. With search as a core advanced capability of our product, it is important to understand some of the key differences and capabilities of the underlying data store of Oracle Endeca Information Discovery: the Endeca Server. In the last post on this subject, we talked about Exploratory Search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from the simple keyword-based "search boxes" in other Information Discovery products, also include the following.
    The Endeca Server Supports Set Search. The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query, not by comparing the document to all the others. For example, a search for "U.S." in another approach might match the title of a document and get a high ranking. But what if it were a collection of government documents in which "U.S." appeared in many titles, making that clue less meaningful? A set analysis would reveal this and could be used to adjust relevance accordingly.
    The Endeca Server Supports Second-Order Relevance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms with which to divide up the result set. Determining this second-order relevance is the key to providing effective guidance.
    Support for Queries and Filters. Search is the most common query type, but hardly the only one, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike "black box" relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, the Endeca Server provides line-of-business owners with a set of relevance modules that let them tune for the best results based on their content. The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don't explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time.
    Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries; the user can simply click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable. We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers, and therefore we have gotten used to looking for that box. However, the key to getting the right results is to guide that user in a way that provides additional discovery, beyond what they may have anticipated. This is why these advanced search features inside the Endeca Server are so important: they have helped to guide our great customers to success.

    Read the article

  • Selective Suppression of Log Messages

    - by Duncan Mills
    Those of you who regularly read this blog will probably have noticed that I have a strange predilection for logging related topics, so why break this habit, I ask? Anyway, here's an issue which came up recently that I thought was a good one to mention in a brief post. The scenario really applies to production applications where you are seeing entries in the log files which are harmless: you know why they are there and are happy to ignore them, but at the same time you either can't or don't want to risk changing the deployed code to "fix" it and remove the underlying cause. (I'm not judging here.) The good news is that the logging mechanism provides a filtering capability which can be applied to a particular logger to selectively "let a message through" or suppress it. This is the technique outlined below.
    First Create Your Filter
    You create a logging filter by implementing the java.util.logging.Filter interface. This is a very simple interface that basically defines one method, isLoggable(), which simply has to return a boolean value. A return of false will suppress that particular log message and not pass it on to the handler. The method is passed the log record of type java.util.logging.LogRecord, which provides you with access to everything you need to decide if you want to let this log message pass through or not, for example getLoggerName(), getMessage() and so on. So an example implementation might look like this if we wanted to filter out all the log messages that start with the string "DEBUG" when the logging level is not set to FINEST:
        public class MyLoggingFilter implements Filter {
            public boolean isLoggable(LogRecord record) {
                if (!record.getLevel().equals(Level.FINEST) && record.getMessage().startsWith("DEBUG")) {
                    return false;
                }
                return true;
            }
        }
    Deploying
    This code needs to be put into a JAR and added to your WebLogic classpath. It's too late to load it as part of an application, so instead you need to put the JAR file into the WebLogic classpath using a mechanism such as the PRE_CLASSPATH setting in your domain setDomainEnv script. Then restart WLS of course.
    Using
    The final piece is to actually assign the filter. The simplest way to do this is to add the filter attribute to the logger definition in the logging.xml file. For example, you may choose to define a logger for a specific class that is raising these messages and only apply the filter in that case:
        <logger name="some.vendor.adf.ClassICantChange"
                filter="oracle.demo.MyLoggingFilter"/>
    You can also apply the filter using WLST if you want a more script-y solution.
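    For completeness, here is the filter written out as a self-contained class with the imports it needs; the oracle.demo package simply matches the name used in the logging.xml snippet above, and the null check on getMessage() is a small defensive addition of mine rather than part of the original example.
        package oracle.demo;

        import java.util.logging.Filter;
        import java.util.logging.Level;
        import java.util.logging.LogRecord;

        // Suppresses log records whose message starts with "DEBUG",
        // unless the record was logged at FINEST.
        public class MyLoggingFilter implements Filter {
            @Override
            public boolean isLoggable(LogRecord record) {
                if (!Level.FINEST.equals(record.getLevel())
                        && record.getMessage() != null
                        && record.getMessage().startsWith("DEBUG")) {
                    return false; // suppress: the record never reaches the handler
                }
                return true;
            }
        }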

    Read the article

  • Multitenancy in SQL Azure

    - by cibrax
    If you are building a SaaS application in Windows Azure that relies on SQL Azure, it is likely that you will need to support multiple tenants at the database level. This is a short overview of the different approaches you can use to support that scenario.
    A different database per tenant. A new database is created and assigned when a tenant is provisioned.
    Pros: Complete isolation between tenants. All the data for a tenant lives in a database only that tenant can access.
    Cons: It is not cost effective. SQL Azure databases are not cheap, and the minimum size for a database is 1 GB, so you might be paying for storage that you don't really use. A different connection pool is required per database. Updates must be replicated across all the databases. You need multiple backup strategies across all the databases.
    Multiple schemas in a database shared by all the tenants. A single database is shared among all the tenants, but every tenant is assigned a different schema and database user.
    Pros: You only pay for a single database. Data is isolated at the database level. If the credentials for one tenant are compromised, the data for the other tenants is not.
    Cons: You need to replicate all the database objects in every schema, so the number of objects can increase indefinitely. Updates must be replicated across all the schemas. The connection pool for the database must maintain a different connection per tenant (or set of credentials). A different user is required per tenant, which is stored at the server level and has to be backed up independently.
    Centralizing database access with stored procedures in a database shared by all the tenants. A single database is shared among all the tenants, but nobody can read the data directly from the tables. All data operations are performed through stored procedures that centralize access to the tenant data. The stored procedures contain some logic to map the database user to a specific tenant.
    Pros: You only pay for a single database. You only have one set of objects to maintain and back up.
    Cons: There is no real isolation; all the data for the different tenants is shared in the same tables. You cannot use a traditional ORM like EF Code First to consume the data. A different user is required per tenant, which is stored at the server level and has to be backed up independently.
    SQL Federations. A single database is shared among all the tenants, but a different federation is used per tenant. A federation, in a few words, is a mechanism for horizontal scaling in SQL Azure, which basically uses the idea of logical partitions to distribute data based on certain criteria.
    Pros: You only have a single database with multiple federations. You can use filtering in the connections to pick the right federation, so any ORM can be used to consume the data.
    Cons: There is no real isolation at the database level; the isolation is enforced programmatically with federations.

    Read the article

  • How to get jQuery EasyUI datagrid and treegrid to load in background?

    - by f1r3br4nd
    I am trying to build a jQuery EasyUI datagrid or treegrid out of a large query. Apparently the database takes long enough to respond that I get the "a script on this page may be busy" popup. Moreover, the entire browser (Firefox) locks up while it's waiting. I thought the whole point of AJAX was to load stuff unobtrusively. I've looked through the tutorials and documentation for EasyUI, but it's not clear to me how to force a datagrid to load in the background. There are some similar unanswered questions on the EasyUI forums. Do I need to override the loader property of the datagrid? If so, does anybody know where I can get a non-obfuscated version of the default loader function, so I can be sure I understand what it's supposed to do before I write my own? Also, if I need asynchronous datagrids and datatrees with sorting and filtering features, is jQuery EasyUI the wrong library for doing this simply and cleanly? Is there some alternative jQuery library people would recommend? Thank you.

    Read the article

  • DevExpress AspxGridView clientside SelectionChanged problem when using paged ObjectDataSource

    - by Constantin Baciu
    The context is as follows: one DevExpress ASPxGridView with a server-side paging/filtering/sorting mechanism (using ObjectDataSource). I've been having problems with the filter mechanism (see this stack). Now, the problem I'm having is this: the client-side events get mangled between DataSource events. Let me explain what happens: if I change the page (or sort/filter, for that matter) and then select one row from the grid, the client-side SelectionChanged event fires fine. If I then change the page (or sort/filter) again, the event doesn't fire anymore. Instead, on the server side, I get a "The method or operation is not implemented" exception with the following stack trace:
        at DevExpress.Web.Data.WebDataProviderBase.GetListSouceRowValue(Int32 listSourceRowIndex, String fieldName)
        at DevExpress.Web.Data.WebDataProxy.GetListSourceRowValue(Int32 listSourceRowIndex, String fieldName)
        at DevExpress.Web.Data.WebDataProxy.GetKeyValueCore(Int32 index, GetKeyValueCallback getKeyValue)
        at DevExpress.Web.Data.WebDataSelectionBase.GetSelectedValues(String[] fieldNames, Int32 visibleStartIndex, Int32 visibleRowCountOnPage)
        at DevExpress.Web.Data.WebDataProxy.GetSelectedValues(String[] fieldNames)
        at DevExpress.Web.ASPxGridView.ASPxGridView.FBSelectFieldValues(String[] args)
        at DevExpress.Web.ASPxGridView.ASPxGridView.GetCallbackResultCore()
        at DevExpress.Web.ASPxGridView.ASPxGridView.GetCallbackResult()
        at DevExpress.Web.ASPxClasses.ASPxWebControl.System.Web.UI.ICallbackEventHandler.GetCallbackResult()
    Am I doing something wrong? Any help will be much appreciated.

    Read the article

  • LINQ - IEnumerable.Join on Anonymous Result Set in VB.NET

    - by user337501
    I've long since built a way around this, but it still keeps bugging me... it doesn't help that my grasp of dynamic LINQ queries is still shaky. For the example:
    Parent has fields (ParentKey, ParentField)
    Child has fields (ChildKey, ParentKey, ChildField)
    Pet has fields (PetKey, ChildKey, PetField)
    Child has a foreign key reference to Parent on Child.ParentKey = Parent.ParentKey. Pet has a foreign key reference to Child on Pet.ChildKey = Child.ChildKey. Simple enough, eh? Let's say I have LINQ like this...
        Dim Q = From p In DataContext.Parent _
                Join c In DataContext.Child On c.ParentKey Equals p.ParentKey
    Consider this a "base query" on which I will perform other filtering actions. Now I want to join the Pet table like this:
        Q = Q.Join(DataContext.Pet, _
            Function(a) a.c.ChildKey, _
            Function(p As Pet) p.ChildKey, _
            Function(a, p As Pet) p.ChildKey = a.c.ChildKey)
    The above Join call doesn't work. I sort of understand why it doesn't work, but hopefully it'll show you how I tried to accomplish this task. After all this was done I would have appended a Select to finish the job. Any ideas on a better way to do this? I tried it with the PredicateBuilder with little success. I might not know how to use it right, but it felt like it wasn't going to handle the joining.

    Read the article

  • Need help replicating directory structure with ColdFusion and jsTree

    - by Derek
    I am using this new jQuery plugin called jsTree (www.jstree.com) with the HTML data source. I am also using ColdFusion 7 with cfdirectory, filtering out files so that only directories remain. I need to recreate the directory structure in the image (well, any directory structure I give it, actually). I am having a heck of a time with the logic. variables.imageDirectoriesLen = 8 in this scenario because I am outputting from the middle of the actual file path, not from the beginning. Thanks for the help. Derek. This is what I have so far:
        <cfoutput query="clientImageDirsFilter">
            <cfset nextLen = 0 />
            <cfset nextDir = "" />
            <cfset nextRowCnt = currentRow+1 />
            <cfset nextDir = clientImageDirsFilter.directory[nextRowCnt] & "\" & clientImageDirsFilter.name[nextRowCnt] />
            <cfset nextLen = listLen(nextDir, "\") />
            <cfset currLen = listLen(clientImageDirsFilter.directory & "\" & clientImageDirsFilter.name,"\") />
            <cfif currLen eq nextLen>
                <li rel="folder" id="node_#randRange(1,99999)#"><a href="##"><ins>&nbsp;</ins>#clientImageDirsFilter.name#</a></li>
            <cfelseif nextLen lt currLen>
                <cfif nextLen eq 0>
                    #repeatString("</li></ul>",(currLen-nextLen-variables.imageDirectoriesLen))#
                </cfif>
            <cfelse>
                <ul>
                <li rel="folder" id="node_#randRange(1,99999)#"><a href="##"><ins>&nbsp;</ins>#clientImageDirsFilter.name#</a>
                <ul>
            </cfif>

    Read the article

  • LINQ to Entities pulling back entire table

    - by Focus
    In my application I'm pulling back a user's "feed". This contains all of that user's activities, events, friend requests from other users, etc. When I pull back the feed I'm calling various functions to filter the request along the way.
        var userFeed = GetFeed(db); // Query to pull back all data
        userFeed = FilterByUser(userFeed, user, db); // Filter for the user
        userFeed = SortFeed(userFeed, page, sortBy, typeName); // Sort it
    The data that is returned is exactly what I need. However, when I look at a SQL Profiler trace I can see that the query that is getting this data does not filter it at the database level and instead selects ALL the data in the table(s). This query does not execute until I iterate through the results in my view. All of these functions return an IEnumerable object. I was under the impression that LINQ would take all of my filters and form one query to pull back the data I want, instead of pulling back all the data and then filtering it in memory on the server. What am I doing wrong, or what don't I understand about the way LINQ evaluates queries?

    Read the article

  • Embeddable forum software

    - by Rented
    I am in the planning stages of a specific subject matter community web site, and one feature I feel is required is member discussions. However, not in a typical forum style. For example, I don't want the members to have to navigate away from their own "user space" in order to discuss a topic. I think it is best described with an analogous example. Let's say the site is for literature buffs, and each member has a set of pages for keeping notes, progress, questions, etc. on books they are studying/reading. So Joe will have one page for Great Expectations, another for Hamlet, a third for I, Robot, and so forth. Likewise, Jane will have a page for Don Quixote, Lord of the Flies, and also I, Robot. Now, wouldn't it be nice if Joe and Jane could discuss I, Robot from within their own respective pages? At first thought, rolling your own seems like the way to go. However, once we start getting into issues such as spam blocking, banning, ratings, pruning, archiving, flooding and so on, "roll your own" doesn't sound too appealing anymore. Also, I have next to zero experience with forum software. So I'm looking for forum software that has an extensive API or is generally very integration-friendly. I would like to be able to create user groups, topics, permissions, etc. programmatically, as well as handle the obvious user authentication (most seem open in that respect). The site will most probably be built with Java. Tangler seems like a decent option, but it seems less mature than I'd prefer.

    Read the article

  • Database with "Open Schema" - Good or Bad Idea?

    - by Claudiu
    The co-founder of Reddit gave a presentation on issues they had while scaling to millions of users. A summary is available here. What surprised me is point 3: instead, they keep a Thing Table and a Data Table. Everything in Reddit is a Thing: users, links, comments, subreddits, awards, etc. Things keep common attributes like up/down votes, a type, and a creation date. The Data table has three columns: thing id, key, value. There's a row for every attribute: a row for title, url, author, spam votes, etc. When they add new features they don't have to worry about the database anymore; they don't have to add new tables for new things or worry about upgrades. This seems like a terrible idea to me, but it seems to have worked out for Reddit. Is it a good idea in general, though? Or is it a peculiarity of Reddit that happened to work out for them?
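    To make the tradeoff concrete, here is a rough sketch (not from the presentation; the table and column names are hypothetical, MySQL-style) of what reading one Thing's attributes looks like against such a two-table layout, using plain JDBC. Every attribute is a row, so adding a new attribute needs no schema change, but the application has to reassemble the rows into a map itself.
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.HashMap;
        import java.util.Map;

        public class ThingDao {
            // Load all key/value attributes for one thing into a map.
            // Assumes a table data(thing_id, key, value) as described in the summary.
            public Map<String, String> loadAttributes(Connection conn, long thingId)
                    throws SQLException {
                Map<String, String> attributes = new HashMap<>();
                String sql = "SELECT `key`, `value` FROM data WHERE thing_id = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setLong(1, thingId);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            attributes.put(rs.getString(1), rs.getString(2));
                        }
                    }
                }
                return attributes;
            }
        }
    The flip side is equally visible in this sketch: joins, type checking and per-attribute indexing all have to be rebuilt on top of the key/value rows, which is exactly the tradeoff the question is asking about.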

    Read the article

  • Median Filter a bi-level image with JAI

    - by Mark
    I'd like to apply a Median Filter to a bi-level image and output a bi-level image. The JAI median filter seems to output an RGB image, which I'm having trouble downconverting back to bi-level. Currently I can't even get the image back into the gray color space; my code looks like this:
        BufferedImage src; // contains a bi-level image
        ParameterBlock pb = new ParameterBlock();
        pb.addSource(src);
        pb.add(MedianFilterDescriptor.MEDIAN_MASK_SQUARE);
        pb.add(3);
        RenderedOp result = JAI.create("MedianFilter", pb);
        ParameterBlock pb2 = new ParameterBlock();
        pb2.addSource(result);
        pb2.add(new double[][]{{0.33, 0.34, 0.33, 0}});
        RenderedOp grayResult = JAI.create("BandCombine", pb2);
        BufferedImage foo = grayResult.getAsBufferedImage();
    This code hangs on the grayResult line and appears not to return. I assume that I'll eventually need to call the "Binarize" operation in JAI. Edit: actually, the code appears to be stalling once I call getAsBufferedImage(), but it returns nearly instantly when the second operation ("BandCombine") is removed. Is there a better way to keep the median filtering in the source color domain? If not, how do I downconvert back to binary?
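    For reference, a minimal sketch of that final downconvert step (gray conversion followed by JAI's "Binarize" operation, which thresholds a single-band image into 0/1 values). This is only an illustration under assumptions of mine, not a fix for the BandCombine stall described above; the luminance weights and the 128 threshold are arbitrary choices.
        import java.awt.image.RenderedImage;
        import java.awt.image.renderable.ParameterBlock;
        import javax.media.jai.JAI;
        import javax.media.jai.RenderedOp;

        public class BinarizeExample {
            // Collapse an RGB image to one gray band, then threshold it back
            // to a bi-level image with JAI's "Binarize" operation.
            public static RenderedOp toBiLevel(RenderedImage rgb) {
                ParameterBlock gray = new ParameterBlock();
                gray.addSource(rgb);
                gray.add(new double[][] {{0.299, 0.587, 0.114, 0}}); // 3 bands -> 1 band
                RenderedOp grayImage = JAI.create("BandCombine", gray);

                ParameterBlock bin = new ParameterBlock();
                bin.addSource(grayImage);
                bin.add(128.0); // threshold: pixels >= 128 become 1, others 0
                return JAI.create("Binarize", bin);
            }
        }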

    Read the article

  • Change Edit Control Block (ECB) Link URL in SharePoint

    - by dirq
    Is there a way to dynamically change the hyperlink associated with an ECB menu in WSS 3.0? For instance, I have a list with 2 fields. One field is hidden and is a link; the other is the title field, which has the ECB menu. The title field currently links to the item's view page, but we want it to link to the link field's URL. Is that possible?
    UPDATE - 5/29/09 9AM
    I have this so far. See this TechNet post.
        <script type="text/javascript">
        var url = 'GoTo.aspx?ListTitle='+ctx.ListTitle;
        url += '&ListName='+ctx.listName;
        url += '&ListTemplate='+ctx.listTemplate;
        url += '&listBaseType='+ctx.listBaseType;
        url += '&view='+ctx.view;
        url += '&';
        var a = document.getElementsByTagName('a');
        for(i=0;i<=a.length -1;i++) {
            a[i].href=a[i].href.replace('DispForm.aspx?',url);
        }
        </script>
    This gives me a link like so (formatted so it's easier to see):
        GoTo.aspx
        ?ListTitle=MyList
        &ListName={082BB11C-1941-4906-AAE9-5F2EBFBF052B}
        &ListTemplate=100
        &listBaseType=0
        &view={9ABE2B07-2B47-4390-9969-258F00E0812C}
        &ID=1
    My issue now is that the row in the grid gives each item the ID property above, but if I change the view or do any filtering you can see that the ID is really just the row number. Can I get the actual item's GUID here? If I can get the item's ID I can send it, along with the list ID, to an application page that will get the right URL from a field in the list and forward the user on to the right site.

    Read the article

  • Iterating through a JSON object.

    - by user327508
    [ { "title": "Baby (Feat. Ludacris) - Justin Bieber", "description": "Baby (Feat. Ludacris) by Justin Bieber on Grooveshark", "link": "http://listen.grooveshark.com/s/Baby+Feat+Ludacris+/2Bqvdq", "pubDate": "Wed, 28 Apr 2010 02:37:53 -0400", "pubTime": 1272436673, "TinyLink": "http://tinysong.com/d3wI", "SongID": "24447862", "SongName": "Baby (Feat. Ludacris)", "ArtistID": "1118876", "ArtistName": "Justin Bieber", "AlbumID": "4104002", "AlbumName": "My World (Part II);\nhttp://tinysong.com/gQsw", "LongLink": "11578982", "GroovesharkLink": "11578982", "Link": "http://tinysong.com/d3wI" }, { "title": "Feel Good Inc - Gorillaz", "description": "Feel Good Inc by Gorillaz on Grooveshark", "link": "http://listen.grooveshark.com/s/Feel+Good+Inc/1UksmI", "pubDate": "Wed, 28 Apr 2010 02:25:30 -0400", "pubTime": 1272435930 } ] That is the current JSON object I have. I am now trying to iterate through it to get the import stuff like title and link. This is where I am having trouble I cant seem to get to the content that is past the ":" i tried doing dictionary way couldn't get it. def getLastSong(user,limit): base_url = 'http://gsuser.com/lastSong/' user_url = base_url + str(user) + '/' + str(limit) + "/" raw = urllib.urlopen(user_url) json_raw= raw.readlines() json_object = json.loads(json_raw[0]) #filtering and making it look good. gsongs = [] print json_object for song in json_object[0]: print song This code prints all the information before ":" Please help. ignore the Justin Bieber track :)

    Read the article

  • Help with JPQL query

    - by Robert
    I have to query for a Message that is in a provided list of Groups and has not been Deactivated by the current user. Here is some pseudo code to illustrate the properties and entities:
        class Message {
            private int messageId;
            private String messageText;
        }
        class Group {
            private String groupId;
            private int messageId;
        }
        class Deactivated {
            private String userId;
            private int messageId;
        }
    Here is an idea of what I need to query for; it's the last AND clause that I don't know how to do (I made up the compound NOT IN expression). Filtering the deactivated messages by userId can result in multiple messageIds, so how can I check that the messageId is not in that subset of rows?
        SELECT msg
        FROM Message msg, Group group, Deactivated unactive
        WHERE group.messageId = msg.messageId
          AND (group.groupId = 'groupA' OR group.groupId = 'groupB' OR ...)
          AND ('someUserId', msg.messageId) NOT IN (unactive.userId, unactive.messageId)
    I don't know the number of groupIds ahead of time -- I receive them as a Collection<String>, so I'll need to traverse them and add them to the JPQL dynamically.
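    Not from the original question, but one possible shape for that last clause is a NOT IN subquery over Deactivated, with the group ids bound as a single collection-valued parameter instead of being concatenated into the JPQL. A sketch under assumptions: it presumes a JPA 2.0 provider, and note that GROUP is a reserved word in JPQL, so the Group entity may need an explicit, non-clashing entity name.
        import java.util.Collection;
        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.TypedQuery;

        public class MessageQueries {
            // Hypothetical helper: messages that belong to any of the given groups
            // and have not been deactivated by the given user.
            public List<Message> findVisibleMessages(EntityManager em,
                                                     Collection<String> groupIds,
                                                     String userId) {
                String jpql =
                      "SELECT DISTINCT msg FROM Message msg, Group grp "
                    + "WHERE grp.messageId = msg.messageId "
                    + "  AND grp.groupId IN :groupIds "
                    + "  AND msg.messageId NOT IN ("
                    + "      SELECT d.messageId FROM Deactivated d WHERE d.userId = :userId)";
                TypedQuery<Message> query = em.createQuery(jpql, Message.class);
                query.setParameter("groupIds", groupIds);
                query.setParameter("userId", userId);
                return query.getResultList();
            }
        }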

    Read the article

  • How to control Dojo FilteringSelect default rendering!?

    - by Nick
    Hi, I have a Dojo dijit.form.FilteringSelect that populates with values from a dojo.data.ItemFileReadStore. Everything is working fine, except that I would like the FilteringSelect to automatically be populated with the first value in the ItemFileReadStore. Currently it is loading the values as a list of options that are revealed when you click the down arrow, as per spec. I would instead like the FilteringSelect to be loaded with the first value. How do I do this? For some reason I can't figure it out. Any help would be hugely appreciated. Kind regards, Nick Frandsen
        <script type="text/javascript">
            function updateOptions(){
                var itemId = dijit.byId("item_select").attr("value");
                var jsonStore = new dojo.data.ItemFileReadStore({
                    url: "/options/get-options-json/itemId/" + itemId
                });
                optionSelect.attr("store", jsonStore);
            }
        </script>
        <select dojoType="dijit.form.FilteringSelect"
                name="option_select"
                id="option_select"
                labelAttr="name"
                required="true"
                jsId="optionSelect">
        </select>

    Read the article

  • ASP.NET MVC, Url Routing: Maximum Path (URL) Length

    - by Martin Aatmaa
    The Scenario I have an application where we took the good old query string URL structure: ?x=1&y=2&z=3&a=4&b=5&c=6 and changed it into a path structure: /x/1/y/2/z/3/a/4/b/5/c/6 We're using ASP.NET MVC and (naturally) ASP.NET routing. The Problem The problem is that our parameters are dynamic, and there is (theoretically) no limit to the amount of parameters that we need to accommodate for. This is all fine until we got hit by the following train: HTTP Error 400.0 - Bad Request ASP.NET detected invalid characters in the URL. IIS would throw this error when our URL got past a certain length. The Nitty Gritty Here's what we found out: This is not an IIS problem IIS does have a max path length limit, but the above error is not this. Learn dot iis dot net How to Use Request Filtering Section "Filter Based on Request Limits" If the path was too long for IIS, it would throw a 404.14, not a 400.0. Besides, the IIS max path (and query) length are configurable: <requestLimits maxAllowedContentLength="30000000" maxUrl="260" maxQueryString="25" /> This is an ASP.NET Problem After some poking around: IIS Forums Thread: ASP.NET 2.0 maximum URL length? http://forums.iis.net/t/1105360.aspx it turns out that this is an ASP.NET (well, .NET really) problem. The shit of the matter is that, as far as I can tell, ASP.NET cannot handle paths longer than 260 characters. The nail in the coffin in that this is confirmed by Phil the Haack himself: Stack Overflow ASP.NET url MAX_PATH limit Question ID 265251 The Question So what's the question? The question is, how big of a limitation is this? For my app, it's a deal killer. For most apps, it's probably a non-issue. What about disclosure? No where where ASP.NET Routing is mentioned have I ever heard a peep about this limitation. The fact that ASP.NET MVC uses ASP.NET routing makes the impact of this even bigger. What do you think?

    Read the article

  • JPA merge fails due to duplicate key

    - by wobblycogs
    I have a simple entity, Code, that I need to persist to a MySQL database.
        public class Code implements Serializable {
            @Id
            private String key;
            private String description;
            ...getters and setters...
        }
    The user supplies a file full of key/description pairs, which I read, convert to Code objects and then insert in a single transaction using em.merge(code). The file will generally have duplicate entries, which I deal with by first adding them to a map keyed on the key field as I read them in. A problem arises, though, when keys differ only by case (for example: XYZ and XyZ). My map will, of course, contain both entries, but during the merge process MySQL sees the two keys as being the same and the call to merge fails with a MySQLIntegrityConstraintViolationException. I could easily fix this by uppercasing the keys as I read them in, but I'd like to understand exactly what is going wrong. The conclusion I have come to is that JPA considers XYZ and XyZ to be different keys but MySQL considers them to be the same. As such, when JPA checks its list of known keys (or does whatever it does to determine whether it needs to perform an insert or an update) it fails to find the previous insert and issues another, which then fails. Is this correct? Is there any way around this other than better filtering of the client data? I haven't defined .equals or .hashCode on the Code class, so perhaps this is the problem.
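    Purely as an illustration of the "better filtering" option, and not the asker's code: deduplicating the incoming keys case-insensitively before calling merge mirrors MySQL's default case-insensitive collation, so keys that differ only by case collapse to a single entity. The sketch assumes Code exposes a getKey() accessor and that em is the EntityManager.
        import java.util.List;
        import java.util.Map;
        import java.util.TreeMap;
        import javax.persistence.EntityManager;

        public class CodeImporter {
            // Collapse keys that differ only by case before merging; later
            // duplicates overwrite earlier ones, as in the original map-based dedup.
            public void mergeCodes(EntityManager em, List<Code> parsedCodes) {
                Map<String, Code> byKey = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
                for (Code code : parsedCodes) {
                    byKey.put(code.getKey(), code);
                }
                for (Code code : byKey.values()) {
                    em.merge(code);
                }
            }
        }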

    Read the article

  • Linq - Orderby not ordering

    - by Billy Logan
    Hello everyone, I have a LINQ query that for whatever reason is not coming back ordered as I would expect it to. Can anyone point me in the right direction as to why, and what I am doing wrong? The code is as follows:
        List<TBLDESIGNER> designer = null;
        using (SOAE strikeOffContext = new SOAE())
        {
            //Invoke the query
            designer = AdminDelegates.selectDesignerDesigns.Invoke(strikeOffContext).ByActive(active).ByAdmin(admin).ToList();
        }
    Delegate:
        public static Func<SOAE, IQueryable<TBLDESIGNER>> selectDesignerDesigns =
            CompiledQuery.Compile<SOAE, IQueryable<TBLDESIGNER>>(
                (designer) => from c in designer.TBLDESIGNER.Include("TBLDESIGN")
                              orderby c.FIRST_NAME ascending
                              select c);
    Filter ByActive:
        public static IQueryable<TBLDESIGNER> ByActive(this IQueryable<TBLDESIGNER> qry, bool active)
        {
            //Return the filtered IQueryable object
            return from c in qry
                   where c.ACTIVE == active
                   select c;
        }
    Filter ByAdmin:
        public static IQueryable<TBLDESIGNER> ByAdmin(this IQueryable<TBLDESIGNER> qry, bool admin)
        {
            //Return the filtered IQueryable object
            return from c in qry
                   where c.SITE_ADMIN == admin
                   select c;
        }
    Wondering if the filtering has anything to do with it? Thanks in advance, Billy

    Read the article

  • What should be the responsibility of a presenter here?

    - by Achu
    I have a 3-layer design (UI / BLL / DAL); the UI is ASP.NET MVC. In my view I have a collection of products for a category, for example: Product 1, Product 2, etc. A user is able to select or remove products from the view (by selecting a check box), and the result is finally saved as a collection when the user submits these changes. With this 3-layer design, how should this product collection be saved? Where should the filtering of products (removal and addition) against the category object happen? Here are my options.
    (A) It is the responsibility of the controller. The pseudo code would be:
    Find the products that the user selected or removed and compare them with the existing records.
    Add or delete that collection on the category object.
    Call SaveCategory(category); // BLL CALL
    Here the first 2 process steps occur in the controller.
    (B) It is the responsibility of the BLL. The pseudo code would be:
    Collect whatever products the user selected.
    Call SaveCategory(category, products); // BLL CALL
    Here it's up to SaveCategory (BLL) to decide which products should be removed from and added to the database.
    Thanks

    Read the article

  • why javamail fails with an authentication Exception ?

    - by saravana
    package com.bcs;

    import java.util.Properties;
    import javax.mail.Message;
    import javax.mail.MessagingException;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class SendMailTLS {
        public static void main(String[] args) {
            String host = "smtp.gmail.com";
            int port = 587;
            String username = "[email protected]";
            String password = "bar";

            Properties props = new Properties();
            props.put("mail.smtp.auth", "true");
            props.put("mail.smtp.starttls.enable", "true");

            Session session = Session.getInstance(props);
            try {
                Message message = new MimeMessage(session);
                message.setFrom(new InternetAddress(""));
                message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(""));
                message.setSubject("Testing Subject");
                message.setText("Dear Mail Crawler," + "\n\n No spam to my email, please!");

                Transport transport = session.getTransport("smtp");
                transport.connect(host, port, username, password);
                Transport.send(message);
                System.out.println("Done");
            } catch (MessagingException e) {
                throw new RuntimeException(e);
            }
        }
    }
    I have given all the necessary inputs, but it still fails with:
        Exception in thread "main" java.lang.RuntimeException: javax.mail.AuthenticationFailedException: failed to connect, no password specified?
            at com.bcs.SendMailTLS.main(SendMailTLS.java:43)
        Caused by: javax.mail.AuthenticationFailedException: failed to connect, no password specified?
            at javax.mail.Service.connect(Service.java:329)
            at javax.mail.Service.connect(Service.java:176)
            at javax.mail.Service.connect(Service.java:125)
            at javax.mail.Transport.send0(Transport.java:194)
            at javax.mail.Transport.send(Transport.java:124)
            at com.bcs.SendMailTLS.main(SendMailTLS.java:38)
    I am new to JavaMail. Any help would be greatly appreciated.
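    For reference, the stack trace shows the failure coming from the static Transport.send() call, which opens its own connection rather than reusing the one made with transport.connect(host, port, username, password). One commonly used pattern is to hand the credentials to the Session through an Authenticator, so that every connection the Session opens can authenticate. The sketch below illustrates that pattern; it is not the original poster's code, and the addresses are placeholders.
        import java.util.Properties;
        import javax.mail.Authenticator;
        import javax.mail.Message;
        import javax.mail.MessagingException;
        import javax.mail.PasswordAuthentication;
        import javax.mail.Session;
        import javax.mail.Transport;
        import javax.mail.internet.InternetAddress;
        import javax.mail.internet.MimeMessage;

        public class SendMailWithAuthenticator {
            public static void main(String[] args) throws MessagingException {
                final String username = "sender@example.com"; // placeholder
                final String password = "secret";             // placeholder

                Properties props = new Properties();
                props.put("mail.smtp.host", "smtp.gmail.com");
                props.put("mail.smtp.port", "587");
                props.put("mail.smtp.auth", "true");
                props.put("mail.smtp.starttls.enable", "true");

                // The Authenticator supplies the credentials to any connection the
                // Session opens, including the one created by Transport.send().
                Session session = Session.getInstance(props, new Authenticator() {
                    @Override
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication(username, password);
                    }
                });

                Message message = new MimeMessage(session);
                message.setFrom(new InternetAddress(username));
                message.setRecipients(Message.RecipientType.TO,
                        InternetAddress.parse("recipient@example.com")); // placeholder
                message.setSubject("Testing Subject");
                message.setText("Dear Mail Crawler,\n\n No spam to my email, please!");

                Transport.send(message);
                System.out.println("Done");
            }
        }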

    Read the article

  • Is this possible? Overflow-y:visible with overflow-x:scroll/auto

    - by Kostrzak
    We have a problem in our team which we cannot solve :/ We made our own Grid control. When you click on the icon next to a column name, a pop-up div (call it divFilter) appears and you can set filtering there. There can be a dynamically generated div for each column, so we can have, for example, 5 divFilters in 5 different places. It works, but the only problem is that when there are only 1-2 records in the Grid, the pop-up div is displayed under the horizontal scrollbar of the containing div. We've tried z-index, but it looks like that won't work. We can set overflow:visible, but we also need that horizontal scroll (our grids have up to 50 columns). We thought that we could solve it by setting overflow-y:visible and overflow-x:scroll, but according to our tests and this page: http://www.brunildo.org/test/Overflowxy2.html it isn't possible (for IE7, IE8). I've also found this similar question: CSS overflow-y:visible, overflow-x:scroll, but our pop-up div must be position:absolute, because we need to position the divs under their columns. Any ideas or workarounds? Is it even possible to do this only with CSS, without using JavaScript (for dynamically changing the gridview height, etc.)? Thanks!!

    Read the article

  • Autodetect timezone in Rails given UTC offset and DST

    - by Jose
    I basically want to autodetect a user's timezone using Rails. I can use this JS code in the user's browser (http://www.onlineaspect.com/2007/06/08/auto-detect-a-time-zone-with-javascript/) to send a form with the UTC offset and whether the user's time zone observes DST during the summer. Once I have that info on the server, I want to select the matching time zone. In Rails, I can get a list of time zones with ActiveSupport::TimeZone.all. Also, I can filter zones by UTC offset thanks to the utc_offset method. However, I don't know how to filter the time zones that do/don't observe DST. For example, suppose a user lives in Amsterdam. Filtering by UTC offset will return the Berlin, Belgrade, Madrid, etc. time zones, as well as West Central Africa. All of them except West Central Africa would be appropriate time zones for a user in Amsterdam (as they provide the same time/date), so I need to filter out West Central Africa, which does not observe DST in summer. How can I do this in Rails? Also, are any of my assumptions wrong?

    Read the article

  • LINQ-to-SQL IN/Contains() for Nullable<T>

    - by Craig Walker
    I want to generate this SQL statement in LINQ:
        select * from Foo where Value in ( 1, 2, 3 )
    The tricky bit seems to be that Value is a column that allows nulls. The equivalent LINQ code would seem to be:
        IEnumerable<Foo> foos = MyDataContext.Foos;
        IEnumerable<int> values = GetMyValues();
        var myFoos = from foo in foos
                     where values.Contains(foo.Value)
                     select foo;
    This, of course, doesn't compile, since foo.Value is an int? and values is typed to int. I've tried this:
        IEnumerable<Foo> foos = MyDataContext.Foos;
        IEnumerable<int> values = GetMyValues();
        IEnumerable<int?> nullables = values.Select(value => new Nullable<int>(value));
        var myFoos = from foo in foos
                     where nullables.Contains(foo.Value)
                     select foo;
    ...and this:
        IEnumerable<Foo> foos = MyDataContext.Foos;
        IEnumerable<int> values = GetMyValues();
        var myFoos = from foo in foos
                     where values.Contains(foo.Value.Value)
                     select foo;
    Both of these versions give me the results I expect, but they do not generate the SQL I want. It appears that they're generating full-table results and then doing the Contains() filtering in memory (i.e. in plain LINQ, without -to-SQL); there's no IN clause in the DataContext log. Is there a way to generate a SQL IN for Nullable types?

    Read the article

  • Merge replication server side foreign key violation from unpublished table

    - by Reiste
    We are using SQL Server 2005 Merge Replication with SQL CE 3.5 clients. We are using partitions with filtering for the separate subscriptions, and nHibernate for the ORM mapping. There is automatic ID range management from SQL Server for the subscriptions. We have a table, Item, and a table with a foreign key to Item - ItemHistory. Both of these are replicated down, filtered according to the subscription. Item has a column called UserId, and is filtered per subscription with this filter: WHERE UserId IN (SELECT... [complicated subselect]...) ItemHistory hangs off Item in the publication filter articles. On the server, we have a table ItemHistoryExport, which has a foreign key to ItemHistory. ItemHistoryExport is not published. Entries in the Item and ItemHistory tables are never deleted, on the server or the client. However, the "ownership" of items (and hence their ItemHistories) MAY change, which causes them to be moved from one client subscription/partition to another from time to time. When we sync, we occasionally get the following error: A row delete at '48269404 - 4108383dbb11' could not be propagated to 'MyServer\MyInstance.MyDatabase'. This failure can be caused by a constraint violation. The DELETE statement conflicted with the REFERENCE constraint "FK_ItemHistoryExport_ItemHistory". The conflict occurred in database "MyDatabase", table "dbo.ItemHistoryExport", column 'ItemHistoryId'. Can anyone help us understand why this happens? There shouldn't ever be a delete happening on the server side.

    Read the article
