Search Results

Search found 21433 results on 858 pages for 'query execution plans'.

Page 335 of 858

  • A Better Way To Extract Date From DateTime Columns [SQL Server]

    - by Gopinath
    Quite a while ago I wrote a SQL Server programming tip on how to extract the date part from a DATETIME column using the T-SQL convert() function. One of the readers of that post tipped me off to a better way of extracting the date part; here is the SQL query he sent us:

        SELECT DateAdd(day, DateDiff(day, 0, getdate()), 0);

    This query cleanly trims the time part off the DATETIME value. I rate this solution better than the one I wrote earlier, as it does not depend on any string operations. According to the commenter, this method is also faster. What do you say? Thanks Yamo
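
    For a side-by-side comparison (a quick sketch of my own, not from the post; the convert() line shows the common string-based variant, and the CAST line assumes SQL Server 2008 or later):

        -- String-based: round-trips through varchar
        SELECT CONVERT(DATETIME, CONVERT(VARCHAR(10), GETDATE(), 112));
        -- Arithmetic-based: no string handling involved
        SELECT DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0);
        -- On SQL Server 2008 and later, a plain cast is simplest
        SELECT CAST(GETDATE() AS DATE);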

    Read the article

  • Cannot save all of the property settings for this Web Part.

    - by ybbest
    I would like to display all the items of a custom content type, Animals, in a SharePoint Content Query Web Part. After choosing the appropriate values in the web part editor and clicking Save, I got this error: "Cannot save all of the property settings for this Web Part. There is a problem with one or more of the field values below." But when I examined all the values below, nothing was flagged with any error information. I finally managed to locate the error flag after I expanded the Presentation section. I then deleted the text in the Link textbox, and now I can save the settings. However, I think the error message should have been more specific so that users can quickly locate the error. The worst part is that I did not even change anything in the Presentation section; I merely configured the Source in the Query section. Well, I guess I am still new to SharePoint, and I just have to get used to these generic error messages :(

    Read the article

  • MySQL Enterprise Monitor 3.0.3 Is Now Available

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 3.0.3 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud with the November update in about 1 week. This is a maintenance release that fixes a number of bugs. You can find more information on the contents of this release in the change log.

    You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then use the "Product or Family (Advanced Search)" feature. You will also find the binaries on the Oracle Software Delivery Cloud in approximately 1 week. Choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products.

    Based on feedback from our customers, MySQL Enterprise Monitor (MEM) 3.0 offers many significant improvements over previous releases. Highlights include:
    - Policy-based automatic scheduling of rules and event handling (including email notifications) makes administration of scale-out easier and automatic.
    - Enhancements such as automatic discovery of MySQL instances, centralized agent configuration and multi-instance monitoring further improve ease of configuration and management.
    - The new cloud and virtualization-friendly, "agent-less" design allows remote monitoring of MySQL databases without the need for any remote agents.
    - Trends, projections and forecasting - graphs and event handlers inform you in advance of impending file system capacity problems.
    - Zero-configuration Query Analyzer - works "out of the box" with the MySQL 5.6 Performance_Schema (supported by 5.6.14 or later).
    - False positives from flapping or spikes are avoided using exponential moving averages and other statistical techniques.
    - Advisors can analyze data across an entire group; for example, the Replication Configuration Advisor can scan an entire topology to find common configuration errors like duplicate server UUIDs or a slave whose version is less than its master's.

    More information on the contents of this release is available here:
    - What's new in MySQL Enterprise Monitor 3.0?
    - MySQL Enterprise Edition: Demos
    - MySQL Enterprise Monitor Frequently Asked Questions
    - MySQL Enterprise Monitor Change History

    More information on MySQL Enterprise and the Enterprise Monitor can be found here:
    - http://www.mysql.com/products/enterprise/
    - http://www.mysql.com/products/enterprise/monitor.html
    - http://www.mysql.com/products/enterprise/query.html
    - http://forums.mysql.com/list.php?142

    If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact. If you haven't looked at MEM recently, and especially MEM 3.0, please do so now and let us know what you think. Thanks and Happy Monitoring! - The MySQL Enterprise Tools Development Team
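
    As an aside (mine, not the announcement's): if you want a quick look at the kind of raw data the Query Analyzer builds on, the MySQL 5.6 Performance Schema exposes per-statement digests you can query directly:

        -- Top statements by total latency (MySQL 5.6+ Performance Schema)
        SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT
        FROM performance_schema.events_statements_summary_by_digest
        ORDER BY SUM_TIMER_WAIT DESC
        LIMIT 10;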

    Read the article

  • Get script for every action in SQL Server Management Studio

    I am always careful to keep a record of all operations performed on my database servers. Operations performed through T-SQL in an SSMS query pane can easily be saved in query files. For table modifications through the SSMS designer, I have a predefined setting to generate T-SQL scripts. However, there are numerous database- and server-level tasks for which I use the SSMS GUI, and I would like to have a script of those changes for later reference. Examples of such actions through the SSMS GUI are backup/restore, changing the compatibility level of a database, manipulating permissions, dealing with database or log files, and creating/manipulating any login/user. I am looking for any way to generate T-SQL code for such actions, so that it may be kept for later reference.
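
    One partial approach (my suggestion, not from the question; the database, table and trigger names are illustrative) is a server-level DDL trigger that logs the T-SQL behind GUI actions. It captures DDL such as permission changes and login creation, but not non-DDL operations like BACKUP/RESTORE:

        -- A minimal sketch; assumes an AuditDB database exists for the log table.
        CREATE TABLE AuditDB.dbo.DdlLog (
            EventTime   DATETIME DEFAULT GETDATE(),
            LoginName   SYSNAME,
            CommandText NVARCHAR(MAX)
        );
        GO
        CREATE TRIGGER trg_LogDdl ON ALL SERVER
        FOR DDL_EVENTS   -- umbrella event group: server- and database-level DDL
        AS
        INSERT INTO AuditDB.dbo.DdlLog (LoginName, CommandText)
        VALUES (ORIGINAL_LOGIN(),
                EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]',
                                  'NVARCHAR(MAX)'));
        GO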

    Read the article

  • Writing a spell checker similar to "did you mean"

    - by user888734
    I'm hoping to write a spellchecker for search queries in a web application - not unlike Google's "Did you mean?" The algorithm will be loosely based on this: http://catalog.ldc.upenn.edu/LDC2006T13 In short, it generates correction candidates and scores them on how often they appear (along with adjacent words in the search query) in an enormous dataset of known n-grams - Google Web 1T - which contains well over 1 billion 5-grams. I'm not using the Web 1T dataset, but building my n-gram sets from my own documents - about 200k docs, and I'm estimating tens or hundreds of millions of n-grams will be generated. This kind of process is pushing the limits of my understanding of basic computing performance - can I simply load my n-grams into memory in a hashtable or dictionary when the app starts? Is the only limiting factor the amount of memory on the machine? Or am I barking up the wrong tree? Perhaps putting all my n-grams in a graph database with some sort of tree query optimisation? Could that ever be fast enough?

    Read the article

  • When and How is an image cached for an ASPX with ContentType = image/jpeg?

    - by Aamir Hasan
    In ASP.NET you can cache your page. You can vary the output cache by the following:
    - The query string in an initial request (HTTP GET).
    - Control values passed on postback (HTTP POST values).
    - The HTTP headers passed with a request.
    - The major version number of the browser making the request.
    - A custom string in the page; in that case, you create custom code in the Global.asax file to specify the page's caching behavior.
    Link: http://msdn2.microsoft.com/en-us/library/xadzbzd6(VS.80).aspx
    You can set output caching for your GetImage.aspx so that you don't have to re-query the database on every image request, but you must use VaryByParam so that you have a cached version for every arrangement of parameters. Set the output cache at the top of the ASPX page like this:

        <%@ OutputCache Duration="600" VaryByParam="ID,Height,Width" %>

    The VaryByParam attribute allows you to vary the cached output depending on the query string. Adding this will cache your images for 600 seconds, so if an image is requested within this period, the cached version is returned.

    Read the article

  • SSRS optional parameters settings

    - by Natasa Gavrilovic
    Recently I had to create a couple of SQL Server Reporting Services (SSRS) reports with optional parameters built in. It took me a while to refresh my memory on how this can be done. It was very simple to create the reports and the processes behind them, but connecting the two was a little bit challenging: the stored procedure was tested and worked fine, but when the report passed in optional parameters it didn't return the expected results. After tweaking the SQL stored procedure and the report's parameter options, the following approach turned out to be the winning one.
    1) Defining report parameters:
    - From the menu bar, select 'View' and then 'Report Data'.
    - The newly opened window should display a 'Parameters' folder.
    - Right-click this folder and select 'Add new parameter...'.
    - Default values need to be added from a query.
    - The query's values need to include '' (an empty string), as highlighted.
    2) The SQL stored procedure should have CASE statements inside the WHERE clause; this was the only way the report got correct results back. A minimal sketch of this pattern follows.
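
    The sketch below is my illustration, not the author's code; the procedure and column names are made up, and '' stands for "no filter", matching the empty-string default above:

        CREATE PROCEDURE dbo.GetOrders
            @Region NVARCHAR(50) = ''   -- '' means "all regions"
        AS
        SELECT OrderId, Region, Amount
        FROM dbo.Orders
        WHERE Region = CASE WHEN @Region = '' THEN Region ELSE @Region END;
        -- Note: when @Region = '', the Region = Region test still filters out
        -- NULL regions; add explicit IS NULL handling if those rows must appear.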

    Read the article

  • Slow in the Application, Fast in SSMS? Understanding Performance Mysteries

    When I read various forums about SQL Server, I frequently see questions from deeply mystified posters. They have identified a slow query or stored procedure in their application. They cull the SQL batch from the application and run it in SQL Server Management Studio (SSMS) to analyse it, only to find that the response is instantaneous. At this point they are inclined to think that SQL Server is all about magic. A similar mystery is when a developer has extracted a query in his stored procedure to run it stand-alone only to find that it runs much faster – or much slower – than inside the procedure.
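
    This excerpt doesn't say where the article lands, but a common culprit in this scenario (my note, not the article's) is that the application and SSMS run with different SET options, notably ARITHABORT, so the server compiles and caches a separate plan for each. A quick way to compare what each session is running with:

        -- Sessions whose SET options differ can get different cached plans
        -- for the exact same query text.
        SELECT session_id, program_name, arithabort, ansi_nulls, quoted_identifier
        FROM sys.dm_exec_sessions
        WHERE is_user_process = 1;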

    Read the article

  • Classless tables possible with Datamapper?

    - by barerd
    I have an Item class with the following attributes: itemId, name, weight, volume, price, required_skills, required_items. Since the last two attributes are multivalued, I removed them and created new schemas like: itemID, required_skill (itemID is a foreign key; itemID and required_skill together are the primary key). Now I'm confused about how to create/use this new table. Here are the options that came to my mind:
    1) The relationship between Items and Required_skills is one-to-many, so I may create a RequiredSkill class, which belongs_to Item, which in turn has n RequiredSkills. Then I can do Item.get(1).requiredskills. This sounds the most logical to me.
    2) Since required_skills may well be thought of as constants (since they resemble rules), I may put them into a hash, a gdbm database or another SQL table and query from there, which I don't prefer.
    My question is: is there something like a model-less table in DataMapper, where DataMapper is responsible for the creation and integrity of the table and allows me to query it in the DataMapper way, but does not require a class, as I could do in SQL?

    Read the article

  • BIP 11g Dynamic SQL

    - by Tim Dexter
    Back in the 10g release, if you wanted something beyond the standard query for your report extract, you needed to break out your favorite text editor. You gotta love 'vi' and hate emacs, am I right? And get to building a data template - they were/are lovely to write, such fun ... not! It's not fun writing them by hand, but you do get to do some cool stuff around the data extract, including dynamic SQL. By that I mean the ability to add content dynamically to your query at runtime. With 11g, we spoiled you with a visual builder - no more vi or notepad sessions - a friendly drag-and-drop interface allowing you to build hierarchical data sets, calculated columns, summary columns, etc. You can still create dynamic SQL statements; it's just not so well documented right now, so in lieu of doc updates here's the skinny.

    If you check out the 10g process for creating dynamic SQL in the docs, you'll see you need to create a data trigger function where you assign the dynamic SQL to a global variable that is matched in your report SQL. In 11g, the process is really the same; BI Publisher just provides a bit more help to define what trigger code needs to be called. You still need to create the function and place it inside a package in the db. Here's a simple PL/SQL package with the 'before data' function trigger.

    Spec:

        create or replace PACKAGE BIREPORTS AS
          whereCols varchar2(2000);
          FUNCTION beforeReportTrig return boolean;
        end BIREPORTS;

    Body:

        create or replace PACKAGE BODY BIREPORTS AS
          FUNCTION beforeReportTrig return boolean AS
          BEGIN
            whereCols := ' and d.department_id = 100';
            RETURN true;
          END beforeReportTrig;
        END BIREPORTS;

    You'll notice the additional where clause (whereCols, declared as a public variable) is hard coded. I'll cover parameterizing that in my next post. If you can not wait, check the 10g docs for an example. I have my package compiling successfully in the db. Now, onto the BIP data model definition.

    1. Create a new data model and go ahead and create your query(s) as you would normally.
    2. In the query dialog box, add in the variables you want replaced at runtime using an ampersand rather than a colon, e.g. &whereCols:

        select d.DEPARTMENT_NAME, ...
        from "OE"."EMPLOYEES" e, "OE"."DEPARTMENTS" d
        where d."DEPARTMENT_ID" = e."DEPARTMENT_ID" &whereCols

    Note that 'whereCols' matches the global variable name in our package. When you click OK to clear the dialog, you'll be asked for a default value for the variable; just use ' and 1=1'. That leading space is important to keep the SQL valid, i.e. required whitespace. This value will be used for the where clause in case it's not set by the function code.
    3. Now click on the Event Triggers tree node and create a new trigger of the type Before Data. Type in the default package name, in my example 'BIREPORTS', then hit the update button to get BIP to fetch the valid functions. In my case I get to see the BEFOREREPORTTRIG function; select it (or your name) and shuttle it across.
    4. Save your data model and now test it. For now, you can update the where clause via the PL/SQL package. Next time ... parameterizing the dynamic clause.

    Read the article

  • What is the best way to INSERT a large dataset into a MySQL database (or any database in general)

    - by Joe
    As part of a PHP project, I have to insert a row into a MySQL database. I'm obviously used to doing this, but this one required inserting into 90 columns in one query. The resulting query looks horrible and monolithic (especially with my PHP variables inserted as the values):

        INSERT INTO mytable (column1, column2, ..., column90)
        VALUES ('value1', 'value2', ..., 'value90')

    and I'm concerned that I'm not going about this in the right way. It also took me a long (boring) time just to type everything in, and writing the test code will be equally tedious, I fear. How do professionals go about quickly writing and testing these queries? Is there a way I can speed up the process?
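
    One time-saver (my suggestion, not from the question; schema and table names are placeholders) is to let MySQL generate the column list for you rather than typing 90 names:

        -- Build the column list for the INSERT from the catalog
        SELECT GROUP_CONCAT(COLUMN_NAME ORDER BY ORDINAL_POSITION)
        FROM information_schema.columns
        WHERE table_schema = 'mydb' AND table_name = 'mytable';
        -- If the result is cut short, raise the session limit first:
        -- SET SESSION group_concat_max_len = 65535;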

    Read the article

  • Listing SQL Columns

    - by Bunch
    When I am writing stored procedures in SSMS, sometimes I need to know what column types are used in a table. For instance, I will know the table name but I might not remember exactly the length of a varchar column, or whether a column stored the data as an integer or a varchar. And I may not want to scroll through all the tables in Object Explorer to find the one I want. A lot of times it is easier if I can just write a quick query to pull up the information I need. The syntax to do something like this is pretty easy:

        SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
        FROM yourdbname.information_schema.columns
        WHERE TABLE_NAME = 'yourtablename'

    After running that you will get a listing in the Results pane, just like any other query, with the column name, data type and length (if any).
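
    The same view can be flipped around; a variant of the author's query (my addition, not from the post) that finds every table containing a particular column:

        SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
        FROM yourdbname.information_schema.columns
        WHERE COLUMN_NAME LIKE '%customer%'
        ORDER BY TABLE_NAME;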

    Read the article

  • IXRepository and test problems

    - by Ridermansb
    Recently I had a doubt about how and where to test repository methods. Consider the following situation: I have an interface IRepository like this:

        public interface IRepository<T> where T : class, IEntity
        {
            IQueryable<T> Query(Expression<Func<T, bool>> expression);
            // ... Omitted
        }

    And a generic implementation of IRepository:

        public class Repository<T> : IRepository<T> where T : class, IEntity
        {
            public IQueryable<T> Query(Expression<Func<T, bool>> expression)
            {
                return All().Where(expression).AsQueryable();
            }
        }

    This is a base implementation that can be used by any repository; it contains the basic implementation of my ORM. Some repositories have specific filters. In that case, we have IEmployeeRepository with a specific filter:

        public interface IEmployeeRepository : IRepository<Employee>
        {
            IQueryable<Employee> GetInactiveEmployees();
        }

    And the implementation of IEmployeeRepository:

        public class EmployeeRepository : Repository<Employee>, IEmployeeRepository
        // TODO: I have a dependency on the ORM at this point in Repository<Employee>.
        // How to solve? How to test the GetInactiveEmployees method?
        {
            public IQueryable<Employee> GetInactiveEmployees()
            {
                return Query(p => p.Status != StatusEmployeeEnum.Active || p.StartDate < DateTime.Now);
            }
        }

    Questions:
    1. Is it right to inherit from Repository<Employee>? The goal is to reuse code, since the whole implementation of IRepository has already been made. If EmployeeRepository inherited only IEmployeeRepository, I would have to literally copy and paste the code of Repository<T>.
    2. In our example, in EmployeeRepository : Repository<Employee>, our Repository lies in our ORM layer. We have a dependency on our ORM here, making it impossible to perform some unit tests.
    3. How do I create a unit test to ensure that the filter GetInactiveEmployees returns all Employees where Status != Active or StartDate < DateTime.Now? I can't create a Fake/Mock of IEmployeeRepository, because then what would I be testing? I need to test the actual implementation of GetInactiveEmployees.
    The complete code can be found on Github.

    Read the article

  • NSURLSession and amazon S3 uploads

    - by George Green
    I have an app which currently uploads images to Amazon S3. I have been trying to switch it from using NSURLConnection to NSURLSession so that the uploads can continue while the app is in the background. I seem to be hitting a bit of an issue: the NSURLRequest is created and passed to the NSURLSession, but Amazon sends back a 403 (forbidden) response; if I pass the same request to an NSURLConnection, it uploads the file perfectly. Here is the code that creates the request:

        NSString *requestURLString = [NSString stringWithFormat:@"http://%@.%@/%@/%@", BUCKET_NAME, AWS_HOST, DIRECTORY_NAME, filename];
        NSURL *requestURL = [NSURL URLWithString:requestURLString];
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:requestURL
                                                                cachePolicy:NSURLRequestReloadIgnoringLocalAndRemoteCacheData
                                                            timeoutInterval:60.0];

        // Configure request
        [request setHTTPMethod:@"PUT"];
        [request setValue:[NSString stringWithFormat:@"%@.%@", BUCKET_NAME, AWS_HOST] forHTTPHeaderField:@"Host"];
        [request setValue:[self formattedDateString] forHTTPHeaderField:@"Date"];
        [request setValue:@"public-read" forHTTPHeaderField:@"x-amz-acl"];
        [request setHTTPBody:imageData];

    And then this signs the request (I think this came from another SO answer):

        NSString *contentMd5 = [request valueForHTTPHeaderField:@"Content-MD5"];
        NSString *contentType = [request valueForHTTPHeaderField:@"Content-Type"];
        NSString *timestamp = [request valueForHTTPHeaderField:@"Date"];
        if (nil == contentMd5) contentMd5 = @"";
        if (nil == contentType) contentType = @"";

        NSMutableString *canonicalizedAmzHeaders = [NSMutableString string];
        NSArray *sortedHeaders = [[[request allHTTPHeaderFields] allKeys] sortedArrayUsingSelector:@selector(caseInsensitiveCompare:)];
        for (id key in sortedHeaders) {
            NSString *keyName = [(NSString *)key lowercaseString];
            if ([keyName hasPrefix:@"x-amz-"]) {
                [canonicalizedAmzHeaders appendFormat:@"%@:%@\n", keyName, [request valueForHTTPHeaderField:(NSString *)key]];
            }
        }

        NSString *bucket = @"";
        NSString *path = request.URL.path;
        NSString *query = request.URL.query;
        NSString *host = [request valueForHTTPHeaderField:@"Host"];
        if (![host isEqualToString:@"s3.amazonaws.com"]) {
            bucket = [host substringToIndex:[host rangeOfString:@".s3.amazonaws.com"].location];
        }

        NSString *canonicalizedResource;
        if (nil == path || path.length < 1) {
            if (nil == bucket || bucket.length < 1) {
                canonicalizedResource = @"/";
            } else {
                canonicalizedResource = [NSString stringWithFormat:@"/%@/", bucket];
            }
        } else {
            canonicalizedResource = [NSString stringWithFormat:@"/%@%@", bucket, path];
        }
        if (query != nil && [query length] > 0) {
            canonicalizedResource = [canonicalizedResource stringByAppendingFormat:@"?%@", query];
        }

        NSString *stringToSign = [NSString stringWithFormat:@"%@\n%@\n%@\n%@\n%@%@", [request HTTPMethod], contentMd5, contentType, timestamp, canonicalizedAmzHeaders, canonicalizedResource];
        NSString *signature = [self signatureForString:stringToSign];
        [request setValue:[NSString stringWithFormat:@"AWS %@:%@", self.S3AccessKey, signature] forHTTPHeaderField:@"Authorization"];

    Then if I use this line of code:

        [NSURLConnection connectionWithRequest:request delegate:self];

    it works and uploads the file, but if I use:

        NSURLSessionUploadTask *task = [self.session uploadTaskWithRequest:request fromFile:[NSURL fileURLWithPath:filePath]];
        [task resume];

    I get the forbidden error. Has anyone tried uploading to S3 with this and hit similar issues? I wonder if it is to do with the way the session pauses and resumes uploads, or if it is doing something funny to the request. One possible solution would be to upload the file to an interim server that I control and have that forward it to S3 when it is complete... but this is clearly not an ideal solution! Any help is much appreciated! Thanks!

    Read the article

  • How to programmatically set the ContextKey of an AutoComplete Extender placed in a GridView's footer

    - by rism
    As per the thread title, I want to programmatically set the ContextKey of an AutoComplete Extender placed in a GridView's footer row. The background: I have a data model with Territories and Journey Plans for those territories. Customers need to be added to the journey plans, but only those customers that belong to the Territory that owns the Journey Plan. In the footer row of my grid I have added a textbox which allows a user to enter the account code of a customer. Attached to this textbox is an AutoComplete Extender. I need to do a select against the db for customers with an account code like the prefix, where the customer is in the territory. But there is no way to provide the territory id. I thought I could just do this for the relevant field in the grid:

        <asp:TemplateField HeaderStyle-Width="100" HeaderStyle-HorizontalAlign="Left" HeaderText="LKey"
                           ItemStyle-HorizontalAlign="Left" ItemStyle-Width="100">
            <ItemTemplate>
                <asp:Label ID="lblLKey" runat="server" Text='<%# Eval("LKey") %>' />
            </ItemTemplate>
            <FooterTemplate>
                <asp:TextBox ID="txtLKey" CssClass="sitepagetext" runat="server" MaxLength="15" Width="60" />
                <cc1:AutoCompleteExtender ID="Autocompleteextender1" MinimumPrefixLength="4"
                                          CompletionInterval="1000" CompletionSetCount="10"
                                          ServiceMethod="GetCompletionList"
                                          ContextKey="<% this.Controller.TerritoryId %>"
                                          TargetControlID="txtLKey" runat="server">
                </cc1:AutoCompleteExtender>
            </FooterTemplate>
        </asp:TemplateField>

    but when the page is run I get the following markup for the extender:

        Sys.Application.add_init(function() {
            $create(AjaxControlToolkit.AutoCompleteBehavior,
                {"contextKey":"\u003c% this.Controller.TerritoryId %\u003e",
                 "delimiterCharacters":"",
                 "id":"ctl00_ctl00_mainContentHolder_serviceContentHolder_qlgvJourneyPlanCustomers_ctl03_Autocompleteextender1",
                 "minimumPrefixLength":4,
                 "serviceMethod":"GetCompletionList",
                 "servicePath":"/Views/CRM/JourneyPlans/CustomersEditor.aspx",
                 "useContextKey":true},
                null, null,
                $get("ctl00_ctl00_mainContentHolder_serviceContentHolder_qlgvJourneyPlanCustomers_ctl03_txtLKey"));
        });
        //]]>
        </script>

    The ContextKey value doesn't get evaluated; it just uses the literal text. Any thoughts?

    Read the article

  • NHibernate which cache to use for WinForms application

    - by chiccodoro
    I have a C# WinForms application with a database backend (Oracle) and use NHibernate for O/R mapping. I would like to reduce communication with the database as much as possible, since the network here is quite slow, so I read about second-level caching. I found this quite good introduction, which lists the following available cache implementations. I'm wondering which implementation I should use for my application. The caching should be simple, it should not significantly slow down the first occurrence of a query, and it should not take much memory to load the implementing assemblies. (With NHibernate and Castle, the application already takes up to 80 MB of RAM!)
    - Velocity: uses Microsoft Velocity, which is a highly scalable in-memory application cache for all kinds of data.
    - Prevalence: uses Bamboo.Prevalence as the cache provider. Bamboo.Prevalence is a .NET implementation of the object prevalence concept brought to life by Klaus Wuestefeld in Prevayler. Bamboo.Prevalence provides transparent object persistence to deterministic systems targeting the CLR. It offers persistent caching for smart client applications.
    - SysCache: uses System.Web.Caching.Cache as the cache provider. This means that you can rely on the ASP.NET caching feature to understand how it works.
    - SysCache2: similar to NHibernate.Caches.SysCache; uses the ASP.NET cache. This provider also supports SQL dependency-based expiration, meaning that it is possible to configure certain cache regions to automatically expire when the relevant data in the database changes.
    - MemCache: uses memcached; memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Basically a distributed hash table.
    - SharedCache: high-performance, distributed and replicated memory object caching system. See here and here for more info.
    My considerations so far:
    - Velocity seems quite heavyweight and overkill (the files take a total of 467 KB of disk space; I haven't measured the RAM it takes so far because I didn't manage to make it run, see below).
    - Prevalence, at least in my first attempt, slowed down my query from ~0.5 secs to ~5 secs, and caching didn't work (see below).
    - SysCache seems to be for ASP.NET, not for WinForms.
    - MemCache and SharedCache seem to be for distributed scenarios.
    Which one would you suggest I use? There is also a built-in implementation, which of course is very lightweight, but the referenced article tells me that I "(...) should never use this cache provider for production code but only for testing." Besides the question of which fits my situation best, I also faced problems applying them:
    - Velocity complained that the "dcacheClient" tag was "not specified in the application configuration file. Specify valid tag in configuration file," although I created an app.config file for the assembly and pasted the example from this article.
    - Prevalence, as mentioned above, heavily slowed down my first query, and the next time the exact same query was executed, another select was sent to the database. Maybe I should "externalize" this topic into another post. I will do that if someone tells me it is absolutely unusual that a query is slowed down so much and they need further details to help me.

    Read the article

  • DDD: Aggregate Roots

    - by Mosh
    Hello, I need help with finding my aggregate root and boundary. I have 3 entities: Plan, PlannedRole and PlannedTraining. Each Plan can include many PlannedRoles and PlannedTrainings.

    Solution 1: At first I thought Plan is the aggregate root, because PlannedRole and PlannedTraining do not make sense out of the context of a Plan; they are always within a plan. Also, we have a business rule that says each Plan can have a maximum of 3 PlannedRoles and 5 PlannedTrainings, so I thought that by nominating the Plan as the aggregate root, I could enforce this invariant. However, we have a Search page where the user searches for Plans. The results show a few properties of the Plan itself (and none of its PlannedRoles or PlannedTrainings). I thought that if I have to load the entire aggregate, it would add a lot of overhead. There are nearly 3000 plans and each may have a few children; loading all these objects together and then ignoring PlannedRoles and PlannedTrainings in the search page doesn't make sense to me.

    Solution 2: I just realized the users want 2 more search pages where they can search for PlannedRoles or PlannedTrainings. That made me realize they are trying to access these objects independently and "out of" the context of a Plan, so I thought I was wrong about my initial design, and that is how I came up with this solution: 3 aggregates, one for each entity. This approach lets me search for each entity independently and also resolves the performance issue in solution 1. However, using this approach I cannot enforce the invariant I mentioned earlier. There is also another invariant that states a Plan can be changed only if it is of a certain status, so I shouldn't be able to add any PlannedRoles or PlannedTrainings to a Plan that is not in that status. Again, I can't enforce this invariant with the second approach.

    Any advice would be greatly appreciated. Cheers, Mosh

    Read the article

  • Solr WordDelimiterFilter + Lucene Highlighter

    - by Lucas
    I am trying to get the Highlighter class from Lucene to work properly with tokens coming from Solr's WordDelimiterFilter. It works 90% of the time, but if the matching text contains a ',' such as "1,500" the output is incorrect:

        Expected: 'test 1,500 this'
        Observed: 'test 11,500 this'

    I am not currently sure whether it is the Highlighter messing up the recombination or the WordDelimiterFilter messing up the tokenization, but something is unhappy. Here are the relevant dependencies from my pom:

        <dependency>
            <groupId>org.apache.lucene</groupId>
            <artifactId>lucene-core</artifactId>
            <version>2.9.3</version>
            <type>jar</type>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.lucene</groupId>
            <artifactId>lucene-highlighter</artifactId>
            <version>2.9.3</version>
            <type>jar</type>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.solr</groupId>
            <artifactId>solr-core</artifactId>
            <version>1.4.0</version>
            <type>jar</type>
            <scope>compile</scope>
        </dependency>

    And here is a simple JUnit test class demonstrating the problem:

        package test.lucene;

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertTrue;
        import java.io.IOException;
        import java.io.Reader;
        import java.util.HashMap;
        import org.apache.lucene.analysis.Analyzer;
        import org.apache.lucene.analysis.TokenStream;
        import org.apache.lucene.queryParser.ParseException;
        import org.apache.lucene.queryParser.QueryParser;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.highlight.Highlighter;
        import org.apache.lucene.search.highlight.InvalidTokenOffsetsException;
        import org.apache.lucene.search.highlight.QueryScorer;
        import org.apache.lucene.search.highlight.SimpleFragmenter;
        import org.apache.lucene.search.highlight.SimpleHTMLFormatter;
        import org.apache.lucene.util.Version;
        import org.apache.solr.analysis.StandardTokenizerFactory;
        import org.apache.solr.analysis.WordDelimiterFilterFactory;
        import org.junit.Test;

        public class HighlighterTester {
            private static final String PRE_TAG = "<b>";
            private static final String POST_TAG = "</b>";

            private static String[] highlightField( Query query, String fieldName, String text )
                    throws IOException, InvalidTokenOffsetsException {
                SimpleHTMLFormatter formatter = new SimpleHTMLFormatter( PRE_TAG, POST_TAG );
                Highlighter highlighter = new Highlighter( formatter, new QueryScorer( query, fieldName ) );
                highlighter.setTextFragmenter( new SimpleFragmenter( Integer.MAX_VALUE ) );
                return highlighter.getBestFragments( getAnalyzer(), fieldName, text, 10 );
            }

            private static Analyzer getAnalyzer() {
                return new Analyzer() {
                    @Override
                    public TokenStream tokenStream( String fieldName, Reader reader ) {
                        // Start with a StandardTokenizer
                        TokenStream stream = new StandardTokenizerFactory().create( reader );
                        // Chain on a WordDelimiterFilter
                        WordDelimiterFilterFactory wordDelimiterFilterFactory = new WordDelimiterFilterFactory();
                        HashMap<String, String> arguments = new HashMap<String, String>();
                        arguments.put( "generateWordParts", "1" );
                        arguments.put( "generateNumberParts", "1" );
                        arguments.put( "catenateWords", "1" );
                        arguments.put( "catenateNumbers", "1" );
                        arguments.put( "catenateAll", "0" );
                        wordDelimiterFilterFactory.init( arguments );
                        return wordDelimiterFilterFactory.create( stream );
                    }
                };
            }

            @Test
            public void TestHighlighter() throws ParseException, IOException, InvalidTokenOffsetsException {
                String fieldName = "text";
                String text = "test 1,500 this";
                String queryString = "1500";
                String expected = "test " + PRE_TAG + "1,500" + POST_TAG + " this";
                QueryParser parser = new QueryParser( Version.LUCENE_29, fieldName, getAnalyzer() );
                Query q = parser.parse( queryString );
                String[] observed = highlightField( q, fieldName, text );
                for ( int i = 0; i < observed.length; i++ ) {
                    System.out.println( "\t" + i + ": '" + observed[i] + "'" );
                }
                if ( observed.length > 0 ) {
                    System.out.println( "Expected: '" + expected + "'\n" + "Observed: '" + observed[0] + "'" );
                    assertEquals( expected, observed[0] );
                } else {
                    assertTrue( "No matches found", false );
                }
            }
        }

    Anyone have any ideas or suggestions?

    Read the article

  • How do I code this relationship in SQLAlchemy?

    - by Martin Del Vecchio
    I am new to SQLAlchemy (and SQL, for that matter). I can't figure out how to code the idea I have in my head. I am creating a database of performance-test results. A test run consists of a test type and a number (this is class TestRun below). A test suite consists of the version string of the software being tested, and one or more TestRun objects (this is class TestSuite below). A test version consists of all test suites with the given version name. Here is my code, as simple as I can make it:

        from sqlalchemy import *
        from sqlalchemy.ext.declarative import declarative_base
        from sqlalchemy.orm import relationship, backref, sessionmaker

        Base = declarative_base()

        class TestVersion (Base):
            __tablename__ = 'versions'
            id = Column (Integer, primary_key=True)
            version_name = Column (String)

            def __init__ (self, version_name):
                self.version_name = version_name

        class TestRun (Base):
            __tablename__ = 'runs'
            id = Column (Integer, primary_key=True)
            suite_directory = Column (String, ForeignKey ('suites.directory'))
            suite = relationship ('TestSuite', backref=backref ('runs', order_by=id))
            test_type = Column (String)
            rate = Column (Integer)

            def __init__ (self, test_type, rate):
                self.test_type = test_type
                self.rate = rate

        class TestSuite (Base):
            __tablename__ = 'suites'
            directory = Column (String, primary_key=True)
            version_id = Column (Integer, ForeignKey ('versions.id'))
            version_ref = relationship ('TestVersion', backref=backref ('suites', order_by=directory))
            version_name = Column (String)

            def __init__ (self, directory, version_name):
                self.directory = directory
                self.version_name = version_name

        # Create a v1.0 suite
        suite1 = TestSuite ('dir1', 'v1.0')
        suite1.runs.append (TestRun ('test1', 100))
        suite1.runs.append (TestRun ('test2', 200))

        # Create another v1.0 suite
        suite2 = TestSuite ('dir2', 'v1.0')
        suite2.runs.append (TestRun ('test1', 101))
        suite2.runs.append (TestRun ('test2', 201))

        # Create another suite
        suite3 = TestSuite ('dir3', 'v2.0')
        suite3.runs.append (TestRun ('test1', 102))
        suite3.runs.append (TestRun ('test2', 202))

        # Create the in-memory database
        engine = create_engine ('sqlite://')
        Session = sessionmaker (bind=engine)
        session = Session()
        Base.metadata.create_all (engine)

        # Add the suites in
        version1 = TestVersion (suite1.version_name)
        version1.suites.append (suite1)
        session.add (suite1)

        version2 = TestVersion (suite2.version_name)
        version2.suites.append (suite2)
        session.add (suite2)

        version3 = TestVersion (suite3.version_name)
        version3.suites.append (suite3)
        session.add (suite3)

        session.commit()

        # Query the suites
        for suite in session.query (TestSuite).order_by (TestSuite.directory):
            print "\nSuite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len (suite.runs))
            for run in suite.runs:
                print "  Test '%s', result %d" % (run.test_type, run.rate)

        # Query the versions
        for version in session.query (TestVersion).order_by (TestVersion.version_name):
            print "\nVersion %s has %d test suites:" % (version.version_name, len (version.suites))
            for suite in version.suites:
                print "  Suite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len (suite.runs))
                for run in suite.runs:
                    print "    Test '%s', result %d" % (run.test_type, run.rate)

    The output of this program:

        Suite directory dir1, version v1.0 has 2 test runs:
          Test 'test1', result 100
          Test 'test2', result 200

        Suite directory dir2, version v1.0 has 2 test runs:
          Test 'test1', result 101
          Test 'test2', result 201

        Suite directory dir3, version v2.0 has 2 test runs:
          Test 'test1', result 102
          Test 'test2', result 202

        Version v1.0 has 1 test suites:
          Suite directory dir1, version v1.0 has 2 test runs:
            Test 'test1', result 100
            Test 'test2', result 200

        Version v1.0 has 1 test suites:
          Suite directory dir2, version v1.0 has 2 test runs:
            Test 'test1', result 101
            Test 'test2', result 201

        Version v2.0 has 1 test suites:
          Suite directory dir3, version v2.0 has 2 test runs:
            Test 'test1', result 102
            Test 'test2', result 202

    This is not correct, since there are two TestVersion objects with the name 'v1.0'. I hacked my way around this by adding a private list of TestVersion objects, and a function to find a matching one:

        versions = []

        def find_or_create_version (version_name):
            # Find existing
            for version in versions:
                if version.version_name == version_name:
                    return (version)
            # Create new
            version = TestVersion (version_name)
            versions.append (version)
            return (version)

    Then I modified my code that adds the records to use it:

        # Add the suites in
        version1 = find_or_create_version (suite1.version_name)
        version1.suites.append (suite1)
        session.add (suite1)

        version2 = find_or_create_version (suite2.version_name)
        version2.suites.append (suite2)
        session.add (suite2)

        version3 = find_or_create_version (suite3.version_name)
        version3.suites.append (suite3)
        session.add (suite3)

    Now the output is what I want:

        Suite directory dir1, version v1.0 has 2 test runs:
          Test 'test1', result 100
          Test 'test2', result 200

        Suite directory dir2, version v1.0 has 2 test runs:
          Test 'test1', result 101
          Test 'test2', result 201

        Suite directory dir3, version v2.0 has 2 test runs:
          Test 'test1', result 102
          Test 'test2', result 202

        Version v1.0 has 2 test suites:
          Suite directory dir1, version v1.0 has 2 test runs:
            Test 'test1', result 100
            Test 'test2', result 200
          Suite directory dir2, version v1.0 has 2 test runs:
            Test 'test1', result 101
            Test 'test2', result 201

        Version v2.0 has 1 test suites:
          Suite directory dir3, version v2.0 has 2 test runs:
            Test 'test1', result 102
            Test 'test2', result 202

    This feels wrong to me; it doesn't feel right that I am manually keeping track of the unique version names, and manually adding the suites to the appropriate TestVersion objects. Is this code even close to being correct? And what happens when I'm not building the entire database from scratch, as in this example? If the database already exists, do I have to query the database's TestVersion table to discover the unique version names? Thanks in advance. I know this is a lot of code to wade through, and I appreciate the help.

    Read the article

  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third-party hosting organisation, which is painful. So I want a local database which is in some way representative of production, on which I can try out different things.

    Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production. Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.
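
    For what it's worth, a sketch of the usual DBMS_STATS route (my suggestion, not from the question; the schema name is a placeholder, and someone with production access would still have to run the export half):

        -- On production (run by someone with access):
        BEGIN
          DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APPUSER', stattab => 'PROD_STATS');
          DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APPUSER', stattab => 'PROD_STATS');
        END;
        /
        -- Move the PROD_STATS table across with exp/imp, then on development:
        BEGIN
          DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APPUSER', stattab => 'PROD_STATS');
        END;
        /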

    Read the article

  • Drupal's not reading correct values from DB

    - by John
    Hey Everyone, Here is my current problem. I am working with the chat module and I'm building a module that notifies users via AJAX that they have been invited to a chat. The current table structure for the invites table looks like this:

        |------|-----|-------------|-------------|----------|----------|
        | CCID | NID | INVITER_UID | INVITEE_UID | NOTIFIED | ACCEPTED |
        |------|-----|-------------|-------------|----------|----------|
        | int  | int | int         | int         | (0 or 1) | (0 or 1) |
        |------|-----|-------------|-------------|----------|----------|

    I'm using the periodical updater plug-in for jQuery to continually poll the server to check for invites. When an invite is found, I set the notified from 0 to 1. However, my problem is the periodical updater. When I first see that there is an invite, I notify the user and set notified to 1. On the next select, though, I get the same results as before, as if the update didn't work. But when I go check the database, I can see that it worked just fine. It's as if the query is querying a cache, but I can't figure it out. My code for the periodical updater is as follows:

        window.onload = function() {
            var uid = $('a#chat_uid').html();
            $.PeriodicalUpdater(
                '/steelylib/sites/all/modules/_chat_whos_online/ajax/ajax.php', // url to service
                {
                    method: 'get',       // send data via...
                    data: {uid: uid},    // data to send
                    minTimeout: '1000',  // min time before server is polled (milli-sec.)
                    maxTimeout: '20000', // max time before server is polled (milli-sec.)
                    multiplyer: '1.5',   // multiply against current poll time every time constant data is returned
                    type: 'text',        // type of data received (response type)
                    maxCalls: 0,         // max calls to make (0=unlimited)
                    autoStop: 0          // max calls with constant data (0=unlimited/disabled)
                },
                function(data) // callback function
                {
                    alert( data ); // for now, until i get it working
                }
            );
        }

    And my code for the ajax call is as follows:

        <?php
        #bootstrap Drupal, and call function, passing current user's uid.
        function _create_chat_node_check_invites($uid) {
            cache_clear_all('chatroom_chat_list', 'cache');
            $query = "SELECT * FROM {chatroom_chat_invite} WHERE notified=0 AND invitee_uid=%d and accepted=0";
            $query_results = db_query( $query, $uid );
            $json = '{"invites":[';
            while( $row = db_fetch_object($query_results) ) {
                var_dump($row);
                global $base_url;
                $url = $base_url . '/content/privatechat' . $uid .'-' . $row->inviter_uid;
                $inviter = db_fetch_object( db_query( "SELECT name FROM {users} WHERE uid = %d", $row->inviter_uid ) );
                $invitee = db_fetch_object( db_query( "SELECT name FROM {users} WHERE uid = %d", $row->invitee_uid ) );
                #reset table
                $query = "UPDATE {chatroom_chat_invite} "
                    ."SET notified=1 "
                    ."WHERE inviter_uid=%d AND invitee_uid=%d";
                db_query( $query, $row->inviter_uid, $row->invitee_uid );
                $json .= '[';
                $json .= '"' . $url . '",';
                $json .= '"' . ($inviter->name) . '",';
                $json .= '"' . ($invitee->name) . '"';
                $json .= '],';
            }
            $json = substr($json, 0, -1);
            $json .= ']}';
            return $json;
        }
        ?>

    I can't figure out what is going wrong; any help is greatly appreciated!

    Read the article

  • How to create nested ViewComponents in Monorail and NVelocity?

    - by rob_g
    I have been asked to update the menu on a website we maintain. The website uses Castle MonoRail with NVelocity as the template engine. The menu is currently rendered using custom-made subclasses of ViewComponent, which render li elements. At the moment there is only one (horizontal) level, so the current mechanism is fine. I have been asked to add drop-down menus to some of the existing menus. As this is the first time I have seen MonoRail and NVelocity, I'm a little lost. What currently exists:

        <ul>
          #component(MenuComponent with "title=Home" "hover=autoselect" "link=/")
          #component(MenuComponent with "title=Videos" "hover=autoselect")
          #component(MenuComponent with "title=VPS" "hover=autoselect" "link=/vps")
          #component(MenuComponent with "title=Add-Ons" "hover=autoselect" "link=/addons")
          #component(MenuComponent with "title=Hosting" "hover=autoselect" "link=/hosting")
          #component(MenuComponent with "title=Support" "hover=autoselect" "link=/support")
          #component(MenuComponent with "title=News" "hover=autoselect" "link=/news")
          #component(MenuComponent with "title=Contact Us" "hover=autoselect" "link=/contact-us")
        </ul>

    Is it possible to have nested MenuComponents (or a new SubMenuComponent), something like:

        <ul>
          #component(MenuComponent with "title=Home" "hover=autoselect" "link=/")
          #component(MenuComponent with "title=Videos" "hover=autoselect")
          #blockcomponent(MenuComponent with "title=VPS" "hover=autoselect" "link=/vps")
            #component(SubMenuComponent with "title=Plans" "hover=autoselect" "link=/vps/plans")
            #component(SubMenuComponent with "title=Operating Systems" "hover=autoselect" "link=/vps/os")
            #component(SubMenuComponent with "title=Supported Applications" "hover=autoselect" "link=/vps/apps")
          #end
          #component(MenuComponent with "title=Add-Ons" "hover=autoselect" "link=/addons")
          #component(MenuComponent with "title=Hosting" "hover=autoselect" "link=/hosting")
          #component(MenuComponent with "title=Support" "hover=autoselect" "link=/support")
          #component(MenuComponent with "title=News" "hover=autoselect" "link=/news")
          #component(MenuComponent with "title=Contact Us" "hover=autoselect" "link=/contact-us")
        </ul>

    I need to draw the sub menu (ul and li elements) inside the overridden Render method on MenuComponent, so using nested ViewComponent derivatives may not work. I would like to keep the basically declarative method of creating menus, if at all possible.

    Read the article
