Search Results

Search found 17627 results on 706 pages for 'hierarchical query'.

  • Remove a bad/erroneous WebPart from a SharePoint page

    - by KunaalKapoor
    If you've added a poorly written webpart to your 'default.aspx' page, the consequence is that you won't be able to load the page anymore... Don't be sad, there is still a way to remove the webpart from the page :) (Yes, even removing the webpart from the webpart gallery would not solve this issue.) Steps to fix this:
    1. Append the query string ?Contents=1 to your URL. Once you've added Contents=1 as a query string to the webpart page's URL, it will display the Web Part Maintenance Page. Example: http://mysharepointserver/default.aspx?contents=1
    2. On that page you can now see the webparts added to the page; delete the problematic webpart. Now try reloading the default.aspx page... Tadaaa!!! You can view your page again :)
    3. Leave a thank you note @ comments section :)

  • MySQL Enterprise Monitor 3.0.3 Is Now Available

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 3.0.3 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud with the November update in about 1 week. This is a maintenance release that fixes a number of bugs. You can find more information on the contents of this release in the change log.
    You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then use the "Product or Family (Advanced Search)" feature. You will also find the binaries on the Oracle Software Delivery Cloud in approximately 1 week. Choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products.
    Based on feedback from our customers, MySQL Enterprise Monitor (MEM) 3.0 offers many significant improvements over previous releases. Highlights include:
    - Policy-based automatic scheduling of rules and event handling (including email notifications) make administration of scale-out easier and automatic
    - Enhancements such as automatic discovery of MySQL instances, centralized agent configuration and multi-instance monitoring further improve ease of configuration and management
    - The new cloud and virtualization-friendly, "agent-less" design allows remote monitoring of MySQL databases without the need for any remote agents
    - Trends, projections and forecasting: graphs and event handlers inform you in advance of impending file system capacity problems
    - Zero Configuration Query Analyzer: works "out of the box" with MySQL 5.6 Performance_Schema (supported by 5.6.14 or later)
    - False positives from flapping or spikes are avoided using exponential moving averages and other statistical techniques
    - Advisors can analyze data across an entire group; for example, the Replication Configuration Advisor can scan an entire topology to find common configuration errors like duplicate server UUIDs or a slave whose version is less than its master's
    More information on the contents of this release is available here:
    - What's new in MySQL Enterprise Monitor 3.0?
    - MySQL Enterprise Edition: Demos
    - MySQL Enterprise Monitor Frequently Asked Questions
    - MySQL Enterprise Monitor Change History
    More information on MySQL Enterprise and the Enterprise Monitor can be found here:
    - http://www.mysql.com/products/enterprise/
    - http://www.mysql.com/products/enterprise/monitor.html
    - http://www.mysql.com/products/enterprise/query.html
    - http://forums.mysql.com/list.php?142
    If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact. If you haven't looked at MEM recently, and especially MEM 3.0, please do so now and let us know what you think.
    Thanks and Happy Monitoring!
    - The MySQL Enterprise Tools Development Team

  • Cannot save all of the property settings for this Web Part.

    - by ybbest
    I would like to display all the items of the custom content type Animals in a SharePoint Content Query WebPart. After choosing the appropriate values in the web part editor and clicking Save, I got this error: "Cannot save all of the property settings for this Web Part. There is a problem with one or more of the field values below." But when I examined all the values below, none of them was flagged with any error information. I finally managed to locate the error flag after I expanded the Presentation section. I then deleted the text in the Link textbox, and now I can save the settings. However, I think the error message should have been more specific so that users can quickly locate the error. The worst part is that I did not even change anything in the Presentation section; I merely configured the Source in the Query section. Well, I guess I am still new to SharePoint; I just have to get used to these generic error messages :(

  • Get script for every action in SQL Server Management Studio

    I am always careful to keep a record of all operations performed on my database servers. Operations through T-SQL in an SSMS query pane can easily be saved in query files. For table modifications through the SSMS designer, I have a predefined setting to generate T-SQL scripts. However, there are numerous database- and server-level tasks for which I use the SSMS GUI, and I would like to have a script of these changes for later reference. Examples of such actions through the SSMS GUI are backup/restore, changing the compatibility level of a database, manipulating permissions, dealing with database or log files, or creating/manipulating any login/user. I am looking for a way to generate T-SQL code for such actions, so that it may be kept for later reference.
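
    As an aside (my addition, not part of the original post): one possible starting point is SQL Server's default trace, which records many DDL and security events out of the box. It does not hand back ready-made T-SQL for every GUI action, but it does show what was done, when and by whom. A sketch:

        -- Locate the default trace file, then read the events it captured.
        -- Assumes the default trace is enabled (it is unless someone disabled it).
        DECLARE @path nvarchar(260);
        SELECT @path = path FROM sys.traces WHERE is_default = 1;

        SELECT te.name AS event_name,
               t.DatabaseName,
               t.ObjectName,
               t.LoginName,
               t.StartTime,
               t.TextData
        FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
        JOIN sys.trace_events AS te ON t.EventClass = te.trace_event_id
        ORDER BY t.StartTime DESC;

    For a deliberate audit trail, a server-side trace or SQL Server Audit scoped to the relevant event classes would be the sturdier option.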

  • Writing a spell checker similar to "did you mean"

    - by user888734
    I'm hoping to write a spellchecker for search queries in a web application - not unlike Google's "Did you mean?" The algorithm will be loosely based on this: http://catalog.ldc.upenn.edu/LDC2006T13 In short, it generates correction candidates and scores them on how often they appear (along with adjacent words in the search query) in an enormous dataset of known n-grams - Google Web 1T - which contains well over 1 billion 5-grams. I'm not using the Web 1T dataset, but building my n-gram sets from my own documents - about 200k docs, and I'm estimating tens or hundreds of millions of n-grams will be generated. This kind of process is pushing the limits of my understanding of basic computing performance - can I simply load my n-grams into memory in a hashtable or dictionary when the app starts? Is the only limiting factor the amount of memory on the machine? Or am I barking up the wrong tree? Perhaps putting all my n-grams in a graph database with some sort of tree query optimisation? Could that ever be fast enough?
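
    As a rough feasibility sketch (my addition, not part of the original question): a plain in-memory hash table is worth measuring before reaching for a graph database, since the limiting factor is dict overhead (very roughly on the order of a hundred bytes per entry in Python) rather than lookup speed. The file name and format below are hypothetical:

        # Load bigram counts from a tab-separated file ("word1 word2<TAB>count")
        # and score correction candidates by how often they co-occur with the
        # neighbouring query word.
        import sys

        def load_ngrams(path):
            counts = {}
            with open(path, encoding="utf-8") as f:
                for line in f:
                    ngram, _, count = line.rstrip("\n").partition("\t")
                    counts[ngram] = int(count)
            return counts

        def score(candidate, previous_word, counts):
            return counts.get(previous_word + " " + candidate, 0)

        if __name__ == "__main__":
            counts = load_ngrams(sys.argv[1])
            print(score("spelling", "correct", counts))

    If the table does not fit in RAM, the usual fallbacks are hashing each n-gram to a 64-bit integer, quantizing the counts, or keeping the counts in an on-disk key-value store; a graph database buys little here because the lookup is exact-match, not traversal.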

  • SSRS optional parameters settings

    - by Natasa Gavrilovic
    Recently I had to create a couple of SQL Server Reporting Services (SSRS) reports with optional parameters built in. It took me a while to refresh my memory on how this can be done. It was very simple to create the reports and the processes behind them, but connecting the two was a little bit challenging – the stored procedure was tested and worked fine, but when the report passed optional parameters it didn't return the expected results. After tweaking the SQL stored procedure and the report's parameter options, the following approach turned out to be the winning one.
    1) Defining report parameters:
       - From the menu bar, select 'View' and then 'Report Data'.
       - The newly opened window should display a 'Parameters' folder.
       - Right-click on this folder and select 'Add new parameter...'.
       - Default values need to be added from a query.
       - The query's values need to include '' (an empty string) – as highlighted.
    2) The SQL stored procedure should have CASE statements inside the WHERE clause; this was the only way the report got correct results back. See the sketch below.
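
    To make step 2 concrete, here is a minimal sketch of that WHERE pattern (the table, column and parameter names are invented for illustration); the empty string sent by the report is treated as "no filter":

        -- Assumes @City is the optional report parameter and ''
        -- (empty string) means "not supplied".
        CREATE PROCEDURE dbo.GetOrders
            @City varchar(50) = ''
        AS
        BEGIN
            SELECT o.OrderID, o.City, o.OrderDate
            FROM dbo.Orders AS o
            WHERE o.City = CASE WHEN @City = '' THEN o.City
                                ELSE @City END;
            -- Note: rows where o.City IS NULL never satisfy the comparison;
            -- add OR (o.City IS NULL AND @City = '') if they must be kept.
        END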

  • When and How is an image cached for an ASPX with ContentType = image/jpeg ?

    - by Aamir Hasan
    In ASP.NET you can cache your page. You can vary the output cache by the following:
    - The query string in an initial request (HTTP GET).
    - Control values passed on postback (HTTP POST values).
    - The HTTP headers passed with a request.
    - The major version number of the browser making the request.
    - A custom string in the page. In that case, you create custom code in the Global.asax file to specify the page's caching behavior.
    Link: http://msdn2.microsoft.com/en-us/library/xadzbzd6(VS.80).aspx
    You can set output caching for your GetImage.aspx so that you don't have to re-query the database on every image request, but you must use VaryByParam so that you have a cached version for every arrangement of parameters. Set the output cache for your page like this, at the top of the ASPX page:
        <%@ OutputCache Duration="600" VaryByParam="ID,Height,Width" %>
    The VaryByParam attribute allows you to vary the cached output depending on the query string. Adding this will make your images cached for 600 seconds, so that if the image is requested within this period, the cached version will be returned.

  • Slow in the Application, Fast in SSMS? Understanding Performance Mysteries

    When I read various forums about SQL Server, I frequently see questions from deeply mystified posters. They have identified a slow query or stored procedure in their application. They cull the SQL batch from the application and run it in SQL Server Management Studio (SSMS) to analyse it, only to find that the response is instantaneous. At this point they are inclined to think that SQL Server is all about magic. A similar mystery is when a developer has extracted a query in his stored procedure to run it stand-alone only to find that it runs much faster – or much slower – than inside the procedure.

  • Classless tables possible with Datamapper?

    - by barerd
    I have an Item class with the following attributes: itemId, name, weight, volume, price, required_skills, required_items. Since the last two attributes are going to be multivalued, I removed them and created new schemas like: itemID, required_skill (itemID is a foreign key; itemID and required_skill together are the primary key). Now I'm confused about how to create/use this new table. Here are the options that came to my mind:
    1) The relationship between Items and Required_skills is one-to-many, so I may create a RequiredSkill class, which belongs_to Item, which in turn has n RequiredSkills. Then I can do Item.get(1).requiredskills. This sounds most logical to me; a sketch follows below.
    2) Since required_skills may well be thought of as constants (since they resemble rules), I may put them into a hash, a gdbm database or another SQL table and query from there, which I don't prefer.
    My question is: is there something like a model-less table in DataMapper, where DataMapper is responsible for the creation and integrity of the table and allows me to query it in the DataMapper way, but does not require a class, as I could do it in SQL?
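
    For option 1, the extra model is short enough that it costs very little. A sketch of how it might look with the Ruby DataMapper gem (the property names are guesses based on the description above):

        require 'data_mapper'

        class Item
          include DataMapper::Resource
          property :id,     Serial
          property :name,   String
          property :weight, Float
          has n, :required_skills          # one-to-many side
        end

        class RequiredSkill
          include DataMapper::Resource
          property :id,    Serial
          property :skill, String
          belongs_to :item                 # adds the item_id foreign key
        end

    With that in place, Item.get(1).required_skills works as in option 1. DataMapper always maps a table through a model, so a truly model-less table would mean dropping down to raw SQL through the adapter.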

  • BIP 11g Dynamic SQL

    - by Tim Dexter
    Back in the 10g release, if you wanted something beyond the standard query for your report extract, you needed to break out your favorite text editor. You gotta love 'vi' and hate emacs, am I right? And get to building a data template; they were/are lovely to write, such fun ... not! It's not fun writing them by hand, but you do get to do some cool stuff around the data extract, including dynamic SQL. By that I mean the ability to add content dynamically to your query at runtime. With 11g, we spoiled you with a visual builder: no more vi or notepad sessions, a friendly drag and drop interface allowing you to build hierarchical data sets, calculated columns, summary columns, etc. You can still create the dynamic SQL statements; it's just not so well documented right now, so in lieu of doc updates here's the skinny.
    If you check out the 10g process to create dynamic SQL in the docs, you'll see you need to create a data trigger function where you assign the dynamic SQL to a global variable that's matched in your report SQL. In 11g, the process is really the same; BI Publisher just provides a bit more help to define what trigger code needs to be called. You still need to create the function and place it inside a package in the db. Here's a simple PL/SQL package with the 'beforedata' function trigger.
    Spec:
        create or replace PACKAGE BIREPORTS AS
          whereCols varchar2(2000);
          FUNCTION beforeReportTrig return boolean;
        end BIREPORTS;
    Body:
        create or replace PACKAGE BODY BIREPORTS AS
          FUNCTION beforeReportTrig return boolean AS
          BEGIN
            whereCols := ' and d.department_id = 100';
            RETURN true;
          END beforeReportTrig;
        END BIREPORTS;
    You'll notice the additional where clause (whereCols - declared as a public variable) is hard coded. I'll cover parameterizing that in my next post. If you can not wait, check the 10g docs for an example. I have my package compiling successfully in the db. Now, onto the BIP data model definition.
    1. Create a new data model and go ahead and create your query(s) as you would normally.
    2. In the query dialog box, add in the variables you want replaced at runtime using an ampersand rather than a colon, e.g. &whereCols:
        select d.DEPARTMENT_NAME, ...
        from "OE"."EMPLOYEES" e, "OE"."DEPARTMENTS" d
        where d."DEPARTMENT_ID" = e."DEPARTMENT_ID"
        &whereCols
    Note that 'whereCols' matches the global variable name in our package. When you click OK to clear the dialog, you'll be asked for a default value for the variable; just use ' and 1=1'. That leading space is important to keep the SQL valid, i.e. required whitespace. This value will be used for the where clause in case it's not set by the function code.
    3. Now click on the Event Triggers tree node and create a new trigger of the type Before Data. Type in the default package name, in my example 'BIREPORTS', then hit the update button to get BIP to fetch the valid functions. In my case I get to see the BEFOREREPORTTRIG function; select it (or your name) and shuttle it across.
    4. Save your data model and now test it. For now, you can update the where clause via the PL/SQL package. Next time ... parameterizing the dynamic clause.

  • What is the best way to INSERT a large dataset into a MySQL database (or any database in general)

    - by Joe
    As part of a PHP project, I have to insert a row into a MySQL database. I'm obviously used to doing this, but this one required inserting into 90 columns in one query. The resulting query looks horrible and monolithic (especially with my PHP variables inserted as the values): INSERT INTO mytable (column1, column2, ..., column90) VALUES ('value1', 'value2', ..., 'value90') and I'm concerned that I'm not going about this in the right way. It also took me a long (boring) time just to type everything in, and I fear writing the test code will be equally tedious. How do professionals go about quickly writing and testing these queries? Is there a way I can speed up the process?
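
    One common way to tame a 90-column INSERT (a sketch of mine, not from the question) is to keep the data in a single associative array so that each column name is typed exactly once, and let the code build the column and placeholder lists:

        <?php
        // Assumes a PDO connection in $pdo; the table and column names
        // stand in for the real 90 columns.
        $data = [
            'column1' => 'value1',
            'column2' => 'value2',
            // ... remaining columns ...
        ];

        $columns      = implode(', ', array_keys($data));
        $placeholders = implode(', ', array_fill(0, count($data), '?'));

        $sql  = "INSERT INTO mytable ($columns) VALUES ($placeholders)";
        $stmt = $pdo->prepare($sql);
        $stmt->execute(array_values($data));

    Bound placeholders also remove the quoting problems that come from pasting PHP variables straight into the SQL string. For genuinely large datasets, MySQL's multi-row VALUES syntax or LOAD DATA INFILE is the usual bulk path.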

  • Free eBook: SQL Server Execution Plans, Second Edition

    Every day, out in the various online forums devoted to SQL Server, and on Twitter, the same types of questions come up repeatedly: Why is this query running slowly? Why is SQL Server ignoring my index? Why does this query run quickly sometimes and slowly at others? My response is the same in each case: have you looked at the execution plan?

  • Listing SQL Columns

    - by Bunch
    When I am writing up stored procedures in SSMS, sometimes I need to know what column types are used in a table. For instance, I will know the table name but I might not remember exactly the length of a varchar column or if a column stored the data as an integer or varchar. And I may not want to scroll through all the tables in Object Explorer to find the one I want. A lot of times it is easier if I can just write a quick query to pull up the information I need. The syntax to do something like this is pretty easy:
        SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
        FROM yourdbname.information_schema.columns
        WHERE TABLE_NAME = 'yourtablename'
    After running that you will get a listing in the Results pane, just like any other query, with the column name, data type and length (if any).

  • IXRepository and test problems

    - by Ridermansb
    Recently I had a doubt about how and where to test repository methods. Consider the following situation: I have an interface IRepository like this:
        public interface IRepository<T> where T : class, IEntity
        {
            IQueryable<T> Query(Expression<Func<T, bool>> expression);
            // ... omitted
        }
    And a generic implementation of IRepository:
        public class Repository<T> : IRepository<T> where T : class, IEntity
        {
            public IQueryable<T> Query(Expression<Func<T, bool>> expression)
            {
                return All().Where(expression).AsQueryable();
            }
        }
    This is a base implementation that can be used by any repository. It contains the basic implementation of my ORM. Some repositories have specific filters; in that case we will use IEmployeeRepository with a specific filter:
        public interface IEmployeeRepository : IRepository<Employee>
        {
            IQueryable<Employee> GetInactiveEmployees();
        }
    And the implementation of IEmployeeRepository:
        public class EmployeeRepository : Repository<Employee>, IEmployeeRepository
        // TODO: I have a dependency on the ORM at this point in Repository<Employee>.
        // How to solve this? How to test the GetInactiveEmployees method?
        {
            public IQueryable<Employee> GetInactiveEmployees()
            {
                return Query(p => p.Status != StatusEmployeeEnum.Active || p.StartDate < DateTime.Now);
            }
        }
    Questions:
    1. Is it right to inherit from Repository<Employee>? The goal is to reuse code, since the whole implementation of IRepository has already been written. If EmployeeRepository inherited only IEmployeeRepository, I would have to literally copy and paste the code of Repository<T>.
    2. In our example, in EmployeeRepository : Repository<Employee>, our Repository lies in our ORM layer. We have a dependency here on our ORM, making it impossible to perform a unit test. How do I create a unit test to ensure that the filter GetInactiveEmployees returns all Employees where Status != Active and StartDate < DateTime.Now? I can not create a Fake/Mock of IEmployeeRepository, because then what would I be testing? I need to test the actual implementation of GetInactiveEmployees.
    The complete code can be found on Github.
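
    One way to unit-test GetInactiveEmployees without touching the ORM (a sketch; it assumes All() is, or can be made, virtual so that a test double can supply in-memory data - the fake below is hypothetical):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using NUnit.Framework;

        public class FakeEmployeeRepository : EmployeeRepository
        {
            private readonly List<Employee> _data;
            public FakeEmployeeRepository(List<Employee> data) { _data = data; }
            public override IQueryable<Employee> All() { return _data.AsQueryable(); }
        }

        [TestFixture]
        public class EmployeeRepositoryTests
        {
            [Test]
            public void GetInactiveEmployees_ExcludesActiveFutureEmployees()
            {
                var repo = new FakeEmployeeRepository(new List<Employee>
                {
                    new Employee { Status = StatusEmployeeEnum.Active,
                                   StartDate = DateTime.Now.AddDays(1) },  // excluded
                    new Employee { Status = StatusEmployeeEnum.Inactive,
                                   StartDate = DateTime.Now.AddDays(1) }   // included
                });

                Assert.AreEqual(1, repo.GetInactiveEmployees().Count());
            }
        }

    This keeps Repository<Employee> as the single ORM-facing base class while the filter itself is exercised against plain LINQ to Objects.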

  • NSURLSession and amazon S3 uploads

    - by George Green
    I have an app which is currently uploading images to amazon S3. I have been trying to switch it from using NSURLConnection to NSURLSession so that the uploads can continue while the app is in the background! I seem to be hitting a bit of an issue. The NSURLRequest is created and passed to the NSURLSession, but amazon sends back a 403 - forbidden response; if I pass the same request to an NSURLConnection it uploads the file perfectly. Here is the code that creates the request:
        NSString *requestURLString = [NSString stringWithFormat:@"http://%@.%@/%@/%@", BUCKET_NAME, AWS_HOST, DIRECTORY_NAME, filename];
        NSURL *requestURL = [NSURL URLWithString:requestURLString];
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:requestURL cachePolicy:NSURLRequestReloadIgnoringLocalAndRemoteCacheData timeoutInterval:60.0];
        // Configure request
        [request setHTTPMethod:@"PUT"];
        [request setValue:[NSString stringWithFormat:@"%@.%@", BUCKET_NAME, AWS_HOST] forHTTPHeaderField:@"Host"];
        [request setValue:[self formattedDateString] forHTTPHeaderField:@"Date"];
        [request setValue:@"public-read" forHTTPHeaderField:@"x-amz-acl"];
        [request setHTTPBody:imageData];
    And then this signs the request (I think this came from another SO answer):
        NSString *contentMd5 = [request valueForHTTPHeaderField:@"Content-MD5"];
        NSString *contentType = [request valueForHTTPHeaderField:@"Content-Type"];
        NSString *timestamp = [request valueForHTTPHeaderField:@"Date"];
        if (nil == contentMd5) contentMd5 = @"";
        if (nil == contentType) contentType = @"";
        NSMutableString *canonicalizedAmzHeaders = [NSMutableString string];
        NSArray *sortedHeaders = [[[request allHTTPHeaderFields] allKeys] sortedArrayUsingSelector:@selector(caseInsensitiveCompare:)];
        for (id key in sortedHeaders) {
            NSString *keyName = [(NSString *)key lowercaseString];
            if ([keyName hasPrefix:@"x-amz-"]){
                [canonicalizedAmzHeaders appendFormat:@"%@:%@\n", keyName, [request valueForHTTPHeaderField:(NSString *)key]];
            }
        }
        NSString *bucket = @"";
        NSString *path = request.URL.path;
        NSString *query = request.URL.query;
        NSString *host = [request valueForHTTPHeaderField:@"Host"];
        if (![host isEqualToString:@"s3.amazonaws.com"]) {
            bucket = [host substringToIndex:[host rangeOfString:@".s3.amazonaws.com"].location];
        }
        NSString* canonicalizedResource;
        if (nil == path || path.length < 1) {
            if ( nil == bucket || bucket.length < 1 ) {
                canonicalizedResource = @"/";
            } else {
                canonicalizedResource = [NSString stringWithFormat:@"/%@/", bucket];
            }
        } else {
            canonicalizedResource = [NSString stringWithFormat:@"/%@%@", bucket, path];
        }
        if (query != nil && [query length] > 0) {
            canonicalizedResource = [canonicalizedResource stringByAppendingFormat:@"?%@", query];
        }
        NSString* stringToSign = [NSString stringWithFormat:@"%@\n%@\n%@\n%@\n%@%@", [request HTTPMethod], contentMd5, contentType, timestamp, canonicalizedAmzHeaders, canonicalizedResource];
        NSString *signature = [self signatureForString:stringToSign];
        [request setValue:[NSString stringWithFormat:@"AWS %@:%@", self.S3AccessKey, signature] forHTTPHeaderField:@"Authorization"];
    Then if I use this line of code:
        [NSURLConnection connectionWithRequest:request delegate:self];
    It works and uploads the file, but if I use:
        NSURLSessionUploadTask *task = [self.session uploadTaskWithRequest:request fromFile:[NSURL fileURLWithPath:filePath]];
        [task resume];
    I get the forbidden error..!? Has anyone tried uploading to S3 with this and hit similar issues?
I wonder if it is to do with the way the session pauses and resumes uploads, or it is doing something funny to the request..? One possible solution would be to upload the file to an interim server that I control and have that forward it to S3 when it is complete... but this is clearly not an ideal solution! Any help is much appreciated!! Thanks!

  • NHibernate which cache to use for WinForms application

    - by chiccodoro
    I have a C# WinForms application with a database backend (Oracle) and use NHibernate for O/R mapping. I would like to reduce communication with the database as much as possible, since the network in here is quite slow, so I read about second-level caching. I found this quite good introduction, which lists the following available cache implementations. I'm wondering which implementation I should use for my application. The caching should be simple; it should not significantly slow down the first occurrence of a query, and it should not take much memory to load the implementing assemblies. (With NHibernate and Castle, the application already takes up to 80 MB of RAM!)
    - Velocity: uses Microsoft Velocity, which is a highly scalable in-memory application cache for all kinds of data.
    - Prevalence: uses Bamboo.Prevalence as the cache provider. Bamboo.Prevalence is a .NET implementation of the object prevalence concept brought to life by Klaus Wuestefeld in Prevayler. Bamboo.Prevalence provides transparent object persistence to deterministic systems targeting the CLR. It offers persistent caching for smart client applications.
    - SysCache: uses System.Web.Caching.Cache as the cache provider. This means that you can rely on the ASP.NET caching feature to understand how it works.
    - SysCache2: similar to NHibernate.Caches.SysCache; uses the ASP.NET cache. This provider also supports SQL dependency-based expiration, meaning that it is possible to configure certain cache regions to automatically expire when the relevant data in the database changes.
    - MemCache: uses memcached; memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load. Basically a distributed hash table.
    - SharedCache: high-performance, distributed and replicated memory object caching system. See here and here for more info.
    My considerations so far were:
    - Velocity seems quite heavyweight and overkill (the files take a total of 467 KB of disk space; I haven't measured the RAM it takes so far because I didn't manage to make it run, see below).
    - Prevalence, at least in my first attempt, slowed down my query from ~0.5 secs to ~5 secs, and caching didn't work (see below).
    - SysCache seems to be for ASP.NET, not for WinForms.
    - MemCache and SharedCache seem to be for distributed scenarios.
    Which one would you suggest I use? There is also a built-in implementation, which of course is very lightweight, but the referenced article tells me that I "(...) should never use this cache provider for production code but only for testing."
    Besides the question of which fits best into my situation, I also faced problems applying them: Velocity complained that the "dcacheClient" tag was not specified in the application configuration file ("Specify valid tag in configuration file"), although I created an app.config file for the assembly and pasted the example from this article. Prevalence, as mentioned above, heavily slowed down my first query, and the next time the exact same query was executed, another select was sent to the database. Maybe I should "externalize" this topic into another post; I will do that if someone tells me it is absolutely unusual that a query is slowed down so much and that further details are needed to help me.
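
    Whichever provider wins, the wiring is just configuration. As a point of reference, a SysCache setup looks roughly like this (a sketch; it assumes the NHibernate.Caches.SysCache assembly is referenced - and note that System.Web.Caching.Cache does work outside of ASP.NET, so WinForms is not automatically ruled out):

        <!-- inside the hibernate-configuration / session-factory section -->
        <property name="cache.provider_class">
            NHibernate.Caches.SysCache.SysCacheProvider, NHibernate.Caches.SysCache
        </property>
        <property name="cache.use_second_level_cache">true</property>
        <property name="cache.use_query_cache">true</property>

    Individual entities and queries still have to be marked cacheable before anything is actually cached.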

  • Solr WordDelimiterFilter + Lucene Highlighter

    - by Lucas
    I am trying to get the Highlighter class from Lucene to work properly with tokens coming from Solr's WordDelimiterFilter. It works 90% of the time, but if the matching text contains a ',' such as "1,500" the output is incorrect:
        Expected: 'test 1,500 this'
        Observed: 'test 11,500 this'
    I am not currently sure whether it is the Highlighter messing up the recombination or the WordDelimiterFilter messing up the tokenization, but something is unhappy. Here are the relevant dependencies from my pom:
        org.apache.lucene:lucene-core:2.9.3 (compile)
        org.apache.lucene:lucene-highlighter:2.9.3 (compile)
        org.apache.solr:solr-core:1.4.0 (compile)
    And here is a simple JUnit test class demonstrating the problem:
        package test.lucene;

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertTrue;
        import java.io.IOException;
        import java.io.Reader;
        import java.util.HashMap;
        import org.apache.lucene.analysis.Analyzer;
        import org.apache.lucene.analysis.TokenStream;
        import org.apache.lucene.queryParser.ParseException;
        import org.apache.lucene.queryParser.QueryParser;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.highlight.Highlighter;
        import org.apache.lucene.search.highlight.InvalidTokenOffsetsException;
        import org.apache.lucene.search.highlight.QueryScorer;
        import org.apache.lucene.search.highlight.SimpleFragmenter;
        import org.apache.lucene.search.highlight.SimpleHTMLFormatter;
        import org.apache.lucene.util.Version;
        import org.apache.solr.analysis.StandardTokenizerFactory;
        import org.apache.solr.analysis.WordDelimiterFilterFactory;
        import org.junit.Test;

        public class HighlighterTester {
            private static final String PRE_TAG = "<b>";
            private static final String POST_TAG = "</b>";

            private static String[] highlightField( Query query, String fieldName, String text )
                    throws IOException, InvalidTokenOffsetsException {
                SimpleHTMLFormatter formatter = new SimpleHTMLFormatter( PRE_TAG, POST_TAG );
                Highlighter highlighter = new Highlighter( formatter, new QueryScorer( query, fieldName ) );
                highlighter.setTextFragmenter( new SimpleFragmenter( Integer.MAX_VALUE ) );
                return highlighter.getBestFragments( getAnalyzer(), fieldName, text, 10 );
            }

            private static Analyzer getAnalyzer() {
                return new Analyzer() {
                    @Override
                    public TokenStream tokenStream( String fieldName, Reader reader ) {
                        // Start with a StandardTokenizer
                        TokenStream stream = new StandardTokenizerFactory().create( reader );
                        // Chain on a WordDelimiterFilter
                        WordDelimiterFilterFactory wordDelimiterFilterFactory = new WordDelimiterFilterFactory();
                        HashMap<String, String> arguments = new HashMap<String, String>();
                        arguments.put( "generateWordParts", "1" );
                        arguments.put( "generateNumberParts", "1" );
                        arguments.put( "catenateWords", "1" );
                        arguments.put( "catenateNumbers", "1" );
                        arguments.put( "catenateAll", "0" );
                        wordDelimiterFilterFactory.init( arguments );
                        return wordDelimiterFilterFactory.create( stream );
                    }
                };
            }

            @Test
            public void TestHighlighter() throws ParseException, IOException, InvalidTokenOffsetsException {
                String fieldName = "text";
                String text = "test 1,500 this";
                String queryString = "1500";
                String expected = "test " + PRE_TAG + "1,500" + POST_TAG + " this";
                QueryParser parser = new QueryParser( Version.LUCENE_29, fieldName, getAnalyzer() );
                Query q = parser.parse( queryString );
                String[] observed = highlightField( q, fieldName, text );
                for ( int i = 0; i < observed.length; i++ ) {
                    System.out.println( "\t" + i + ": '" + observed[i] + "'" );
                }
                if ( observed.length > 0 ) {
                    System.out.println( "Expected: '" + expected + "'\n" + "Observed: '" + observed[0] + "'" );
                    assertEquals( expected, observed[0] );
                } else {
                    assertTrue( "No matches found", false );
                }
            }
        }
    Anyone have any ideas or suggestions?

  • How do I code this relationship in SQLAlchemy?

    - by Martin Del Vecchio
    I am new to SQLAlchemy (and SQL, for that matter). I can't figure out how to code the idea I have in my head. I am creating a database of performance-test results. A test run consists of a test type and a number (this is class TestRun below) A test suite consists the version string of the software being tested, and one or more TestRun objects (this is class TestSuite below). A test version consists of all test suites with the given version name. Here is my code, as simple as I can make it: from sqlalchemy import * from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship, backref, sessionmaker Base = declarative_base() class TestVersion (Base): __tablename__ = 'versions' id = Column (Integer, primary_key=True) version_name = Column (String) def __init__ (self, version_name): self.version_name = version_name class TestRun (Base): __tablename__ = 'runs' id = Column (Integer, primary_key=True) suite_directory = Column (String, ForeignKey ('suites.directory')) suite = relationship ('TestSuite', backref=backref ('runs', order_by=id)) test_type = Column (String) rate = Column (Integer) def __init__ (self, test_type, rate): self.test_type = test_type self.rate = rate class TestSuite (Base): __tablename__ = 'suites' directory = Column (String, primary_key=True) version_id = Column (Integer, ForeignKey ('versions.id')) version_ref = relationship ('TestVersion', backref=backref ('suites', order_by=directory)) version_name = Column (String) def __init__ (self, directory, version_name): self.directory = directory self.version_name = version_name # Create a v1.0 suite suite1 = TestSuite ('dir1', 'v1.0') suite1.runs.append (TestRun ('test1', 100)) suite1.runs.append (TestRun ('test2', 200)) # Create a another v1.0 suite suite2 = TestSuite ('dir2', 'v1.0') suite2.runs.append (TestRun ('test1', 101)) suite2.runs.append (TestRun ('test2', 201)) # Create another suite suite3 = TestSuite ('dir3', 'v2.0') suite3.runs.append (TestRun ('test1', 102)) suite3.runs.append (TestRun ('test2', 202)) # Create the in-memory database engine = create_engine ('sqlite://') Session = sessionmaker (bind=engine) session = Session() Base.metadata.create_all (engine) # Add the suites in version1 = TestVersion (suite1.version_name) version1.suites.append (suite1) session.add (suite1) version2 = TestVersion (suite2.version_name) version2.suites.append (suite2) session.add (suite2) version3 = TestVersion (suite3.version_name) version3.suites.append (suite3) session.add (suite3) session.commit() # Query the suites for suite in session.query (TestSuite).order_by (TestSuite.directory): print "\nSuite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len (suite.runs)) for run in suite.runs: print " Test '%s', result %d" % (run.test_type, run.rate) # Query the versions for version in session.query (TestVersion).order_by (TestVersion.version_name): print "\nVersion %s has %d test suites:" % (version.version_name, len (version.suites)) for suite in version.suites: print " Suite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len (suite.runs)) for run in suite.runs: print " Test '%s', result %d" % (run.test_type, run.rate) The output of this program: Suite directory dir1, version v1.0 has 2 test runs: Test 'test1', result 100 Test 'test2', result 200 Suite directory dir2, version v1.0 has 2 test runs: Test 'test1', result 101 Test 'test2', result 201 Suite directory dir3, version v2.0 has 2 test runs: Test 'test1', result 102 
Test 'test2', result 202 Version v1.0 has 1 test suites: Suite directory dir1, version v1.0 has 2 test runs: Test 'test1', result 100 Test 'test2', result 200 Version v1.0 has 1 test suites: Suite directory dir2, version v1.0 has 2 test runs: Test 'test1', result 101 Test 'test2', result 201 Version v2.0 has 1 test suites: Suite directory dir3, version v2.0 has 2 test runs: Test 'test1', result 102 Test 'test2', result 202 This is not correct, since there are two TestVersion objects with the name 'v1.0'. I hacked my way around this by adding a private list of TestVersion objects, and a function to find a matching one: versions = [] def find_or_create_version (version_name): # Find existing for version in versions: if version.version_name == version_name: return (version) # Create new version = TestVersion (version_name) versions.append (version) return (version) Then I modified my code that adds the records to use it: # Add the suites in version1 = find_or_create_version (suite1.version_name) version1.suites.append (suite1) session.add (suite1) version2 = find_or_create_version (suite2.version_name) version2.suites.append (suite2) session.add (suite2) version3 = find_or_create_version (suite3.version_name) version3.suites.append (suite3) session.add (suite3) Now the output is what I want: Suite directory dir1, version v1.0 has 2 test runs: Test 'test1', result 100 Test 'test2', result 200 Suite directory dir2, version v1.0 has 2 test runs: Test 'test1', result 101 Test 'test2', result 201 Suite directory dir3, version v2.0 has 2 test runs: Test 'test1', result 102 Test 'test2', result 202 Version v1.0 has 2 test suites: Suite directory dir1, version v1.0 has 2 test runs: Test 'test1', result 100 Test 'test2', result 200 Suite directory dir2, version v1.0 has 2 test runs: Test 'test1', result 101 Test 'test2', result 201 Version v2.0 has 1 test suites: Suite directory dir3, version v2.0 has 2 test runs: Test 'test1', result 102 Test 'test2', result 202 This feels wrong to me; it doesn't feel right that I am manually keeping track of the unique version names, and manually adding the suites to the appropriate TestVersion objects. Is this code even close to being correct? And what happens when I'm not building the entire database from scratch, as in this example. If the database already exists, do I have to query the database's TestVersion table to discover the unique version names? Thanks in advance. I know this is a lot of code to wade through, and I appreciate the help.
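
    For what it's worth, the hand-rolled versions list can be replaced by asking the session itself, which also answers the pre-existing-database case because the lookup is a real query. A sketch using the models defined above:

        # Get-or-create keyed on version_name. A unique constraint on
        # TestVersion.version_name would let the database enforce what
        # this function only checks.
        def find_or_create_version(session, version_name):
            version = (session.query(TestVersion)
                              .filter_by(version_name=version_name)
                              .first())
            if version is None:
                version = TestVersion(version_name)
                session.add(version)
            return version

    With the default autoflush=True, a TestVersion added earlier in the same session is flushed and found again by the next query, so repeated calls hand back the same object instead of creating duplicates.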

  • Drupal's not reading correct values from DB

    - by John
    Hey Everyone, here is my current problem. I am working with the chat module and I'm building a module that notifies users via AJAX that they have been invited to a chat. The current table structure for the invites table looks like this:
        |-------------------------------------------------------------------------|
        |  CCID  |  NID  |  INVITER_UID  |  INVITEE_UID  |  NOTIFIED  |  ACCEPTED  |
        |-------------------------------------------------------------------------|
        |  int   |  int  |      int      |      int      |  (0 or 1)  |  (0 or 1)  |
        |-------------------------------------------------------------------------|
    I'm using the Periodical Updater plug-in for jQuery to continually poll the server to check for invites. When an invite is found, I set notified from 0 to 1. However, my problem is the periodical updater. When I first see that there is an invite, I notify the user and set notified to 1. On the next select, though, I get the same results as before, as if the update didn't work. But when I go check the database, I can see that it worked just fine. It's as if the query is querying a cache, but I can't figure it out. My code for the periodical updater is as follows:
        window.onload = function() {
            var uid = $('a#chat_uid').html();
            $.PeriodicalUpdater(
                '/steelylib/sites/all/modules/_chat_whos_online/ajax/ajax.php', // url to service
                {
                    method: 'get',       // send data via...
                    data: {uid: uid},    // data to send
                    minTimeout: '1000',  // min time before server is polled (milli-sec.)
                    maxTimeout: '20000', // max time before server is polled (milli-sec.)
                    multiplyer: '1.5',   // multiply against current poll time every time constant data is returned
                    type: 'text',        // type of data recieved (response type)
                    maxCalls: 0,         // max calls to make (0=unlimited)
                    autoStop: 0          // max calls with constant data (0=unlimited/disabled)
                },
                function(data) // callback function
                {
                    alert( data ); // for now, until i get it working
                }
            );
        }
    And my code for the AJAX call is as follows:
        <?php
        # bootstrap Drupal, and call function, passing current user's uid.
        function _create_chat_node_check_invites($uid) {
            cache_clear_all('chatroom_chat_list', 'cache');
            $query = "SELECT * FROM {chatroom_chat_invite} WHERE notified=0 AND invitee_uid=%d and accepted=0";
            $query_results = db_query( $query, $uid );
            $json = '{"invites":[';
            while( $row = db_fetch_object($query_results) ) {
                var_dump($row);
                global $base_url;
                $url = $base_url . '/content/privatechat' . $uid .'-' . $row->inviter_uid;
                $inviter = db_fetch_object( db_query( "SELECT name FROM {users} WHERE uid = %d", $row->inviter_uid ) );
                $invitee = db_fetch_object( db_query( "SELECT name FROM {users} WHERE uid = %d", $row->invitee_uid ) );
                # reset table
                $query = "UPDATE {chatroom_chat_invite} "
                        ."SET notified=1 "
                        ."WHERE inviter_uid=%d AND invitee_uid=%d";
                db_query( $query, $row->inviter_uid, $row->invitee_uid );
                $json .= '[';
                $json .= '"' . $url . '",';
                $json .= '"' . ($inviter->name) . '",';
                $json .= '"' . ($invitee->name) . '"';
                $json .= '],';
            }
            $json = substr($json, 0, -1);
            $json .= ']}';
            return $json;
        }
        ?>
    I can't figure out what is going wrong; any help is greatly appreciated!

  • How to find specific value of the node in xml file

    - by user2735149
    I am making a Windows Phone 8 app based on web services. This is my XML (as returned by the service; the excerpt is truncated):
        <response>
          <timestamp>2013-10-31T08:30:56Z</timestamp>
          <resultsOffset>0</resultsOffset>
          <status>success</status>
          <resultsLimit>8</resultsLimit>
          <resultsCount>38</resultsCount>
          <headlines>
            <headlinesItem>
              <headline>City edge past Toon</headline>
              <keywords />
              <lastModified>2013-10-30T23:45:22Z</lastModified>
              <audio />
              <premium>false</premium>
              <links>
                <api>
                  <news>
                    <href>http://api.espn.com/v1/sports/news/1600444?region=GB</href>
                  </news>
                </api>
                <web>
                  <href>http://espnfc.com/uk/en/report/381799/city-edge-toon?ex_cid=espnapi_public</href>
                </web>
                <mobile>
                  <href>http://m.espn.go.com/soccer/gamecast?gameId=381799&lang=EN&ex_cid=espnapi_public</href>
                </mobile>
              </links>
              <type>snReport</type>
              <related />
              <id>1600444</id>
              <story>Alvardo Negredo and Edin Dzeko struck in extra-time to book Manchester City's place in the last eight of the Capital One Cup, while Costel Pantilimon kept a clean sheet in the 2-0 win to keep the pressure on Joe Hart.</story>
              <linkText>Newcastle 0-2 Man City</linkText>
              <images>
                <imagesItem>
                  <height>360</height>
                  <alt>Man City celebrate after Edin Dzeko scored their second extra-time goal at Newcastle.</alt>
                  <width>640</width>
                  <name>Man City celeb Edin Dzeko goal v nufc 20131030 [640x360]</name>
                  <caption>Man City celebrate after Edin Dzeko scored their second extra-time goal at Newcastle.</caption>
                  <type>inline</type>
                  <url>http://espnfc.com/design05/images/2013/1030/mancitycelebedindzekogoalvnufc20131030_640x360.jpg</url>
                </imagesItem>
              </images>
    Code behind:
        myData = XDocument.Parse(e.Result, LoadOptions.None);
        var data = myData.Descendants("headlines").FirstOrDefault();
        var data1 = from query in myData.Descendants("headlinesItem")
                    select new UpdataNews
                    {
                        News = (string)query.Element("headline").Value,
                        Desc = (string)query.Element("description"),
                        Newsurl = (string)query.Element("links").Element("mobile").Element("href"),
                        Imageurl = (string)query.Element("images").Element("imagesItem").Element("url").Value,
                    };
        lstShow.ItemsSource = data1;
    I am trying to get the values from the XML tags and assign them to News, Desc, etc. Everything works fine except Imageurl, which throws a null exception. I tried the same method for Imageurl, so I don't know what's going wrong. Help...

  • Need help to properly remove duplicates in NHibernate

    - by Michael D. Kirkpatrick
    Here is the problem I am having. I have a database with over 100 records in it. I am paging through the data to get 9 results at a time. When I added a check to see if items are active, it caused the results to start doubling up. A little background:
    - "Product" is the actual product line
    - "ProductSkus" are the actual products that exist in the product line
    When there is more than 1 ProductSku within a Product, it causes a duplicate entry to be returned. See the NHibernate query below:
        result = this.Session.CreateCriteria<Model.Product>()
            .Add(Expression.Eq("IsActive", true))
            .AddOrder(new Order("Name", true))
            .SetFirstResult(indexNumber).SetMaxResults(maxNumber)
            // This part of the query duplicates the products
            .CreateAlias("ProductSkus", "ProdSkus", JoinType.InnerJoin)
            .Add(Expression.Eq("ProdSkus.IsActive", true))
            .CreateAlias("ProductToSubcategory", "ProdToSubcat")
            .CreateAlias("ProdToSubcat.ProductSubcategory", "ProdSubcat")
            .Add(Expression.Eq("ProdSubcat.ID", subCatId))
            // This part takes out the duplicate products - Removes too many items...
            // Turns out that with .SetFirstResult(indexNumber).SetMaxResults(maxNumber)
            // it gets 9 records back then the duplicates are removed.
            // Example:
            //   Total Records over 100
            //   Max = 9
            //   4 Duplicates removed
            //   Yields 5 records when there should be 9
            // Why??? This line is run in NHibernate on the data after it has been
            // extracted from the SQL server.
            .SetResultTransformer(new NHibernate.Transform.DistinctRootEntityResultTransformer())
            .List<Model.Product>();
    I added the DistinctRootEntityResultTransformer to clean up the duplicates. The problem is that it pulls 9 records back that contain duplicates; DistinctRootEntityResultTransformer then cleans up the duplicates within those 9 records. I basically need a distinct statement to be run on the SQL server to begin with. However, DISTINCT in SQL is not going to work, since NHibernate by default wants to add every field from every table to the select part of the statement. I am only using the fields that belong to the root table to begin with (Model.Product). If I could tell NHibernate not to add the fields of the joined tables to the select part of the statement, along with adding DISTINCT, it would work.
    I use NHibernate Profiler to see the actual query:
        SELECT top 9 this_.ID as ID351_3_, this_.Name as Name351_3_, this_.Description as Descript3_351_3_, this_.IsActive as IsActive351_3_, this_.ManufacturerID as Manufact5_351_3_, prodskus1_.ID as ID373_0_, prodskus1_.Description as Descript2_373_0_, prodskus1_.PartNumber as PartNumber373_0_, prodskus1_.Price as Price373_0_, prodskus1_.IsKit as IsKit373_0_, prodskus1_.IsActive as IsActive373_0_, prodskus1_.IsFeaturedProduct as IsFeatur7_373_0_, prodskus1_.DateAdded as DateAdded373_0_, prodskus1_.Weight as Weight373_0_, prodskus1_.TimesViewed as TimesVi10_373_0_, prodskus1_.TimesOrdered as TimesOr11_373_0_, prodskus1_.ProductID as ProductID373_0_, prodskus1_.OverSizedBoxID as OverSiz13_373_0_, prodtosubc2_.ID as ID362_1_, prodtosubc2_.MasterSubcategory as MasterSu2_362_1_, prodtosubc2_.ProductID as ProductID362_1_, prodtosubc2_.ProductSubcategoryID as ProductS4_362_1_, prodsubcat3_.ID as ID352_2_, prodsubcat3_.Name as Name352_2_, prodsubcat3_.ProductCategoryID as ProductC3_352_2_, prodsubcat3_.ImageID as ImageID352_2_, prodsubcat3_.TriggerShow as TriggerS5_352_2_
        FROM Product this_
        inner join ProductSku prodskus1_ on this_.ID = prodskus1_.ProductID and (prodskus1_.IsActive = 1)
        inner join ProductToSubcategory prodtosubc2_ on this_.ID = prodtosubc2_.ProductID
        inner join ProductSubcategory prodsubcat3_ on prodtosubc2_.ProductSubcategoryID = prodsubcat3_.ID
        WHERE this_.IsActive = 1 /* @p0 */ and prodskus1_.IsActive = 1 /* @p1 */ and prodsubcat3_.ID = 3 /* @p2 */
        ORDER BY this_.Name asc
    If I hand-modify the query and run it directly on the SQL server, I get the result set I want (I removed all the extra fields in the select section and added DISTINCT):
        SELECT DISTINCT top 9 this_.ID as ID351_3_, this_.Name as Name351_3_, this_.Description as Descript3_351_3_, this_.IsActive as IsActive351_3_, this_.ManufacturerID as Manufact5_351_3_
        FROM Product this_
        inner join ProductSku prodskus1_ on this_.ID = prodskus1_.ProductID and (prodskus1_.IsActive = 1)
        inner join ProductToSubcategory prodtosubc2_ on this_.ID = prodtosubc2_.ProductID
        inner join ProductSubcategory prodsubcat3_ on prodtosubc2_.ProductSubcategoryID = prodsubcat3_.ID
        WHERE this_.IsActive = 1 /* @p0 */ and prodskus1_.IsActive = 1 /* @p1 */ and prodsubcat3_.ID = 3 /* @p2 */
        ORDER BY this_.Name asc
    The big question I now must ask is... what must I change in the NHibernate query to ultimately get the exact same result? Thanks in advance.

  • Normalizing a table

    - by Alex
    I have a legacy table, which I can't change. The values in it can be modified from a legacy application (the application also can't be changed). Due to a lot of access to the table from a new application (new requirement), I'd like to create a temporary table, which would hopefully speed up the queries. The actual requirement is to calculate the number of business days from X to Y. For example, give me all business days from Jan 1st 2001 until Dec 24th 2004. (The table is used to mark which days are off, as different companies may have different days off - it isn't just Saturday + Sunday.) The temporary table would be created from a .NET program each time a user enters the screen for this query (the user may run the query multiple times with different values; the table is created once), so I'd like it to be as fast as possible. The approach below runs in under a second, but I only tested it with a small dataset, and it still takes probably close to half a second, which isn't great for UI - even though it's just the overhead for the first query. The legacy table looks like this:
        CREATE TABLE [business_days](
            [country_code] [char](3),
            [state_code] [varchar](4),
            [calendar_year] [int],
            [calendar_month] [varchar](31),
            [calendar_month2] [varchar](31),
            [calendar_month3] [varchar](31),
            [calendar_month4] [varchar](31),
            [calendar_month5] [varchar](31),
            [calendar_month6] [varchar](31),
            [calendar_month7] [varchar](31),
            [calendar_month8] [varchar](31),
            [calendar_month9] [varchar](31),
            [calendar_month10] [varchar](31),
            [calendar_month11] [varchar](31),
            [calendar_month12] [varchar](31),
            misc.
        )
    Each month has 31 characters, and any day off (Saturday + Sunday + holiday) is marked with an X. Each half day is marked with an 'H'. For example, if a month starts on a Thursday, then it will look like (Thursday+Friday workdays, Saturday+Sunday marked with X): '  XX     XX ..' I'd like the new table to look like so:
        create table #Temp (country varchar(3), state varchar(4), date datetime, hours int)
    And I'd like to only have rows for days which are off (marked with X or H in the previous table).
    What I ended up doing so far is this: create a temporary intermediate table, which looks like this:
        create table #Temp_2 (country_code varchar(3), state_code varchar(4), calendar_year int, calendar_month varchar(31), month_code int)
    To populate it, I have a union which basically unions calendar_month, calendar_month2, calendar_month3, etc. Then I have a loop which loops through all the rows in #Temp_2; after each row is processed, it is removed from #Temp_2. To process the row there is a loop from 1 to 31, and substring(calendar_month, counter, 1) is checked for either X or H, in which case there is an insert into the #Temp table.
    [edit: added code]
        Declare @country_code char(3)
        Declare @state_code varchar(4)
        Declare @calendar_year int
        Declare @calendar_month varchar(31)
        Declare @month_code int
        Declare @calendar_date datetime
        Declare @day_code int

        WHILE EXISTS(SELECT * From #Temp_2) -- where processed = 0
        BEGIN
            Select Top 1 @country_code = t2.country_code, @state_code = t2.state_code,
                   @calendar_year = t2.calendar_year, @calendar_month = t2.calendar_month,
                   @month_code = t2.month_code
            From #Temp_2 t2 -- where processed = 0
            set @day_code = 1
            while @day_code <= 31
            begin
                if substring(@calendar_month, @day_code, 1) = 'X'
                begin
                    set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar)))
                    insert into #Temp (country, state, date, hours) values (@country_code, @state_code, @calendar_date, 8)
                end
                if substring(@calendar_month, @day_code, 1) = 'H'
                begin
                    set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar)))
                    insert into #Temp (country, state, date, hours) values (@country_code, @state_code, @calendar_date, 4)
                end
                set @day_code = @day_code + 1
            end
            delete from #Temp_2
            where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code
            --update #Temp_2 set processed = 1 where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code
        END
    I am not an expert in SQL, so I'd like to get some input on my approach, and maybe even a much better approach suggestion. After having the temp table, I'm planning to do (dates would be coming from a table):
        select cast(convert(datetime, ('01/31/2012'), 101) - convert(datetime, ('01/17/2012'), 101) as int)
             - ((select sum(hours) from #Temp where date between convert(datetime, ('01/17/2012'), 101) and convert(datetime, ('01/31/2012'), 101)) / 8)
    Besides the solution of normalizing the table, the other solution I implemented for now is a function which does all this logic of getting the business days by scanning the current table. It runs pretty fast, but I'm hesitant to call a function if I can instead add a simpler query to get the result. (I'm currently trying this on MSSQL, but I would need to do the same for Sybase ASE and Oracle.)
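
    A set-based population of #Temp would avoid the row-by-row loop entirely. A sketch for the SQL Server side (it leans on master..spt_values as a 1-31 numbers source and on DATEFROMPARTS, which needs SQL Server 2012+; Sybase ASE and Oracle would need their own numbers table and date constructor):

        INSERT INTO #Temp (country, state, date, hours)
        SELECT t2.country_code,
               t2.state_code,
               DATEFROMPARTS(t2.calendar_year, t2.month_code, n.number),
               CASE SUBSTRING(t2.calendar_month, n.number, 1)
                    WHEN 'X' THEN 8
                    WHEN 'H' THEN 4
               END
        FROM #Temp_2 AS t2
        JOIN master..spt_values AS n
          ON n.type = 'P' AND n.number BETWEEN 1 AND 31
        WHERE SUBSTRING(t2.calendar_month, n.number, 1) IN ('X', 'H');

    Positions past the end of a short month come back blank from SUBSTRING, so the IN ('X', 'H') filter skips them and DATEFROMPARTS is never fed an invalid day.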
