Search Results

Search found 17593 results on 704 pages for 'wmi query'.

  • Lucene and Special Characters

    - by Brandon
    I am using Lucene.Net 2.0 to index some fields from a database table. One of the fields is a 'Name' field which allows special characters. When I perform a search, it does not find my document that contains a term with special characters. I index my field as follows:

        Directory DALDirectory = FSDirectory.GetDirectory(@"C:\Indexes\Name", false);
        Analyzer analyzer = new StandardAnalyzer();
        IndexWriter indexWriter = new IndexWriter(DALDirectory, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED);
        Document doc = new Document();
        doc.Add(new Field("Name", "Test (Test)", Field.Store.YES, Field.Index.TOKENIZED));
        indexWriter.AddDocument(doc);
        indexWriter.Optimize();
        indexWriter.Close();

    And I search by doing the following:

        value = value.Trim().ToLower();
        value = QueryParser.Escape(value);
        Query searchQuery = new TermQuery(new Term(field, value));
        Searcher searcher = new IndexSearcher(DALDirectory);
        TopDocCollector collector = new TopDocCollector(searcher.MaxDoc());
        searcher.Search(searchQuery, collector);
        ScoreDoc[] hits = collector.TopDocs().scoreDocs;

    If I search for field 'Name' with value 'Test', it finds the document. If I perform the same search with value 'Test (Test)', it does not find the document. Stranger still, if I remove the QueryParser.Escape line and search for a GUID (which, of course, contains hyphens), it finds documents where the GUID value matches, but performing the same search with the value 'Test (Test)' still yields no results. I am unsure what I am doing wrong. I am using the QueryParser.Escape method to escape the special characters, and I am storing the field and searching according to Lucene.Net's examples. Any thoughts?
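
    For context, StandardAnalyzer rewrites terms at index time while TermQuery matches raw, un-analyzed terms. A minimal sketch against the Lucene.Net 2.0 API (reusing the names from the snippets above) that shows the difference:

        // Dump the tokens StandardAnalyzer produces for the stored value: it
        // lowercases and drops the parentheses, so the index holds "test"
        // twice and never holds the literal term "test (test)".
        Analyzer analyzer = new StandardAnalyzer();
        TokenStream stream = analyzer.TokenStream("Name", new System.IO.StringReader("Test (Test)"));
        Token token;
        while ((token = stream.Next()) != null)
            Console.WriteLine(token.TermText()); // prints "test", "test"

        // A TermQuery bypasses analysis, so it can only match those raw
        // tokens. Parsing the user input with the same analyzer applies the
        // same transformation at search time:
        QueryParser parser = new QueryParser("Name", analyzer);
        Query searchQuery = parser.Parse(QueryParser.Escape(value));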

  • Autocompleting \cite{} with emacs + auctex gives "cite: no such database entry"

    - by Alejandro Weinstein
    Hi: I am running Emacs 23.1.1 and AUCTeX 11.85 on an Ubuntu 8.10 machine. After opening a tex file, the first time I try to use the autocompletion of the \cite{} command, I get "cite: info not available, use `C-c &' to get it." in the minibuffer. After doing the 'C-c &', I get "byte-code: No BibTeX entry with citation key". Subsequent calls to \cite give me the message "cite: no such database entry". I have a \bibliography{library} in my tex file, and the \cite{} entries that I did manually work as expected. I have the following in my .emacs:

        (require 'reftex)
        (setq-default TeX-master nil)
        (add-hook 'LaTeX-mode-hook 'TeX-PDF-mode)    ; turn on pdf-mode; AUCTeX will call pdflatex to compile instead of latex
        (add-hook 'LaTeX-mode-hook 'LaTeX-math-mode) ; turn on math-mode by default
        (add-hook 'LaTeX-mode-hook 'reftex-mode)     ; turn on RefTeX mode by default
        (add-hook 'LaTeX-mode-hook 'flyspell-mode)   ; turn on flyspell mode by default
        (setq reftex-plug-into-AUCTeX t)
        (setq TeX-auto-save t)
        (setq TeX-save-query nil)
        (setq TeX-parse-self t)
        (setq-default TeX-master nil)

    I also tried the suggestions in http://stackoverflow.com/questions/2699017/suggestion-for-cite-in-emacs-with-auctex, but that didn't work either. Alejandro.

  • Reverse ordered list for jquery submitted comments

    - by g-man
    Hey guys, I have one more question lol. I am using a script that allows users to submit comments through jQuery ajax. However, when comments are submitted, they appear at the bottom of the existing comments, which are sorted in descending order (newest on top) when the page first loads (due to the mysql query). Is there a way to make new comments appear on top, through some sort of sorting javascript function?

        function prepare(response) {
            var d = new Date();
            count++;
            d.setTime(response.time * 1000);
            var mytime = d.getHours() + ':' + d.getMinutes() + ':' + d.getSeconds();
            var string = '<li class="shoutbox-list" id="list-' + count + '">'
                + '<span class="date">' + mytime + '</span>'
                + '<span class="shoutbox-list-nick"><a href="statistics.php?user=' + response.user + '">' + response.user + '</a>:</span>'
                + '<span class="msg">' + response.message + '</span>'
                + '</li>';
            return string;
        }

        function success(response, status) {
            if (status == 'success') {
                lastTime = response.time;
                $('#daddy-shoutbox-list').append(prepare(response));
                $('input[name=message]').attr('value', '').focus();
                $('#list-' + count).fadeIn('slow');
                timeoutID = setTimeout(refresh, 3000);
            }
        }

        <div id="daddy-shoutbox">
            <ol id="daddy-shoutbox-list"></ol>
        </div>

  • Zend: Fetching row from session db table after generating session id

    - by Nux
    Hi, I'm trying to update the session table used by Zend_Session_SaveHandler_DbTable directly after authenticating the user and writing the session to the DB. But I can neither update nor fetch the newly inserted row, even though the session id I use to check (Zend_Session::getId()) is valid and the row is indeed inserted into the table. Upon fetching all session ids (on the same request), the one I newly inserted is missing from the results. It does appear in the results if I fetch it with something else.

    I've checked whether it is a problem with transactions, and that does not seem to be the case - there is no active transaction when I'm fetching the results. I've also tried fetching a few seconds after writing using sleep(), which doesn't help.

        $auth->getStorage()->write($ident);
        //sleep(1);
        $update = $this->db->update('session',
            array('uid' => $ident->user_id),
            'id=' . $this->db->quote(Zend_Session::getId()));
        $qload = 'SELECT id FROM session';
        $load = $this->db->fetchAll($qload);
        echo $qload;
        print_r($load);

    $update fails. $load doesn't contain the row that was written with $auth->getStorage()->write($identity). $qload does contain the correct query - copying it to somewhere else leads to the expected result, that is, the inserted row is included in the results. The database used is MySQL - InnoDB.

    If someone knows how to directly fix this (i.e. on the same request, not doing something like updating after redirecting to another page) without modifying Zend_Session_SaveHandler_DbTable: thank you very much!

  • NTLM Authentication fails when behind Proxy server

    - by Jan Petersen
    Hi All, I've seen a number of posts about consuming Web Services from behind a proxy server, but none that seems to address this problem. I'm building a desktop application, using Java and JAX-WS in NetBeans. I have a working prototype that can query the server for the authentication mode, successfully authenticate, and retrieve a list of web sites. However, if I run the same app from a network that is behind a proxy server (the proxy does not require authentication), then I'm running into trouble. I have sniffed the traffic and noticed the following.

    Behind proxy:

        #  Result  Protocol  Host             URL
        1  200     HTTP      host.domain.com  /_vti_bin/Authentication.asmx
        2  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        3  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        4  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        5  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx

    Without proxy:

        #  Result  Protocol  Host             URL
        1  200     HTTP      host.domain.com  /_vti_bin/Authentication.asmx
        2  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        3  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        4  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        5  401     HTTP      host.domain.com  /_vti_bin/Webs.asmx
        6  200     HTTP      host.domain.com  /_vti_bin/Webs.asmx

    When running the code from a network without a proxy server, I successfully authenticate with the server, but when I'm behind the proxy server, the traffic is cut off at the 5th message, and authentication doesn't succeed. I know from the Java docs that on Microsoft Windows platforms, NTLM authentication attempts to acquire the user credentials from the system without prompting the user's authenticator object. If these credentials are not accepted by the server, then the user's authenticator will be called. Given that my authentication code is called only once, and only as the 5th attempt, it appears as if the connection is dropped behind the proxy server before my Authenticator object is used.

    Is there any way I can control the behavior of the authentication module, so that it does not use the system credentials? I have put up the source text java class files of a demo app showing the issue at the following urls (it's a bit too long, even in short demo form, to post here). link text

    Br Jan

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries, to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert, because SQL Server has to write entries into each index matching the new rows, but update and delete are a little murkier to me, because I can't quite wrap my head around what the database engine has to do.

    Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int,
            col2 int,
            col3 int,
            col4 int,
            PRIMARY KEY (col1, col2)

        INDEX IX_1
            col3
            INCLUDE col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1 = 12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index, if you prefer). The index is set up to make it easy to find the range of rows to be removed, and to do so. However, at this point it also needs to update IX_1, and the query I gave it provides no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index?

    It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time in deletes, and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo, which lists in the details section the other indices that need to be updated, but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way I can estimate the impact of removing one or more of these indices without having to actually try it?

  • nhibernate - mapping with constraints

    - by Tobias Müller
    Hello everybody, I am having a problem with my nhibernate mapping, and I can't find a solution by searching on stackoverflow/google/documentation. The database I am using has (amongst others) two tables. One is unit, with the following fields:

        id
        enduring_id
        starts
        ends
        damage_enduring_id
        [...]

    The other one is damage, which has the following fields:

        id
        enduring_id
        starts
        ends
        [...]

    The units are assigned to a damage, and one damage can have zero, one or more units working on it. Every time a unit moves to another damage, the dataset is copied. The field "ends" of the old record and "starts" of the new record are set to the current time stamp, and enduring_id stays the same. So if I want to know which units were working on a damage at a certain time, I do the following select:

        select *
        from unit
        join damage on damage.enduring_id = unit.damage_enduring_id
        where unit.starts <= 'time' and unit.ends >= 'time'

    (This is not an actual query from the database; I made it up to make clear what I mean. The real database is a little more complex.) Now I want to map it so that I can load all the damages which are valid at one time (starts <= wanted time <= ends), and each of them has a Bag with all the attached units at that time (again starts <= wanted time <= ends). Is this possible within the mapping?

    Sorry if this is a stupid question, but I am pretty new to nhibernate and I have no clue how to do it. Thanks a lot for reading my post! Bye, Tobias

  • Storing Arbitrary Contact Information in Ruby on Rails

    - by Anthony Chivetta
    Hi, I am currently working on a Ruby on Rails app which will function in some ways like a site-specific social networking site. As part of this, each user on the site will have a profile where they can fill in their contact information (phone numbers, addresses, email addresses, employer, etc.).

    A simple solution to modeling this would be to have a database column per piece of information I allow users to enter. However, this seems arbitrary and limited. Further, to support allowing users to enter as many phone numbers as they would like requires the addition of another database table and joins.

    It seems to me that a better solution would be to serialize all the contact information entered by a user into a single field in their row. Since I will never be conditioning a SQL query on this information, such a solution wouldn't be any less efficient.

    Ideally, I would like to use a vCard as my serialization format. vCards are the standard solution to storing contact information across the web, and reusing tested solutions is a Good Thing. Alternative serialization formats would include simply marshaling a ruby hash, or YAML. Regardless of serialization format, supporting the reading and updating of this information in a rails-like way seems to be a major implementation challenge.

    So, here's the question: Has anyone seen this approach used in a rails application? Are there any rails plugins or gems that make such a system easy to implement? Ideally what I would like is an acts_as_vcard to add to my model object that would handle editing the vcard for me and saving it back to the database.

  • SQL Joins Excluding Data

    - by Andrew
    Say I have three tables:

        Fruit (Table 1)
        ---------------
        Apple
        Orange
        Pear
        Banana

        Produce Store A (Table 2 - 2 columns: Fruit for sale => Price)
        --------------------------------------------------------------
        Apple  => 1.00
        Orange => 1.50
        Pear   => 2.00

        Produce Store B (Table 3 - 2 columns: Fruit for sale => Price)
        --------------------------------------------------------------
        Apple  => 1.10
        Pear   => 2.50
        Banana => 1.00

    I would like to write a query with column 1: the set of fruit offered at Produce Store A UNION Produce Store B; column 2: the price of the fruit at Produce Store A (or null if that fruit is not offered); column 3: the price of the fruit at Produce Store B (or null if that fruit is not offered). How would I go about joining the tables?

    I am facing a similar problem (with more complex tables), and no matter what I try, if the "fruit" is not at "produce store a" but is at "produce store b", it is excluded (since I am joining produce store a first). I have even written a subquery to generate a full list of fruits, then left joined Produce Store A, but it is still eliminating the fruits not offered at A. Any ideas?

  • multi-part identifier could not be bound error

    - by vishal Shah
    Here is my query:

        IF OBJECT_ID('NPWAS1513.dbo.usp_MSPEX_QLK_Billing_Fact_Load') IS NOT NULL
            DROP PROCEDURE dbo.usp_MSPEX_QLK_Billing_Fact_Load;
        GO

        CREATE PROCEDURE usp_MSPEX_QLK_Billing_Fact_Load
            @create_timestamp datetime,
            @update_timestamp datetime,
            @create_user varchar(50),
            @update_user varchar(50),
            @dbProdServ varchar(50)
        AS
        print 'dbProdServ is:' + @dbProdServ;
        print 'current_user is:' + @current_user;

        DECLARE @sSQL AS VARCHAR(MAX);
        SET @sSQL = ''
        SET @sSQL = 'set identity_insert ' + @dbProdServ + '.mspex_qlk_billing_fact ON'
        EXEC(@sSQL);

        SET @sSQL = 'INSERT INTO ' + @dbProdServ + '.mspex_qlk_billing_fact
            (project_id, billing_year, billing_month, billing_month_desc, billing_date_id,
            projected_bill_amount, billed_year_to_date_amount, billed_inception_to_date_amount,
            remaining_bill_amount, actual_billed_amount, current_billing_percent,
            previous_billing_percent, billing_pct_diff, billing_type, final_bill_ind,
            last_in_progress_date, current_record_ind, load_time_stamp, total_contract_period,
            contract_period_current_year, partial_bill, create_timestamp, create_user,
            update_timestamp, update_user)
            SELECT project_dim.project_id, billing_final_data.billingyear, billing_final_data.billingmonth,
                billing_final_data.billingmonthdesc, Time_Dim.Time_ID, billing_final_data.projected_bill_amount,
                billing_final_data.billed_year_todate_amount, billing_final_data.billed_inception_todate_amount,
                billing_final_data.remaining_bill_amount, billing_final_data.actual_billed_amount,
                billing_final_data.current_billing_percent, billing_final_data.previous_billing_percent,
                billing_final_data.billing_pct_diff, billing_final_data.billing_type,
                billing_final_data.final_bill_ind, billing_final_data.last_in_progress_date,
                billing_final_data.current_record_ind, billing_final_data.load_time_stamp,
                billing_final_data.[Total Contract Period], billing_final_data.[Contract Period Current Year],
                billing_final_data.[Partial Bill],''' + CAST(@create_timestamp as varchar(max)) + ''','''
                + @create_user + ''',''' + CAST(@update_timestamp as varchar(50)) + ''',''' + @update_user + '''
            FROM ' + @dbProdServ + '.mspex_qlk_project_dim project_dim,'
                + @dbProdServ + '.mspex_rpt_billing_final_data billing_final_data,'
                + @dbProdServ + '.MSPEX_QLK_Time_Dim Time_Dim
            WHERE project_dim.myproject_project_uid = billing_final_data.projectuid
            AND ''' + convert(datetime, cast(billing_final_data.[BillingMonth] as nvarchar(2))
                + '''/01/''' + cast(billing_final_data.[billingyear] as nvarchar(4)), 101) + '''
                = Time_Dim.Time_Date';

        BEGIN TRANSACTION
        EXEC(@sSQL)
        COMMIT TRANSACTION

    I get this error message:

        Msg 4104, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 23
        The multi-part identifier "mspex_rpt_billing_final_data.BillingMonth" could not be bound.
        Msg 4104, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 23
        The multi-part identifier "billing_final_data.billingyear" could not be bound.
        Msg 207, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 83
        Invalid column name 'BillingMonth'.
        Msg 207, Level 16, State 1, Procedure usp_MSPEX_QLK_Billing_Fact_Load, Line 84
        Invalid column name 'billingyear'.

    I checked the column names, etc., and things are fine. In fact, I directly dragged the table name and column name in to ensure that they are correct. Please help ASAP. Calling my cell at 630-338-9427 would be great, but an URGENT response is absolutely necessary. Thanks guys.

  • How do I sort an internationalized i18n table with symfony and doctrine?

    - by Maurizio
    I would like to display a list of records from an internationalized table using sfDoctrinePager. Not all the records have been translated into all the languages supported by the application, so I had to implement a fallback mechanism for some fields (by overriding the getFoo() function in Bar.class.php, as explained in another post here). I have a different fallback list for each culture. Everything works fine until it comes to sorting the records in alphabetical order. I'm sorting the records at the SQL (DQL) level, by adding an ->orderBy('t.name') to the query:

        $q = Doctrine::getTable('Foo')
            ->createQuery('f')
            ->leftJoin('f.Translation t')
            ->orderBy('t.name');

    But here come the troubles: the list does not get sorted correctly, regardless of the active culture. I get rather better results when I limit the translations to the active culture, like this:

        ->leftJoin('f.Translation t WITH lang = ?', $request->getParameter('sf_culture'));

    Then the sorting is correct, as long as all the translations exist for the active culture. If a translation does not exist and I have to take the name from the fallback language, the record will be displayed at the very beginning of the list (I understand this happens because the value for the current culture is null).

    My question is: is there a best practice for getting internationalized fields (needing fallbacks) sorted correctly with doctrine and sfDoctrinePager? Thank you in advance.

  • How to setup Lucene search for a B2B web app?

    - by Bill Paetzke
    Given:

        - 5000 databases (spread out over 5 servers)
        - 1 database per client (so you can infer there are 1000 clients)
        - 2 to 2000 users per client (let's say the average is 100 users per client)
        - Clients (databases) come and go every day (let's assume most remain for at least one year)
        - Let's stay agnostic of language or sql brand, since Lucene (and Solr) have a breadth of support

    The question: How would you set up Lucene search so that each client can only search within its database? How would you set up the index(es)? Would you need to add a filter to all search queries? If a client cancelled, how would you delete their (part of the) index? (This may be trivial - not sure yet.)

    Possible solutions:

    Make an index for each client (database).

        - Pro: Search is faster (than the one-index-for-all method). Indices are relative to the size of the client's data.
        - Con: I'm not sure what this entails, nor do I know if this is beyond Lucene's scope.

    Have a single, gigantic index with a database_name field. Always include database_name as a filter. (A sketch of this option follows below.)

        - Pro: Not sure. Maybe good for tech support or the billing dept to search all databases for info.
        - Con: Search is slower (than the index-per-client method). Flawed security if the query filter is removed.

    For example: Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients, and each client gets their own database. His situation is quite similar to mine, although he didn't elaborate on the setup (particularly indices); hence the need for this question.

    One last thing: I would also accept an answer that uses Solr (the extension of Lucene). Perhaps it's better suited for this problem. Not sure.
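
    For the single-index option, a minimal sketch of what the mandatory filter could look like in Lucene.Net; the field and variable names here are illustrative, not from the post:

        // At index time, stamp every document with its client database.
        doc.Add(new Field("database_name", clientDb, Field.Store.YES, Field.Index.UN_TOKENIZED));

        // At search time, wrap whatever the user asked for in a BooleanQuery
        // that also requires the client's database_name, so the filter cannot
        // be forgotten by callers that go through this code path.
        BooleanQuery secured = new BooleanQuery();
        secured.Add(userQuery, BooleanClause.Occur.MUST);
        secured.Add(new TermQuery(new Term("database_name", clientDb)), BooleanClause.Occur.MUST);

        // When a client cancels, their slice of the index can be dropped by term:
        indexReader.DeleteDocuments(new Term("database_name", clientDb));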

  • Help converting PHP eregi to preg_match

    - by Jason
    Hi all, I am wondering if someone could please help me convert a piece of PHP code that is now deprecated. Here is the single line I am trying to convert:

        if(eregi(trim($request_url_handler[$x]), $this->sys_request_url) && $this->id_found == 0) {

    It is part of a function that returns the configuration settings for a website. Below is the whole function:

        // GET CORRECT CONFIG FROM DATABASE
        function get_config($db) {
            global $tbl_prefix;
            $db->query("SELECT cid,urls FROM ".$tbl_prefix."sys_config ORDER BY cid");
            while($db->next_record()) {
                $request_url_handler = explode("\n", $db->f("urls"));
                if(empty($request_url_handler[0])) {
                    $request_url_handler[0] = "@";
                    $this->id_found = 2;
                }
                for($x = 0; $x < count($request_url_handler); $x++) {
                    if(empty($request_url_handler[$x])) {
                        $request_url_handler[$x] = "@";
                    }
                    if(eregi(trim($request_url_handler[$x]), $this->sys_request_url) && $this->id_found == 0) {
                        $this->set_config($db, $db->f("cid"));
                        $this->id_found = 1;
                    }
                }
                if($this->id_found == 1) {
                    return($this->sys_config_vars);
                }
            }
            $this->set_config($db, "");
            return($this->sys_config_vars);
        }

    Any help would be greatly appreciated. I only found out that the eregi function was deprecated after I updated XAMPP to 1.7.3.

  • Something preventing default (explorer-style) CListCtrl jump to line behaviour

    - by andywebsdale
    I'm maintaining an MFC/COM/ATL solution of 45-odd projects, originally written with VS6. I'm now using VS2008. I'm looking at a list control (a vanilla CListCtrl) which doesn't behave as we think it should. Normally in an MFC list control you can press a key (say, Q) and the selection will jump down to the first line beginning with 'Q' (like Windows Explorer). Does anyone have an idea as to why this might not happen? The styles/extended styles are set the same as another control in the same project, which DOES work OK. Do I have to send my own message, or is there a flag, etc. that controls that functionality, or what? Normally Google would supply the answer, but I haven't been able to frame my query correctly to come up with the relevant info this time.

    Here's the line from the .rc file for the CListCtrl that doesn't jump to a line on keypress:

        CONTROL "List1",IDC_BAL_LIST,"SysListView32",LVS_REPORT | LVS_SHOWSELALWAYS |
                WS_BORDER | WS_TABSTOP,0,73,493,230

    And here's a line from the same .rc file showing a list control that does do that:

        CONTROL "List1",IDC_LIST1,"SysListView32",LVS_REPORT | LVS_SHOWSELALWAYS |
                WS_BORDER | WS_TABSTOP,1,38,501,219

    As you can see, there's no obvious difference in the properties, so what program code would affect it?

  • Detecting a Dispose() from an exception inside using block

    - by Augusto Radtke
    I have the following code in my application:

        using (var database = new Database())
        {
            var poll = // Some database query code.

            foreach (Question question in poll.Questions)
            {
                foreach (Answer answer in question.Answers)
                {
                    database.Remove(answer);
                }

                // This is a sample line that simulates an error.
                throw new Exception("deu pau");

                database.Remove(question);
            }

            database.Remove(poll);
        }

    This code triggers the Database class's Dispose() method as usual, and that method automatically commits the transaction to the database, but this leaves my database in an inconsistent state: the answers are erased, but the question and the poll are not.

    Is there any way that I can detect, in the Dispose() method, that it is being called because of an exception instead of the regular end of the using block, so I can automate the rollback? I don't want to manually add a try ... catch block; my objective is to use the using block as a logical safe transaction manager, so it commits to the database if the execution was clean and rolls back if any exception occurred. Do you have some thoughts on that?
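
    A common workaround, modeled on TransactionScope's Complete() pattern; this is a sketch, not the poster's actual Database class, and Commit()/Rollback() stand in for whatever the real class does in Dispose():

        using System;

        public sealed class Database : IDisposable
        {
            private bool completed;

            // Call this as the last statement of the using block.
            public void Complete()
            {
                completed = true;
            }

            public void Dispose()
            {
                if (completed)
                    Commit();   // assumed: the existing commit-on-dispose logic
                else
                    Rollback(); // any exception skips Complete(), so we land here
            }

            private void Commit()   { /* existing commit logic */ }
            private void Rollback() { /* existing rollback logic */ }

            // Remove(...) and the rest as in the existing class.
        }

    With this shape, the using block stays a logical transaction: an exception anywhere inside it skips the final database.Complete() call, and Dispose() rolls back instead of committing.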

  • Embedded non-relational (nosql) data store

    - by Igor Brejc
    I'm thinking about using/implementing some kind of an embedded key-value (or document) store for my Windows desktop application. I want to be able to store various types of data (GPS tracks would be one example) and of course be able to query this data. The amount of data would be such that it couldn't all be loaded into memory at the same time.

    I'm thinking about using sqlite as a storage engine for a key-value store, something like y-serial, but written in .NET. I've also read about FriendFeed's usage of MySQL to store schema-less data, which is a good pointer on how to use an RDBMS for non-relational data. sqlite seems to be a good option because of its simplicity, portability and library size.

    My question is whether there are any other options for an embedded non-relational store? It doesn't need to be distributable and it doesn't have to support transactions, but it does have to be accessible from .NET and it should have a small download size.

    UPDATE: I've found an article titled "SQLite as a Key-Value Database" which compares sqlite with Berkeley DB, which is an embedded key-value store library.
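
    To make the sqlite-as-key-value-store idea concrete, a minimal sketch in C#, assuming the System.Data.SQLite ADO.NET provider; the table layout and class name are mine, not from the post:

        using System;
        using System.Data.SQLite;

        public class SqliteKeyValueStore : IDisposable
        {
            private readonly SQLiteConnection connection;

            public SqliteKeyValueStore(string file)
            {
                connection = new SQLiteConnection("Data Source=" + file);
                connection.Open();
                using (var cmd = connection.CreateCommand())
                {
                    cmd.CommandText = "CREATE TABLE IF NOT EXISTS store (key TEXT PRIMARY KEY, value TEXT)";
                    cmd.ExecuteNonQuery();
                }
            }

            public void Put(string key, string value)
            {
                using (var cmd = connection.CreateCommand())
                {
                    // INSERT OR REPLACE is sqlite's built-in upsert.
                    cmd.CommandText = "INSERT OR REPLACE INTO store (key, value) VALUES (@k, @v)";
                    cmd.Parameters.AddWithValue("@k", key);
                    cmd.Parameters.AddWithValue("@v", value);
                    cmd.ExecuteNonQuery();
                }
            }

            public string Get(string key)
            {
                using (var cmd = connection.CreateCommand())
                {
                    cmd.CommandText = "SELECT value FROM store WHERE key = @k";
                    cmd.Parameters.AddWithValue("@k", key);
                    return (string)cmd.ExecuteScalar(); // null when the key is absent
                }
            }

            public void Dispose()
            {
                connection.Dispose();
            }
        }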

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store the following: a BLOB (which contains a JSON object) and an ID (for this JSON object). The JSON object contains a lot of different information - say, "city:Los Angeles" and "state:California". There are about 500k of such records for now, but they are growing. And each JSON object is quite big.

    My goal is to do searches (real-time) in the MySQL database. Say, I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco".

    I want to utilize Hadoop for the task. My idea is that there will be a "job", which takes chunks of, say, 100 records (rows) from MySQL, verifies them according to the given search criteria, and returns those (IDs) which qualify. Pros/cons?

    I understand that one might think I should utilize simple SQL power for that, but the thing is that the JSON object structure is pretty "heavy". If I put it into SQL schemas, there will be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to make sure it utilizes the indexes, otherwise a full scan literally is a pain. And with such a structure, the only way "up" is vertical scaling. But I am not sure that's the best option for me, as I see how the JSON objects will grow (in data structure), and I see that the number of them will grow too. :-)

    Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.

  • How to verify the SSL connection when calling an URI?

    - by robertokl
    Hello, I am developing a web application that is authenticated using CAS (a single-sign-on solution: http://www.ja-sig.org/wiki/display/CAS/Home). For security reasons, I need two things to work:

        1. The communication between CAS and my application needs to be secure.
        2. My application needs to accept the certificate coming from CAS, so that I can guarantee that the CAS responding is the real CAS server.

    This is what I have so far:

        uri = URI.parse("https://www.google.com/accounts")
        https = Net::HTTP.new(uri.host, uri.port)
        https.use_ssl = (uri.scheme == 'https')
        https.verify_mode = OpenSSL::SSL::VERIFY_PEER
        raw_res = https.start do |conn|
          conn.get("#{uri.path}?#{uri.query}")
        end

    This works just great on my Mac OS X. When I try to reach an insecure uri, it raises an exception, and when I try to reach a secure uri, it allows me normally, just like expected. The problem starts when I deploy my application on my Linux server. I tried both Ubuntu and Red Hat. Regardless of what uri I try to reach, it always raises this exception:

        OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
            from /usr/local/lib/ruby/1.8/net/http.rb:586:in `connect'
            from /usr/local/lib/ruby/1.8/net/http.rb:586:in `connect'
            from /usr/local/lib/ruby/1.8/net/http.rb:553:in `do_start'
            from /usr/local/lib/ruby/1.8/net/http.rb:542:in `start'
            from (irb):7

    I think this has something to do with my installed OpenSSL package, but I can't be sure. These are my installed OpenSSL packages:

        openssl.x86_64        0.9.8e-12.el5  installed
        openssl-devel.x86_64  0.9.8e-12.el5  installed

    I tried using HTTParty as well, but it just ignores the SSL certificate. I hope someone can help me, or can tell me about a gem that works the way I need. Thanks.

  • NHibernate: how to handle entity-based validation using session-per-request pattern, without control

    - by Seth Petry-Johnson
    What is the best way to do entity-based validation (each entity class has an IsValid() method that validates its internal members) in ASP.NET MVC, with a "session-per-request" model, where the controller has zero (or limited) knowledge of the ISession? Here's the pattern I'm using:

        1. Get an entity by ID, using an IFooRepository that wraps the current NH session. This returns a connected entity instance.
        2. Load the entity with potentially invalid data, coming from the form post.
        3. Validate the entity by calling its IsValid() method.
        4. If valid, call IFooRepository.Save(entity). Otherwise, display an error message.

    The session is currently opened when the request begins and flushed when the request ends. Since my entity is connected to a session, flushing the session attempts to save the changes even if the object is invalid. What's the best way to keep validation logic in the entity class, limit controller knowledge of NH, and avoid saving invalid changes at the end of a request?

    Option 1: Explicitly evict on validation failure, implicitly flush. If the validation fails, I could manually evict the invalid object in the action method. If successful, I do nothing and the session is automatically flushed. Con: error prone and counter-intuitive ("I didn't call .Save(), why are my invalid changes being saved anyway?")

    Option 2: Explicitly flush, do nothing by default. By default I can dispose of the session on request end, only flushing if the controller indicates success. I'd probably create a SaveChanges() method in my base controller that sets a flag indicating success, and then query this flag when closing the session at request end. Pro: more intuitive to troubleshoot if a dev forgets this step [relative to option 1]. Con: I have to call both IRepository.Save(entity) and SaveChanges(). (A sketch of this option follows below.)

    Option 3: Always work with disconnected objects. I could modify my repositories to return disconnected/transient objects, and modify the Repo.Save() method to re-attach them. Pro: most intuitive, given that controllers don't know about NH. Con: does this defeat many of the benefits I'd get from NH?
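
    A rough sketch of what option 2 could look like; the names and wiring are assumed, not from the post. The key NHibernate piece is FlushMode.Never, which stops the session from writing unless Flush() is called explicitly:

        using NHibernate;

        // Holds the request's session; the base controller calls SaveChanges()
        // on success, and request teardown calls EndRequest().
        public class RequestSessionManager
        {
            private readonly ISession session;
            private bool saveRequested;

            public RequestSessionManager(ISessionFactory factory)
            {
                session = factory.OpenSession();
                session.FlushMode = FlushMode.Never; // no implicit writes at request end
            }

            public ISession Session
            {
                get { return session; }
            }

            // Called from the base controller when an action succeeds.
            public void SaveChanges()
            {
                saveRequested = true;
            }

            // Called at request end instead of a bare flush-and-dispose.
            public void EndRequest()
            {
                if (saveRequested)
                {
                    using (ITransaction tx = session.BeginTransaction())
                    {
                        session.Flush(); // valid changes reach the database here
                        tx.Commit();
                    }
                }
                session.Dispose();       // invalid changes are simply discarded
            }
        }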

  • Alternative to sql NOT IN?

    - by Alex
    Hi, I am trying to make a materialized view in Oracle (I am a newbie, btw). For some reason, it doesn't like the presence of a sub-query in it. I've been trying to use a LEFT OUTER JOIN instead, but it's returning a different data set now. Put simply, here's the code I'm trying to modify:

        SELECT *
        FROM   table1 ros,
               table2 bal,
               table3 flx
        WHERE  flx.name = 'XXX'
        AND    flx.value = bal.value
        AND    NVL(ros.ret, 'D') = NVL(flx.attr16, 'D')
        AND    ros.value = bal.segment3
        AND    ros.type IN ('AL', 'AS', 'PL')
        AND    bal.period = 13
        AND    bal.code NOT IN (SELECT bal1.code
                                FROM   table2 bal1
                                WHERE  bal1.value = flx.value
                                AND    bal1.segment3 = ros.value
                                AND    bal1.flag = bal.flag
                                AND    bal1.period = 12
                                AND    bal1.year = bal.year)

    And here's one of my attempts:

        SELECT *
        FROM   table1 ros,
               table2 bal,
               table3 flx
               LEFT OUTER JOIN table2 bal1 ON bal.code = bal1.code
        WHERE  bal1.code IS NULL
        AND    bal1.segment3 = ros.value
        AND    bal.segment3 = ros.value
        AND    bal1.flag = bal.flag
        AND    bal1.year = bal.year
        AND    flx.name = 'XXX'
        AND    flx.value = bal.value
        AND    bal1.value = flx.value
        AND    bal1.period_num = 12
        AND    NVL(ros.type, 'D') = NVL(flx.attr16, 'D')
        AND    ros.value = bal.segment3
        AND    ros.type IN ('AL', 'AS', 'PL')
        AND    bal.period = 13;

    This drives me nuts! Thanks in advance for the help :)

  • Yahoo BOSS Question

    - by Fincha
    Hello everyone, I want to echo the total number of results, but something is wrong.

        // Get search results from Yahoo BOSS as XML
        $API = 'http://boss.yahooapis.com/ysearch/web/v1/';
        $request = $API . $query . '?format=xml&appid=' . APP_ID . '&start=' . $start . "0";
        $ch = curl_init($request);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_HEADER, 0);
        $xml = simplexml_load_string(curl_exec($ch));

        echo $xml->resultset_web->totalhits;

        // Display search results - Title, Date and URL.
        foreach ($xml->resultset_web->result as $result) {
            $ausgabe .= '<a href="' . $result->clickurl . '">' . $result->title . '</a><br />';
            $ausgabe .= $result->abstract . "<br>";
            $ausgabe .= '<a href="' . $result->clickurl . '">' . $result->url . "</a> - " . round(($result->size / 1024), 2) . " Kb<br><br>";
        }

    Can someone help me?

  • Use of var keyword in C#

    - by kronoz
    After discussion with colleagues regarding the use of the 'var' keyword in C# 3, I wondered what people's opinions were on the appropriate uses of type inference via var. For example, I rather lazily used var in questionable circumstances, e.g.:

        foreach(var item in someList) { // ... } // Type of 'item' not clear.
        var something = someObject.SomeProperty; // Type of 'something' not clear.
        var something = someMethod(); // Type of 'something' not clear.

    More legitimate uses of var are as follows:

        var l = new List<string>(); // Obvious what l will be.
        var s = new SomeClass(); // Obvious what s will be.

    Interestingly, LINQ seems to be a bit of a grey area, e.g.:

        var results = from r in dataContext.SomeTable
                      select r; // Not *entirely clear* what results will be here.

    It's clear what results will be in that it will be a type which implements IEnumerable; however, it isn't entirely obvious in the same way a var declaring a new object is. It's even worse when it comes to LINQ to objects, e.g.:

        var results = from item in someList
                      where item != 3
                      select item;

    This is no better than the equivalent foreach(var item in someList) { // ... } loop. There is a real concern about type safety here - for example, if we were to place the results of that query into an overloaded method that accepted IEnumerable<int> and IEnumerable<double>, the caller might inadvertently pass in the wrong type.

    Edit: var does maintain strong typing, but the question is really whether it's dangerous for the type to not be immediately apparent on definition, something which is magnified when overloads mean compiler errors might not be issued when you unintentionally pass the wrong type to a method.

    Related question: http://stackoverflow.com/questions/633474/c-do-you-use-var
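
    A small self-contained illustration of that overload concern (not from the original post; the Process overloads are invented for the example):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class VarOverloadDemo
        {
            static void Process(IEnumerable<int> values) { Console.WriteLine("int overload"); }
            static void Process(IEnumerable<double> values) { Console.WriteLine("double overload"); }

            static void Main()
            {
                var someList = new List<int> { 1, 2, 3 };

                // Nothing at this site names the element type. If someList
                // later becomes a List<double>, this still compiles and
                // silently binds to the other overload.
                var results = from item in someList
                              where item != 3
                              select item;

                Process(results); // prints "int overload"
            }
        }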

  • C# HTML tags inside GridView

    - by Geo Ego
    I have a SharePoint web part that I wrote in C# which is used to display SQL Server data based on user selections. I pull the data with a DataReader, fill a DataSet with it, and set that DataSet as the DataSource of a GridView, then add that control to my page. The only problem I'm facing is that line breaks in the data fields are not being rendered at all. I'm simply getting a block of text that ignores the breaks that are present in the database when it's rendered to HTML. I tried a regex:

        Regex rgx = new Regex("\n");
        string inputStr = Convert.ToString(dr[x]);
        string outputStr = rgx.Replace(inputStr, "<br />");
        newRow[ds3.Tables["Bobst Specs 3"].Columns[x]] = outputStr;

    While that does detect and replace the newlines, I merely get the literal text "<br />" in the output, with no line breaks. I also tried changing my SQL query to something along the lines of:

        SELECT REPLACE(fldCustomerName, '. ', '.' + @NewLineChar)

    This apparently renders more newlines. I can see that they are present, because if I also apply the regex they are affected, but they do not create line breaks. I'm not sure how to replace these, and what to replace them with, to get the lines to actually break.
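
    The symptom - seeing "<br />" as literal text - matches the grid HTML-encoding the bound values. A sketch of one way around it, with the column name and grid wiring assumed rather than taken from the post:

        using System;
        using System.Data;
        using System.Web;
        using System.Web.UI.WebControls;

        static class GridHelper
        {
            // Disable the grid's own encoding for the multi-line column,
            // encode the data ourselves, then re-insert real <br /> tags.
            public static void BindWithLineBreaks(GridView grid, DataTable table, string column)
            {
                BoundField field = new BoundField();
                field.DataField = column;
                field.HtmlEncode = false; // otherwise <br /> arrives as &lt;br /&gt;
                grid.AutoGenerateColumns = false;
                grid.Columns.Add(field);

                foreach (DataRow row in table.Rows)
                {
                    string raw = Convert.ToString(row[column]);
                    // Encode first so user data stays escaped, then add breaks.
                    row[column] = HttpUtility.HtmlEncode(raw).Replace("\n", "<br />");
                }

                grid.DataSource = table;
                grid.DataBind();
            }
        }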

  • Communication between EJB3 Instances (JEE inter-bean communication) possible?

    - by Hank
    I'm designing a part of a JEE6 application, consisting of EJB3 beans. Part of the requirements are multiple parallel (say a few hundred) long-running (over days) database hunts. Individual hunts have different search parameters (start time, end time, query filter). Parameters may get changed over time.

    Currently I'm thinking of the following:

        - A SearchController (stateless session bean) formulates a set of search parameters and sends it off to a SearchListener via JMS.
        - The SearchListener (message-driven bean) receives the search parameters and instantiates a SearchWorker with the parameters.
        - The SearchWorker (SLSB) hunts repeatedly through the database; when it finds something, the result is sent off via JMS and the search continues; when the given 'end-time' has been reached, it ends.

    What I'm wondering now:

        - Is there a problem with EJB3 instances running for days? (Other than that I need to be able to deal with container restarts...)
        - How do I know how many, and which, EJB instances of SearchWorker are currently running?
        - Is it possible to communicate with them individually (similar to sending a System V signal to a unix process), e.g. to send new parameters, to end an instance, etc.?

  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
    We are writing a new application, and while testing, we will need a bunch of dummy data. I've added that data by using MS Access to dump Excel files into the relevant tables. Every so often, we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping and re-creating) is an easy sql script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data.

    I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like in TOAD (for Oracle), where you can right-click on a grid and click Save As > Insert Statements, and it will just dump a big sql script wherever you want.

    The only way I can think of doing it is to save the table to an Excel sheet and then write an Excel formula to create an INSERT for every row, which is surely not the best way. I'm using the 2008 Management Studio to connect to a SQL Server 2005 database.
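
    If memory serves, the SSMS 2008 "Generate Scripts" wizard can also script table data, which may make this unnecessary. Failing that, one do-it-yourself route, sketched in C#; the connection string and the everything-as-a-quoted-string simplification are assumptions, good enough for dummy data but not a general-purpose scripter:

        using System;
        using System.Data.SqlClient;
        using System.Text;

        static class InsertScripter
        {
            // Reads every row of the given table and emits one INSERT per row.
            public static string GenerateInserts(string connectionString, string table)
            {
                var sql = new StringBuilder();
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("SELECT * FROM " + table, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            var values = new string[reader.FieldCount];
                            for (int i = 0; i < reader.FieldCount; i++)
                            {
                                object v = reader.GetValue(i);
                                values[i] = v == DBNull.Value
                                    ? "NULL"
                                    : "'" + v.ToString().Replace("'", "''") + "'"; // quote everything: crude, but fine for test data
                            }
                            sql.AppendFormat("INSERT INTO {0} VALUES ({1});", table, string.Join(", ", values));
                            sql.AppendLine();
                        }
                    }
                }
                return sql.ToString();
            }
        }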
