Search Results

Search found 27327 results on 1094 pages for 'search results'.


  • rails ajax redirect

    - by badnaam
    Here is my use case. I have a search model with two actions, search_set and search_show.
    1 - A user loads the home page, which contains a search form rendered via a partial (search_form).
    2 - The user does a search. The request goes to search_set, the search is saved, and a redirect happens to the search_show page, which again renders search_form with the saved search preferences. This search form is different from the one in step 1, because it is a remote form being submitted to the same action (search_set).
    3 - Now the user does another search, and the form is submitted via Ajax to the search_set action. The search is saved and executed, and now I need to present the results via RJS templates (corresponding to search_show). I am told that if the request is XHR then I can't redirect to the search_show action. Is that right? If so, how do I handle this? Here is my controller class: http://pastie.org/993460 Thanks

    Read the article

  • How do I return JSON data from a partial view FormMethod.Get?

    - by MrM
    I have the following code that submits to my Search JSON action. The problem is that the URL redirects to the JSON search and displays the raw JSON data. I would like to return the results to a table in my partial view instead. Any thoughts on how I can achieve this?
    <div> @using (Html.BeginForm("Search", "Home", FormMethod.Get, new { id = "search-form" })) { ... <button id="search-btn">Search</button> } </div> <div> <table id="search-results">...</table> </div>
    My Home controller works fine, but to make sure the picture is clear: public JsonResult Search(/*variables*/) { ... return Json(response, JsonRequestBehavior.AllowGet); }
    And I get redirected to "Search/(all my variables)".

    Read the article

  • How to reset input field in the iframe?

    - by user244394
    Hi! I have a search input field in the header of the page, and when a search is performed the results are displayed in an iframe, which also has its own search input box. The problem is: how do I go about resetting the input value in the iframe's search field? It seems to remember the old search from the header, so when you search again inside the iframe it displays the last search instead of the new keyword. You have to refresh or clear the search again to make it work correctly. Thanks in advance.

    Read the article

  • TableView reloading problem

    - by abdulsamad
    Hi all, I have a serious problem with table view reloading. I have two options in my table view: 1) show next 10 results, 2) show previous 10 results. I want to show only 50 results at a time in my table view. When the table is populated with 50 results and "show next 10 results" is pressed, results 1-10 are removed and results 51-60 are displayed, so the table view now holds results 11-60. Since the total number of results stays at 50, I have cells with identifiers from 1-50 and they are no longer nil, so I also have to put code in the else block of the condition: if (cell == nil) { } else { /* have to write the same code as in the if branch */ }. Now my cells reload properly when clicking "show next 10" and "show previous 10", but the problem is that, because of the code in the else block, my cells are also reloaded when the user scrolls the table view. Please suggest what I should do to get out of this situation.

    Read the article

  • Add the Recycle Bin to Start Menu in Windows 7

    - by Matthew Guay
    Have you ever tried to open the Recycle Bin by searching for "recycle bin" in the Start menu search, only to find nothing? Here's a quick trick that will let you find the Recycle Bin directly from your Windows Start menu search.
    The Start menu search may be the best timesaver ever added to Windows. In fact, we use it so much that it seems painful to manually search for a program when using Windows XP or older versions of Windows. You can easily find files, folders, programs and more through the Start menu search in both Vista and Windows 7. However, one thing you cannot find is the Recycle Bin; if you enter it in the Start menu search, nothing comes up. Here's how to add the Recycle Bin to your Start menu search.
    What to do
    To access the Recycle Bin from the Start menu search, we need to add a shortcut to the Start menu. Windows includes a personal Start menu folder and an All Users Start menu folder, which all users on the computer can see. This trick only works in the personal Start menu folder.
    Open an Explorer window (simply click the Computer link in the Start menu), click the white part of the address bar, and enter the following (substitute your username for your_user_name), then hit Enter: C:\Users\your_user_name\AppData\Roaming\Microsoft\Windows\Start Menu
    Now, right-click in the folder, select New, and then click Shortcut. In the location box, enter the following: explorer.exe shell:RecycleBinFolder When you've done this, click Next.
    Now, enter a name for the shortcut. You can enter Recycle Bin like the standard shortcut, or you could name it something else such as Trash…if that's easier for you to remember. Click Finish when you're done.
    By default it will have a folder icon. Let's switch that to the standard Recycle Bin icon. Right-click on the new shortcut and click Properties, then click Change Icon… Type the following in the "Look for icons in this file:" box, and press the Enter key on your keyboard: %SystemRoot%\system32\imageres.dll Now, scroll to find the Recycle Bin icon and click OK. Click OK in the previous dialog, and now your Recycle Bin shortcut has the correct icon.
    You can even have multiple shortcuts with different names, so searching for either Recycle Bin or Trash would bring it up in the Start menu. To do that, simply repeat these directions and enter another name of your choice at the prompt. Here we have both a Recycle Bin and a Trash icon.
    Now, when you enter Recycle Bin (or Trash, depending on what you chose) in your Start menu search, you will see it at the top of your Start menu. Simply press Enter or click on the icon to open the Recycle Bin.
    This trick will work in Windows Vista too! Simply follow these same directions, and you can add the Recycle Bin to your Vista Start menu and find it via search. This is a simple trick, but it may make it much easier for you to open your Recycle Bin directly from your Windows Vista or 7 Start menu search. If you're using Windows 7, you can also check out our directions on how to Add the Recycle Bin to the Taskbar in Windows 7.

    Read the article

  • OSB and Coherence Integration

    - by mark.ms.smith
    Anyone who has tried to manage Coherence nodes or to cache results in OSB will appreciate the new functionality now available. As of WebLogic Server 10.3.4, you can use the WebLogic Administration Server, via the Administration Console or WLST, and the Java-based Node Manager to manage and monitor the life cycle of stand-alone Coherence cache servers. This is a great step forward, as the previous options mainly involved writing your own scripts to do this. You can find an excellent description of how this works at James Bayer's blog. You can also find the WebLogic documentation here.
    As of Oracle Service Bus 11gR1 (11.1.1.3.0), OSB now supports service result caching for Business Services with Coherence. If you use Business Services that return somewhat static results that do not change often, you can configure those Business Services to cache results. For Business Services that use result caching, you can control the time to live for the cached result. After the cached result expires, the next Business Service call results in invoking the back-end service to get the result. This result is then stored in the cache for future requests to access. I'm thinking that this caching functionality would be perfect for some sort of cross-reference data that is refreshed nightly by batch. You can find the OSB Business Service documentation here.
    Result Caching in a dedicated JVM
    This example demonstrates these new features by configuring an OSB Business Service to cache results in a separate Coherence JVM managed by WebLogic. The reason why you may want to use a separate, dedicated JVM is that the result cache data could potentially be quite large and you may want to protect your OSB Java heap. In this example, the client will call an OSB Proxy Service to get Employee data based on an Employee Id. Using a Business Service, OSB calls an external system. The results are automatically cached and, when called again, the respective results are retrieved from the cache rather than the external system.
    Step 1 – Set up your Coherence Server
    Via the OSB Administration Server Console, create your Coherence Server to be used as the results cache. Here are the configured Coherence Server arguments from the Server Start tab. Note that I'm using the default Cache Config and Override files in the domain:
    -Xms256m -Xmx512m -XX:PermSize=128m -XX:MaxPermSize=256m -Dtangosol.coherence.override=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-override.xml -Dtangosol.coherence.cluster=OSB-cluster -Dtangosol.coherence.cacheconfig=/app/middleware/jdev_11.1.1.4/user_projects/domains/osb_domain2/config/osb/coherence/osb-coherence-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true -Dcom.sun.management.jmxremote
    Just in case you need it, here is my Coherence Server classpath:
    /app/middleware/jdev_11.1.1.4/oracle_common/modules/oracle.coherence_3.6/coherence.jar:/app/middleware/jdev_11.1.1.4/modules/features/weblogic.server.modules.coherence.server_10.3.4.0.jar:/app/middleware/jdev_11.1.1.4/oracle_osb/lib/osb-coherence-client.jar
    By default, OSB will try to create a local result cache instance. You need to disable this by adding the following JVM parameters to each of the OSB Managed Servers:
    -Dtangosol.coherence.distributed.localstorage=false -DOSB.coherence.cluster=OSB-cluster
    If you need more information on configuring a remote result cache, have a look at the configuration documentation under the heading Using an Out-of-Process Coherence Cache Server.
    Step 2 – Configure your Business Service
    Under the respective Business Service Message Handling Configuration (Advanced Properties), you need to enable "Result Caching". Additionally, you need to determine what the cache data will be keyed on. In the example below, I'm keying it on the unique Employee Id.
    The Results
    As this test was on my laptop, the actual timings are just an indication that there is a benefit to caching results. Using my test harness, I sent 10,000 requests to OSB, all with the same Employee Id. In this case, I had result caching disabled. You can see that this caused the back-end Business Service (BS_GetEmployeeData) to be called for each request. Then, after enabling result caching, I sent the same number of identical requests. You can now see the Business Service was only invoked once, on the first request. All subsequent requests used the Results Cache.
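    The article configures this caching declaratively in OSB, but conceptually the result cache behaves like a get-or-compute pattern keyed on the Employee Id. The following is an illustrative sketch of that pattern in plain Java, not taken from the article, assuming the standard com.tangosol.net API; the cache name, TTL and back-end call are hypothetical, and OSB performs the equivalent work for you once result caching is enabled.
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class EmployeeResultCache {
        // Hypothetical cache name and TTL; OSB manages its own cache configuration.
        private static final String CACHE_NAME = "employee-results";
        private static final long TTL_MILLIS = 5 * 60 * 1000; // 5 minutes

        private final NamedCache cache = CacheFactory.getCache(CACHE_NAME);

        public Object getEmployeeData(String employeeId) {
            Object result = cache.get(employeeId);           // hit: served from Coherence
            if (result == null) {
                result = callBackendService(employeeId);     // miss: invoke the real service
                cache.put(employeeId, result, TTL_MILLIS);   // store with a time-to-live
            }
            return result;
        }

        private Object callBackendService(String employeeId) {
            // Placeholder for the real back-end call (BS_GetEmployeeData in the article).
            return "employee-data-for-" + employeeId;
        }
    }
    In a real deployment the cached value would need to be serializable so it can live in the remote, dedicated cache JVM described above.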

    Read the article

  • Perl CGI that sends a temporary loading page to client then later sends the actual results page

    - by Kurt W. Leucht
    I've wasted at least a half day of my company's time searching the Internet for an answer and I'm getting wrapped around the axle here. I can't figure out the difference between all the different technology choices (long polling, ajax streaming, comet, XMPP, etc.) and I can't get a simple hello world example working on my PC. I am running Apache 2.2 and ActivePerl 5.10.0. JavaScript is completely acceptable for this solution. All I want to do is write a simple Perl CGI script that when accessed, it immediately returns some HTML that tells the user to wait or maybe sends an animated GIF. Then without any user intervention (no mouse clicks or anything) I want the CGI script to at some time later replace the wait message or the animated GIF with the actual HTML results from their query. I know this is simple stuff and websites do it all the time, but I can't find a single working example that I can cut and paste onto my machine that will work. Here is my simple Hello World example that I've compiled from various Internet sources, but it doesn't seem to work. When I refresh this CGI URL in my web browser it prints nothing for 5 seconds, then it prints the PLEASE BE PATIENT web page, but not the results web page. What am I doing wrong? #!C:\Perl\bin\perl.exe use CGI; use CGI::Carp qw/fatalsToBrowser warningsToBrowser/; sub Create_HTML { my $html = <<EOHTML; <html> <head> <meta http-equiv="pragma" content="no-cache" /> <meta http-equiv="expires" content="-1" /> <script type="text/javascript" > var xmlhttp=false; /*@cc_on @*/ /*@if (@_jscript_version >= 5) // JScript gives us Conditional compilation, we can cope with old IE versions. // and security blocked creation of the objects. try { xmlhttp = new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) { try { xmlhttp = new ActiveXObject("Microsoft.XMLHTTP"); } catch (E) { xmlhttp = false; } } @end @*/ if (!xmlhttp && typeof XMLHttpRequest!='undefined') { try { xmlhttp = new XMLHttpRequest(); } catch (e) { xmlhttp=false; } } if (!xmlhttp && window.createRequest) { try { xmlhttp = window.createRequest(); } catch (e) { xmlhttp=false; } } </script> <title>Ajax Streaming Connection Demo</title> </head> <body> Some header text. <p> <div id="response">PLEASE BE PATIENT</div> <p> Some footer text. </body> </html> EOHTML return $html; } my $cgi = new CGI; print $cgi->header; print Create_HTML(); sleep(5); print "<script type=\"text/javascript\">\n"; print "\$('response').innerHTML = 'Here are your results!';\n"; print "</script>\n";

    Read the article

  • Mysql query help - Alter this mysql query to get these results?

    - by sandeepan-nath
    Please execute the following queries first to set up so that you can help me:- CREATE TABLE IF NOT EXISTS `Tutor_Details` ( `id_tutor` int(10) NOT NULL auto_increment, `firstname` varchar(100) NOT NULL default '', `surname` varchar(155) NOT NULL default '', PRIMARY KEY (`id_tutor`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=41 ; INSERT INTO `Tutor_Details` (`id_tutor`,`firstname`, `surname`) VALUES (1, 'Sandeepan', 'Nath'), (2, 'Bob', 'Cratchit'); CREATE TABLE IF NOT EXISTS `Classes` ( `id_class` int(10) unsigned NOT NULL auto_increment, `id_tutor` int(10) unsigned NOT NULL default '0', `class_name` varchar(255) default NULL, PRIMARY KEY (`id_class`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=229 ; INSERT INTO `Classes` (`id_class`,`class_name`, `id_tutor`) VALUES (1, 'My Class', 1), (2, 'Sandeepan Class', 2); CREATE TABLE IF NOT EXISTS `Tags` ( `id_tag` int(10) unsigned NOT NULL auto_increment, `tag` varchar(255) default NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=18 ; INSERT INTO `Tags` (`id_tag`, `tag`) VALUES (1, 'Bob'), (6, 'Class'), (2, 'Cratchit'), (4, 'Nath'), (3, 'Sandeepan'), (5, 'My'); CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` ( `id_tag` int(10) unsigned NOT NULL default '0', `id_tutor` int(10) default NULL, KEY `Tutors_Tag_Relations` (`id_tag`), KEY `id_tutor` (`id_tutor`), KEY `id_tag` (`id_tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`) VALUES (3, 1), (4, 1), (1, 2), (2, 2); CREATE TABLE IF NOT EXISTS `Class_Tag_Relations` ( `id_tag` int(10) unsigned NOT NULL default '0', `id_class` int(10) default NULL, `id_tutor` int(10) NOT NULL, KEY `Class_Tag_Relations` (`id_tag`), KEY `id_class` (`id_class`), KEY `id_tag` (`id_tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `Class_Tag_Relations` (`id_tag`, `id_class`, `id_tutor`) VALUES (5, 1, 1), (6, 1, 1), (3, 2, 2), (6, 2, 2); In the present system data which I have given , tutor named "Sandeepan Nath" has has created class named "My Class" and tutor named "Bob Cratchit" has created class named "Sandeepan Class". Requirement - To execute a single query with limit on the results to show search results as per AND logic on the search keywords like this:- If "Sandeepan Class" is searched , Tutor Sandeepan Nath's record from Tutor Details table is returned(because "Sandeepan" is the firstname of Sandeepan Nath and Class is present in class name of Sandeepan's class) If "Class" is searched Both the tutors from the Tutor_details table are fetched because Class is present in the name of the class created by both the tutors. 
Following is what I have so far achieved (PHP Mysql):- <?php $searchTerm1 = "Sandeepan"; $searchTerm2 = "Class"; mysql_select_db("test"); $sql = "SELECT td.* FROM Tutor_Details AS td LEFT JOIN Tutors_Tag_Relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor LEFT JOIN Classes AS wc ON td.id_tutor = wc.id_tutor LEFT JOIN Class_Tag_Relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor LEFT JOIN Tags as t1 on ((t1.id_tag = ttagrels.id_tag) OR (t1.id_tag = wtagrels.id_tag)) LEFT JOIN Tags as t2 on ((t2.id_tag = ttagrels.id_tag) OR (t2.id_tag = wtagrels.id_tag)) where t1.tag LIKE '%".$searchTerm1."%' AND t2.tag LIKE '%".$searchTerm2."%' GROUP BY td.id_tutor LIMIT 10 "; $result = mysql_query($sql); echo $sql; if($result) { while($rec = mysql_fetch_object($result)) $recs[] = $rec; //$rec = mysql_fetch_object($result); echo "<br><br>"; if(is_array($recs)) { foreach($recs as $each) { print_r($each); echo "<br>"; } } } ?> But the results are :- If "Sandeepan Nath" is searched, it does not return any tutor (instead of only Sandeepan's row) If "Sandeepan Class" is searched, it returns Sandeepan's row (instead of Both tutors ) If "Bob Class" is searched, it correctly returns Bob's row If "Bob Cratchit" is searched, it does not return any tutor (instead of only

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic. My Approach to Data Mining Projects It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer’s data in advance. I don’t know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest to start with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer’s site – either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question as well as the meaning of data at hand, while the IT expert should be familiar with the structure of data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert the most time consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project. If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach as it offers the establishment of a common background for all people involved, the understanding of how the algorithms work and the understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more. Once the data has been extracted, the customer’s SME (i.e. the analyst), and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, knowledge and expertise allow us to focus immediately on the most interesting attributes and identify any additional, calculated, ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of input data. I favor the customer’s decision of assigning additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems. Simultaneously with the data preparation the data overview is performed. The logic behind this task is the same – again I show to the teams involved the expected results, how to achieve them and what they mean. This is also done in multiple cycles as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance – it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks. 
    The presence of the customer’s SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of a project without the customer’s SME.
    After the data preparation and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models, and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer’s personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube. After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities of deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers.
    Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time for a single 5 to 10 day project:
    - The customer learns the basic patterns of frauds and fraud detection
    - The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems
    - The customer’s analysts learn how to perform much more in-depth analyses than they ever thought possible
    - The customer’s IT experts learn how to perform data extraction and preparation much more efficiently than they did before
    - All of the attendees of this training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production
    - The POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution
    - It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise
    Typically, the project results in several important “side effects”:
    - Improved data quality
    - Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization
    - Because eventually more minds get to be involved in the enterprise, the company should expect more and better fraud detection patterns
    After the POC project is completed as described above, the actual project would not need months of engagement from my side. This is possible due to our preference to transfer the knowledge onto the customer’s employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly.
    I usually expect to perform the following tasks:
    - Establish the final infrastructure to measure the efficiency of the deployed models
    - Deploy the models in additional scenarios: through reports, and by including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings
    - Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project
    - Create smart ETL applications that divert suspicious data for immediate or later inspection
    - I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary whereas a central system is available as well
    - Of course, for the actual project, I would repeat the data and model preparation as needed
    It is virtually impossible to tell in advance how much time the deployment would take, before we decide together with the customer what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take an additional 5 to 10 days.
    The approximate timeline for the POC project is as follows:
    - 1-2 days of training
    - 2-3 days for data preparation and data overview
    - 2 days for creating and evaluating the models
    - 1 day for initial preparation of the continuous learning infrastructure
    - 1 day for presentation of the results and discussion of further actions
    Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one hour more on data preparation, or created just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, according to my experience, is to restrict the time spent on the project in advance, after an agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improving the models and predictions over time without the need for a constant engagement with me.

    Read the article

  • OutOfMemoryError: Java heap space error when starting Solr

    - by Hamid
    Hi I start indexing DB articles with solr, but after add about 58 million article (and about 113 GB size of disk) , i get below error message on tomcat log error Note1: i already set Init memory pool to 256MB, and Max memory pool:1400MB to tomcat server. Note2: I can post or search article but must wait over 3 min for get response. 8-apr-2010 14:27:07 org.apache.solr.common.SolrException log SEVERE: java.lang.OutOfMemoryError: Java heap space at org.apache.lucene.util.PriorityQueue.initialize(PriorityQueue.java:89) at org.apache.lucene.search.HitQueue.<init>(HitQueue.java:67) at org.apache.lucene.search.TopScoreDocCollector.<init>(TopScoreDocCollector.java:113) at org.apache.lucene.search.TopScoreDocCollector.<init>(TopScoreDocCollector.java:37) at org.apache.lucene.search.TopScoreDocCollector$InOrderTopScoreDocCollector.<init>(TopScoreDocCollector.java:42) at org.apache.lucene.search.TopScoreDocCollector$InOrderTopScoreDocCollector.<init>(TopScoreDocCollector.java:40) at org.apache.lucene.search.TopScoreDocCollector.create(TopScoreDocCollector.java:100) at org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:979) at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:884) at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:341) at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:182) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316) at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:859) at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:574) at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1527) at java.lang.Thread.run(Unknown Source) What's problem ? Have any suggestion ? Thanks in advanced

    Read the article

  • Regex query: how can I search PDFs for a phrase where words in that phrase appear on more than one line?

    - by Alison
    I am trying to set up an index page for the weekly magazine I work on. It is to show readers the names of companies mentioned in that week's issue, plus the page numbers they appear on. I want to search all the PDF files for the week, where one PDF = one magazine page (originally made in Adobe InDesign CS3 and Adobe InCopy CS3). I have set up a list of companies I want to search for and, using PowerGREP with delimited regular expressions, I am able to find most page numbers where a company is mentioned. However, where a company name contains two or more words, the search I am running will not pick up instances where the name appears over more than one line. For example, when looking for "CB Richard Ellis" and "Cushman & Wakefield", I got no result when the text appeared like this: DTZ beat BNP PRE, CB [line break here] Richard Ellis and Cushman & [line break here] Wakefield to secure the contract. [line end here]
    Could someone advise me on how to write a regular expression that will either ignore white space between words and ignore line endings, OR one that will look for the words across all types of white space (i.e. uneven spaces between words; spaces at the end of lines or line endings; and tabs, which I am guessing are embedded somehow in the PDF files)?
    Here is a sample of the set of terms I have asked PowerGREP to search for:
    \bCB Richard Ellis\b
    \bCB Richard Ellis Hotels\b
    \bCentaur Services\b
    \bChapman Herbert\b
    \bCharities Property Fund\b
    \bChetwoods Architects\b
    \bChurch Commissioners\b
    \bClive Emson\b
    \bClothworkers’ Company\b
    \bColliers CRE\b
    \bCombined English Stores Group\b
    \bCommercial Estates Group\b
    \bConnells\b
    \bCooke & Powell\b
    \bCordea Savills\b
    \bCrown Estate\b
    \bCushman & Wakefield\b
    \bCWM Retail Property Advisors\b
    (There is a hard return between the end of each phrase and the beginning of the next.)
    By the way, I am a production journalist, not usually involved in finding IT-type solutions, and I am finding it difficult to get to grips with the technical language on the PowerGREP site. Thanks for assistance. Alison
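    One common approach, shown here as a hedged sketch in Java rather than PowerGREP syntax since the idea is the same in any regex flavour, is to build the pattern by replacing each literal space in the company name with \s+, which matches any run of spaces, tabs, or line breaks:
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class CompanyMatcher {
        // Turn "CB Richard Ellis" into "\bCB\s+Richard\s+Ellis\b" so that any
        // whitespace (spaces, tabs, newlines) may separate the words.
        static Pattern patternFor(String companyName) {
            String[] words = companyName.trim().split("\\s+");
            StringBuilder regex = new StringBuilder("\\b");
            for (int i = 0; i < words.length; i++) {
                if (i > 0) {
                    regex.append("\\s+");
                }
                regex.append(Pattern.quote(words[i])); // escape characters like '&'
            }
            regex.append("\\b");
            return Pattern.compile(regex.toString());
        }

        public static void main(String[] args) {
            String pageText = "DTZ beat BNP PRE, CB\nRichard Ellis and Cushman &\nWakefield to secure the contract.";
            for (String name : new String[] { "CB Richard Ellis", "Cushman & Wakefield" }) {
                Matcher m = patternFor(name).matcher(pageText);
                System.out.println(name + " found: " + m.find());
            }
        }
    }
    In PowerGREP terms the equivalent search term would simply be the phrase with \s+ between the words, for example \bCB\s+Richard\s+Ellis\b.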

    Read the article

  • ‘Empty’ results from MQL Query. Freebase Schema: /film/film/starring & /film/actor/film

    - by user1879631
    First post here, so I hope this is enough detail. I started using freebase-python today to get film information for a program that I’m working on. One thing that I need to grab is a list of actors that starred in a film. I’ve followed some tutorials and guides on the way to do this, and can get a list of films for a director or the director of a film, but when I try to do the same with an actor or a film’s cast, I get ‘null’ results. I have the same problem in both Python and the Freebase MQL Query Editor, and you can see what I've tried below. Links to all of the examples below written in the editor can be found here, as Stack Overflow wouldn't let me post links underneath each example on my first post! Working director query in Python: import freebase fb = freebase.mqlread q = {'type':'/film/film', 'name':'Inception', 'directed_by':[]} fb(q) Working director's filmography query in Python: import freebase fb = freebase.mqlread q = {'type':'/film/director', 'name': 'Christopher Nolan', 'film':[]} fb(q) Based on these tests, I tried to do the same with actors, but with odd results: Not working cast list query in Python: import freebase fb = freebase.mqlread q = {'type':'/film/film', 'name':'Inception', 'starring':[]} fb(q) Not working actor's filmography query in Python: import freebase fb = freebase.mqlread q = {'type':'/film/actor', 'name':'Leonardo DiCaprio', 'film':[]} fb(q) Strangely, I get an accurate number of actors/films back, but no names. Does anyone have any idea what the problem might be? Thanks a lot, I'd appreciate any advice.

    Read the article

  • jackson failing to map empty array with No content to map to Object due to end of input

    - by ijabz
    I send a query to an API and map the JSON results to my classes using Jackson. When I get some results it works fine, but when there are no results it fails with:
    java.io.EOFException: No content to map to Object due to end of input at org.codehaus.jackson.map.ObjectMapper._initForReading(ObjectMapper.java:2766) at org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2709) at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1854) at com.jthink.discogs.query.DiscogsServerQuery.mapQuery(DiscogsServerQuery.java:382) at com.jthink.discogs.query.SearchQuery.mapQuery(SearchQuery.java:37)
    The thing is, the API isn't returning an empty response, so I don't see why it is failing. Here is the query: http://api.discogs.com/database/search?page=1&type=release&release_title=nude+and+rude+the+best+of+iggy+pop and this is what I get back: { "pagination": { "per_page": 50, "pages": 1, "page": 1, "urls": {}, "items": 0 }, "results": [] }
    Here is the top-level object I'm trying to map to: public class Search { private Pagination pagination; private Result[] results; public Pagination getPagination() { return pagination; } public void setPagination(Pagination pagination) { this.pagination = pagination; } public Result[] getResults() { return results; } public void setResults(Result[] results) { this.results = results; } }
    I'm guessing the problem is something to do with the results array being returned blank, but I can't see what I'm doing wrong.
    EDIT: The comment below was correct. Although I usually receive the JSON above, and in those cases there is no problem, sometimes I seem to just get an empty String. Now I'm wondering if the problem is how I read from the input stream: if (responseCode == HttpURLConnection.HTTP_OK) { InputStreamReader in = new InputStreamReader(uc.getInputStream()); BufferedReader br = new BufferedReader(in); while (br.ready()) { String next = br.readLine(); sb.append(next); } return sb.toString(); } Although I don't read until I get the response code, is it possible that the first time I call br.ready() I call it before the stream is ready, and therefore I don't read the input?
    EDIT 2: Changing the above code to simply String line; while ((line = br.readLine()) != null) { sb.append(line); } resolved the issue.
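    As the edits suggest, the root cause was an occasionally empty response body rather than the empty results array itself. A defensive way to handle that, sketched below under the assumption of Jackson 1.x (as in the stack trace) and the Search class shown above, is to read the body into a String and only map it when it is non-blank; the blank-body guard is my addition, not part of the original code:
    import java.io.IOException;
    import org.codehaus.jackson.map.ObjectMapper;

    public class SearchMapper {
        private final ObjectMapper mapper = new ObjectMapper();

        public Search map(String responseBody) throws IOException {
            // Guard against an empty or whitespace-only body, which otherwise
            // fails with "No content to map to Object due to end of input".
            if (responseBody == null || responseBody.trim().isEmpty()) {
                return new Search(); // treat it as zero results
            }
            return mapper.readValue(responseBody, Search.class);
        }
    }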

    Read the article

  • LINQ to SQL - How to efficiently do either an AND or an OR search for multiple criteria

    - by Dan Diplo
    I have an ASP.NET MVC site (which uses LINQ to SQL for the ORM) and a situation where a client wants a search facility against a bespoke database whereby they can choose to do either an 'AND' search (all criteria match) or an 'OR' search (any criteria match). The query is quite complex and long, and I want to know if there is a simple way I can make it do both without having to create and maintain two different versions of the query. For instance, the current 'AND' search looks something like this (but this is a much simplified version): private IQueryable<SampleListDto> GetSampleSearchQuery(SamplesCriteria criteria) { var results = from r in Table where (r.Id == criteria.SampleId) && (r.Status.SampleStatusId == criteria.SampleStatusId) && (r.Job.JobNumber.StartsWith(criteria.JobNumber)) && (r.Description.Contains(criteria.Description)) select r; } I could copy this and replace the && operators with || to get the 'OR' version, but I feel there must be a better way of achieving this. Does anybody have any suggestions for how this can be achieved in an efficient, flexible way that is easy to maintain? Thanks.
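    One widely used pattern is to build the individual criteria as a list of predicates and then combine the whole list once with either AND or OR, driven by a flag. The sketch below is purely illustrative and hedged: it expresses that idea with the JPA Criteria API in Java, the closest analogue available here to the LINQ query above, using hypothetical Sample and SamplesCriteria types that stand in for the reader's own entity and criteria classes. In LINQ to SQL the same shape is usually achieved by composing expression predicates (for example with a predicate-builder helper) rather than copying the query.
    import java.util.ArrayList;
    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.criteria.CriteriaBuilder;
    import javax.persistence.criteria.CriteriaQuery;
    import javax.persistence.criteria.Predicate;
    import javax.persistence.criteria.Root;

    public class SampleSearch {
        // Sample and SamplesCriteria are hypothetical stand-ins for the real mapped
        // entity and the criteria DTO from the question.
        public List<Sample> search(EntityManager em, SamplesCriteria c, boolean matchAll) {
            CriteriaBuilder cb = em.getCriteriaBuilder();
            CriteriaQuery<Sample> cq = cb.createQuery(Sample.class);
            Root<Sample> r = cq.from(Sample.class);

            // Build each criterion once...
            List<Predicate> predicates = new ArrayList<Predicate>();
            predicates.add(cb.equal(r.get("id"), c.getSampleId()));
            predicates.add(cb.equal(r.get("statusId"), c.getSampleStatusId()));
            predicates.add(cb.like(r.<String>get("description"), "%" + c.getDescription() + "%"));

            // ...then combine the whole list with AND or OR depending on the flag.
            Predicate[] ps = predicates.toArray(new Predicate[0]);
            cq.select(r).where(matchAll ? cb.and(ps) : cb.or(ps));
            return em.createQuery(cq).getResultList();
        }
    }
    The key point is that only the final combining step changes between the two search modes, so the criteria themselves are defined exactly once.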

    Read the article

  • What is happening in this T-SQL code? (Concatenating the results of a SELECT statement)

    - by Ben McCormack
    I'm just starting to learn T-SQL and could use some help in understanding what's going on in a particular block of code. I modified some code in an answer I received in a previous question, and here is the code in question: DECLARE @column_list AS varchar(max) SELECT @column_list = COALESCE(@column_list, ',') + 'SUM(Case When Sku2=' + CONVERT(varchar, Sku2) + ' Then Quantity Else 0 End) As [' + CONVERT(varchar, Sku2) + ' - ' + Convert(varchar,Description) +'],' FROM OrderDetailDeliveryReview Inner Join InvMast on SKU2 = SKU and LocationTypeID=4 GROUP BY Sku2 , Description ORDER BY Sku2 Set @column_list = Left(@column_list,Len(@column_list)-1) Select @column_list ---------------------------------------- 1 row is returned: ,SUM(Case When Sku2=157 Then Quantity Else 0 End) As [157 -..., SUM(Case ... The T-SQL code does exactly what I want, which is to make a single result based on the results of a query, which will then be used in another query. However, I can't figure out how the SELECT @column_list =... statement is putting multiple values into a single string of characters by being inside a SELECT statement. Without the assignment to @column_list, the SELECT statement would simply return multiple rows. How is it that by having the variable within the SELECT statement that the results get "flattened" down into one value? How should I read this T-SQL to properly understand what's going on?

    Read the article

  • Intelligent search and generation of Java code, preferably using Python?

    - by Ipsquiggle
    Basically, I do lots of one-off code generation, large-scale refactorings, etc. etc. in Java. My tool language of choice is Python, but I'll take whatever solutions you can offer. Here is a simplified illustration of what I would like, in a pseudocode Generating an implementation for an interface search within my project: for each Interface as iName: write class(name=iName+"Impl", implements=iName) search within the body of iName: for each Method as mName: write method(name=mName, body="// TODO implement this...") Basically, the tool I'm searching for would allow me to: parse files according to their Java structure ("search for interfaces") search for words contextualized by language elements and types ("variables of type SomeClass", "doStuff() method calls on SomeClass instances") to run searches with structural context ("within the body of the current result") easily replace or generate code (with helpers to generate, as above, or functions for replacing, "rename the interface to Foo", "insert the line Blah.Blah()", etc.) The point is, I don't want to spend a lot of time writing these things, as they are usually throwaway. But sometimes I need something just a little smarter than what grep offers. It wouldn't be too hard to write up a simplistic version of this, but if I'm going to use something like this at all, I'd expect it to be robust. Any suggestions of a tool/library that will help me accomplish this?

    Read the article

  • Why do my ping command (Windows) results alternate between "timeout" and "network is not reachable"?

    - by Sopalajo de Arrierez
    My Windows is in Spanish, so I will have to paste console outputs in that language (I think that translating without knowing the exact terms used in english versions could give worse results than leaving it as it appears on screen). This is the issue: when pinging a non-existent IP from a WinXP-SP3 machine (clean Windows install, just formatted), I get sometimes a "Timeout" result, and sometimes a "network is not reachable" message. This is the result of: ping 192.168.210.1 Haciendo ping a 192.168.210.1 con 32 bytes de datos: Tiempo de espera agotado para esta solicitud. Respuesta desde 80.58.67.86: Red de destino inaccesible. Respuesta desde 80.58.67.86: Red de destino inaccesible. Tiempo de espera agotado para esta solicitud. Estadísticas de ping para 192.168.210.1: Paquetes: enviados = 4, recibidos = 2, perdidos = 2 (50% perdidos), Tiempos aproximados de ida y vuelta en milisegundos: Mínimo = 0ms, Máximo = 0ms, Media = 0ms 192.168.210.1 does not exist on the network. DHCP client is enabled, and the computer gets assigned those network config by the router. My IP: 192.168.11.2 Netmask: 255.255.255.0 Gateway: 192.168.11.1 DNS: 80.58.0.33/194.224.52.36 This is the output from "route print command": =========================================================================== Rutas activas: Destino de red Máscara de red Puerta de acceso Interfaz Métrica 0.0.0.0 0.0.0.0 192.168.11.1 192.168.11.2 20 127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1 192.168.11.0 255.255.255.0 192.168.11.2 192.168.11.2 20 192.168.11.2 255.255.255.255 127.0.0.1 127.0.0.1 20 192.168.11.255 255.255.255.255 192.168.11.2 192.168.11.2 20 224.0.0.0 240.0.0.0 192.168.11.2 192.168.11.2 20 255.255.255.255 255.255.255.255 192.168.11.2 192.168.11.2 1 255.255.255.255 255.255.255.255 192.168.11.2 3 1 Puerta de enlace predeterminada: 192.168.11.1 =========================================================================== Rutas persistentes: ninguno The output of: ping 1.1.1.1 Haciendo ping a 1.1.1.1 con 32 bytes de datos: Tiempo de espera agotado para esta solicitud. Tiempo de espera agotado para esta solicitud. Tiempo de espera agotado para esta solicitud. Tiempo de espera agotado para esta solicitud. Estadísticas de ping para 1.1.1.1: Paquetes: enviados = 4, recibidos = 0, perdidos = 4 1.1.1.1 does not exist on the network. and the output of: ping 10.1.1.1 Haciendo ping a 10.1.1.1 con 32 bytes de datos: Respuesta desde 80.58.67.86: Red de destino inaccesible. Tiempo de espera agotado para esta solicitud. Tiempo de espera agotado para esta solicitud. Respuesta desde 80.58.67.86: Red de destino inaccesible. Estadísticas de ping para 10.1.1.1: Paquetes: enviados = 4, recibidos = 2, perdidos = 2 (50% perdidos), 10.1.1.1 does not exist on the network. I can do some aproximate translation of what you demand if necessary. I have another computers in the same network (WinXP-SP3 and Win7-SP1), and they have, too, this problem. Gateway (Router): Buffalo WHR-HP-GN (official Buffalo firmware, not DD-WRT). I have some Linux (Debian/Kali) machine in my network, so I tested things on it: ping 192.168.210.1 PING 192.168.210.1 (192.168.210.1) 56(84) bytes of data. From 80.58.67.86 icmp_seq=1 Packet filtered From 80.58.67.86 icmp_seq=2 Packet filtered From 80.58.67.86 icmp_seq=3 Packet filtered From 80.58.67.86 icmp_seq=4 Packet filtered to the non-existing 1.1.1.1 : ping 1.1.1.1 PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data. 
^C --- 1.1.1.1 ping statistics --- 153 packets transmitted, 0 received, 100% packet loss, time 153215ms (no response after waiting a few minutes). and the non-existing 10.1.1.1: ping 10.1.1.1 PING 10.1.1.1 (10.1.1.1) 56(84) bytes of data. From 80.58.67.86 icmp_seq=20 Packet filtered From 80.58.67.86 icmp_seq=22 Packet filtered From 80.58.67.86 icmp_seq=23 Packet filtered From 80.58.67.86 icmp_seq=24 Packet filtered From 80.58.67.86 icmp_seq=25 Packet filtered What is going on here? I am posing this question mainly for learning purposes, but there is another reason: when all pings are returning "timeout", it creates an %ERRORLEVEL% value of 1, but if there is someone of "Network is not reachable" type, %ERRORLEVEL% goes to 0 (no error), and this could be inappropriate for a shell script (we can not use ping to detect, for example, if the network is down due to loss of contact with the gateway).
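    Because the exit code is 0 whenever any ICMP reply comes back, including the "destination net unreachable" replies from 80.58.67.86, scripts that need a reliable up/down check usually inspect the reply text instead of %ERRORLEVEL%, typically by looking for "TTL=", which only appears in a genuine echo reply. As a hedged illustration of that idea (a sketch, not a drop-in replacement for a batch script), the same check driven from Java might look like this:
    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class PingCheck {
        // Returns true only if a real echo reply (containing "TTL=") came back.
        static boolean isReachable(String host) throws Exception {
            Process p = new ProcessBuilder("ping", "-n", "1", "-w", "1000", host)
                    .redirectErrorStream(true)
                    .start();
            StringBuilder output = new StringBuilder();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    output.append(line).append('\n');
                }
            }
            p.waitFor();
            // The exit code alone is misleading; "TTL=" appears only in true replies,
            // and the token also survives localisation (it is present in the Spanish output above).
            return output.toString().toUpperCase().contains("TTL=");
        }

        public static void main(String[] args) throws Exception {
            System.out.println(isReachable("192.168.11.1"));   // gateway: expected true
            System.out.println(isReachable("192.168.210.1"));  // non-existent: expected false
        }
    }
    The equivalent trick in a batch file is to pipe the ping output through find "TTL=" and test the errorlevel of find rather than of ping.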

    Read the article

  • JQuery Autocomplete Form Submission

    - by user1658370
    I have managed to implement the jQuery autocomplete plugin on my website, but I was wondering if it is possible to make the form auto-submit once the user selects an item from the search. I have the following set-up.
    HTML form: <form class="quick_search" action="../include/search.php" method="post"> <input type="text" name="search" id="search" value="Search..."> </form>
    JavaScript: $().ready(function() { $("#search").autocomplete("../include/search.php", { width: 350, selectFirst: false }); });
    I have also included the jQuery and autocomplete plugin scripts. The search.php file contains a list of the search options. At the moment the search works correctly, and I just need it to submit the form once an item is selected from the list that appears. I tried to use the onClick and onSelect options on the search field, but neither of these worked correctly. Any help would be much appreciated (I don't really understand JS)! Thanks.

    Read the article

  • Using the JPA Criteria API, can you do a fetch join that results in only one join?

    - by Shaun
    Using JPA 2.0. It seems that by default (no explicit fetch), @OneToOne(fetch = FetchType.EAGER) fields are fetched in 1 + N queries, where N is the number of results containing an Entity that defines the relationship to a distinct related entity. Using the Criteria API, I might try to avoid that as follows: CriteriaBuilder builder = entityManager.getCriteriaBuilder(); CriteriaQuery<MyEntity> query = builder.createQuery(MyEntity.class); Root<MyEntity> root = query.from(MyEntity.class); Join<MyEntity, RelatedEntity> join = root.join("relatedEntity"); root.fetch("relatedEntity"); query.select(root).where(builder.equals(join.get("id"), 3)); The above should ideally be equivalent to the following: SELECT m FROM MyEntity m JOIN FETCH myEntity.relatedEntity r WHERE r.id = 3 However, the criteria query results in the root table needlessly being joined to the related entity table twice; once for the fetch, and once for the where predicate. The resulting SQL looks something like this: SELECT myentity.id, myentity.attribute, relatedentity2.id, relatedentity2.attribute FROM my_entity myentity INNER JOIN related_entity relatedentity1 ON myentity.related_id = relatedentity1.id INNER JOIN related_entity relatedentity2 ON myentity.related_id = relatedentity2.id WHERE relatedentity1.id = 3 Alas, if I only do the fetch, then I don't have an expression to use in the where clause. Am I missing something, or is this a limitation of the Criteria API? If it's the latter, is this being remedied in JPA 2.1 or are there any vendor-specific enhancements? Otherwise, it seems better to just give up compile-time type checking (I realize my example doesn't use the metamodel) and use dynamic JPQL TypedQueries.
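    A common workaround, not guaranteed by the JPA specification but supported by the major providers such as Hibernate and EclipseLink (so treat this as a provider-dependent sketch), is to create the fetch and then reuse that same Fetch object as the Join for the where clause, so only one SQL join is generated. MyEntity and RelatedEntity below are the entities from the question above:
    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.criteria.CriteriaBuilder;
    import javax.persistence.criteria.CriteriaQuery;
    import javax.persistence.criteria.Fetch;
    import javax.persistence.criteria.Join;
    import javax.persistence.criteria.JoinType;
    import javax.persistence.criteria.Root;

    public class FetchJoinExample {
        public List<MyEntity> findByRelatedId(EntityManager em, long relatedId) {
            CriteriaBuilder cb = em.getCriteriaBuilder();
            CriteriaQuery<MyEntity> cq = cb.createQuery(MyEntity.class);
            Root<MyEntity> root = cq.from(MyEntity.class);

            // One fetch, reused as the join for the predicate.
            Fetch<MyEntity, RelatedEntity> fetch = root.fetch("relatedEntity", JoinType.INNER);
            @SuppressWarnings("unchecked")
            Join<MyEntity, RelatedEntity> join = (Join<MyEntity, RelatedEntity>) fetch;

            cq.select(root).where(cb.equal(join.get("id"), relatedId));
            return em.createQuery(cq).getResultList();
        }
    }
    With the fetch and the join being the same object, the provider emits a single INNER JOIN that is used both for eager loading and for the WHERE condition.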

    Read the article

  • How can I use SQL Server's full text search across multiple rows at once?

    - by Morbo
    I'm trying to improve the search functionality on my web forums. I've got a table of posts, and each post has (among other less interesting things): PostID, a unique ID for the individual post. ThreadID, an ID of the thread the post belongs to. There can be any number of posts per thread. Text, because a forum would be really boring without it. I want to write an efficient query that will search the threads in the forum for a series of words, and it should return a hit for any ThreadID for which there are posts that include all of the search words. For example, let's say that thread 9 has post 1001 with the word "cat" in it, and also post 1027 with the word "hat" in it. I want a search for cat hat to return a hit for thread 9. This seems like a straightforward requirement, but I don't know of an efficient way to do it. Using the regular FREETEXT and CONTAINS capabilities for N'cat AND hat' won't return any hits in the above example because the words exist in different posts, even though those posts are in the same thread. (As far as I can tell, when using CREATE FULLTEXT INDEX I have to give it my index on the primary key PostID, and can't tell it to index all posts with the same ThreadID together.) The solution that I currently have in place works, but sucks: maintain a separate table that contains the entire concatenated post text of every thread, and make a full text index on THAT. I'm looking for a solution that doesn't require me to keep a duplicate copy of the entire text of every thread in my forums. Any ideas? Am I missing something obvious?

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again, in Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why Performance Testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies, in Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests as well as why it’s important to follow a goal based pattern while performance testing your application. In part 3 I’ll be discussing Test Result Analysis, Test Result Drill through, Test Report Generation, Test Run Comparison, Asp.net Profiler and some closing thoughts. Test Results – I see some creepy worms! In Part 2 we put together a web performance test and a load test, lets run the test to see load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways: Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.         After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view, this provides key results in a format that is compact and easy to read. You can also print the load test summary, this is generated after the test has completed or been stopped.         Analyse the load test results of a previously run load test – We’ll see this in the section where i discuss comparison between two test runs. The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, drill down to the user activity chart where you can hover over to see more details of the test run.   Generate Report => Test Run Comparisons The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create excel reports and conduct side by side analysis of two test results or to track trend analysis. The tools also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.   Compare results This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation, I would highly recommend exploring the wealth of knowledge available on MSDN. 
Leaving Thoughts While load testing the application with an excessive load for a longer duration of time, i managed to bring the IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that the IIS had run out of threads as all the threads were busy processing existing request, one easy way of fixing this is by increasing the default number of allocated threads, but this might escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem. When ever the garbage collection runs it stops processing any pages so all requests that come in during that period are queued up, but realistically the garbage collection completes in fraction of a a second. To understand this better lets look at the .net heap, it is divided into large heap and small heap, anything greater than 85kB in size will be allocated to the Large object heap, the Large object heap is non compacting and remember large objects are expensive to move around, so if you are allocating something in the large object heap, make sure that you really need it! The small object heap on the other hand is divided into generations, so all objects that are supposed to be short-lived are suppose to live in Gen-0 and the long living objects eventually move to Gen-2 as garbage collection goes through.  As you can see in the picture below all < 85 KB size objects are first assigned to Gen-0, when Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started, the process checks for all the dead objects and assigns them as the valid candidate for deletion to free up memory and promotes all the remaining objects in Gen-0 to Gen-1. So in the future when ever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen – 0 again, all of Gen – 1 dead objects are drenched and rest are moved to Gen-2 and Gen-0 objects are moved to Gen-1 to free up Gen-0, but by this time your Garbage collection process has started to take much more time than it usually takes. Now as I mentioned earlier when garbage collection is being run all page requests that come in during that period are queued up. Does this explain why possibly page requests are getting queued up, apart from this it could also be the case that you are waiting for a long running database process to complete.      Lets explore the heap a bit more… What is really a case of crisis is when the objects are living long enough to make it to Gen-2 and then dying, this is definitely a high cost operation. But sometimes you need objects in memory, for example when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server then write a simple application that chucks in a lot of data in cache, run a load test over it for about 10-15 minutes, forcing a lot of data in memory causing the heap to run out of memory. If you get to such a state where you start running out of memory the IIS as a mode of recovery restarts the worker process. It is great way to free up all your memory in the heap but this would clear the cache. The problem with this is if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user basket will now be empty forcing them either to get frustrated and go to a competitor website or if the customer is really patient, give it another try! 
    How can you address this? Well, there are two ways of addressing it:

1. Workaround – A 32-bit (x86) machine only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application in ‘Any CPU’ mode, the application is built such that it will run in x86 mode on an x86 processor and in x64 mode on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the ‘Any CPU’ selection to ‘x86’ in the team build, you will be able to run your application on an x64 machine in x86 mode (via WOW64 – Windows on Windows). What that means is that you could use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either built software for NASA or there is something fundamentally wrong in your application.

2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of the memory leak. But this begs the question, “How do I identify possible memory leaks in my application?” Well, I won’t say that there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you started in the right direction. Let’s have a look at how.

Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the Performance session properties page and, as shown in the screen shot below, check the check boxes in the ‘.NET memory profiling collection’ group, namely ‘Collect .NET object allocation information’ and ‘Also collect the .NET object lifetime information’. Now, if you fire off the profiling session on your pages, you will notice that the results allow you to view ‘Object Lifetime’, which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if more than 5% of your application’s objects die right after making it to Gen-2, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and by capturing the IntelliTrace data, you can drill in and narrow down to the line of code that is the root cause of the problem. So now we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product.

Caching

One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request it should keep the data on the web server and, when the user asks for it, serve it from the web server itself. BUT that can have consequences!
    Let’s look at some code. Trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache – a significant performance improvement – EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as in the sketch at the end of this section. So, as soon as a user comes in and finds that the cache is empty, the user takes the lock and starts to build the cache – no more concurrency issues. But let’s say you are processing 10 requests per second. By the time I have locked the operation to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in to fetch the results again. The application will still be faster, because the next set of 10 users, and so on, would continue to get data from the cache. BUT if we add another null check after taking the lock and before the actual call to the database, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn’t I say the code won’t be very intuitive? Maybe you should leave a comment – you don’t want another developer to come in and wonder why on earth you are checking the cache key for null twice!

The downside of caching is that you are storing the data outside of the database, and the data could be wrong, because updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data – say, a single entry point for data updates – you could write some logic to set the cached object to null every time new data is entered. But this approach stops working as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The practical solution to this is micro caching, which means you cache the query results for a set duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the web server, which makes the response immensely quick. Figuring out the appropriate time span for which to micro cache the query results really depends on the application. Let’s say your website gets 10 requests per second; if you retain the cached results for even 1 minute you will have immense performance gains, cutting at least 90% of the hits to the database for searching.

Ever wondered why, when you go to ebookers.com or expedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you sometimes get an error message telling you that the fare is not valid any more? Yes, exactly => that is a cache failure! These travel sites and price comparison engines are not going to hit the database every time you hit the compare button; instead the results are served from the cache, because the query results are micro cached. It’s a deliberate trade-off: by micro caching the results the site gains huge performance benefits, but every once in a while it annoys a customer because the fare has expired.
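    Since the snippet referred to above did not survive here, below is a minimal, hedged C# sketch of the pattern being described – a lock around the cache build, a second null check inside the lock, and a short absolute expiration for micro caching. It assumes classic ASP.NET (System.Web.Caching); the class and method names (SearchResultCache, LoadFromDatabase) are purely illustrative and not from the original post.

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public class SearchResultCache
{
    private static readonly object CacheLock = new object();

    public static IList<string> GetResults(string searchTerm)
    {
        string cacheKey = "search:" + searchTerm.ToLowerInvariant();

        var cached = HttpRuntime.Cache[cacheKey] as IList<string>;
        if (cached != null)
        {
            return cached;                 // served straight from the web server
        }

        lock (CacheLock)
        {
            // Second null check: requests that queued up behind the lock
            // find the cache already populated and skip the database trip.
            cached = HttpRuntime.Cache[cacheKey] as IList<string>;
            if (cached != null)
            {
                return cached;
            }

            IList<string> results = LoadFromDatabase(searchTerm);

            // Micro caching: keep the results for a short, fixed window so the
            // cache invalidates itself without any extra plumbing.
            HttpRuntime.Cache.Insert(
                cacheKey,
                results,
                null,
                DateTime.UtcNow.AddSeconds(60),
                Cache.NoSlidingExpiration);

            return results;
        }
    }

    private static IList<string> LoadFromDatabase(string searchTerm)
    {
        // Placeholder for the actual data access code.
        return new List<string> { "result for " + searchTerm };
    }
}

The second null check inside the lock is exactly the non-intuitive bit mentioned above, so a comment on it is well worth leaving in place.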
    Coming back to those travel sites, the trade-off still works in their favour: they are able to process 30+ page requests per second and so cater for the site traffic, at the cost of maybe losing one customer every once in a while to a competitor – who is probably using a similar caching technique anyway. What are the odds that the user will not come back to their site sooner or later?

Recap

Resources

Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

The Road Ahead

Thank you for taking the time to read this blog post; you may also want to read Part I and Part II if you haven’t so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc. – please leave a comment. Next up is ‘Load Testing in the cloud’, where I’ll be exploring the possibilities of running the test controller and agents in the cloud. See you on the other side! Thank you!

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, which is a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let’s first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as on Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS node application. We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add “node.exe” and “index.js”, install the “express” and “node-sqlserver” modules, and mark all files as “Copy always”. In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as the module named “azure”, which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark it as “Copy always”. You can use my “Copy all always” tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and the service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open “index.js” and let’s continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
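    For reference, the CSDEF change described above might look roughly like the fragment below. This is a hedged sketch only: the role name WorkerRole1 is an assumption, the exact schema can vary with the Azure SDK version, and node.exe/index.js are simply the file names used in this walkthrough.

<!-- ServiceDefinition.csdef (fragment) - role name is an assumption -->
<WorkerRole name="WorkerRole1" vmsize="Small">
  <Runtime>
    <EntryPoint>
      <!-- Making node.exe the entry point lets the service runtime open its
           named pipe to the Node.js process instead of the worker role class. -->
      <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
</WorkerRole>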
    And if we publish our application to Azure, it then works against WASD and the storage service through the cloud configurations.   Summary In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set “node.exe” as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform, but since it is simple and easy to learn and deploy, and it uses a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement; for example, “node-sqlserver” doesn’t yet support SQL parameters, while “azure” does support using a storage connection string to create the storage client. Microsoft is working on making them easier to use and on adding more features and functionality.   PS: you can download the source code here. You can download the source code of my “Copy all always” tool here.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Is there a Firefox or Chrome plugin, or a standalone program, for monitoring site usage and search queries?

    - by Leigh Caldwell
    I'm running some research on how people search the web for specific types of information. I'd like to be able to set them up with a laptop and browser and then record a history of what they search for and what sites they visit. A Firefox or Chrome plugin would be ideal, but a standalone program is fine too. It doesn't need to be free, just quick and reliable. It doesn't need to be a general PC monitoring program (though that would be OK too) - it's only Web usage I need to track. I've found a few on the Web but am not sure which ones to trust. Your recommendations would be much appreciated.

    Read the article

  • How do I disable Windows Search Indexing Service while on battery on Windows 7?

    - by Slaggg
    Since upgrading from Windows XP to Windows 7, I've found laptop battery life is reduced. One issue is that the Windows Search Indexing Service is constantly running, even when on battery power. Using Resource Monitor shows it is often the top consumer of CPU time and top producer of disk activity. I imagine this is one of the primary reasons the battery is getting drained faster. I cannot find any options to tell Windows Search Indexing Service to not index while on battery power. The only solution seems to be to stop the service, and also disable the service (if you just stop it, it will restart itself after a while). Doing this gives me considerably more battery life, but it's a pain because I have to re-enable the service when plugged in again. Is there a better solution?

    Read the article

  • how to search in contact list like messages application in iphone with adding selected item in front

    - by Hiren Gujarati
    I need to make an application like the Messages application on the iPhone. It just needs a To: (search text) field with a contact list below it. When the user types in the search text, the list is automatically filtered, and when the user selects a particular row it appears in front of To:; the user can then search again, just like in the Messages application. You can see the following video, http://www.apple.com/iphone/iphone-3gs/messages.html#video, which shows how recipients are added one by one. Please suggest an approach for this. Thanks in advance.

    Read the article

< Previous Page | 196 197 198 199 200 201 202 203 204 205 206 207  | Next Page >