Search Results

Search found 37260 results on 1491 pages for 'command query responsibil'.


  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources: Map Gallery, a set of Shapefiles for the United States only that ships with Reporting Services; ESRI Shapefile, a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format; or SQL Server spatial data, a query that includes SQLGeography or SQLGeometry data types. Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that the SQL geography type is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server.

    So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started: US Census Bureau Website, http://www.census.gov/geo/www/tiger/ ; Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/ ; Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/ . In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies store locations, to embed a map in a report.

    Preparing the spatial data

    First, I downloaded shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure why, but I had to uncheck the option to create a spatial index to upload the data; otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region).

    Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column, which I joined, along with several other tables, to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post; a rough sketch of the query appears at the end of this post.

    Step 1: Using the Map Wizard to Create a Map of France

    You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores.
    Then I opened the Toolbox window and dragged the Map item to the report body, which starts the wizard. Here are the steps to perform to create a map of France: On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data. On the Choose a connection to a SQL Server spatial data source page, select New. In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial. Click OK and then click Next. On the Design a query page, add a query for the country shape, like this: select * from fra_adm1. Click Next.

    The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page. You have the option to add a Bing Maps layer, which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names, roads, and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature; it's a nice integration feature. Use the + or - button to resize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area, which is called the viewport.) You can eliminate the color scale and distance scale boxes that appear in the map area later. Select Embed map data in this report for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database; however, it does increase the size of the RDL. Click Next. On the Choose map visualization page, select Basic Map. We'll add data for visualization later; for now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map.

    Step 2: Add a Map Layer to an Existing Map

    The map data region allows you to add multiple layers, each associated with a different dataset. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add another layer for the store locations by following these steps: If the Map Layers window is not visible, click the report body, and then click twice anywhere on the map data region to display it. Click the New Layer Wizard button in the Map Layers window, and the process starts over with choosing a spatial data source. Select SQL Server spatial query, and click Next. Select Add a new dataset with SQL Server spatial data, and click Next. Click New, add a connection string to the AdventureWorks2008R2 database, and click Next. Add a query with spatial data (like the one I included in the downloadable project), and click Next. The location data now appears as another layer on top of the regional map created earlier. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map; you might need to drag the map within the viewport to center it properly. Select Embed map data in this report, and click Next.
    On the Choose map visualization page, select Basic Marker Map, and click Next. On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day. Click Finish and then click Preview to see the rendered report. Et voilà... c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities.
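    For readers who want the gist of the store query without downloading the project, here is a rough sketch of its shape. This is my reconstruction against the standard AdventureWorks2008R2 schema, not the exact script from the download, so treat the join path and the France filter as assumptions to verify:

    select s.Name as StoreName, a.SpatialLocation
    from Sales.Store as s
    join Sales.Customer as c
        on c.StoreID = s.BusinessEntityID           -- customers attached to each store
    join Person.BusinessEntityAddress as bea
        on bea.BusinessEntityID = s.BusinessEntityID
    join Person.Address as a
        on a.AddressID = bea.AddressID              -- the SpatialLocation column lives here
    join Person.StateProvince as sp
        on sp.StateProvinceID = a.StateProvinceID
    where sp.CountryRegionCode = 'FR';              -- limit to stores in France

    The real report query also pulls sales figures to display per store, but the geography column above is all that the marker layer strictly needs.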

    Read the article

  • Microsoft TechEd 2010 - Day 3 @ Bangalore

    - by sathya
    Microsoft TechEd 2010 - Day 3 @ Bangalore. Sorry for my delayed post on day 3; I had to travel from Bangalore to Chennai, so I couldn't write for the past two days. On day 3, as usual, we had a lot of simultaneous tracks with various sessions. This day I chose the Your Data, Our Platform track. It had sessions on the following 5 topics:
    Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam
    SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M
    SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan
    Data Recovery / Consistency with CheckDB - by Vinod Kumar M
    Developing with SQL Server Spatial and Deep Dive into Spatial Indexing - by Pinal Dave

    Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam. This was one of the superb sessions I have attended. He explained all the concepts in detail with a demo. The important thing here is the Data-tier Application project, newly introduced in VS2010, with which we can manage all our data along with our application inside VS itself. We can create the DB, tables, procs, views etc. right there, and once we deploy, it creates a compressed file called .dacpac which stores all the changes in table schema, created procs, etc. in that single file, which reduces our (developers') effort in preparing the deployment scripts and handing them to the DBA. It also has some policy configurations which can be managed easily by checking some rules, like in Outlook. For example: if the SQL Server version > 10, then deploy; else don't. This rule specifies that even if we try to deploy to a SQL Server DB with a version less than 10, it will not do it. And if we deploy some .dacpac to the SQL Server production DB with the option to upgrade the DB with this dacpac, once everything completes successfully it reports success; otherwise it rolls back to the prior version. Even if it gets deployed successfully and at a later point of time you wish to revert to the prior version, you can go ahead and delete the existing dacpac version so that it reverts to the older version of the DB changes. And for the good questions that were asked in the session, T-shirts were given.

    SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M. This one too was among the best sessions. The speaker, Vinod, explained everything very clearly. This was a really useful session and, believe it or not, as far as I could tell, in the whole 3 days of TechEd, apart from the keynote, this was the only session where the seats were full (house FULL); people were even standing outside to attend it. Such a great one it was. The speaker did a deep dive into the query plan and showed what actually causes the problem. It's all about understanding how SQL Server executes queries: we think in one way, and SQL Server never executes it that way; we need to understand that first. He also mentioned that there might be two plans generated for a single query at a point in time because of parallel processors in the system. The key is in every query: there is an Estimated Row Count and an Actual Row Count in the query plan. If the row count estimated by SQL Server tallies with the actual row count, your performance will be awesome. He shared some tweaks to achieve that. After this, as usual, we had lunch.

    SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan. This was more of a DBA's session.
    I'm really sorry, but I was totally blank and not interested in attending this session, so I walked out to attend Migrating to the Cloud by Harish Ranganathan (my favorite speaker), but unfortunately that turned out to be some other person's session. There the speaker was talking about how to configure connection strings so that we can connect to the SQL Azure platform from VS, and he also showed us how to deploy the same to Windows Azure. In between there were a lot of technical problems, like the laptop hanging and the user account getting locked, and he was switching between systems; also I came in halfway, so I wasn't able to follow it fully. In between, since I got an MCTS certification, they gave me a T-shirt with the line 'I'm Certified. Are you?' and asked me to wear it; if we wore it we might get spotted, and they would give us some goodies. So on the 3rd day I was wearing that T-shirt. I got spotted by Tarun, the person coordinating things about the certification; he was accompanied by a cameraman, they interviewed me about the certification, and I was shown live at TechEd to 60,000 live viewers. I was really happy about that.

    Data Recovery / Consistency with CheckDB - by Vinod Kumar M. This was one of the best sessions at TechEd too. This guy is really amazing. In front of us he crashed a DB and showed how to recover it in 6 different ways for different kinds of failures. He covered the different types of error messages, like 823, 824 and 825; msdb..suspect_pages; and DBCC CheckDB (with the different parameters to it). I am really waiting for his session to be uploaded to the TechEd website. Here is his contact info if you wish to connect with him: Twitter: @vinodk_sql; Website: www.ExtremeExperts.com; Blog: http://blogs.sqlxml.org/vinodkumar

    Developing with SQL Server Spatial and Deep Dive into Spatial Indexing - by Pinal Dave. Pinal Dave is a king in SQL; he is a SQL MVP and the owner of SQLAuthority.com. He took the session on spatial databases from the start and showed the two different types of spatial data: geometric and geographic. Geometric: x and y axes on a planar surface. Geographic: a spherical surface, with 360° as the maximum, which is used to represent geographic points on the earth and makes it easy to draw maps of different kinds. He had a lot of obstacles during his session: rain coming inside the hall, mic wires burst due to the rain, and the video going off on the display screens. In spite of that, he asked the audience to come to the front rows and managed to deliver a good session without PPTs, and when we finally got the displays back on, he showed demos of what he had explained orally. That was a really fun-filled, informative session. He gave books to the people who asked good questions and answered his questions well, and I got one too (a book on Data Mining, Wrox publishers).

    And finally, after all these things, there was a closing keynote session for TechEd, and we all assembled in a big hall where Mr. Ashok Soota, around 70 years old and a co-founder of MindTree, was invited to give a lecture on his successes. He talked about his past, the companies he moved between and for what reasons, his successes, his failures, and what he learned from those past failures, as well as his successes and failures in partnerships with other concerns. There were some questions for him, like: What is your suggestion for young entrepreneurs? How did you learn from past failures? What keeps your success going?
    What is your suggestion on partnerships? How do you choose partners? And so on. They said that at 7.30 PM there would be a party night, but unfortunately I was not able to attend because I had to catch my train, and before that I had to pack my things, so I left at 7 itself. That's it about TechEd!!! Stay tuned for further technology updates.

    Read the article

  • CDN on Hosted Service in Windows Azure

    - by Shaun
    Yesterday I told Wang Tao, an annoying colleague sitting beside me, how to enable the CDN for the static content in his website, which had just been published on Windows Azure. The approach would be: move the static content (the images, CSS files, etc.) into blob storage; enable the CDN on his storage account; change the URLs of those static files to the CDN URL. I think these are the very common steps when using the CDN. But this morning I found that the new Windows Azure SDK 1.4 and the new Windows Azure Developer Portal had just been announced at the Windows Azure Blog. One of the new features in this release concerns the CDN: we can now enable the CDN not only for a storage account, but for a hosted service as well. With this new feature, the steps I mentioned above become much simpler.

    Enable CDN for Hosted Service

    To enable the CDN for a hosted service we just need to log on to the Windows Azure Developer Portal. Under the "Hosted Services, Storage Accounts & CDN" item we will find a new menu on the left-hand side named "CDN", where we can manage the CDN for storage accounts and hosted services. The hosted services and storage accounts in my subscriptions are all listed there. Enabling a CDN for a hosted service is very simple: just select a hosted service and click the New Endpoint button on top. In this dialog we can select the subscription and the storage account, or the hosted service, for which we want the CDN enabled. If we select a hosted service, the "Source URL for the CDN endpoint" will be shown automatically. This means the Windows Azure platform will make all content under the "/cdn" folder CDN-enabled; we cannot change this value at the moment. The 3 checkboxes next to the URL are: Enable CDN, to enable or disable the CDN; HTTPS, to check if we need to use HTTPS connections; and Query String, to check if we are caching content from a hosted service and we are using query strings to specify the content to be retrieved. Just click the "Create" button to let Windows Azure create the CDN for our hosted service. The CDN would be available within 60 minutes, as Microsoft mentioned. My experience is that the CDN could be used after about 15 minutes, and we can find the CDN URL in the portal as well.

    Put the Content in CDN in Hosted Service

    Let's create a simple Windows Azure project in Visual Studio with an MVC 2 Web Role. When we created the CDN mentioned above, the source URL of the CDN endpoint was under the "/cdn" folder. So in Visual Studio we create a folder under the website named "cdn" and put some static files there. All these files will then be cached by the CDN if we use the CDN endpoint. The CDN of the hosted service can also cache a kind of "dynamic" result with the Query String feature enabled. We create a controller named CdnController and a GetNumber action in it. The routed URL of this controller would be /Cdn/GetNumber, which can be CDN-ed as well since the URL places it under the "/cdn" folder. In the GetNumber action we just put a number value, specified by a parameter, into the view model; the URL could then be like /Cdn/GetNumber?number=2.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;

    namespace MvcWebRole1.Controllers
    {
        public class CdnController : Controller
        {
            //
            // GET: /Cdn/

            public ActionResult GetNumber(int number)
            {
                return View(number);
            }
        }
    }

    And we add a view to display the number, which is super simple.

    <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<int>" %>

    <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
        GetNumber
    </asp:Content>

    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <h2>The number is: <%: Model.ToString() %></h2>
    </asp:Content>

    Since this action is under the CdnController, the URL is under the "/cdn" folder, which means it can be CDN-ed. And since we checked "Query String", the content of this dynamic page will be cached by its query string. So if I use the CDN URL, http://az25311.vo.msecnd.net/GetNumber?number=2, the CDN will first check whether there's any content cached with the key "GetNumber?number=2". If yes, the CDN will return the content directly; otherwise it will connect to the hosted service, http://aurora-sys.cloudapp.net/Cdn/GetNumber?number=2, send the result back to the browser, and cache it in the CDN. But note that the query string is treated as a plain string when used as the key of a CDN element. This means the URLs below would be cached as 2 separate elements in the CDN: http://az25311.vo.msecnd.net/GetNumber?number=2&page=1 and http://az25311.vo.msecnd.net/GetNumber?page=1&number=2. The final step is to upload the project onto Azure.

    Test the Hosted Service CDN

    After publishing the project on Azure, we can use the CDN in the website. The CDN endpoint we created is az25311.vo.msecnd.net, so all files under the "/cdn" folder can be requested with it. Let's have a try with the sample.htm and c_great_wall.jpg static files. We can also request the dynamic page GetNumber, with its query string, through the CDN endpoint. And if we refresh this page, it is shown very quickly, since the content comes from the CDN without the MVC server-side processing. The style of this page is missing, though: the CSS file was not included in the "/cdn" folder, so the page cannot retrieve it from the CDN URL.

    Summary

    In this post I introduced the new CDN feature that arrived with the Windows Azure SDK 1.4 and the new Developer Portal. With the CDN for a hosted service we can just put the static resources under a "/cdn" folder so that the CDN caches them automatically; there's no need to put them into blob storage. It also supports caching dynamic content with the Query String feature, so we can cache some parts of a web page by combining user controls and the CDN. For example, we can cache the log-on user control in the master page so that the log-on part loads super-fast. There are some other new features within this release you can find here. And for more detailed information about the Windows Azure CDN, please have a look here as well.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Using IIS Logs for Performance Testing with Visual Studio

    - by Tarun Arora
    In this blog post I’ll show you how you can play back IIS logs in Visual Studio to automatically generate web performance tests. You can also download the sample solution I am demoing in the blog post.

    Introduction

    Performance testing is as important for new websites as it is for evolving websites. If you already have your website running in production, you can mine the information available in the IIS logs to analyse the dense zones (most used pages) and performance test those pages, rather than wasting time testing and tuning the least used pages in your application.

    What are IIS Logs

    To help with server use and analysis, IIS is integrated with several types of log files. These log file formats provide information on a range of websites and specific statistics, including Internet Protocol (IP) addresses, user information and site visits, as well as dates, times and queries. If you are using IIS 7 and above, you will find the log files in the following directory: C:\inetpub\logs\

    Walkthrough

    1. Download and install Log Parser from the Microsoft Download Centre. You should see LogParser.dll in the install folder; the default install location is C:\Program Files (x86)\Log Parser 2.2. LogParser.dll gives us a library to query the IIS log files programmatically. By the way, if you haven't used Log Parser in the past, it is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. More details…

    2. Create a new test project in Visual Studio. Let's call it IISLogsToWebPerfTestDemo.

    3. Delete the UnitTest1.cs class that gets created by default. Right click the solution and add a project of type class library, named IISLogsToWebPerfTestEngine. Delete the default class that gets created with the project.

    4. Under the IISLogsToWebPerfTestEngine project add references to:
    Microsoft.VisualStudio.QualityTools.WebTestFramework - c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.WebTestFramework.dll
    Log Parser, also called MSUtil - c:\users\tarora\documents\visual studio 2010\Projects\IisLogsToWebPerfTest\IisLogsToWebPerfTestEngine\obj\Debug\Interop.MSUtil.dll

    5. Right click the IISLogsToWebPerfTestEngine project and add a new class, IISLogReader.cs. The IISLogReader class queries the IIS logs using Log Parser:
    using System;
    using System.Collections.Generic;
    using System.Text;
    using MSUtil;
    using LogQuery = MSUtil.LogQueryClassClass;
    using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass;
    using LogRecordSet = MSUtil.ILogRecordset;
    using Microsoft.VisualStudio.TestTools.WebTesting;
    using System.Diagnostics;

    namespace IisLogsToWebPerfTestEngine
    {
        // By making use of Log Parser it is possible to query the IIS log using SELECT queries.
        public class IISLogReader
        {
            private string _iisLogPath;

            public IISLogReader(string iisLogPath)
            {
                _iisLogPath = iisLogPath;
            }

            public IEnumerable<WebTestRequest> GetRequests()
            {
                LogQuery logQuery = new LogQuery();
                IISLogInputFormat iisInputFormat = new IISLogInputFormat();

                // Currently these columns give us sufficient information to construct the web test requests.
                string query = @"SELECT s-ip, s-port, cs-method, cs-uri-stem, cs-uri-query FROM " + _iisLogPath;
                LogRecordSet recordSet = logQuery.Execute(query, iisInputFormat);

                // Apply a bit of transformation.
                while (!recordSet.atEnd())
                {
                    ILogRecord record = recordSet.getRecord();
                    if (record.getValueEx("cs-method").ToString() == "GET")
                    {
                        string server = record.getValueEx("s-ip").ToString();
                        string path = record.getValueEx("cs-uri-stem").ToString();
                        string querystring = record.getValueEx("cs-uri-query").ToString();
                        StringBuilder urlBuilder = new StringBuilder();
                        urlBuilder.Append("http://");
                        urlBuilder.Append(server);
                        urlBuilder.Append(path);
                        if (!String.IsNullOrEmpty(querystring))
                        {
                            urlBuilder.Append("?");
                            urlBuilder.Append(querystring);
                        }

                        // You could make substitutions by introducing parameterized web tests.
                        WebTestRequest request = new WebTestRequest(urlBuilder.ToString());
                        Debug.WriteLine(request.UrlWithQueryString);
                        yield return request;
                    }
                    recordSet.moveNext();
                }
                Console.WriteLine(" That's it! Closing the reader");
                recordSet.close();
            }
        }
    }

    6. Connect the dots by adding a reference to the 'IisLogsToWebPerfTestEngine' project from 'IisLogsToWebPerfTest'. Right click the 'IisLogsToWebPerfTest' project and add a new class, 'WebTest1Coded.cs'. The WebTest1Coded class inherits from the WebTest class. By overriding the GetRequestEnumerator method we can hand the log file to the IISLogReader class, which uses Log Parser to query the log file and extract the web requests; each generated web test request is yielded back for playback when the test is run.

    namespace IisLogsToWebPerfTest
    {
        using System;
        using System.Collections.Generic;
        using System.Text;
        using Microsoft.VisualStudio.TestTools.WebTesting;
        using Microsoft.VisualStudio.TestTools.WebTesting.Rules;
        using IisLogsToWebPerfTestEngine;

        // This class is a coded web performance test implementation, that simply passes
        // the path of the IIS logs to the IISLogReader class which does the heavy
        // lifting of reading the contents of the log file and converting them to tests.
        // You could have multiple such classes that inherit from WebTest, implement
        // the GetRequestEnumerator method, and pass different log files for different tests.
        public class WebTest1Coded : WebTest
        {
            public WebTest1Coded()
            {
                this.PreAuthenticate = true;
            }

            public override IEnumerator<WebTestRequest> GetRequestEnumerator()
            {
                // Substitute the highlighted path with the path of the IIS log file.
                IISLogReader reader = new IISLogReader(@"C:\Demo\iisLog1.log");
                foreach (WebTestRequest request in reader.GetRequests())
                {
                    yield return request;
                }
            }
        }
    }

    7. It's time to fire off the test and see the IIS log play back as a web performance test.
    From the Test menu, choose Test View; in the Test View window you should be able to see the WebTest1Coded test show up. Highlight the test and press Run Selection (you can also debug the test in case you face any failures during test execution).

    8. Optionally you can create a Load Test, keeping 'WebTest1Coded' as the base test.

    Conclusion

    You have just helped your testing team; you have now become the coolest developer in your organization! Jokes apart, Log Parser and web performance tests together allow you to save a lot of time by not having to worry about what to test, or even about how to record the test. If you haven't already, download the solution from here. You can take this to the next level by using Log Parser to extract the log files to a database as part of an end-of-day batch (a sketch of such a command follows below), see the usage trends by using this solution over a longer term, and have your tests consume the web requests now stored in the database to generate the web performance tests. If you like the post, don't forget to share … Keep RocKiNg!
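    As a sketch of what that batch step could look like: Log Parser 2.2 ships a SQL output format that can write query results straight into a SQL Server table. The command below is an illustration only; the server, database, table and log path are hypothetical placeholders, so check the flags against Log Parser's own help before relying on them:

    logparser "SELECT s-ip, cs-method, cs-uri-stem, cs-uri-query, time-taken INTO WebRequests FROM C:\inetpub\logs\LogFiles\W3SVC1\*.log" -i:IISW3C -o:SQL -server:localhost -database:PerfData -createTable:ON

    With -createTable:ON the SQL output format creates the WebRequests table on first run; your coded web tests could then read their requests from that table instead of the raw log files.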

    Read the article

  • Nashorn in the Twitterverse (Continued)

    - by Homma
    This article is a translation of a post from jlaskey's Nashorn Blog: https://blogs.oracle.com/nashorn/entry/nashorn_in_the_twitterverse_continued

    The previous post showed how to query Twitter from Nashorn through the Java Twitter4J library; this time we take that Twitter data and graph it with JavaFX. JavaFX is normally programmed in Java, but since Nashorn can use Java classes directly, we can drive JavaFX from JavaScript as well.

    First, the data. The getTrendingData() function below searches Twitter for tweets containing "nashorn OR nashornjs" and tallies how many were posted on each day (the start date, 2012-11-21, is around the time Nashorn's move to OpenJDK was announced):

    var twitter4j = Packages.twitter4j;
    var TwitterFactory = twitter4j.TwitterFactory;
    var Query = twitter4j.Query;

    function getTrendingData() {
        var twitter = new TwitterFactory().instance;
        var query = new Query("nashorn OR nashornjs");
        query.since("2012-11-21");
        query.count = 100;
        var data = {};
        do {
            var result = twitter.search(query);
            var tweets = result.tweets;
            for each (var tweet in tweets) {
                var date = tweet.createdAt;
                var key = (1900 + date.year) + "/" + (1 + date.month) + "/" + date.date;
                data[key] = (data[key] || 0) + 1;
            }
        } while (query = result.nextQuery());
        return data;
    }

    Next, a JavaFX BarChart to display the daily counts returned by getTrendingData():

    var javafx = Packages.javafx;
    var Stage = javafx.stage.Stage;
    var Scene = javafx.scene.Scene;
    var Group = javafx.scene.Group;
    var Chart = javafx.scene.chart.Chart;
    var FXCollections = javafx.collections.FXCollections;
    var ObservableList = javafx.collections.ObservableList;
    var CategoryAxis = javafx.scene.chart.CategoryAxis;
    var NumberAxis = javafx.scene.chart.NumberAxis;
    var BarChart = javafx.scene.chart.BarChart;
    var XYChart = javafx.scene.chart.XYChart;
    var Series = javafx.scene.chart.XYChart.Series;
    var Data = javafx.scene.chart.XYChart.Data;

    function graph(stage, data) {
        var root = new Group();
        stage.scene = new Scene(root);
        var dates = Object.keys(data);
        var xAxis = new CategoryAxis();
        xAxis.categories = FXCollections.observableArrayList(dates);
        var yAxis = new NumberAxis("Tweets", 0.0, 200.0, 50.0);
        var series = FXCollections.observableArrayList();
        for (var date in data) {
            series.add(new Data(date, data[date]));
        }
        var tweets = new Series("Tweets", series);
        var barChartData = FXCollections.observableArrayList(tweets);
        var chart = new BarChart(xAxis, yAxis, barChartData, 25.0);
        root.children.add(chart);
    }

    A couple of details are worth pointing out. Where Java code would have to write stage.setScene(new Scene(root)), the script simply assigns stage.scene = new Scene(root): Nashorn (via Dynalink) treats Java Beans properties as plain JavaScript properties, mapping the assignment to the corresponding setter (setScene()). Likewise, in FXCollections.observableArrayList(dates), Nashorn automatically converts the JavaScript array (dates) into the Java array type the method expects, so JavaScript data structures can be passed to Java APIs without extra ceremony.

    There is one complication: a JavaFX application is normally a Java class extending javafx.application.Application, which manages the application lifecycle. To launch our JavaScript from that lifecycle, we use a small Java wrapper:
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import javafx.application.Application;
    import javafx.stage.Stage;
    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import javax.script.ScriptException;

    public class TrendingMain extends Application {
        private static final ScriptEngineManager MANAGER = new ScriptEngineManager();
        private final ScriptEngine engine = MANAGER.getEngineByName("nashorn");
        private Trending trending;

        public static void main(String[] args) {
            launch(args);
        }

        @Override
        public void start(Stage stage) throws Exception {
            trending = (Trending) load("Trending.js");
            trending.start(stage);
        }

        @Override
        public void stop() throws Exception {
            trending.stop();
        }

        private Object load(String script) throws IOException, ScriptException {
            try (final InputStream is = TrendingMain.class.getResourceAsStream(script)) {
                return engine.eval(new InputStreamReader(is, "utf-8"));
            }
        }
    }

    Here Nashorn is accessed through the standard JSR-223 javax.script API:

    private static final ScriptEngineManager MANAGER = new ScriptEngineManager();
    private final ScriptEngine engine = MANAGER.getEngineByName("nashorn");

    This obtains a Nashorn engine for evaluating JavaScript, and the load method uses that engine to evaluate the script file and return the result.

    To hand control from Java to the script in a typed way, we define a Java interface with the JavaFX lifecycle methods, start and stop, for the script to implement:

    public interface Trending {
        public void start(Stage stage) throws Exception;
        public void stop() throws Exception;
    }

    On the script side, an object implementing that interface is constructed and returned as the last expression of Trending.js:

    function newTrending() {
        return new Packages.Trending() {
            start: function(stage) {
                var data = getTrendingData();
                graph(stage, data);
                stage.show();
            },

            stop: function() {
            }
        }
    }

    newTrending();

    Since the result of eval is the object returned by newTrending(), the Java side can cast it to Trending:

    trending = (Trending) load("Trending.js");

    When JavaFX starts the application, trending.start(stage) runs, which gathers the trend data with getTrendingData (defined in Trending.js), graphs it, and shows the stage.

    For more about Nashorn, see: http://www.myexpospace.com/JavaOne2012/SessionFiles/CON5251_PDF_5251_0001.pdf
    For more about Dynalink, see: https://github.com/szegedi/dynalink

    Read the article

  • Management and Monitoring Tools for Windows Azure

    - by BuckWoody
    With such a large platform, Windows Azure has a lot of moving parts. We’ve done our best to keep the interface as simple as possible, while giving you the most control and visibility we can. However, as with most Microsoft products, there are multiple ways to do something, and I’ve always found that to be a good strength. Depending on the situation, I might want a graphical interface, a command-line interface, or just an API so I can incorporate the management into my own tools, or have third-party companies write other tools. While by no means exhaustive, I thought I might put together a quick list of a few tools you can use to manage and monitor Windows Azure components, from our IaaS, SaaS and PaaS offerings. Some of the products focus on one area more than another, but all are available today. I’ll try and maintain this list to keep it current, but make sure you check the date of this post’s update; if it’s more than six months old, it’s most likely out of date. Things move fast in the cloud.

    The Windows Azure Management Portal

    The primary tool for managing Windows Azure is our portal; most everything you need is there, from creating new services to querying a database. There are two versions as of this writing: a Silverlight client version, and a newer HTML5 version. The latter is being updated constantly to reach parity with the Silverlight client. There’s a balance in this portal between simplicity and power; we’re following the “less is more” approach, with increasing levels of detail as you work through the portal rather than overwhelming you with a single, long “more is more” page. You can find the Portal here: http://windowsazure.com (then click “Log In” and then “Portal”)

    Windows Azure Management API

    You can also use programming tools to either write your own interface, or simply provide management functions directly within your solution. You have two options: you can use the more universal REST APIs, which are a bit more complex but work with any system that can write to them, or the more approachable .NET API calls in code. You can find the reference for the APIs here: http://msdn.microsoft.com/en-us/library/windowsazure/ee460799.aspx
    All class libraries, for each part of Windows Azure: http://msdn.microsoft.com/en-us/library/ee393295.aspx

    PowerShell Command-lets

    PowerShell is one of the most powerful scripting languages I’ve used with Windows, and it’s baked into all of our products. When you need to work with multiple servers, scripting is really the only way to go, and the Windows Azure PowerShell Command-lets allow you to work across most any part of the platform, and can even be used within the services themselves. You can do everything with them, from creating a new IaaS, PaaS or SaaS service, to controlling them and even working with security and more. You can find more about the Command-lets here: http://wappowershell.codeplex.com/documentation (older link, still works, will point you to the new ones as well)
    We have command-line utilities for other operating systems as well: https://www.windowsazure.com/en-us/manage/downloads/
    Video walkthrough of using the Command-lets: http://channel9.msdn.com/Events/BUILD/BUILD2011/SAC-859T

    System Center

    System Center is actually a suite of graphical tools you can use to manage, deploy, control, monitor and tune software from Microsoft and even other platforms.
    This will be the primary tool we’ll recommend for managing a hybrid or contiguous management process, and as time goes on you’ll see more and more features put into System Center for the entire Windows Azure suite of products. You can find the Management Pack and README for it here: http://www.microsoft.com/en-us/download/details.aspx?id=11324

    SQL Server Management Studio / Data Tools / Visual Studio

    SQL Server has two built-in management and development tools, and since version 2008 R2, you can use them to manage Windows Azure Databases. Visual Studio also lets you connect to and manage portions of Windows Azure as well as Windows Azure Databases. You can read more about Visual Studio here: http://msdn.microsoft.com/en-us/library/windowsazure/ee405484
    You can read more about the SQL tools here: http://msdn.microsoft.com/en-us/library/windowsazure/ee621784.aspx

    Vendor-Provided Tools

    Microsoft does not suggest or endorse a specific third-party product. We do, however, use them, and see lots of other customers use them. You can browse to these sites to learn more, and chat with their folks directly on how they support Windows Azure.
    Cerebrata: Tools for managing from the command line, graphical diagnostics, graphical storage management - http://www.cerebrata.com/
    Quest Cloud Tools: Monitoring, storage management, and costing tools - http://communities.quest.com/community/cloud-tools
    Paraleap: Monitoring tool - http://www.paraleap.com/AzureWatch
    Cloudgraphs: Monitoring tool - http://www.cloudgraphs.com/
    Opstera: Monitoring for Windows Azure and a scale-out pattern manager - http://www.opstera.com/products/Azureops/
    Compuware: SaaS performance monitoring, load testing - http://www.compuware.com/application-performance-management/gomez-apm-products.html
    SOASTA: Penetration and security testing - http://www.soasta.com/cloudtest/enterprise/
    LoadStorm: Load-testing tool - http://loadstorm.com/windows-azure

    Open-Source Tools

    This is probably the most specific set of tools, and the list I’ll have to maintain most often. Smaller projects have a way of coming and going, so I’ll try and make sure this list is current.
    Windows Azure MMC (I actually use this one a lot): http://wapmmc.codeplex.com/
    Windows Azure Diagnostics Monitor: http://archive.msdn.microsoft.com/wazdmon
    Azure Application Monitor: http://azuremonitor.codeplex.com/
    Azure Web Log: http://www.xentrik.net/software/azure_web_log.html
    Cloud Ninja: Multi-tenant billing and performance monitor - http://cnmb.codeplex.com/
    Cloud Samurai: Multi-tenant management - http://cloudsamurai.codeplex.com/

    If you have additions to this list, please post them as a comment and I’ll research and then add them. Thanks!

    Read the article

  • SQL WHERE clause not returning rows when field has NULL value

    - by JohnB
    Ok, so I'm aware of this issue: When SET ANSI_NULLS is ON, all comparisons against a null value evaluate to UNKNOWN (see: SQL And NULL Values in where clause; SQL Server return rows that are not equal <> to a value, and NULL). However, I am trying to query a DataTable. I could add to my query: OR col_1 IS NULL OR col_2 IS NULL for every column, but my table has 47 columns, and I'm building dynamic SQL (string concatenation), and it just seems like a pain to do that. Is there another solution? I want to bring back all the rows that have NULL values in the WHERE comparison.

    UPDATE: Example of a query that gave me problems:

    string query = "col_1 not in ('Dorothy', 'Blanche') and col_2 not in ('Zborna', 'Devereaux')";
    grid.DataContext = dataTable.Select(query).CopyToDataTable();

    (didn't retrieve rows if/when col_1 = null and/or col_2 = null)
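    For illustration, the per-column workaround described above, applied to the filter in the UPDATE, would look something like this (same column names as above; this is the verbose form I'm trying to avoid generating for all 47 columns):

    (col_1 not in ('Dorothy', 'Blanche') or col_1 is null)
    and (col_2 not in ('Zborna', 'Devereaux') or col_2 is null)

    As far as I can tell, the DataColumn expression syntax that DataTable.Select uses accepts IS NULL in filter expressions like this, even though it isn't full T-SQL.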

    Read the article

  • DELETE and EDIT is not working in my python program

    - by user2968025
    This is a simple Python program that ADDs, DELETEs, EDITs and VIEWs student records. The problem is, DELETE and EDIT are not working. I don't know why, but when I tried removing one '?' in the DELETE function, I got an error saying there are only 8 columns and it needs 10. But originally, there are only 9 columns. I don't know where it got the other one to make it 10. Please help.. :(

    import sys
    import sqlite3
    import tkinter
    import tkinter as tk
    from tkinter import *
    from tkinter.ttk import *

    def newRecord():
        studentnum=""
        name=""
        age=""
        birthday=""
        address=""
        email=""
        course=""
        year=""
        section=""
        con=sqlite3.connect("Students.db")
        cur=con.cursor()
        cur.execute("CREATE TABLE IF NOT EXISTS student(studentnum TEXT, name TEXT, age TEXT, birthday TEXT, address TEXT, email TEXT, course TEXT, year TEXT, section TEXT)")

        def save():
            studentnum=en1.get()
            name=en2.get()
            age=en3.get()
            birthday=en4.get()
            address=en5.get()
            email=en6.get()
            course=en7.get()
            year=en8.get()
            section=en9.get()
            student=(studentnum,name,age,birthday,address,email,course,year,section)
            cur.execute("INSERT INTO student(studentnum,name,age,birthday,address,email,course,year,section) VALUES(?,?,?,?,?,?,?,?,?)",student)
            con.commit()

        win=tkinter.Tk();win.title("Students")
        lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Add Record")
        lbl.pack()
        lbl1=tkinter.Label(win,width=30,text="Student Number : ")
        lbl1.pack()
        en1=tkinter.Entry(win,width=30)
        en1.pack()
        lbl2=tkinter.Label(win,width=30,text="Name : ")
        lbl2.pack()
        en2=tkinter.Entry(win,width=30)
        en2.pack()
        lbl3=tkinter.Label(win,width=30,text="Age : ")
        lbl3.pack()
        en3=tkinter.Entry(win,width=30)
        en3.pack()
        lbl4=tkinter.Label(win,width=30,text="Birthday : ")
        lbl4.pack()
        en4=tkinter.Entry(win,width=30)
        en4.pack()
        lbl5=tkinter.Label(win,width=30,text="Address : ")
        lbl5.pack()
        en5=tkinter.Entry(win,width=30)
        en5.pack()
        lbl6=tkinter.Label(win,width=30,text="Email : ")
        lbl6.pack()
        en6=tkinter.Entry(win,width=30)
        en6.pack()
        lbl7=tkinter.Label(win,width=30,text="Course : ")
        lbl7.pack()
        en7=tkinter.Entry(win,width=30)
        en7.pack()
        lbl8=tkinter.Label(win,width=30,text="Year : ")
        lbl8.pack()
        en8=tkinter.Entry(win,width=30)
        en8.pack()
        lbl9=tkinter.Label(win,width=30,text="Section : ")
        lbl9.pack()
        en9=tkinter.Entry(win,width=30)
        en9.pack()
        btn1=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Save Student",command=save)
        btn1.pack()

    def editRecord():
        studentnum1=""

        def edit():
            studentnum1=en10.get()
            studentnum=""
            name=""
            age=""
            birthday=""
            address=""
            email=""
            course=""
            year=""
            section=""
            con=sqlite3.connect("Students.db")
            cur=con.cursor()
            row=cur.fetchone()
            cur.execute("DELETE FROM student WHERE name = '%s'" % studentnum1)
            con.commit()

            def save():
                studentnum=en1.get()
                name=en2.get()
                age=en3.get()
                birthday=en4.get()
                address=en5.get()
                email=en6.get()
                course=en7.get()
                year=en8.get()
                section=en8.get()
                student=(studentnum,name,age,email,birthday,address,email,course,year,section)
                cur.execute("INSERT INTO student(studentnum,name,age,email,birthday,address,email,course,year,section) VALUES(?,?,?,?,?,?,?,?,?)",student)
                con.commit()

            win=tkinter.Tk();win.title("Students")
            lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Edit Reocrd :"+'\t'+studentnum1)
            lbl.pack()
            lbl1=tkinter.Label(win,width=30,text="Student Number : ")
            lbl1.pack()
            en1=tkinter.Entry(win,width=30)
            en1.pack()
            lbl2=tkinter.Label(win,width=30,text="Name : ")
            lbl2.pack()
            en2=tkinter.Entry(win,width=30)
            en2.pack()
            lbl3=tkinter.Label(win,width=30,text="Age : ")
            lbl3.pack()
            en3=tkinter.Entry(win,width=30)
            en3.pack()
            lbl4=tkinter.Label(win,width=30,text="Birthday : ")
            lbl4.pack()
            en4=tkinter.Entry(win,width=30)
            en4.pack()
            lbl5=tkinter.Label(win,width=30,text="Address : ")
            lbl5.pack()
            en5=tkinter.Entry(win,width=30)
            en5.pack()
            lbl6=tkinter.Label(win,width=30,text="Email : ")
            lbl6.pack()
            en6=tkinter.Entry(win,width=30)
            en6.pack()
            lbl7=tkinter.Label(win,width=30,text="Course : ")
            lbl7.pack()
            en7=tkinter.Entry(win,width=30)
            en7.pack()
            lbl8=tkinter.Label(win,width=30,text="Year : ")
            lbl8.pack()
            en8=tkinter.Entry(win,width=30)
            en8.pack()
            lbl9=tkinter.Label(win,width=30,text="Section : ")
            lbl9.pack()
            en9=tkinter.Entry(win,width=30)
            en9.pack()
            btn1=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Save Record",command=save)
            btn1.pack()

        win=tkinter.Tk();win.title("Edit Student")
        lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Edit Record")
        lbl.pack()
        lbl10=tkinter.Label(win,width=30,text="Student Number : ")
        lbl10.pack()
        en10=tkinter.Entry(win)
        en10.pack()
        btn2=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Edit",command=edit)
        btn2.pack()

    def deleteRecord():
        studentnum1=""
        win=tkinter.Tk();win.title("Delete Student Record")
        lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Delete Record")
        lbl.pack()
        lbl10=tkinter.Label(win,text="Student Number")
        lbl10.pack()
        en10=tkinter.Entry(win)
        en10.pack()

        def delete():
            studentnum1=en10.get()
            con=sqlite3.connect("Students.db")
            cur=con.cursor()
            row=cur.fetchone()
            cur.execute("DELETE FROM student WHERE name = '%s';" % studentnum1)
            con.commit()
            win=tkinter.Tk();win.title("Record Deleted")
            lbl=tkinter.Label(win,background="#000",foreground="#ddd",width=30,text="Record Deleted :")
            lbl.pack()
            lbl=tkinter.Label(win,width=30,text=studentnum1)
            lbl.pack()
            btn=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Ok",command=win.destroy)
            btn.pack()

        btn2=tkinter.Button(win,background="#000",foreground="#ddd",width=30,text="Delete",command=delete)
        btn2.pack()

    def viewRecord():
        con=sqlite3.connect("Students.db")
        cur=con.cursor()
        win=tkinter.Tk();win.title("View Student Record")
        row=cur.fetchall()
        lbl1=tkinter.Label(win,background="#000",foreground="#ddd",width=300,text="\n\tStudent Number"+"\t\tName"+"\t\tAge"+"\t\tBirthday"+"\t\tAddress"+"\t\tEmail"+"\t\tCourse"+"\t\tYear"+"\t\nSection")
        lbl1.pack()
        for row in cur.execute("SELECT * FROM student"):
            lbl2=tkinter.Label(win,width=300,text= row[0] + '\t\t' + row[1] + '\t' + row[2] + '\t\t' + row[3] + '\t\t' + row[4] + '\t\t' + row[5] + '\t\t' + row[6] + '\t\t' + row[7] + '\t\t' + row[8] + '\n')
            lbl2.pack()
        con.close()
        but1=tkinter.Button(win,background="#000",foreground="#fff", width=150,text="Close",command=win.destroy)
        but1.pack()

    root=tkinter.Tk();root.title("Student Records")
    menubar=tkinter.Menu(root)
    manage=tkinter.Menu(menubar,tearoff=0)
    manage.add_command(label='New Record',command=newRecord)
    manage.add_command(label='Edit Record',command=editRecord)
    manage.add_command(label='Delete Record',command=deleteRecord)
    menubar.add_cascade(label='Manage',menu=manage)
    view=tkinter.Menu(menubar,tearoff=0)
    view.add_command(label='View Record',command=viewRecord)
    menubar.add_cascade(label='View',menu=view)
    root.config(menu=menubar)
    lbl=tkinter.Label(root,background="#000",foreground="#ddd",font=("Verdana",15),width=30,text="Student Records")
    lbl.pack()
    lbl1=tkinter.Label(root,text="\nSubmitted by :")
    lbl1.pack()
    lbl2=tkinter.Label(root,text="Chavez, Vissia Nicole P")
    lbl2.pack()
    lbl3=tkinter.Label(root,text="BSIT 4-4")
    lbl3.pack()

    Read the article

  • Co-Authors Wordpress Plugin: coauthors_wp_list_authors function not working correctly

    - by rayne
    The Co-Authors Plus plugin for WordPress has a very annoying bug. The custom function coauthors_wp_list_authors is supposed to list authors the same way the WordPress core function wp_list_authors does, but it leaves out authors who have no post of their own: users who only appear as co-authors, never as the primary author, are not included in the list. That of course misses a very important point. I've looked at the faulty SQL statement, but my knowledge of advanced SQL (JOINs in particular) and of the WordPress database structure is too limited, and I remain clueless. There is a topic in the WP support forum, but the information there is badly outdated and the fix is no longer applicable, and I couldn't find any more recent solution on the internet. I'd be glad if someone here could help fix the SQL statement so that it also lists co-authors who have no posts where they're the sole author, and so that it displays the correct post count for every author. Here is the entire function for reference, with a comment marking the SQL statement:

        function coauthors_wp_list_authors($args = '') {
            global $wpdb, $coauthors_plus;
            $defaults = array(
                'optioncount' => false, 'exclude_admin' => true,
                'show_fullname' => false, 'hide_empty' => true,
                'feed' => '', 'feed_image' => '', 'feed_type' => '', 'echo' => true,
                'style' => 'list', 'html' => true
            );
            $r = wp_parse_args( $args, $defaults );
            extract($r, EXTR_SKIP);
            $return = '';
            $authors = $wpdb->get_results("SELECT ID, user_nicename from $wpdb->users " .
                ($exclude_admin ? "WHERE user_login <> 'admin' " : '') . "ORDER BY display_name");
            $author_count = array();
            # this is the SQL statement which doesn't work correctly:
            $query  = "SELECT DISTINCT $wpdb->users.ID AS post_author, $wpdb->terms.name AS user_name, $wpdb->term_taxonomy.count AS count";
            $query .= " FROM $wpdb->posts";
            $query .= " INNER JOIN $wpdb->term_relationships ON ($wpdb->posts.ID = $wpdb->term_relationships.object_id)";
            $query .= " INNER JOIN $wpdb->term_taxonomy ON ($wpdb->term_relationships.term_taxonomy_id = $wpdb->term_taxonomy.term_taxonomy_id)";
            $query .= " INNER JOIN $wpdb->terms ON ($wpdb->term_taxonomy.term_id = $wpdb->terms.term_id)";
            $query .= " INNER JOIN $wpdb->users ON ($wpdb->terms.name = $wpdb->users.user_login)";
            $query .= " WHERE post_type = 'post' AND " . get_private_posts_cap_sql( 'post' );
            $query .= " AND $wpdb->term_taxonomy.taxonomy = '$coauthors_plus->coauthor_taxonomy'";
            $query .= " GROUP BY post_author";
            foreach ((array) $wpdb->get_results($query) as $row) {
                $author_count[$row->post_author] = $row->count;
            }
            foreach ( (array) $authors as $author ) {
                $link = '';
                $author = get_userdata( $author->ID );
                $posts = (isset($author_count[$author->ID])) ? $author_count[$author->ID] : 0;
                $name = $author->display_name;
                if ( $show_fullname && ($author->first_name != '' && $author->last_name != '') )
                    $name = "$author->first_name $author->last_name";
                if ( !$html ) {
                    if ( $posts == 0 ) {
                        if ( ! $hide_empty )
                            $return .= $name . ', ';
                    } else
                        $return .= $name . ', ';
                    continue;
                }
                if ( !($posts == 0 && $hide_empty) && 'list' == $style )
                    $return .= '<li>';
                if ( $posts == 0 ) {
                    if ( ! $hide_empty )
                        $link = $name;
                } else {
                    $link = '<a href="' . get_author_posts_url($author->ID, $author->user_nicename) .
                        '" title="' . esc_attr( sprintf(__("Posts by %s", 'co-authors-plus'), $author->display_name) ) . '">' . $name . '</a>';
                    if ( (! empty($feed_image)) || (! empty($feed)) ) {
                        $link .= ' ';
                        if (empty($feed_image))
                            $link .= '(';
                        $link .= '<a href="' . get_author_feed_link($author->ID) . '"';
                        if ( !empty($feed) ) {
                            $title = ' title="' . esc_attr($feed) . '"';
                            $alt = ' alt="' . esc_attr($feed) . '"';
                            $name = $feed;
                            $link .= $title;
                        }
                        $link .= '>';
                        if ( !empty($feed_image) )
                            $link .= "<img src=\"" . esc_url($feed_image) . "\" style=\"border: none;\"$alt$title" . ' />';
                        else
                            $link .= $name;
                        $link .= '</a>';
                        if ( empty($feed_image) )
                            $link .= ')';
                    }
                    if ( $optioncount )
                        $link .= ' ('. $posts . ')';
                }
                if ( !($posts == 0 && $hide_empty) && 'list' == $style )
                    $return .= $link . '</li>';
                else if ( ! $hide_empty )
                    $return .= $link . ', ';
            }
            $return = trim($return, ', ');
            if ( ! $echo )
                return $return;
            echo $return;
        }
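    One possible direction for the fix (an untested sketch, not taken from the plugin): Co-Authors Plus stores each author as a term, and term_taxonomy.count already counts every post a term is attached to, so driving the count query from the terms table instead of from wp_posts gives every co-author a row even when they never appear in post_author. The literal wp_ prefix and the taxonomy name 'author' below stand in for $wpdb and $coauthors_plus->coauthor_taxonomy:

        -- Untested sketch: start from the author terms, not from wp_posts,
        -- so users who are only ever co-authors still get a row and a count.
        SELECT u.ID      AS post_author,
               t.name    AS user_name,
               tt.count  AS count
        FROM wp_terms t
        INNER JOIN wp_term_taxonomy tt ON (tt.term_id = t.term_id)
        INNER JOIN wp_users u          ON (u.user_login = t.name)
        WHERE tt.taxonomy = 'author';  -- i.e. $coauthors_plus->coauthor_taxonomy

    The trade-off: tt.count is WordPress's cached term count, so it ignores the get_private_posts_cap_sql() visibility filter and counts posts of every status; if that matters, a COUNT(DISTINCT ...) over LEFT JOINs to wp_term_relationships and wp_posts would be needed instead.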


  • Compare two dates with JPA

    - by Kiva
    Hello everybody, I need to compare two dates in a JPQL query, but it doesn't work. Here is my query:

        Query query = em.createQuery(
            "SELECT h FROM PositionHistoric h, SeoDate d" +
            " WHERE h.primaryKey.siteDb = :site AND h.primaryKey.engineDb = :engine" +
            " AND h.primaryKey.keywordDb = :keyword AND h.date = d AND d.date <= :date" +
            " ORDER BY h.date DESC");

    My date parameter is a java.util.Date. The query returns a list of objects, but the dates are both above and below my parameter. Does anyone know how to do this? Thanks.
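    One thing worth checking (an assumption, since the parameter-binding code isn't shown in the question): a java.util.Date bound without a temporal type is typically compared as a full timestamp, so the time-of-day part can make results appear on both sides of the parameter. JPA lets you bind the parameter explicitly as a DATE; myDate below is a placeholder for the real value:

        import javax.persistence.TemporalType;
        import java.util.Date;

        Date myDate = new Date();  // stands in for the actual parameter value
        // Bind as a DATE so only the day part takes part in the comparison,
        // not the full timestamp.
        query.setParameter("date", myDate, TemporalType.DATE);
        List<?> results = query.getResultList();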


  • Global name not defined

    - by anteater7171
    I wrote a CPU monitoring program in Python. For some reason it sometimes runs without any problem; other times it won't even start, because of the following error:

        Traceback (most recent call last):
          File "<string>", line 244, in run_nodebug
          File "C:\Python26\CPUR1.7.pyw", line 601, in <module>
            app = simpleapp_tk(None)
          File "C:\Python26\CPUR1.7.pyw", line 26, in __init__
            self.initialize()
          File "C:\Python26\CPUR1.7.pyw", line 107, in initialize
            self.F()
          File "C:\Python26\CPUR1.7.pyw", line 517, in F
            S2 = TL.entryVariableS.get()
        NameError: global name 'TL' is not defined

    I can't seem to find the problem; maybe someone more experienced can assist me? Here is a snippet of the part giving me trouble (the second-to-last line in the snippet is what's failing):

        def E(self):
            if self.selectedM.get() == 'Options...':
                # Setup
                global TL
                TL = Tkinter.Toplevel(self)
                menu = Tkinter.Menu(TL)
                TL.config(menu=menu)
                filemenu = Tkinter.Menu(menu)
                menu.add_cascade(label="| Menu |", menu=filemenu)
                filemenu.add_command(label="Instruction Manual...", command=self.helpmenu)
                filemenu.add_command(label="About...", command=self.aboutmenu)
                filemenu.add_separator()
                filemenu.add_command(label="Exit Options", command=TL.destroy)
                filemenu.add_command(label="Exit", command=self.destroy)
                helpmenu = Tkinter.Menu(menu)
                menu.add_cascade(label="| Help |", menu=helpmenu)
                helpmenu.add_command(label="Instruction Manual...", command=self.helpmenu)
                helpmenu.add_separator()
                helpmenu.add_command(label="Quick Help...", command=self.helpmenu)
                # Title
                TL.label5 = Tkinter.Label(TL, text="CPU Usage: Options", anchor="center", fg="black", bg="lightgreen", relief="ridge", borderwidth=5, font=('Arial', 18, 'bold'))
                TL.label5.pack(padx=15, ipadx=5)
                # X Y scale
                TL.separator = Tkinter.Frame(TL, height=7, bd=1, relief='ridge', bg='grey95')
                TL.separator.pack(pady=5, padx=5)
                #
                TL.sclX = Tkinter.Scale(TL.separator, from_=0, to=1500, orient='horizontal', resolution=1, command=self.A)
                TL.sclX.grid(column=1, row=0, ipadx=27, sticky='w')
                TL.label1 = Tkinter.Label(TL.separator, text="X", anchor="s", fg="black", bg="grey95", font=('Arial', 8, 'bold'))
                TL.label1.grid(column=0, row=0, pady=1, sticky='S')
                TL.sclY = Tkinter.Scale(TL.separator, from_=0, to=1500, resolution=1, command=self.A)
                TL.sclY.grid(column=2, row=1, rowspan=2, sticky='e', padx=4)
                TL.label3 = Tkinter.Label(TL.separator, text="Y", fg="black", bg="grey95", font=('Arial', 8, 'bold'))
                TL.label3.grid(column=2, row=0, padx=10, sticky='e')
                TL.entryVariable2 = Tkinter.StringVar()
                TL.entry2 = Tkinter.Entry(TL.separator, textvariable=TL.entryVariable2, fg="grey15", bg="grey90", relief="sunken", insertbackground="black", borderwidth=5, font=('Arial', 10))
                TL.entry2.grid(column=1, row=1, ipadx=20, pady=10, sticky='EW')
                TL.entry2.bind("<Return>", self.B)
                TL.label2 = Tkinter.Label(TL.separator, text="X:", fg="black", bg="grey95", font=('Arial', 8, 'bold'))
                TL.label2.grid(column=0, row=1, ipadx=4, sticky='W')
                TL.entryVariable1 = Tkinter.StringVar()
                TL.entry1 = Tkinter.Entry(TL.separator, textvariable=TL.entryVariable1, fg="grey15", bg="grey90", relief="sunken", insertbackground="black", borderwidth=5, font=('Arial', 10))
                TL.entry1.grid(column=1, row=2, sticky='EW')
                TL.entry1.bind("<Return>", self.B)
                TL.label4 = Tkinter.Label(TL.separator, text="Y:", anchor="center", fg="black", bg="grey95", font=('Arial', 8, 'bold'))
                TL.label4.grid(column=0, row=2, ipadx=4, sticky='W')
                TL.label7 = Tkinter.Label(TL.separator, text="Text Colour:", fg="black", bg="grey95", font=('Arial', 8, 'bold'), justify='left')
                TL.label7.grid(column=1, row=3, sticky='W', padx=10, ipady=10, ipadx=30)
                TL.selectedP = Tkinter.StringVar()
                TL.opt1 = Tkinter.OptionMenu(TL.separator, TL.selectedP, 'Normal', 'White', 'Black', 'Blue', 'Steel Blue', 'Green', 'Light Green', 'Yellow', 'Orange', 'Red', command=self.G)
                TL.opt1.config(fg="black", bg="grey90", activebackground="grey90", activeforeground="black", anchor="center", relief="raised", direction='right', font=('Arial', 10))
                TL.opt1.grid(column=1, row=4, sticky='EW', padx=20, ipadx=20)
                TL.selectedP.set('Normal')
                TL.sclS = Tkinter.Scale(TL.separator, from_=10, to=2000, orient='horizontal', resolution=10, command=self.H)
                TL.sclS.grid(column=1, row=5, ipadx=27, sticky='w')
                TL.sclS.set(600)
                TL.entryVariableS = Tkinter.StringVar()
                TL.entryS = Tkinter.Entry(TL.separator, textvariable=TL.entryVariableS, fg="grey15", bg="grey90", relief="sunken", insertbackground="black", borderwidth=5, font=('Arial', 10))
                TL.entryS.grid(column=1, row=6, ipadx=20, pady=10, sticky='EW')
                TL.entryS.bind("<Return>", self.I)
                TL.entryVariableS.set(600)
                #
                TL.resizable(False, False)
                TL.title('Options')
                geomPatt = re.compile(r"(\d+)?x?(\d+)?([+-])(\d+)([+-])(\d+)")
                s = self.wm_geometry()
                m = geomPatt.search(s)
                X = m.group(4)
                Y = m.group(6)
                TL.sclY.set(Y)
                TL.sclX.set(X)
            if self.selectedM.get() == 'Exit':
                self.destroy()

        def F(self):
            G = round(psutil.cpu_percent(), 1)
            G1 = str(G) + '%'
            self.labelVariable.set(G1)
            if G < 5: self.imageLabel.configure(image=self.image0)
            if G >= 5: self.imageLabel.configure(image=self.image5)
            if G >= 10: self.imageLabel.configure(image=self.image10)
            if G >= 15: self.imageLabel.configure(image=self.image15)
            if G >= 20: self.imageLabel.configure(image=self.image20)
            if G >= 25: self.imageLabel.configure(image=self.image25)
            if G >= 30: self.imageLabel.configure(image=self.image30)
            if G >= 35: self.imageLabel.configure(image=self.image35)
            if G >= 40: self.imageLabel.configure(image=self.image40)
            if G >= 45: self.imageLabel.configure(image=self.image45)
            if G >= 50: self.imageLabel.configure(image=self.image50)
            if G >= 55: self.imageLabel.configure(image=self.image55)
            if G >= 60: self.imageLabel.configure(image=self.image60)
            if G >= 65: self.imageLabel.configure(image=self.image65)
            if G >= 70: self.imageLabel.configure(image=self.image70)
            if G >= 75: self.imageLabel.configure(image=self.image75)
            if G >= 80: self.imageLabel.configure(image=self.image80)
            if G >= 85: self.imageLabel.configure(image=self.image85)
            if G >= 90: self.imageLabel.configure(image=self.image90)
            if 100 > G >= 95: self.imageLabel.configure(image=self.image95)
            if G == 100: self.imageLabel.configure(image=self.image100)
            S2 = TL.entryVariableS.get()
            self.after(int(S2), self.F)
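    A sketch of one likely cause and fix (hedged, since only a snippet of the program is shown): global TL inside E() only creates the module-level name when E() actually runs, but F() starts rescheduling itself from initialize(), so on runs where F() fires before the Options window has ever been opened, the name TL simply doesn't exist yet, hence the intermittent NameError. Giving the module a default of None and guarding the access avoids it; the class and method names below are stand-ins for the ones in the question:

        import Tkinter

        TL = None  # module-level default: the Options window may never be opened

        class App(Tkinter.Tk):          # stands in for simpleapp_tk
            def open_options(self):     # stands in for E()
                global TL
                TL = Tkinter.Toplevel(self)
                TL.entryVariableS = Tkinter.StringVar()
                TL.entryVariableS.set(600)

            def poll(self):             # stands in for F()
                # Guard: only read the entry if the window exists (and wasn't closed).
                if TL is not None and TL.winfo_exists():
                    interval = int(TL.entryVariableS.get())
                else:
                    interval = 600      # default refresh interval in ms
                self.after(interval, self.poll)

        app = App()
        app.poll()
        app.mainloop()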


  • How can I use Lucene for personal name (first name, last name) search?

    - by os111
    I'm writing a search feature for a database of NFL players. The user enters a search string like "Jason Campbell", "Campbell", or "Jason". I'm having trouble getting the appropriate results. Which Analyzer should I use when indexing? Which Query when querying? Should I distinguish between first name and last name, or just index the full name string? I'd like the following behavior:

        Query: "Jason Campbell" - Result: exact match for one player, Jason Campbell
        Query: "Campbell" - Result: all players with Campbell in their name
        Query: "Jason" - Result: all players with Jason in their name
        Query: "Cambel" [misspelled] - Result: all players with Campbell in their name
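    A hedged sketch of one way to cover all four cases with a single "name" field (assuming the full name is indexed with StandardAnalyzer, so it is lower-cased and split into tokens): build a BooleanQuery with one FuzzyQuery per search token. FuzzyQuery also matches the exact term, so it handles both the exact and the misspelled cases; Occur.MUST makes multi-word input behave like "Jason AND Campbell" rather than "Jason OR Campbell":

        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.BooleanClause;
        import org.apache.lucene.search.BooleanQuery;
        import org.apache.lucene.search.FuzzyQuery;
        import org.apache.lucene.search.Query;

        // Inside your search helper class; field name "name" is an assumption.
        public static Query buildNameQuery(String input) {
            BooleanQuery query = new BooleanQuery();
            for (String token : input.toLowerCase().split("\\s+")) {
                // FuzzyQuery matches "cambel" -> "campbell" and the exact
                // term alike; 0.7f is a middle-of-the-road similarity cutoff.
                query.add(new FuzzyQuery(new Term("name", token), 0.7f),
                          BooleanClause.Occur.MUST);
            }
            return query;
        }

    Whether to keep separate first-name and last-name fields is then mostly a ranking question; a single tokenized field is enough for the matching behavior listed above.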


  • Indy FTP, large files and NAT routers

    - by Lobuno
    Hello! I have been using Indy to transfer files via FTP for years now, but have not been able to find a satisfactory solution for the following problem. When a user behind a router uploads a large file, sometimes the following happens: the file is uploaded fine, but in the meantime the command channel gets disconnected because of a timeout. This normally doesn't happen with a direct connection to the server, because the server "knows" that a transfer is taking place on the data channel. Some routers are not aware of this, though, and close the command channel. Many programs send a NOOP command periodically to keep the command channel alive, even though this is not part of the standard FTP specification. My question: how do I do that? Do I send the NOOP command in the OnWork event? Does this cause any collateral damage in some way - for example, do I need to process some response? How is this problem best solved?
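    A hedged, untested sketch (Indy 10 assumed): OnWork fires on the same thread that is pumping the data channel, and the command channel is otherwise idle during the transfer, so a rate-limited NOOP from there is one commonly suggested workaround:

        // Untested sketch: send a NOOP on the command channel at most once a
        // minute while the data transfer is running, so NAT routers don't
        // time out the idle control connection.
        var
          LastNoop: Cardinal = 0;

        procedure TForm1.IdFTP1Work(ASender: TObject; AWorkMode: TWorkMode;
          AWorkCount: Int64);
        begin
          if GetTickCount - LastNoop > 60000 then
          begin
            LastNoop := GetTickCount;
            IdFTP1.SendCmd('NOOP');  // reads the reply here, on the same thread
          end;
        end;

    The caveat, and the "collateral damage" to test for: some servers queue their replies until the transfer finishes, so NOOP responses may pile up and need draining (or SendCmd may block) - verify against the servers you target before shipping this.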


  • Filtering DBNull With LINQ

    - by Steven
    Why does the following query raise the error below for a row with a NULL value for barrel, even though I explicitly filter out those rows in the Where clause?

        Dim query = From row As dbDataSet.conformalRow In dbDataSet.Tables("conformal") _
                    Where Not IsDBNull(row.Cal) AndAlso tiCal_drop.Text = row.Cal _
                    AndAlso Not IsDBNull(row.Tran) AndAlso tiTrans_drop.Text = row.Tran _
                    AndAlso Not IsDBNull(row.barrel) _
                    Select row.barrel
        If query.Count() > 0 Then tiBarrel_txt.Text = query(0)

    Run-time exception thrown: System.Data.StrongTypingException - The value for column 'barrel' in table 'conformal' is DBNull. How should my query / condition be rewritten to work as I intended?
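    The likely culprit (hedged, based on how typed DataSets generate their accessors): IsDBNull(row.barrel) itself calls the strongly-typed barrel property, which throws StrongTypingException on a DBNull value before IsDBNull ever sees it - and the same applies to row.Cal and row.Tran. Typed DataSets generate an Is<Column>Null method per nullable column that tests the underlying value without going through the typed accessor; a sketch (assuming the typed table member is dbDataSet.conformal):

        ' Use the generated Is<Column>Null methods instead of IsDBNull(...)
        Dim query = From row As dbDataSet.conformalRow In dbDataSet.conformal _
                    Where Not row.IsCalNull() AndAlso tiCal_drop.Text = row.Cal _
                    AndAlso Not row.IsTranNull() AndAlso tiTrans_drop.Text = row.Tran _
                    AndAlso Not row.IsbarrelNull() _
                    Select row.barrel

        If query.Count() > 0 Then tiBarrel_txt.Text = query(0)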


  • Entity Framework, full-text search and temporary tables

    - by markus
    I have a LINQ-2-Entity query builder, nesting different kinds of Where clauses depending on a fairly complex search form. Works great so far. Now I need to use a SQL Server fulltext search index in some of my queries. Is there any chance to add the search term directly to the LINQ query, and have the score available as a selectable property? If not, I could write a stored procedure to load a list of all row IDs matching the full-text search criteria, and then use a LINQ-2-Entity query to load the detail data and evaluate other optional filter criteria in a loop per row. That would be of course a very bad idea performance-wise. Another option would be to use a stored procedure to insert all row IDs matching the full-text search into a temporary table, and then let the LINQ query join the temporary table. Question is: how to join a temporary table in a LINQ query, as it cannot be part of the entity model?
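    A hedged sketch of the ID-list variant, which avoids the temporary table entirely (assuming EF4, where Contains translates to an IN clause; the entity, key, and column names are placeholders): fetch the matching keys with a store query against CONTAINSTABLE, then compose the remaining optional filters in LINQ as before:

        // 1) Let full-text search produce the matching keys (context is an
        //    ObjectContext-derived type; names here are assumptions):
        List<int> matchingIds = context.ExecuteStoreQuery<int>(
            "SELECT [KEY] FROM CONTAINSTABLE(Products, SearchText, {0})",
            searchTerm).ToList();

        // 2) Feed the keys back into the entity query and keep composing:
        var results = context.Products
            .Where(p => matchingIds.Contains(p.ProductId))  // becomes IN (...)
            .Where(p => p.ListPrice >= minPrice)            // further optional filter
            .ToList();

    The trade-off is the size of the IN list; beyond a few thousand matching IDs, a real (non-temporary) keyed staging table that the model maps, populated per search session, tends to behave better.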


  • Powershell 2.0 Hang When Run From MsDeploy pre- post- ops using c/

    - by SonOfNun
    I am trying to invoke PowerShell during the preSync call of an MSDeploy command, but the powershell.exe process never exits after it has been called. The command (from the command line):

        "tools/MSDeploy/msdeploy.exe" -verb:sync -preSync:runCommand="powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command C:/MyInstallPath/deploy.ps1 Set-WebAppOffline Uninstall-Service ",waitInterval=60000 -usechecksum -source:dirPath="build/for-deployment" -dest:wmsvc=BLUEPRINT-X86,username=deployer,password=deployer,dirPath=C:/MyInstallPath

    I used a hack from here (http://therightstuff.de/2010/02/06/How-We-Practice-Continuous-Integration-And-Deployment-With-MSDeploy.aspx) that gets the powershell process and kills it, but that didn't work. I also tried taskkill and the sysinternals equivalent, but nothing will kill the process so that MSDeploy errors out. The command is executed, but then it just sits there. Any ideas what might be causing PowerShell to hang like this? I have found a few other similar issues around the web, but no answers.
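    One hedged suggestion, since powershell.exe is known to wait on stdin when launched without a console: give it an immediate end-of-file by redirecting stdin from NUL. That requires going through cmd.exe, because runCommand does not apply shell redirection itself. Untested sketch of the changed parameter ( -InputFormat None is a commonly cited alternative with the same intent):

        -preSync:runCommand="cmd.exe /c powershell.exe -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command C:/MyInstallPath/deploy.ps1 Set-WebAppOffline Uninstall-Service < NUL",waitInterval=60000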


  • Joins in single-table queries

    - by Rob Farley
    Tables are only metadata. They don't store data. I've written something about this before, but I want to take a viewpoint of this idea around the topic of joins, especially since it's the topic for T-SQL Tuesday this month, hosted this time by Sebastian Meine (@sqlity), who has a whole series on joins this month. Good for him - it's a great topic.

    In that last post I discussed the fact that we write queries against tables, but that the engine turns it into a plan against indexes. My point wasn't simply that a table is actually just a Clustered Index (or heap, which I consider just a special type of index), but that data access always happens against indexes - never tables - and we should be thinking about the indexes (specifically the non-clustered ones) when we write our queries. I described the scenario of looking up phone numbers, and how it never really occurs to us that there is a master list of phone numbers, because we think in terms of the useful non-clustered indexes that the phone companies provide us. But anyway - that's not the point of this post.

    So a table is metadata. It stores information about the names of columns and their data types. Nullability, default values, constraints, triggers - these are all things that define the table, but the data isn't stored in the table. The data that a table describes is stored in a heap or clustered index, but it goes further than this. All the useful data is going to live in non-clustered indexes. Remember this. It's important. Stop thinking about tables, and start thinking about indexes.

    So let's think about tables as indexes. This applies even in a world created by someone else, who doesn't have the best indexes in mind for you. I'm sure you don't need me to explain the Covering Index bit - the fact that if you don't have sufficient columns "included" in your index, your query plan will either have to do a Lookup, or else it'll give up using your index and use one that does have everything it needs (even if that means scanning it). If you haven't seen that before, drop me a line and I'll run through it with you. Or go and read a post I did a long while ago about the maths involved in that decision.

    So - what I'm going to tell you is that a Lookup is a join. When I run

        SELECT CustomerID
        FROM Sales.SalesOrderHeader
        WHERE SalesPersonID = 285;

    against the AdventureWorks2012 database, I get the following plan. I'm sure you can see the join. Don't look in the query - it's not there. But you should be able to see the join in the plan. It's an Inner Join, implemented by a Nested Loop. It's pulling data in from the Index Seek, and joining that to the results of a Key Lookup. It clearly is - the QO wouldn't call it that if it wasn't really one. It behaves exactly like any other Nested Loop (Inner Join) operator, pulling rows from one side and putting a request in from the other. You wouldn't have a problem accepting it as a join if the query were slightly different, such as

        SELECT sod.OrderQty
        FROM Sales.SalesOrderHeader AS soh
        JOIN Sales.SalesOrderDetail AS sod ON sod.SalesOrderID = soh.SalesOrderID
        WHERE soh.SalesPersonID = 285;

    Amazingly similar, of course. This one is an explicit join; the first example was just as much a join, even though you didn't actually ask for one. You need to consider this when you're thinking about your queries. But it gets more interesting. Consider this query:

        SELECT SalesOrderID
        FROM Sales.SalesOrderHeader
        WHERE SalesPersonID = 276
        AND CustomerID = 29522;

    It doesn't look like there's a join here either, but look at the plan. That's not some Lookup in action - that's a proper Merge Join. The Query Optimizer has worked out that it can get the data it needs by looking in two separate indexes and then doing a Merge Join on the data that it gets. Both indexes used are ordered by the column that's indexed (one on SalesPersonID, one on CustomerID), and then by the CIX key SalesOrderID. Just as the Farleys you seek in the phone book come back ordered by FirstName, these seek operations return the data ordered by the next field. This order is SalesOrderID, even though you didn't explicitly put that column in the index definition. The result is two datasets that are ordered by SalesOrderID, making them very mergeable.

    Another example is the simple query

        SELECT CustomerID
        FROM Sales.SalesOrderHeader
        WHERE SalesPersonID = 276;

    This one prefers a Hash Match to a standard lookup even! This isn't just ordinary index intersection, this is something else again! Just like before, we could imagine it better with two whole tables, but we shouldn't try to distinguish between joining two tables and joining two indexes. The Query Optimizer can see (using basic maths) that it's worth doing these particular operations using these two less-than-ideal indexes (because of course, the best index would be on both columns - a composite such as (SalesPersonID, CustomerID), which would still have the SalesOrderID column as part of it, as the CIX key).

    You need to think like this too. Not in terms of excusing single-column indexes like the ones in AdventureWorks2012, but in terms of having a picture of how you'd like your queries to run. If you start to think about what data you need, where it's coming from, and how it's going to be used, then you will almost certainly write better queries. ...and yes, this would include when you're dealing with regular joins across multiple tables, not just joins within single-table queries.
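    For reference, the composite index described in that last paragraph could be sketched like this (the index name is arbitrary, and AdventureWorks2012 ships without it); the clustered index key SalesOrderID rides along automatically, which is what lets both of the earlier queries become single seeks:

        CREATE NONCLUSTERED INDEX IX_SalesOrderHeader_SalesPersonID_CustomerID
        ON Sales.SalesOrderHeader (SalesPersonID, CustomerID);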


  • Eclipse RCP SWT menus for Windows and Mac OS

    - by Raven
    Hi, how do I configure an Eclipse RCP command-style menu to match the different menu structures on Windows and on Mac OS?

        Mac OS X menu example: http://images.apple.com/macosx/refinements/images/services_menu_20090902.jpg
        Windows menu example: http://www.flamingpear.com/images/psp8menu.gif

    In the examples you can see the differences in the menu structures. For example, on the Mac the application menu holds the Preferences command, the About command and the Quit command; on Windows these are usually found in the File menu, and the About command in the Help menu. Is there a "standard" way of doing this in RCP programs? It should somehow be possible, because Eclipse itself does it, but I cannot figure out how.
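    One hedged sketch of the usual approach: branch on Platform.getOS() in the ActionBarAdvisor and keep About/Preferences/Quit out of the File and Help menus on the Mac, where the application menu provides them. The action fields below are assumed to come from ActionFactory in makeActions(). Note that recent SWT/Cocoa ports relocate the standard Quit/About/Preferences actions into the application menu automatically, so test before special-casing:

        import org.eclipse.core.runtime.Platform;
        import org.eclipse.jface.action.IMenuManager;
        import org.eclipse.jface.action.MenuManager;

        // Inside your ActionBarAdvisor subclass:
        protected void fillMenuBar(IMenuManager menuBar) {
            MenuManager fileMenu = new MenuManager("&File", "file");
            MenuManager helpMenu = new MenuManager("&Help", "help");
            boolean isMac = Platform.OS_MACOSX.equals(Platform.getOS());
            if (!isMac) {
                // On the Mac these three live in the application menu instead.
                fileMenu.add(preferencesAction);
                fileMenu.add(exitAction);
                helpMenu.add(aboutAction);
            }
            menuBar.add(fileMenu);
            menuBar.add(helpMenu);
        }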


  • SQLCMD Mode: give it one more chance

    - by Maria Zakourdaev
    - Click on me. Choose me. - asked one forgotten feature, when some bored DBA was purposelessly wandering through the Management Studio menu at the end of her long and busy working day.
    - Why would I use you? I have heard of no one who does. What are you for? - perplexedly wondered the aged and wise DBA. At least that DBA thought she was aged and wise, though each day tried to prove to her that she wasn't.
    - I know you. You are quite lazy. Why would you do additional clicks to move from window to window? From tool to tool? This is irritating, isn't it? I can run Windows system commands, SQL statements and much more from the same script, from the same query window!
    - I have all the tools that I'm used to. I have Management Studio, cmd, PowerShell. They can do anything for me. I don't need additional tools.
    - I promise you, you will like me. - the thing continued to whine.
    - All right, show me. - she gave up. It's always this way, she thought sadly - easier to agree than to explain why you don't want to.
    - Enable me and then think about anything that you always couldn't do through Management Studio and had to use other tools for.
    - Ok. Google for me the list of greatest features of SQL Server 2012.
    - Well... I'm not sure... Think about something else.
    - Ok, here is something easy for you. I want to check whether a folder exists, or whether a file is there. Though I can easily do this using xp_cmdshell...
    - This is easy for me. - rejoiced the feature.
    By the way, having menu items talking to you usually means you should stop working and go home. Or drink coffee. Or both. Well, the aged and wise DBA wasn't thinking about the weirdness of the situation at that moment.
    - After enabling me, - said the unfairly forgotten feature (it was thinking of itself in such a manner) - after enabling me you can use all command-line commands in the same Management Studio query window by adding two exclamation marks !! at the beginning of the script line, to denote that you want to run a cmd command.
    - Just keep in mind that when using this feature, you are actually running the commands ON YOUR computer and not on the SQL Server that the query window is connected to. This is the main difference from using xp_cmdshell, which executes commands on the SQL Server itself. Bottom line: use a UNC path instead of a local path.
    - Look, there is much more than that. - The SQLCMD feature was getting excited. - You can get the IP of your servers; create, rename and drop folders. You can see the contents of any file anywhere, and even start different tools from the same query window.
    The not-so-aged-and-wise DBA was getting interested:
    - I also want to run different scripts on different servers without changing the connection of the query window.
    - Sure, sure! Another great feature that SQLCMD mode provides, giving more power to querying. Use ":" for the additional commands, like :connect, which allows you to change the connection.
    - Now imagine you have one script holding all your changes: creating a staging table on the DWH staging server, adding a fact table to the DWH itself, and updating stored procedures on the server where the reporting database is located.
    - Now, give me more challenges!
    - Script out a list of stored procedures into text files.
    - You can do that easily by using the :out command, which writes the query results into the specified text file. The output can be the code of a stored procedure or any data. Actually, this is the same as changing the query output to a file instead of the grid.
    - Now, take all of those scripts and run them, one by one, on a different server.
    - Easily.
    - Come on... I'm sure that you can not...
    - Why not? Naturally, I can do it using the :r command, which opens a script and executes it. Look, I can also use the :setvar command to define an environment variable in SQLCMD mode. Just note that you have to leave an empty line between :r commands, otherwise it doesn't work, although I have no idea why.
    - Wow. - She was really impressed. - Ok, I'll go try all those...
    - Wait, wait! I know how to google the SQL Server features for you! This example will open the Chrome browser with search results for "SQL Server 2012 top features" (change the path to suit your PC).
    "Well, this can probably be useful stuff; maybe this feature really is unfairly forgotten", thought the DBA while walking through the dark empty parking lot to her lonely car. "As someone really wise once said: 'It is what we think we know that keeps us from learning. Learn, unlearn and relearn'."
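    Since the original screenshots don't survive here, a reconstructed sketch of the commands the dialogue mentions (server names and paths are made up; Query > SQLCMD Mode must be enabled first):

        !!dir \\MyServer\scripts             -- any cmd command; note the UNC path
        :connect StagingServer                -- switch the connection mid-script
        :setvar ScriptPath "\\MyServer\scripts"
        :out \\MyServer\output\proc_list.txt  -- send query results to a file
        SELECT name FROM sys.procedures;
        GO

        :r $(ScriptPath)\create_staging_table.sql
        GO

        :r $(ScriptPath)\update_procedures.sql
        GO
        !!start chrome "http://www.google.com/search?q=SQL+Server+2012+top+features"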


  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
    The MySQL Cluster engineering team recently ran a live webinar, available now on-demand, demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster, and how these can be used in building real-time, high-scale Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers.

    ClusterJ and ClusterJPA

    ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query. ClusterJPA works with ClusterJ to extend functionality, including:
    - Persistent classes
    - Relationships
    - Joins in queries
    - Lazy loading
    - Table and index creation from the object model

    By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for Object/Relational Mapping. As a result, the development of Java applications is simplified, with faster development cycles resulting in accelerated time to market for new services. MySQL Cluster offers multiple NoSQL APIs alongside Java:
    - Memcached for a persistent, high performance, write-scalable Key/Value store
    - HTTP/REST via an Apache module
    - C++ via the NDB API for the lowest absolute latency

    Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns - from simple Primary Key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization. Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster's distributed, shared-nothing architecture with auto-sharding and real-time performance makes it a great fit for workloads requiring high volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight. OK - hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A.

    Q & A

    Q. Why would I use Connector/J vs. ClusterJ?
    A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better, as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) - ClusterJPA extends that capability.

    Q. Can I mix different APIs (i.e. ClusterJ, Connector/J) in our application for different query types?
    A. Yes. You can mix and match all of the API types: SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and new data is instantly visible to all of the others.

    Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes?
    A. SessionFactory has a connection to the mgmd (management node), but otherwise is just a vehicle to create Sessions. Without using connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput.

    Q. Can you give details of how ClusterJ optimizes sharding to enhance performance of distributed query processing?
    A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC.

    Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction?
    A. ClusterJ will send identical PK lookups to the same data node.

    Q. How is distributed query processing handled by MySQL Cluster?
    A. If the data is split between data nodes, then all of the information will be transparently combined and passed back to the application. The session will connect to a data node - typically by hashing the primary key - which then interacts with its neighboring nodes to collect the data needed to fulfil the query.

    Q. Can I use Foreign Keys with MySQL Cluster?
    A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release.

    Summary

    The NoSQL Java APIs are packaged with MySQL Cluster, available for download here, so feel free to take them for a spin today!

    Key Resources
    - MySQL Cluster on-line demo
    - MySQL ClusterJ and JPA on-demand webinar
    - MySQL ClusterJ and JPA documentation
    - MySQL ClusterJ and JPA whitepaper and tutorial
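    For a flavor of the ClusterJ API discussed in the Q&A, a minimal hedged sketch (table name, columns and connect string are made up; the annotated interface is assumed to map an existing Cluster table):

        import java.util.Properties;
        import com.mysql.clusterj.ClusterJHelper;
        import com.mysql.clusterj.Session;
        import com.mysql.clusterj.SessionFactory;
        import com.mysql.clusterj.annotation.PersistenceCapable;
        import com.mysql.clusterj.annotation.PrimaryKey;

        @PersistenceCapable(table = "user")
        interface User {
            @PrimaryKey
            int getId();
            void setId(int id);
            String getName();
            void setName(String name);
        }

        public class ClusterJExample {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put("com.mysql.clusterj.connectstring", "localhost:1186");
                props.put("com.mysql.clusterj.database", "test");

                SessionFactory factory = ClusterJHelper.getSessionFactory(props);
                Session session = factory.getSession();

                User u = session.newInstance(User.class);  // dynamic proxy, no SQL layer
                u.setId(1);
                u.setName("Alice");
                session.persist(u);                        // primary-key insert

                User found = session.find(User.class, 1);  // primary-key read
                System.out.println(found.getName());
                session.close();
            }
        }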


  • iBatis get executed sql

    - by qaxi
    Hi all. Is there any way I can get the executed query from iBatis? I want to reuse the query in a UNION query. For example:

        <sqlMap namespace="userSQLMap">
            <select id="getUser" resultClass="UserPackage.User">
                SELECT username, password
                FROM table
                WHERE id=#value#
            </select>
        </sqlMap>

    When I execute the query through

        int id = 1;
        List<User> userList = queryDAO.executeForObjectList("userSQLMap.getUser", id);

    I want to get

        SELECT username, password FROM table WHERE id=1

    Is there any way I could get the query? Thanks.
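    iBatis doesn't expose the final SQL string through a public API, but it will log each prepared statement and its parameters. A hedged log4j sketch, using the logger names from the iBatis logging documentation:

        # Print each prepared statement and its bound parameters:
        log4j.logger.com.ibatis=DEBUG
        log4j.logger.java.sql.Connection=DEBUG
        log4j.logger.java.sql.PreparedStatement=DEBUG
        log4j.logger.java.sql.Statement=DEBUG

    Note the logged SQL still contains ? placeholders with the parameters listed separately, so for building a UNION it may be simpler to keep the shared SELECT in a <sql> fragment and <include> it from both mapped statements instead.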


  • SQLDependency thread

    - by user171523
    I am in the process of implementing SQLDependency. I would like to know: when the dependency handler executes, will it be spun up on a different thread from the main process? What will happen when the event handler triggers? Do I need to worry about any multithreading issues?

        public void CreateSqlDependency()
        {
            try
            {
                using (SqlConnection connection = (SqlConnection)DBFactory.GetDBFactoryConnection(Constants.SQL_PROVIDER_NAME))
                {
                    SqlCommand command = (SqlCommand)DBFactory.GetCommand(Constants.SQL_PROVIDER_NAME);
                    command.CommandText = watchQuery;
                    command.CommandType = CommandType.Text;
                    SqlDependency dependency = new SqlDependency(command);
                    // Create the callback object
                    dependency.OnChange += new OnChangeEventHandler(this.QueueChangeNotificationHandler);
                    SqlDependency.Start(connectionString);
                    DataTable dataTable = DBFactory.ExecuteSPReDT(command);
                }
            }
            catch (SqlException sqlExp)
            {
                throw sqlExp;
            }
            catch (Exception ex)
            {
                throw ex;
            }
        }

        public void QueueChangeNotificationHandler(object caller, SqlNotificationEventArgs e)
        {
            if (e.Info == SqlNotificationInfo.Insert)
                Fire();
        }
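    Short answer to the threading part: yes - OnChange is raised on a thread-pool worker thread, not on the thread that created the dependency, so any state the handler shares with the main thread needs synchronization (or marshaling via Control.Invoke / Dispatcher in UI code). Also, a SqlDependency is single-shot: to keep receiving notifications, the handler must set up a new dependency and re-run the command. A hedged sketch of the handler's shape (syncRoot is an assumed shared lock object):

        private readonly object syncRoot = new object();

        public void QueueChangeNotificationHandler(object caller, SqlNotificationEventArgs e)
        {
            // Runs on a thread-pool thread, not the main thread.
            if (e.Info == SqlNotificationInfo.Insert)
            {
                lock (syncRoot)
                {
                    // Touch shared state only inside the lock.
                }
                CreateSqlDependency();  // re-subscribe: notifications fire only once
                Fire();
            }
        }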


  • nHibernate Criteria API Projections

    - by Craig
    I have an entity like this:

        public class Customer
        {
            public Customer()
            {
                Addresses = new List<Address>();
            }
            public int CustomerId { get; set; }
            public string Name { get; set; }
            public IList<Address> Addresses { get; set; }
        }

    And I am trying to query it using the Criteria API like this:

        ICriteria query = m_CustomerRepository.Query()
            .CreateAlias("Address", "a", NHibernate.SqlCommand.JoinType.LeftOuterJoin);
        var result = query
            .SetProjection(Projections.Distinct(
                Projections.ProjectionList()
                    .Add(Projections.Alias(Projections.Property("CustomerId"), "CustomerId"))
                    .Add(Projections.Alias(Projections.Property("Name"), "Name"))
                    .Add(Projections.Alias(Projections.Property("Addresses"), "Addresses"))
            ))
            .SetResultTransformer(new AliasToBeanResultTransformer(typeof(Customer)))
            .List<Customer>() as List<Customer>;

    When I run this query, the Addresses property of the Customer object is null. Is there any way to add a projection for this list property?
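    A collection property can't be hydrated through a projection (AliasToBeanResultTransformer only sets scalar aliases), so one common alternative - a hedged sketch, assuming the collection is mapped under the name "Addresses" - is to drop the projection and let a join fetch plus the distinct-root-entity transformer fill the list:

        using NHibernate;
        using NHibernate.Transform;

        // Fetch customers with their addresses in one round trip; the
        // transformer collapses the joined duplicate rows back to one
        // Customer per root, with Addresses populated.
        var customers = m_CustomerRepository.Query()
            .SetFetchMode("Addresses", FetchMode.Join)
            .SetResultTransformer(new DistinctRootEntityResultTransformer())
            .List<Customer>();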


  • How to send a list of objects from my MainPage.xaml to another page

    - by LivingThing
    When navigating to another page, how can I make my list of objects available to that page? For example, in my MainPage.xaml:

        var data2 = from query in document.Descendants("weather")
                    select new Forecast
                    {
                        date = (string)query.Element("date"),
                        tempMaxC = (string)query.Element("tempMaxC"),
                        tempMinC = (string)query.Element("tempMinC"),
                        weatherIconUrl = (string)query.Element("weatherIconUrl"),
                    };
        forecasts = data2.ToList<Forecast>();
        ....
        NavigationService.Navigate(new Uri("/WeatherInfoPage.xaml", UriKind.Relative));

    And then in my other class I want to make it available so that I can use it like this:

        private void AddPageItem(List<Forecast> forecasts)
        {
            ...
        }
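    One hedged option on Windows Phone 7 (the key name "forecasts" below is arbitrary): stash the list in PhoneApplicationService.Current.State - a per-session dictionary that both pages can see - before navigating, then read it back in the target page:

        using Microsoft.Phone.Shell;

        // Before navigating:
        PhoneApplicationService.Current.State["forecasts"] = forecasts;
        NavigationService.Navigate(new Uri("/WeatherInfoPage.xaml", UriKind.Relative));

        // In WeatherInfoPage.xaml.cs, e.g. in OnNavigatedTo:
        var forecasts = (List<Forecast>)PhoneApplicationService.Current.State["forecasts"];
        AddPageItem(forecasts);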


< Previous Page | 412 413 414 415 416 417 418 419 420 421 422 423  | Next Page >