Search Results


  • CQRS - Benefits

    - by Dylan Smith
    Thanks to all the comments and feedback from the last post I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I’m going to try and sum it up here, and point out some areas where I could still use some advice:

    CQRS Benefits

    Sounds like the primary benefit of CQRS as an architecture is that it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit of this; in general the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value “noise” to the domain model that would dilute the high-value (command) behavior – sorting, paging, filtering, pre-fetch paths, etc. Also the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries.

    Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be “pass-through” type commands with little to no logic that just generate events. If these can be implemented directly in the command handler and never touch the domain model, this allows you to slim down the domain model even more. Also, if you were to do event sourcing without CQRS, you would no longer have a database containing the current state (only the domain model would), which makes it difficult (or impossible) to support the ad-hoc querying and/or reporting that is common in most business software.

    Of course CQRS provides some great scalability benefits; not only scalability, but I have to assume that it provides extremely low latency for most operations, especially if you have an asynchronous event bus. I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO it seems like it only provides 1.5x scaling, since even without CQRS you’re going to have your client loosely coupled to your domain – which is still a great benefit to be able to realize.

    Questions / Concerns

    If all the queries against an aggregate get pulled out to the Query layer, what if the only commands for that aggregate can be handled in a “pass-through” manner, with the command handler directly generating events? Is it possible to have an aggregate that isn’t modeled in the domain model? Are there any issues or downsides to this?

    I know in the feedback from my previous posts it was suggested that having one domain model handling both commands and queries requires implementing a lot of traversals between objects that wouldn’t be necessary if it was only servicing commands. My question is, do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my Commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer – should I model that in my domain model or not?

    I like the idea of using the Query side of the architecture as a place to put junior devs where the risk of them screwing something up has minimal impact. But I’m not sold on the idea that you can actually outsource it. Like I said in one of my comments on my previous post, the code to handle a query and generate DTOs is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge to know which events to listen for to update each of the de-normalized tables (and what changes need to be made when each event is processed). I don’t know about everybody else, but having outsourced developers do anything that requires significant domain knowledge has never been successful in my experience. And if you need to spec out for each new query which events to listen to and what to do with each one, well that’s probably going to be just as much work to document as it would be to just implement it.

    Greg made the point in a comment that doing an aggregate query like “Total Sales By Customer” is going to be inefficient if you use event sourcing but not CQRS. I don’t understand why that would be the case. I imagine in that case you’d simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTOs off of the Customers collection that included, say, the CustomerID, CustomerName, TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don’t see why that would be any more inefficient in a DDD+Event Sourcing implementation than in a typical DDD implementation.

    Like I mentioned in my last post I still have some questions about query logic that haven’t been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides. My main concern with the query logic was that I know I could just toss it all into the query side, but I was concerned that I would be losing the benefits of using CQRS in the first place if I did that. I want to elaborate more on this though with some example situations in an upcoming post.
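    To make the “Total Sales By Customer” example concrete, here is a minimal C# sketch of the approach described above: a calculated property on the aggregate, and a DTO projected by the application services layer. The Customer, Order and CustomerSalesDto types are hypothetical, invented purely for illustration:

        using System.Collections.Generic;
        using System.Linq;

        public class Order
        {
            public decimal Amount { get; set; }
        }

        public class Customer
        {
            public int CustomerId { get; set; }
            public string Name { get; set; }
            public List<Order> Orders { get; } = new List<Order>();

            // Calculated on the domain model by enumerating the Orders collection
            public decimal TotalSales => Orders.Sum(o => o.Amount);
        }

        // Projected by the application services layer off the Customers collection
        public class CustomerSalesDto
        {
            public int CustomerId { get; set; }
            public string CustomerName { get; set; }
            public decimal TotalSales { get; set; }
        }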


  • 10 Reasons Why Java is the Top Embedded Platform

    - by Roger Brinkley
    With the release of Oracle ME Embedded 3.2 and Oracle Java Embedded Suite, Java is now ready to fully move into the embedded developer space, what many have called the "Internet of Things". Here are 10 reasons why Java is the top embedded platform.

    1. Decouples software development from the hardware development cycle

    Development is typically split between hardware and software in a traditional design flow. This leads to complicated co-design and requires prototype hardware to be built. This parallel and interdependent hardware/software design process typically leads to two or more re-development phases. With Embedded Java, all specific work is carried out in software, with the (processor) hardware implementation fully decoupled. This eliminates, or at least reduces, the need for re-spins of software or hardware, and the original development efforts can be carried forward directly into product development and validation.

    2. Development and testing can be done (mostly) using standard desktop systems through emulation

    Because the software and hardware are decoupled, it now becomes easier to test the software long before it reaches the hardware, through hardware emulation. Emulation is the ability of a program in an electronic device to imitate another program or device. In the past, Java tools like the Java ME SDK and the SunSPOTs Solarium provided developers with emulation for a complete set of mobile telephones and SunSPOTs. This often included network interaction or, in the case of SunSPOTs, radio communication. What emulation does is speed up the development cycle by refining the software development process without the need for hardware. The software is fixed, redefined, and refactored without the time and expense of hardware testing. With tools like the Java ME 3.2 SDK, Embedded Java applications can be quickly developed on Windows based platforms. In the end, of course, developers should do a full set of testing on the hardware, as incompatibilities between emulators and hardware will exist, but the amount of time to do this should be significantly reduced.

    3. Highly productive language, APIs, runtime, and tools mean quick time to market

    Charles Nutter probably said it best when he tweeted, "Every time I see a piece of C code I need to port, my heart dies a little. Then I port it to 1/4 as much Java, and feel better." The Java environment is a very complex combination of a Java Virtual Machine, the Java Language, and its robust APIs. Combine that with the Java ME SDK for small devices, or just NetBeans for the larger devices, and you have a development environment where development time is reduced significantly, meaning the product can be shipped sooner. Of course this is assuming that the engineers don't get slap happy adding new features given the extra time they'll have.

    4. Create high-performance, portable, secure, robust, cross-platform applications easily

    The latest JIT compilers for the Oracle JVM approach the speed of C/C++ code, and in some memory allocation intensive circumstances, exceed it. And specifically for embedded devices, both ME Embedded and SE Embedded have been optimized for smaller footprints. For portability, Java uses bytecode to make the language platform independent. This creates a write once, run anywhere environment that allows you to develop on one platform and execute on others, and avoids platform vendor lock-in. For security, Java achieves protection by confining a Java program to a Java execution environment and not allowing it to access other parts of the computer. To be robust, a program must execute reliably in a variety of systems. Finally, Oracle Java ME Embedded is a cross-industry and cross-platform product optimized in release version 3.2 for chipsets based on the ARM architectures. Similarly, Oracle Java SE Embedded works on a variety of ARM V5, V6, and V7, X86 and Power Architecture Linux platforms.

    5. Java isolates your apps from language and platform variations (e.g. C/C++, kernel, libc differences)

    This has been a key factor in Java from day one. Developers write to Java and don't have to worry about underlying differences in the platform variations. Those platform variations are managed by the JVM. Gone are C/C++ problems like memory corruption, stack overflows, and other such bugs which are extremely difficult to isolate. Of course this doesn't imply that you will be able to get away from native code completely. There could be some situations where you have to write native code in either assembler or C/C++. But those instances should be limited.

    6. Most popular embedded processors supported, allowing design flexibility

    Java SE Embedded is now available on ARM V5, V6, and V7 along with Linux on X86 and Power Architecture platforms. Java ME Embedded is available on systems based on ARM architecture SOCs with low memory footprints, and a device emulation environment for x86/Windows desktop computers, integrated with the Java ME SDK 3.2. A standard binary of Oracle Java ME Embedded 3.2 for ARM KEIL development boards based on ARM Cortex M-3/4 (KEIL MCBSTM32F200 using ST Micro SOC STM32F207IG) will soon be available for download from the Oracle Technology Network (OTN).

    7. Support for key embedded features (low footprint, power mgmt., low latency, etc.)

    All embedded devices by their very nature are constrained in some way. Economics may dictate a device with less RAM and ROM. The CPU needs can dictate a less powerful device. Power consumption is another major constraint in some embedded devices, as connecting to a consistent power source is not always desirable or possible; others have to be constantly on. Often many of these systems are headless (in the embedded space it's almost always Halloween). For memory resources, Java ME Embedded can run in environments as low as 130KB RAM/350KB ROM for a minimal, customized configuration, up to 700KB RAM/1500KB ROM for the full, standard configuration. Java SE Embedded is designed for environments starting at 32MB RAM/39MB ROM. Key functionality of embedded devices such as auto-start and recovery and flexible networking is fully supported. And while Java SE Embedded has been optimized for mid-range to high-end embedded systems, Java ME Embedded is a Java runtime stack optimized for small embedded systems. It provides a robust and flexible application platform with dedicated embedded functionality for always-on, headless (no graphics/UI), and connected devices.

    8. Leverage huge Java developer ecosystem (expertise, existing code)

    There are over 9 million developers in the world that work on Java, and while not all of them work on embedded systems, their wealth of expertise in developing applications is immense. In short, getting a Java developer to work on an embedded system is pretty easy; you probably have a Java developer living in your subdivision. Then of course there is the wealth of existing code. The Java Embedded Community on Java.net is a central gathering place for embedded Java developers. Conferences like Embedded Java @ JavaOne and a variety of hardware vendor conferences like the Freescale Technology Forums offer an excellent opportunity for those interested in embedded systems.

    9. Easily create end-to-end solutions integrated with Java back-end services

    In the "Internet of Things", things aren't on an island doing a single task. For instance, an embedded drink dispenser doesn't just dispense a beverage, but could collect money from a credit card and also send information about current sales. Similarly, an embedded house power monitoring system doesn't just manage the power usage in a house, but can also send that data back to the power company. In both cases it isn't about the individual thing, but monitoring a collection of things. How much power did your block, subdivision, area of town, town, county, state, nation, world use? How many Dr Peppers were purchased from thing1, thing2, thingN? The point is that all this information can be collected and transferred securely (and believe me, that is a key issue that Java fully supports) to back-end services for further analysis. And what better back-end service exists than a Java back-end service? It's interesting to note that on larger embedded platforms that support the Java Embedded Suite, some of the analysis might be done on the embedded device itself, as JES has a GlassFish server and Java Database as part of the installation. The result is an end-to-end Java solution.

    10. Solutions from constrained devices to server-class systems

    Just take a look at some of the embedded Java systems that have already been developed and you'll see a vast range of solutions. The Livescribe pen, the Kindle, each and every Blu-ray player, Cisco's Advanced VoIP phone, the Kronos InTouch smart time clock, EnergyICT smart metering, EDF's automated meter management, Ricoh printers, and Stanford's automated car are just a few of a list of embedded Java implementations that continues to grow.

    Conclusion

    Now if you're a Java developer you probably look at some of the 10 reasons and say "duh", but for embedded developers this should be an eye-opening list. And with the release of ME Embedded 3.2 and the Java Embedded Suite, the embedded developer's life is now a whole lot easier. For the Java developer, your employment opportunities are about to increase. For both, it's a great time to start developing Java for the "Internet of Things".


  • TFS API Add Favorites programmatically

    - by Tarun Arora
    01 – What are we trying to achieve?

    In this blog post I’ll be showing you how to add work item queries as favorites; it is also possible to use the same technique to add a build definition as a favorite. Once a shared query or build definition has been added as a favorite, it will show up on the team web access. In this blog post I’ll be showing you a work around, in the absence of a proper API, for adding queries to team favorites.

    02 – Disclaimer

    There is no official API for adding favorites programmatically. In the work around below I am using the Identity service to store this data in a property bag which is used during display of favorites on the team web site. This uses an internal data structure that could change over time; there is no guarantee about the key names or content of the values. What is shown below is a workaround for a missing API.

    03 – Concept

    There is no direct API support for favorites, but you can work around it using the identity service in TFS. Favorites are stored in the property bag associated with the TeamFoundationIdentity (either the ‘team’ identity or the user's identity, depending on whether these are ‘team’ or ‘my’ favorites). The data is stored as JSON in the property bag of the identity, the key being prefixed by ‘Microsoft.TeamFoundation.Framework.Server.IdentityFavorites’.

    References

    - Microsoft.TeamFoundation.WorkItemTracking.Client
    - using Microsoft.TeamFoundation.Client;
    - using Microsoft.TeamFoundation.Framework.Client;
    - using Microsoft.TeamFoundation.Framework.Common;
    - using Microsoft.TeamFoundation.ProcessConfiguration.Client;
    - using Microsoft.TeamFoundation.Server;
    - using Microsoft.TeamFoundation.WorkItemTracking.Client;

    Services

    - IIdentityManagementService2
    - TfsTeamService
    - WorkItemStore

    04 – Solution

    Let's start by connecting to TFS programmatically:

        // Create an instance of the services to be used during the program
        private static TfsTeamProjectCollection _tfs;
        private static ProjectInfo _selectedTeamProject;
        private static WorkItemStore _wis;
        private static TfsTeamService _tts;
        private static TeamSettingsConfigurationService _teamConfig;
        private static IIdentityManagementService2 _ids;

        // Connect to TFS programmatically
        public static bool ConnectToTfs()
        {
            var isSelected = false;
            var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
            tfsPp.ShowDialog();
            _tfs = tfsPp.SelectedTeamProjectCollection;
            if (tfsPp.SelectedProjects.Any())
            {
                _selectedTeamProject = tfsPp.SelectedProjects[0];
                isSelected = true;
            }
            return isSelected;
        }

    Let's get all the work item queries from the selected team project:

        static readonly Dictionary<string, string> QueryAndGuid = new Dictionary<string, string>();

        // Get all queries and query guids in the selected team project
        private static void GetQueryGuidList(IEnumerable<QueryItem> query)
        {
            foreach (QueryItem subQuery in query)
            {
                if (subQuery.GetType() == typeof(QueryFolder))
                    GetQueryGuidList((QueryFolder)subQuery);
                else
                {
                    QueryAndGuid.Add(subQuery.Name, subQuery.Id.ToString());
                }
            }
        }

    Pass the name of a valid team in your team project and the name of a valid query in your team project. The team details will be extracted using the team name, and the query GUID will be extracted using the query name. These details will be used to construct the key and value that will be passed to the SetProperty method in the Identity service.

        Key:   “Microsoft.TeamFoundation.Framework.Server.IdentityFavorites..<TeamProjectURI>.<TeamId>.WorkItemTracking.Queries.<newGuid1>”
        Value: "{"data":"<QueryGuid>","id":"<NewGuid1>","name":"<QueryKey>","type":"Microsoft.TeamFoundation.WorkItemTracking.QueryItem"}"

        // Configure a Work Item Query for the given team
        private static void ConfigureTeamFavorites(string teamName, string queryName)
        {
            _ids = _tfs.GetService<IIdentityManagementService2>();
            var g = Guid.NewGuid();
            var guid = string.Empty;
            var teamDetail = _tts.QueryTeams(_selectedTeamProject.Uri).FirstOrDefault(t => t.Name == teamName);
            foreach (var q in QueryAndGuid.Where(q => q.Key == queryName))
            {
                guid = q.Value;
            }
            if (guid == string.Empty)
            {
                Console.WriteLine("Query '{0}' - Not found!", queryName);
                return;
            }
            var key = string.Format(
                "Microsoft.TeamFoundation.Framework.Server.IdentityFavorites..{0}.{1}.WorkItemTracking.Queries{2}",
                new Uri(_selectedTeamProject.Uri).Segments.LastOrDefault(),
                teamDetail.Identity.TeamFoundationId,
                g);
            var value = string.Format(
                @"{0}""data"":""{1}"",""id"":""{2}"",""name"":""{3}"",""type"":""Microsoft.TeamFoundation.WorkItemTracking.QueryItem""{4}",
                "{", guid, g, QueryAndGuid.FirstOrDefault(q => q.Value == guid).Key, "}");
            teamDetail.Identity.SetProperty(IdentityPropertyScope.Local, key, value);
            _ids.UpdateExtendedProperties(teamDetail.Identity);
            Console.WriteLine("{0}Added Query '{1}' as Favorite", Environment.NewLine, queryName);
        }

    If you have any questions or suggestions, leave a comment. Enjoy!
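    For completeness, a hypothetical call sequence tying the pieces above together (the service lookups and the team/query names are assumptions for illustration, not part of the original post):

        if (ConnectToTfs())
        {
            _wis = _tfs.GetService<WorkItemStore>();
            _tts = _tfs.GetService<TfsTeamService>();

            // Walk the query hierarchy of the selected project
            GetQueryGuidList(_wis.Projects[_selectedTeamProject.Name].QueryHierarchy);

            // Add 'My Shared Query' to the favorites of 'My Team'
            ConfigureTeamFavorites("My Team", "My Shared Query");
        }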


  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved by using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).

    Performance Landscape

    Oracle OLAP Perf Version 2 Benchmark, 4 Billion Fact Table Rows

        System       Queries/hour   Users*   Avg Response Time (sec)   Median Response Time (sec)
        SPARC T4-4   430,000        7,300    0.85                      0.43

        * Users - the supported number of users with a given think time of 60 seconds

    Configuration Summary and Results

    Hardware Configuration:
        SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 1 TB memory
        Data Storage: 1 x Sun Fire X4275 (using COMSTAR), 2 x Sun Storage F5100 Flash Array (each with 80 FMODs)
        Redo Storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD)

    Software Configuration:
        Oracle Solaris 11 11/11
        Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option

    Benchmark Description

    The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced Data Warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g. South America) for a particular time period (e.g. Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g. Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g. major household appliances) and then issue further queries drilling down into particular products (e.g. refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1. The primary difference is the type of queries along with the query mix.

    Key Points and Best Practices

    Since typical BI users are often likely to issue similar queries, with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide additional query throughput and a larger number of potential users. Except for this setting, together with making full use of available memory, out of the box performance for the OLAP Perf workload should provide results similar to what is reported here. For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support achieving the measured response time, assuming some non-zero think time. The calculation of the maximum number of users follows from the well-known response-time law

        N = (rt + tt) * tp

    where rt is the average response time, tt is the think time, and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds and tp to 119.44 queries/sec (430,000 queries/hour), the above formula shows that the T4-4 server will support 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds. For more information see chapter 3 of the book "Quantitative System Performance" cited below.
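    As a quick check of that claim, plugging the published figures into the law (this arithmetic is mine, not part of the original disclosure):

        N = (rt + tt) * tp = (0.85 s + 60 s) * 119.44 queries/s ≈ 7,268 ≈ 7,300 users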
    See Also

    - Quantitative System Performance: Computer System Analysis Using Queueing Network Models; Edward D. Lazowska, John Zahorjan, G. Scott Graham, Kenneth C. Sevcik
    - Oracle Database 11g – Oracle OLAP (oracle.com / OTN)
    - SPARC T4-4 Server (oracle.com / OTN)
    - Oracle Solaris (oracle.com / OTN)
    - Oracle Database 11g Release 2 (oracle.com / OTN)

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.


  • Enter comments on queries in TraceTune

    - by Bill Graziano
    I’m trying to make TraceTune (and eventually ClearTrace) work the way I do. My typical query tuning session goes like this:

    1. Run a trace and upload to TraceTune/ClearTrace
    2. Tune the slowest queries
    3. Goto 1

    I might do this two or three times in one day and then not come back to it again for weeks or even months. This is especially true for those clients that I only visit a few times per month. In many cases I’ll look at a query, decide I can’t do much with it and move on. I needed a way to capture that information.

    TraceTune now lets you enter a comment for a query. It can be as simple or as complex as you like. The comment will be shown inline with the execution history of that query. This should let you walk back through your history with a query and decide whether you should spend more time tuning it.


  • BizTalk 2009 - Messages: Last 100 Sent

    - by StuartBrierley
    Having previously talked about the lack of the traditional HAT in BizTalk 2009, the question then becomes how do you replicate some of the functionality that was previously relied on? I have already covered the Last 100 Messages Received query, so what about sent messages? In BizTalk 2004 we had a query in HAT to return the messages sent in the last day. While not a direct replacement, the following query replicates some of the usefulness of this query in a BizTalk 2009 HAT-less environment. Basically we are creating a query to search for the last one hundred tracked messages that were sent by BizTalk:

    Coming up:

    - Messages - last 50 suspended
    - Service instances - last 100


  • Cloud MBaaS : The Next Big Thing in Enterprise Mobility

    - by shiju
    In this blog post, I will take a look at Cloud Mobile Backend as a Service (MBaaS) and how we can leverage a cloud-based Mobile Backend as a Service for building enterprise mobile apps. Today, mobile apps are incredibly significant in both the consumer and enterprise space, and the demand for mobile apps is increasing unbelievably in day to day business. An enterprise can’t survive in business without a proper mobility strategy. A better mobility strategy and faster delivery of your mobile apps will give you extra mileage for your business and IT strategy. So organizations and mobile developers are looking for different strategies for meeting this demand and adopting different development strategies for their mobile apps. Some developers are adopting hybrid mobile app development platforms for delivering their products for multiple platforms, for fast time-to-market. Others are adopting a Mobile Enterprise Application Platform (MEAP), such as Kony, for their enterprise mobile apps, for fast time-to-market and better business integration.

    The Challenges of Enterprise Mobility

    The real challenge of enterprise mobile apps is not about creating the front-end environment or developing the front-end for multiple platforms. The most important thing in enterprise mobile apps is to expose your enterprise data to mobile devices, and the real pain is that your business data might be residing in a lot of different systems, including legacy systems, ERP systems etc., and these systems will be deployed with a lot of security restrictions. Exposing your data from on-premises servers is not an easy thing for most business organizations. Many organizations are spending too much time on their front-end development strategy, but they are really lacking a strategy on their back-end for exposing the business data to mobile apps. So building a REST services layer and mobile back-end services, on top of legacy systems and existing middleware systems, is the key part of most enterprise mobile apps, where multiple mobile platforms can easily consume these REST services and other mobile back-end services for building mobile apps.

    For some mobile apps, we can’t predict the user base, especially for products where customers can increase gradually at any time. And for today’s mobile apps, faster time-to-market is very critical, so spending too much time on a mobile app’s scalability will not be worth it. The real power of Cloud is the agility and on-demand scalability, where we can scale up and scale down our applications very easily. It would be great if we could use the power of Cloud for mobile apps. So using Cloud for mobile apps is a natural fit, where we can use Cloud as the storage for mobile apps and the hosting mechanism for mobile back-end services, where we can enjoy the full power of Cloud with a greater level of on-demand scalability and operational agility. So a cloud-based Mobile Backend as a Service is a great choice for building enterprise mobile apps, where enterprises can enjoy the massive scalability power for their mobile apps provided by public cloud vendors such as Microsoft Windows Azure.

    Mobile Backend as a Service (MBaaS)

    We have discussed the key challenges of enterprise mobile apps and how we can leverage Cloud for hosting mobile backend services. MBaaS is a set of cloud-based, server-side mobile services for multiple mobile platforms and the HTML5 platform, which can be used as a backend for your mobile apps with the scalability power of Cloud. The key features of a typical MBaaS platform include:

    - Cloud-based storage for your application data.
    - Automatic REST API services on the application data, for CRUD operations.
    - Native push notification services with massive scalability power.
    - User management services for authenticating users.
    - User authentication via social accounts such as Facebook, Google, Microsoft, and Twitter.
    - Scheduler services for periodically sending data to mobile devices.
    - Native SDKs for multiple mobile platforms such as Windows Phone and Windows Store, Android, Apple iOS, and HTML5, for easily accessing the mobile services from mobile apps, with better security.

    Typically, an MBaaS platform will provide native SDKs for multiple mobile platforms so that we can easily consume the server-side mobile services. MBaaS-based REST APIs can be used for integrating with enterprise backend systems. We can use the same mobile services for multiple platforms, so that we can reuse the application logic across multiple mobile platforms. Public cloud vendors are building their mobile services on top of their PaaS offerings. Windows Azure Mobile Services is a great platform for an MBaaS offering that is leveraging the Windows Azure cloud platform’s PaaS capabilities. The hybrid mobile development platform Titanium provides its own MBaaS services. LoopBack is a new MBaaS service provided by the Node.js consulting firm StrongLoop, which can be hosted on multiple cloud platforms and also on on-premises servers.

    The Challenges of MBaaS Solutions

    If you are building your mobile apps with a new data store, it will be very easy, since there aren't any integration challenges you have to face. But in most use cases you have to extract your application data, which is stored on on-premises servers that might be behind VPNs and firewalls. So exposing this data to your MBaaS solution with proper security would be a big challenge. The capability of your MBaaS vendor is very important, as you have to interact with your legacy systems for many enterprise mobile apps. So you should be very careful about choosing your MBaaS vendor. At the same time, you should have a proper strategy for mobilizing your application data which is stored in on-premises legacy systems, where your solution architecture and strategy are more important than platforms and tools.

    Windows Azure Mobile Services

    Windows Azure Mobile Services is an MBaaS offering from the Windows Azure cloud platform. IMHO, Microsoft Windows Azure is the best PaaS platform in the Cloud space. Windows Azure Mobile Services extends the PaaS capabilities of Windows Azure to mobile devices, can be used as a cloud backend for your mobile apps, and will provide global availability and reach for your mobile apps. Windows Azure Mobile Services provides storage services, user management with social network integration, push notification services and scheduler services, and provides native SDKs for all major mobile platforms and HTML5. In Windows Azure Mobile Services, you can write server-side scripts in Node.js, where you can enjoy the full power of Node.js, including the use of NPM modules for your server-side scripts. In the previous section, we discussed some challenges of MBaaS solutions. You can leverage the Windows Azure cloud platform for solving many of those challenges regarding enterprise mobility; the entire Windows Azure platform can play a key role in working as the backend for your mobile apps. With Windows Azure, you can easily connect to your on-premises systems, which is a key thing for mobile backend solutions. Another key point is that Windows Azure provides better integration with services like Active Directory, which makes Windows Azure the de facto platform for enterprise mobility for enterprises who have been leveraging the Microsoft ecosystem for their application and IT infrastructure.

    Windows Azure Mobile Services is going through its next evolution, where you can expect some exciting features in the near future. One area where Windows Azure Mobile Services definitely needs an improvement is the default storage mechanism, which currently depends on SQL Server. IMHO, developers should be able to choose between multiple default storage options when creating a new mobile service instance. Let’s say there should be different storage providers, such as a SQL Server storage provider and a Table storage provider, where developers could choose their storage provider of choice when creating a new mobile services project. I have used Windows Azure and Windows Azure Mobile Services as the backend for production mobile apps, where they performed very well.
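    To give a flavour of the Node.js server-side scripting mentioned above, here is a minimal sketch of a Mobile Services insert script; the insert(item, user, request) entry point is the service's standard script shape, while the validation logic itself is invented for illustration:

        function insert(item, user, request) {
            // Reject records without a name before they reach storage
            if (!item.name) {
                request.respond(statusCodes.BAD_REQUEST, 'A name is required.');
                return;
            }
            item.createdBy = user.userId; // stamp the authenticated user
            request.execute();            // continue with the default insert
        }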
    MBaaS Over MEAP

    Recently, many larger enterprises have adopted Mobile Enterprise Application Platforms (MEAP) for their mobile apps. I haven’t worked on any production MEAP solution, but I hear that developers are really struggling with MEAP in different ways. The learning curve for a proprietary MEAP platform is very high. I am completely against using a large proprietary ecosystem for mobile apps. For enterprise mobile apps, I highly recommend using native iOS/Android/Windows Phone or HTML5 for the front-end, with a cloud-hosted MBaaS solution as the middleware. An MBaaS service can be consumed from multiple mobile apps, where REST APIs are used for integrating with enterprise backend systems. Enterprise mobility should start with exposing REST APIs on the enterprise backend systems, and these REST APIs can be hosted on Cloud, where we can enjoy the power of Cloud for our services. If you have REST APIs for your enterprise data, then you can easily build mobile front-ends for multiple platforms.

    You can follow me on Twitter @shijucv


  • laptop crashed: why?

    - by sds
    my linux (ubuntu 12.04) laptop crashed, and I am trying to figure out why.

        # last
        sds      pts/4        :0               Tue Sep  4 10:01   still logged in
        sds      pts/3        :0               Tue Sep  4 10:00   still logged in
        reboot   system boot  3.2.0-29-generic Tue Sep  4 09:43 - 11:23  (01:40)
        sds      pts/8        :0               Mon Sep  3 14:23 - crash  (19:19)

    this seems to indicate a crash at 09:42 (= 14:23+19:19). as per another question, I looked at /var/log:

    auth.log:

        Sep  4 09:17:02 t520sds CRON[32744]: pam_unix(cron:session): session closed for user root
        Sep  4 09:43:17 t520sds lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)

    no messages file

    syslog:

        Sep  4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal
        Sep  4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started.

    kern.log:

        Sep  4 09:24:19 t520sds kernel: [219104.819969] CPU1: Package power limit normal
        Sep  4 09:24:19 t520sds kernel: [219104.819971] CPU2: Package power limit normal
        Sep  4 09:24:19 t520sds kernel: [219104.819974] CPU3: Package power limit normal
        Sep  4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal
        Sep  4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started.
        Sep  4 09:43:16 t520sds kernel: [    0.000000] Initializing cgroup subsys cpuset
        Sep  4 09:43:16 t520sds kernel: [    0.000000] Initializing cgroup subsys cpu

    I had a computation running until 9:24, but the system crashed 18 minutes later! kern.log has many pages of these:

        Sep  4 09:43:16 t520sds kernel: [    0.000000] total RAM covered: 8086M
        Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 64K   num_reg: 10  lose cover RAM: 38M
        Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 128K  num_reg: 10  lose cover RAM: 38M
        Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 256K  num_reg: 10  lose cover RAM: 38M
        Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 512K  num_reg: 10  lose cover RAM: 38M
        Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 1M    num_reg: 10  lose cover RAM: 38M
        Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 2M    num_reg: 10  lose cover RAM: 38M
        Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 4M    num_reg: 10  lose cover RAM: 38M
        Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 8M    num_reg: 10  lose cover RAM: 38M
        Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 16M   num_reg: 10  lose cover RAM: 38M
        Sep  4 09:43:16 t520sds kernel: [    0.000000] *BAD*gran_size: 64K  chunk_size: 32M  num_reg: 10  lose cover RAM: -16M
        Sep  4 09:43:16 t520sds kernel: [    0.000000] *BAD*gran_size: 64K  chunk_size: 64M  num_reg: 10  lose cover RAM: -16M
        Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 128M  num_reg: 10  lose cover RAM: 0G
        Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 256M  num_reg: 10  lose cover RAM: 0G
        Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 512M  num_reg: 10  lose cover RAM: 0G
        Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 1G    num_reg: 10  lose cover RAM: 0G
        Sep  4 09:43:16 t520sds kernel: [    0.000000] *BAD*gran_size: 64K  chunk_size: 2G   num_reg: 10  lose cover RAM: -1G

    does this mean that my RAM is bad?! it also says

        Sep  4 09:43:16 t520sds kernel: [    2.944123] EXT4-fs (sda1): INFO: recovery required on readonly filesystem
        Sep  4 09:43:16 t520sds kernel: [    2.944126] EXT4-fs (sda1): write access will be enabled during recovery
        Sep  4 09:43:16 t520sds kernel: [    3.088001] firewire_core: created device fw0: GUID f0def1ff8fbd7dff, S400
        Sep  4 09:43:16 t520sds kernel: [    8.929243] EXT4-fs (sda1): orphan cleanup on readonly fs
        Sep  4 09:43:16 t520sds kernel: [    8.929249] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 658984
        ...
        Sep  4 09:43:16 t520sds kernel: [    9.343266] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 525343
        Sep  4 09:43:16 t520sds kernel: [    9.343270] EXT4-fs (sda1): 56 orphan inodes deleted
        Sep  4 09:43:16 t520sds kernel: [    9.343271] EXT4-fs (sda1): recovery complete
        Sep  4 09:43:16 t520sds kernel: [    9.645799] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)

    does this mean my HD is bad? As per FaultyHardware, I tried smartctl -l selftest, which uncovered no errors:

        smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-30-generic] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

        === START OF INFORMATION SECTION ===
        Model Family:     Seagate Momentus 7200.4
        Device Model:     ST9500420AS
        Serial Number:    5VJE81YK
        LU WWN Device Id: 5 000c50 0440defe3
        Firmware Version: 0003LVM1
        User Capacity:    500,107,862,016 bytes [500 GB]
        Sector Size:      512 bytes logical/physical
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   8
        ATA Standard is:  ATA-8-ACS revision 4
        Local Time is:    Mon Sep 10 16:40:04 2012 EDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED
        See vendor-specific Attribute list for marginal Attributes.

        General SMART Values:
        Offline data collection status:  (0x82) Offline data collection activity was completed without error.
                                                Auto Offline Data Collection: Enabled.
        Self-test execution status:      (   0) The previous self-test routine completed without error or
                                                no self-test has ever been run.
        Total time to complete Offline data collection: ( 0) seconds.
        Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data
                                                collection on/off support. Suspend Offline collection upon
                                                new command. Offline surface scan supported. Self-test
                                                supported. Conveyance Self-test supported. Selective
                                                Self-test supported.
        SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                                Supports SMART auto save timer.
        Error logging capability:        (0x01) Error logging supported. General Purpose Logging supported.
        Short self-test routine recommended polling time:      (   1) minutes.
        Extended self-test routine recommended polling time:   ( 109) minutes.
        Conveyance self-test routine recommended polling time: (   2) minutes.
        SCT capabilities:              (0x103b) SCT Status supported. SCT Error Recovery Control supported.
                                                SCT Feature Control supported. SCT Data Table supported.

        SMART Attributes Data Structure revision number: 10
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x000f   117   099   034   Pre-fail Always  -           162843537
          3 Spin_Up_Time            0x0003   100   100   000   Pre-fail Always  -           0
          4 Start_Stop_Count        0x0032   100   100   020   Old_age  Always  -           571
          5 Reallocated_Sector_Ct   0x0033   100   100   036   Pre-fail Always  -           0
          7 Seek_Error_Rate         0x000f   069   060   030   Pre-fail Always  -           17210154023
          9 Power_On_Hours          0x0032   095   095   000   Old_age  Always  -           174362787320258
         10 Spin_Retry_Count        0x0013   100   100   097   Pre-fail Always  -           0
         12 Power_Cycle_Count       0x0032   100   100   020   Old_age  Always  -           571
        184 End-to-End_Error        0x0032   100   100   099   Old_age  Always  -           0
        187 Reported_Uncorrect      0x0032   100   100   000   Old_age  Always  -           0
        188 Command_Timeout         0x0032   100   100   000   Old_age  Always  -           1
        189 High_Fly_Writes         0x003a   100   100   000   Old_age  Always  -           0
        190 Airflow_Temperature_Cel 0x0022   061   043   045   Old_age  Always  In_the_past 39 (0 11 44 26)
        191 G-Sense_Error_Rate      0x0032   100   100   000   Old_age  Always  -           84
        192 Power-Off_Retract_Count 0x0032   100   100   000   Old_age  Always  -           20
        193 Load_Cycle_Count        0x0032   099   099   000   Old_age  Always  -           2434
        194 Temperature_Celsius     0x0022   039   057   000   Old_age  Always  -           39 (0 15 0 0)
        195 Hardware_ECC_Recovered  0x001a   041   041   000   Old_age  Always  -           162843537
        196 Reallocated_Event_Count 0x000f   095   095   030   Pre-fail Always  -           4540 (61955, 0)
        197 Current_Pending_Sector  0x0012   100   100   000   Old_age  Always  -           0
        198 Offline_Uncorrectable   0x0010   100   100   000   Old_age  Offline -           0
        199 UDMA_CRC_Error_Count    0x003e   200   200   000   Old_age  Always  -           0
        254 Free_Fall_Sensor        0x0032   100   100   000   Old_age  Always  -           0

        SMART Error Log Version: 1
        No Errors Logged

        SMART Self-test log structure revision number 1
        Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
        # 1  Extended offline    Completed without error       00%             4545  -

        SMART Selective self-test log data structure revision number 1
         SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
            1        0        0  Not_testing
            2        0        0  Not_testing
            3        0        0  Not_testing
            4        0        0  Not_testing
            5        0        0  Not_testing
        Selective self-test flags (0x0):
          After scanning selected spans, do NOT read-scan remainder of disk.
        If Selective self-test is pending on power-up, resume after 0 minute delay.

    Googling for the messages proved inconclusive; I can't even figure out whether the messages are routine or catastrophic. So, what do I do now?


  • Sort tables with two elements

    - by Mercer
    Hello, I want to make a sort in Java. In my object I have many elements, and I want to sort on power and model:

        public class Product implements Comparable<Product>, Serializable {

            private int idProduct;
            private int power;
            private String model;
            private String color;
            [...]

            @Override
            public int compareTo(Product o) {
                return String.valueOf(this.power).compareTo(String.valueOf(o.power));
            }
        }

    so how do I make the sort with power and model?
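    A minimal sketch of a two-key comparison (power numerically first, then model), under the assumption that this matches the intended ordering:

        @Override
        public int compareTo(Product o) {
            // Compare the numeric field as a number, not as a string
            int byPower = Integer.compare(this.power, o.power);
            if (byPower != 0) {
                return byPower;
            }
            // Fall back to the model name when the power values tie
            return this.model.compareTo(o.model);
        }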


  • hibernate createSQL how to cache it?

    - by cometta
    If I use a Criteria statement, I am able to use the cache easily, but when I am using a custom createSQLQuery like the one below, how do I cache it?

        Query query = getHibernateTemplate().getSessionFactory().getCurrentSession().createSQLQuery(
            " select company_keysupplier.ddiv as ddiv, company_division.name as DIVISION, company_keysupplier.ddep as ddep, company_department.name as DEPARTMENT" +
            " from company_keysupplier " +
            " left join company_division on " +
            " (company_keysupplier.ddiv = company_division.division_code and company_keysupplier.survey_num = company_division.survey_num) " +
            " left join company_department on " +
            " (company_keysupplier.ddep = company_department.department_code and company_keysupplier.survey_num = company_department.survey_num) " +
            " where company_keysupplier.sdiv = :sdiv and company_keysupplier.sdep = :sdep and company_keysupplier.survey_num = :surveyNum " +
            " order by company_division.name, company_keysupplier.ddep ")
            .addScalar("ddiv")
            .addScalar("ddep")
            .addScalar("DIVISION")
            .addScalar("DEPARTMENT")
            .setResultTransformer(Transformers.aliasToBean(IssKeysupplier.class));
        query.setString("sdiv", sdiv);
        query.setString("sdep", sdepartment);
        query.setBigInteger("surveyNum", survey_num);
        // i tried query.setCachable(true) but it failed
        List result = (List<IssKeysupplier>) query.list();
        return result;
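    For what it's worth, the Hibernate method is spelled setCacheable rather than setCachable, and for it to have any effect the query cache must be switched on in the session factory configuration. A hedged sketch:

        // In the Hibernate configuration (standard Hibernate setting names):
        //   hibernate.cache.use_second_level_cache = true
        //   hibernate.cache.use_query_cache = true

        query.setCacheable(true);                    // note the spelling
        query.setCacheRegion("keySupplierQueries");  // optional named cache region
        List result = (List<IssKeysupplier>) query.list();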


  • How to Sum calculated fields

    - by Nazero Jerry
    I'd like to ask a question here that I think would be easy for some people. OK, I have a query that returns records from two related tables (one to many). In this query I have about 3 to 4 calculated fields that are based on the fields from the 2 tables. Now I want to have a GROUP BY clause for names and a SUM clause to sum the calculated fields, but it ends up with an error message saying: "You tried to execute a query that is not part of an aggregate function". So I decided to just run the query without the totals (i.e. no GROUP BY, SUM, etc.). And then I created another query that totals my previous query (i.e. using a GROUP BY clause for names and SUM for the calculated fields... no calculation here). This is fine (I used to do this), but I don't like having two queries just to get a summary total. Is there any other way of doing this in the design view, creating only one query? I would very much appreciate it. Thank you, JM
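    One pattern that keeps this to a single saved query is to aggregate over a derived table, so the row-level calculations happen in the inner SELECT and the SUM/GROUP BY in the outer one. A sketch with hypothetical table and field names (Access also accepts a saved query name in place of the inner SELECT):

        SELECT t.CustomerName, SUM(t.LineTotal) AS TotalSales
        FROM (
            SELECT Customers.CustomerName,
                   Orders.Qty * Orders.UnitPrice AS LineTotal
            FROM Customers INNER JOIN Orders
                 ON Customers.CustomerID = Orders.CustomerID
        ) AS t
        GROUP BY t.CustomerName;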


  • how to return the current object?

    - by ajsie
    In CodeIgniter you can type:

        $query = $this->db->query("YOUR QUERY");

        foreach ($query->result() as $row)
        {
            echo $row->title;
            echo $row->name;
            echo $row->body;
        }

    I guess that the query method returns the object it's part of. Am I correct? If I am, how do you type the line where it returns the object? So what I wonder is how it looks inside the query method for the above code to be functional:

        public function query($sql)
        {
            // some db logic here with the $sql, saving the values
            // to the properties (title, name and body)
            return X;
        }

    In other words, what should X be?
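    For what it's worth, in CodeIgniter itself query() does not return the database object; it returns a separate result object (CI_DB_result), which is what makes $query->result() work in the loop above. A minimal sketch of that shape, with a hypothetical DbResult class standing in for CodeIgniter's internals:

        public function query($sql)
        {
            $rows = $this->run($sql);   // hypothetical low-level call
            // X is a freshly built result object, not $this
            return new DbResult($rows); // DbResult exposes result(), row(), ...
        }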


  • SQL vs MySQL: Rules about aggregate operations and GROUP BY

    - by Phazyck
    In the book I'm currently reading while following a course on databases, the following example of an illegal query using an aggregate operator is given:

    Find the name and age of the oldest sailor. Consider the following attempt to answer this query:

        SELECT S.name, MAX(S.age)
        FROM Sailors S

    The intent is for this query to return not only the maximum age but also the name of the sailors having that age. However, this query is illegal in SQL: if the SELECT clause uses an aggregate operation, then it must use only aggregate operations unless the query contains a GROUP BY clause!

    Some time later while doing an exercise using MySQL, I faced a similar problem and made a mistake similar to the one mentioned. However, MySQL didn't complain and just spat out some tables which later turned out not to be what I needed.

    Is the query above really illegal in SQL, but legal in MySQL, and if so, why is that? In what situation would one need to make such a query?
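    For reference, the standard legal formulation of that example moves the aggregate into a subquery, so the outer SELECT lists only plain columns (a sketch following the book's schema):

        SELECT S.name, S.age
        FROM Sailors S
        WHERE S.age = (SELECT MAX(S2.age) FROM Sailors S2);

    MySQL's historical leniency comes from its permissive GROUP BY handling, where nonaggregated columns are allowed and the values returned for them come from an arbitrary row; with the ONLY_FULL_GROUP_BY SQL mode enabled, MySQL rejects such queries as well.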


  • passing multiple queries to view with codeigniter

    - by LvS
    I am trying to build a forum with CodeIgniter. So far I have the forums themselves displayed and the threads displayed, based on the creating-dynamic-news tutorial. But that is 2 different pages; I obviously need to display them on one page, like this:

        Forum 1
        - thread 1
        - thread 2
        - thread 3
        Forum 2
        - thread 1
        - thread 2
        etc.

    And then the next step is obviously to display all the posts in a thread. Most likely with some pagination going on. But that is for later. For now I have the forum controller (slimmed version):

        <?php
        class Forum extends CI_Controller {

            public function __construct()
            {
                parent::__construct();
                $this->load->model('forum_model');
                $this->lang->load('forum');
                $this->lang->load('dutch');
            }

            public function index()
            {
                $data['forums'] = $this->forum_model->get_forums();
                $data['title'] = $this->lang->line('title');
                $data['view'] = $this->lang->line('view');
                $this->load->view('templates/header', $data);
                $this->load->view('forum/index', $data);
                $this->load->view('templates/footer');
            }

            public function view($slug)
            {
                $data['forum_item'] = $this->forum_model->get_forums($slug);
                if (empty($data['forum_item']))
                {
                    show_404();
                }
                $data['title'] = $data['forum_item']['title'];
                $this->load->view('templates/header', $data);
                $this->load->view('forum/view', $data);
                $this->load->view('templates/footer');
            }
        }
        ?>

    And the forum_model (also slimmed down):

        <?php
        class Forum_model extends CI_Model {

            public function __construct()
            {
                $this->load->database();
            }

            public function get_forums($slug = FALSE)
            {
                if ($slug === FALSE)
                {
                    $query = $this->db->get('forum');
                    return $query->result_array();
                }
                $query = $this->db->get_where('forum', array('slug' => $slug));
                return $query->row_array();
            }

            public function get_threads($forumid, $limit, $offset)
            {
                $query = $this->db->get_where('thread', array('forumid', $forumid), $limit, $offset);
                return $query->result_array();
            }
        }
        ?>

    And the view file:

        <?php foreach ($forums as $forum_item): ?>
            <h2><?=$forum_item['title']?></h2>
            <div id="main">
                <?=$forum_item['description']?>
            </div>
            <p><a href="forum/<?php echo $forum_item['slug'] ?>"><?=$view?></a></p>
        <?php endforeach ?>

    Now that last one, I would like to have something like this:

        <?php foreach ($forums as $forum_item): ?>
            <h2><?=$forum_item['title']?></h2>
            <div id="main">
                <?=$forum_item['description']?>
            </div>
            <?php foreach ($threads as $thread_item): ?>
                <h2><?php echo $thread_item['title'] ?></h2>
                <p><a href="thread/<?php echo $thread_item['slug'] ?>"><?=$view?></a></p>
            <?php endforeach ?>
        <?php endforeach ?>

    But the question is, how do I get the model to return something like a double query to the view, so that it contains both the forums and the threads within each forum? I tried to make a foreach loop in the get_forums function, but when I do this:

        public function get_forums($slug = FALSE)
        {
            if ($slug === FALSE)
            {
                $query = $this->db->get('forum');
                foreach ($query->row_array() as $forum_item)
                {
                    $thread_query = $this->get_threads($forum_item->forumid, 50, 0);
                }
                return $query->result_array();
            }
            $query = $this->db->get_where('forum', array('slug' => $slug));
            return $query->row_array();
        }

    I get the error:

        A PHP Error was encountered
        Severity: Notice
        Message: Trying to get property of non-object
        Filename: models/forum_model.php
        Line Number: 16

    I hope anyone has some good tips, thanks!

    Lenny

    EDIT: Thanks for the feedback. I have been puzzling and this seems to work now :)

        $query = $this->db->get('forum');
        foreach ($query->result() as $forum_item)
        {
            $forum[$forum_item->forumid]['title'] = $forum_item->title;
            $thread_query = $this->db->get_where('thread', array('forumid' => $forum_item->forumid), 20, 0);
            foreach ($thread_query->result() as $thread_item)
            {
                $forum[$forum_item->forumid]['thread'][] = $thread_item->title;
            }
        }
        return $forum;

    What is next is how to display this multidimensional array in the view, with foreach statements... Any suggestions? Thanks
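    Picking up the multidimensional array built above, a minimal sketch of the nested loops in the view (assuming the model's return value is passed to the view as $forums):

        <?php foreach ($forums as $forum_item): ?>
            <h2><?php echo $forum_item['title'] ?></h2>
            <?php if (isset($forum_item['thread'])): ?>
                <?php foreach ($forum_item['thread'] as $thread_title): ?>
                    <p><?php echo $thread_title ?></p>
                <?php endforeach ?>
            <?php endif ?>
        <?php endforeach ?>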


  • How to cast in XML for aggregate functions

    - by renegm
    In SQL Server 2008, I need to execute a query like this:

        DECLARE @x AS xml
        SET @x = N'<r><c>First Text</c></r><r><c>Other Text</c></r>'
        SELECT @x.query('fn:max(r/c)')

    But it returns nothing (apparently because of converting xdt:untypedAtomic to numeric). How do I "cast" r/c to varchar? Something like:

        SELECT @x.query('fn:max(«CAST(r/c «AS varchar(20))»)')

    Edit: Using nodes, the MAX function would be from T-SQL, not the fn:max function. In this code:

        DECLARE @x xml;
        SET @x = '';
        SELECT @x.query('fn:max((1, 2))');
        SELECT @x.query('fn:max(("First Text", "Other Text"))');

    both queries return the expected results: 2 and "Other Text". So fn:max can evaluate string expressions ad hoc. But the first query doesn't work. How do I force string arguments to fn:max?
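    One workaround that sidesteps the XQuery typing question entirely is to shred the nodes with nodes() and let T-SQL's MAX do the aggregation, where the cast to varchar is explicit (a sketch against the same @x variable):

        SELECT MAX(c.value('.', 'varchar(20)')) AS MaxValue
        FROM @x.nodes('r/c') AS t(c);

    Alternatively, inside XQuery a constructor cast such as fn:max(for $s in r/c return xs:string($s)) should make the atomic type explicit, though I would verify that form against your server.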


  • Help with Linq and Generics

    - by Jonathan
    Hi to all. I'm trying to make a function that adds a 'where' clause to a query, based on a property and a value. This is a very simplified version of my function:

        Private Function simplified(ByVal query As IQueryable(Of T), ByVal PValue As Long, ByVal p As PropertyInfo) As ObjectQuery(Of T)
            query = query.Where(Function(c) DirectCast(p.GetValue(c, Nothing), Long) = PValue)
            Dim t = query.ToList 'this line is only for testing, and here is where the error is raised
            Return query
        End Function

    The error message is: LINQ to Entities does not recognize the method 'System.Object CompareObjectEqual(System.Object, System.Object, Boolean)' method, and this method cannot be translated into a store expression.

    Looks like I can't use GetValue inside a LINQ query. Can I achieve this in another way? Post your answer in C#/VB; choose the one that makes you feel more comfortable. Thanks
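    The usual workaround is to build the predicate as an expression tree, so the query contains only a property access and a constant instead of a reflection call. A C# sketch of the same function (types assumed from the question; the Convert guards against the property not being exactly Int64):

        using System;
        using System.Linq;
        using System.Linq.Expressions;
        using System.Reflection;

        private static IQueryable<T> Simplified<T>(IQueryable<T> query, long pValue, PropertyInfo p)
        {
            ParameterExpression c = Expression.Parameter(typeof(T), "c");
            // Builds: c => (long)c.<Property> == pValue
            Expression body = Expression.Equal(
                Expression.Convert(Expression.Property(c, p), typeof(long)),
                Expression.Constant(pValue, typeof(long)));
            return query.Where(Expression.Lambda<Func<T, bool>>(body, c));
        }

    Because the tree contains nothing LINQ to Entities cannot translate, the GetValue/CompareObjectEqual problem goes away.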


  • Zend Framework Db Select Join table help

    - by tester2001
    I have this query:

        SELECT g.title, g.asin, g.platform_id, r.rank
        FROM games g
        INNER JOIN ranks r ON (g.id = r.game_id)
        ORDER BY r.rank DESC
        LIMIT 5

    Now, this is my JOIN using Zend_Db_Select, but it gives me an array error:

        $query = $this->select();
        $query->from(array('g' => 'games'), array());
        $query->join(array('r' => 'ranks'), 'g.id = r.game_id', array('g.title', 'g.asin', 'g.platform_id', 'r.rank'));
        $query->order('r.rank DESC');
        $query->limit($top);
        $resultRows = $this->fetchAll($query);
        return $resultRows;

    Anyone know what I could be doing wrong? I want to get all the columns in 'games' to show, and the 'rank' column in the ranks table.
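    A hedged guess at the cause: when select() comes from a Zend_Db_Table model, it returns a Zend_Db_Table_Select, which refuses joins against other tables unless integrity checking is disabled. A sketch of the usual pattern (column lists assumed from the target query):

        $query = $this->select()->setIntegrityCheck(false);
        $query->from(array('g' => 'games'), array('title', 'asin', 'platform_id'));
        $query->join(array('r' => 'ranks'), 'g.id = r.game_id', array('rank'));
        $query->order('r.rank DESC');
        $query->limit($top);
        $resultRows = $this->fetchAll($query);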


  • Can SPSiteDataQuery search both List and Libraries?

    - by Rich Bennema
    I have the following code:

        SPSiteDataQuery query = new SPSiteDataQuery();
        query.ViewFields = "<FieldRef Name=\"UniqueId\" />";
        query.Webs = "<Webs Scope=\"SiteCollection\" />";
        query.Query = "<Where><Eq><FieldRef Name='MyCustomField' /><Value Type='Boolean'>1</Value></Eq></Where>";
        query.Lists = "<Lists BaseType=\"1\" />";
        DataTable results = site.RootWeb.GetSiteData(query);

    This searches all the document libraries in the site collection, but I want to search all the lists as well. Is there a way to set the Lists property to search both at the same time?
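    I am not aware of a single Lists value that spans base types, since the element takes one BaseType per query (0 is generic lists, 1 is document libraries). A hedged workaround is to run the query once per base type and merge the two DataTables:

        query.Lists = "<Lists BaseType=\"0\" />";             // generic lists
        DataTable listResults = site.RootWeb.GetSiteData(query);

        query.Lists = "<Lists BaseType=\"1\" />";             // document libraries
        DataTable libraryResults = site.RootWeb.GetSiteData(query);

        listResults.Merge(libraryResults);                    // combined results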


  • Union - Same table, excluding previous results MySQL

    - by user82302124
    I'm trying to write a query that will:

    1. Run a query and give me (x) number of rows (limit 4)
    2. If that query didn't give me the 4 I need, run a second query with limit 4-(x), excluding the ids from the first query
    3. Run a third query that acts like the second

    I have this:

        (SELECT *, 1 as SORY_QUERY1 FROM xbamZ WHERE state = 'Minnesota' AND industry = 'Miscellaneous' AND id != '229' LIMIT 4)
        UNION
        (SELECT *, 2 FROM xbamZ WHERE state = 'Minnesota' LIMIT 2)
        UNION
        (SELECT *, 3 FROM xbamZ WHERE industry = 'Miscellaneous' LIMIT 1)

    How do I do that (or is it possible)? Am I close? This query gives me duplicates.
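    One hedged way to make the branches disjoint is to repeat, negated, the earlier branches' conditions in each later branch, then let a trailing ORDER BY/LIMIT cap the combined set; in MySQL a final ORDER BY/LIMIT after the last parenthesised SELECT applies to the whole union. A sketch using the question's own columns:

        (SELECT *, 1 AS sort_query FROM xbamZ
          WHERE state = 'Minnesota' AND industry = 'Miscellaneous' AND id != '229'
          LIMIT 4)
        UNION
        (SELECT *, 2 FROM xbamZ
          WHERE state = 'Minnesota'
            AND NOT (industry = 'Miscellaneous' AND id != '229')
          LIMIT 4)
        UNION
        (SELECT *, 3 FROM xbamZ
          WHERE industry = 'Miscellaneous' AND state != 'Minnesota'
          LIMIT 4)
        ORDER BY sort_query
        LIMIT 4;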

    Read the article

  • I'm having a problem with the mysqli free() member function

    - by neo skosana
    Hi, I have code where I connect to the database like so: $db = new mysqli("localhost", "user", "pass", "company"); Now I query the database like so: //query calls the stored procedure 'user_info' $result = $db->query("CALL user_info('$instruc', 'c_register', '$eml', '$pass', '')"); //I use the $result This query works well. Now when I try to free that result like so: $result->free(); or $result->close(); it seems like it doesn't do anything, because $result is still set. When I try to run another query it gives me this error: Fatal error: Call to a member function fetch_array() on a non-object in... For me to run the other query I have to close the db connection and connect again; then it will work. I want to know if there is a way I could run the other query without having to disconnect and reconnect to the database. Thanks in advance.
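    A hedged sketch of the likely fix: a CALL to a stored procedure returns an extra status result set, and mysqli blocks further queries until every pending result has been consumed. Draining them with next_result() should free the connection without reconnecting. Note also that free() releases the result's memory but does not unset the PHP variable, which is why $result still appears set.

    <?php
    $result = $db->query("CALL user_info('$instruc', 'c_register', '$eml', '$pass', '')");
    // ... use $result ...
    $result->free();

    // Consume the procedure's remaining (status) result sets.
    while ($db->more_results() && $db->next_result()) {
        if ($extra = $db->store_result()) {
            $extra->free();
        }
    }

    // The connection is now free for the next query.
    $other = $db->query("SELECT 1");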

    Read the article

  • Using SQL Execution Plans to discover the Swedish alphabet

    - by Rob Farley
    SQL Server is quite remarkable in a bunch of ways. In this post, I’m using the way that the Query Optimizer handles LIKE to keep it SARGable, the Execution Plans that result, Collations, and PowerShell to come up with the Swedish alphabet. SARGability is the ability to seek for items in an index according to a particular set of criteria. If you don’t have SARGability in play, you need to scan the whole index (or table if you don’t have an index). For example, I can find myself in the phonebook easily, because it’s sorted by LastName and I can find Farley in there by moving to the Fs, and so on. I can’t find everyone in my suburb easily, because the phonebook isn’t sorted that way. I can’t even find people who have six letters in their last name, because, again, the book is sorted by LastName, not by LEN(LastName). This is all stuff I’ve looked at before, including in the talk I gave at SQLBits in October 2010. If I try to find everyone whose name starts with F, I can do that using a query a bit like: SELECT LastName FROM dbo.PhoneBook WHERE LEFT(LastName,1) = 'F'; Unfortunately, the Query Optimizer doesn’t realise that all the entries that satisfy LEFT(LastName,1) = 'F' will be together, and it has to scan the whole table to find them. But if I write: SELECT LastName FROM dbo.PhoneBook WHERE LastName LIKE 'F%'; then SQL is smart enough to understand this, and performs an Index Seek instead. To see why, I look further into the plan, in particular, the properties of the Index Seek operator. The ToolTip shows me what I’m after: You’ll see that it does a Seek to find any entries that are at least F, but not yet G. There’s an extra Predicate in there (a Residual Predicate if you like), which checks that each LastName is really LIKE F% – I suppose it doesn’t consider that the Seek Predicate is quite enough – but most of the benefit is seen by its working out the Seek Predicate, filtering to just the “at least F but not yet G” section of the data. This got me curious though, particularly about where the G comes from, and whether I could leverage it to create the Swedish alphabet. I know that in the Swedish language, there are three extra letters that appear at the end of the alphabet. One of them is ä that appears in the word Västerås. It turns out that Västerås is quite hard to find in an index when you’re looking it up in a Swedish map. I talked about this briefly in my five-minute talk on Collation from SQLPASS (the one which was slightly less than serious). So by looking at the plan, I can work out what the next letter is in the alphabet of the collation used by the column. In other words, if my alphabet were Swedish, I’d be able to tell what the next letter after F is – just in case it’s not G. It turns out it is… Yes, the Swedish letter after F is G. But I worked this out by using a copy of my PhoneBook table that used the Finnish_Swedish_CI_AI collation. I couldn’t find how the Query Optimizer calculates the G, and my friend Paul White (@SQL_Kiwi) tells me that it’s frustratingly internal to the QO. He’s particularly smart, even if he is from New Zealand. To investigate further, I decided to do some PowerShell, leveraging the Get-SqlPlan function that I blogged about recently (make sure you also have the SqlServerCmdletSnapin100 snap-in added). I started by indicating that I was going to use Finnish_Swedish_CI_AI as my collation of choice, and that I’d start with whichever letter came straight after the number 9. 
I figure that this is a cheat’s way of guessing the first letter of the alphabet (but it doesn’t actually work in Unicode – luckily I’m using varchar not nvarchar. Actually, there are a few aspects of this code that only work using ASCII, so apologies if you were wanting to apply it to Greek, Japanese, etc). I also initialised my $alphabet variable. $collation = 'Finnish_Swedish_CI_AI'; $firstletter = '9'; $alphabet = ''; Now I created the table for my test. A single field would do, and putting a Clustered Index on it would suffice for the Seeks. Invoke-Sqlcmd -server . -data tempdb -query "create table dbo.collation_test (col varchar(10) collate $collation primary key);" Now I get into the looping. $c = $firstletter; $stillgoing = $true; while ($stillgoing) { I construct the query I want, seeking for entries which start with whatever $c has reached, and get the plan for it: $query = "select col from dbo.collation_test where col like '$($c)%';"; [xml] $pl = get-sqlplan $query "." "tempdb"; At this point, my $pl variable is a scary piece of XML, representing the execution plan. A bit of hunting through it showed me that the EndRange element contained what I was after, and that if it contained NULL, then I was done. $stillgoing = ($pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange -ne $null); Now I could grab the value out of it (which came with apostrophes that needed stripping), and append that to my $alphabet variable. if ($stillgoing) { $c = $pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange.RangeExpressions.ScalarOperator.ScalarString.Replace("'",""); $alphabet += $c; } Finally, finishing the loop, dropping the table, and showing my alphabet! } Invoke-Sqlcmd -server . -data tempdb -query "drop table dbo.collation_test;"; $alphabet; When I run all this, I see that the Swedish alphabet is ABCDEFGHIJKLMNOPQRSTUVXYZÅÄÖ, which matches what I see at Wikipedia. Interesting to see that the letters on the end are still there, even with Case Insensitivity. Turns out they’re not just “letters with accents”, they’re letters in their own right. I’m sure you gave up reading long ago, and really aren’t that fazed about the idea of doing this using PowerShell. I chose PowerShell because I’d already come up with an easy way of grabbing the estimated plan for a query, and PowerShell does allow for easy navigation of XML. I find the most interesting aspect of this to be the fact that the Query Optimizer uses the next letter of the alphabet to maintain the SARGability of LIKE. I’m hoping they do something similar for a whole bunch of operations. Oh, and the fact that you know how to find stuff in the IKEA catalogue. Footnote: If you are interested in whether this works in other languages, you might want to consider the following screenshot, which shows that in principle, it should work with Japanese. It might be a bit harder to run this in PowerShell though, as I’m not sure how it translates. In Hiragana, the Japanese alphabet starts あ, い, う, え, お, ...

    Read the article

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required the setup of a monitoring solution for SSAS. Due to the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler tools were used, as a third-party tool would have meant the customer providing an additional monitoring server that was not available. I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Because slow query performance was occurring in certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystems when concurrent connections ran the same query, and Profiler to pinpoint how the query was being managed within SSAS; Profiler I will leave for another blog. This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters. However, I hope it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb’s awesome chapters from “Expert Cube Development” that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry
    Simulating Connections: to simulate additional connections to the SSAS server whilst monitoring, I used ascmd to run multiple copies of the typical and worst-performing queries identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs (file name: ASCMD_StressTestingScripts.zip).
    Performance Monitor: within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. Failure to do so means the counter log runs under the system account; no errors or warnings are given while the log runs, and it is not until you need to view the MSAS counters that you discover they were not collected. If your connection simulation takes hours, this can prove quite frustrating if not done beforehand.
    The counters used (Object \ Counter (Instance): justification):
    System \ Processor Queue Length (N/A): Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, it is a good indication that there is more work (active threads ready for execution) than the machine's processors are able to handle.
    System \ Context Switches/sec (N/A): Measures how frequently the processor has to switch from user to kernel mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term its value should remain fairly constant. If it suddenly starts increasing, it may be an indication of a malfunctioning device, especially if the Processor \ Interrupts/sec (_Total) counter on your machine shows a similar unexplained increase.
    Process \ % Processor Time (sqlservr): Should definitely be used if Processor \ % Processor Time (_Total) is maxing at 100%, to assess the effect of the SQL Server process on the processor.
    Process \ % Processor Time (msmdsrv): As above, but for the Analysis Services process.
    Process \ Working Set (sqlservr): If the Memory \ Available MBytes counter is decreasing, this counter can be run to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance) \ Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.
    Process \ Working Set (msmdsrv): As above, for the Analysis Services process.
    Processor \ % Processor Time (_Total and individual cores): Measures the total utilization of your processor by all running processes. On a multi-processor machine, be mindful that _Total is only an average.
    Processor \ % Privileged Time (_Total): Shows how the OS is handling basic IO requests. If kernel-mode utilization is high, your machine is likely underpowered, as it is too busy handling basic OS housekeeping functions to effectively run other applications.
    Processor \ % User Time (_Total): Shows how applications are behaving from a processor perspective; consistently high utilisation suggests the server is dealing with too many applications and may require more hardware or scaling out.
    Processor \ Interrupts/sec (_Total): The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. Should be consistent over time; a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System \ Context Switches/sec counter.
    Memory \ Pages/sec (N/A): Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and the primary counter to watch for indications of insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to review the configuration of the page file on the server.
    Memory \ Available MBytes (N/A): The amount of physical memory, in megabytes, available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.
    Physical Disk \ Disk Transfers/sec (each physical disk): If it goes above 10 disk I/Os per second then you have poor response time for your disk.
    Physical Disk \ % Idle Time (_Total): If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that your hard disk is idle during the measurement interval. If you see it fall below 20%, you have likely got read/write requests queuing up for a disk which is unable to service them in a timely fashion.
    Physical Disk \ Disk Queue Length (the OLAP and SQL physical disks): A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk.
    Network Interface \ Bytes Total/sec (the NIC): Should be monitored over a period of time to see whether network utilisation is increasing or decreasing.
    Network Interface \ Current Bandwidth (the NIC): An estimate of the current bandwidth of the network interface in bits per second (bps).
    MSAS 2005: Memory \ Memory Limit High KB (N/A): Shows (as a percentage) the high memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
    MSAS 2005: Memory \ Memory Limit Low KB (N/A): Shows (as a percentage) the low memory limit configured for SSAS in the same msmdsrv.ini.
    MSAS 2005: Memory \ Memory Usage KB (N/A): Displays the memory usage of the server process.
    MSAS 2005: Memory \ File Store KB (N/A): Displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.
    MSAS 2005: Storage Engine Query \ Queries from Cache Direct/sec (N/A): The rate of queries answered directly from the cache.
    MSAS 2005: Storage Engine Query \ Queries from Cache Filtered/sec (N/A): The rate of queries answered by filtering an existing cache entry.
    MSAS 2005: Storage Engine Query \ Queries from File/sec (N/A): The rate of queries answered from files.
    MSAS 2005: Storage Engine Query \ Average time/query (N/A): The average time of a query.
    MSAS 2005: Connection \ Current connections (N/A): The number of connections against the SSAS instance.
    MSAS 2005: Connection \ Requests/sec (N/A): The rate of query requests per second.
    MSAS 2005: Locks \ Current Lock Waits (N/A): The number of connections waiting on a lock.
    MSAS 2005: Threads \ Query Pool Job Queue Length (N/A): The number of queries in the job queue.
    MSAS 2005: Proc Aggregations \ Temp File Bytes Written/sec (N/A): The number of bytes of data processed in a temporary file.
    MSAS 2005: Proc Aggregations \ Temp File Rows Written/sec (N/A): The number of rows of data processed in a temporary file.
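    For reference, a hedged sketch of scripting such a counter log with logman rather than clicking through the Perfmon UI (the counter paths, output path, and monitoring account are illustrative, and the MSAS counter names assume a default SSAS instance):

    logman create counter SSAS_Baseline -f csv -si 00:00:15 ^
      -o "C:\PerfLogs\SSAS_Baseline" ^
      -c "\Processor(_Total)\% Processor Time" ^
         "\Memory\Pages/sec" ^
         "\Process(msmdsrv)\% Processor Time" ^
         "\MSAS 2005:Memory\Memory Usage KB" ^
      -u MYDOMAIN\ssas_monitor *
    logman start SSAS_Baseline

    Running the log under a named account mirrors the RUN AS point above: the account must have rights to the SSAS instance, or the MSAS counters will be collected empty.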

    Read the article

  • How to resolve: 'cmd' is not recognized as an internal or external command?

    - by qwer1234
    I have searched other forums to solve this error, where the suggestions generally end with either: 1.) re-installing the OS, or 2.) setting the PATH variable to C:/Windows/System32. The latter did not work, and as you can probably imagine, I do not want to have to re-install my OS... I am running the command "mvn jetty:run" and the following is my stack trace, finishing with the message "'cmd' is not recognized as an internal or external command, operable program or batch file", as stated in the title of this question. [INFO] Scanning for projects... [INFO] ------------------------------------------------------------------------ [INFO] Building Test Tool [INFO] task-segment: [jetty:run] [INFO] ------------------------------------------------------------------------ [INFO] Preparing jetty:run [WARNING] Removing: run from forked lifecycle, to prevent recursive invocation. [INFO] [resources:resources] [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] Copying 32 resources [INFO] Copying 192 resources [INFO] [compiler:compile] [INFO] Compiling 1854 source files to C:\Development\global_stock_record\test\java\Turtle\target\classes [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Compilation failure C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\compilers\JavaScriptClassCompiler.java:[45,29] cannot find symbol symbol : class CompilerEnvirons location: package org.mozilla.javascript C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\compilers\JavaScriptClassCompiler.java:[47,29] cannot find symbol symbol : class ContextFactory location: package org.mozilla.javascript C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\compilers\JavaScriptClassCompiler.java:[49,39] cannot find symbol symbol : class ClassCompiler location: package org.mozilla.javascript.optimizer C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\compilers\JavaScriptClassCompiler.java:[181,55] cannot find symbol symbol : class CompilerEnvirons location: class net.sf.jasperreports.compilers.JavaScriptClassCompiler C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\export\JRXmlExporter.java:[99,26] package org.w3c.tools.codec does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\JRBaseFactory.java:[26,34] package org.apache.commons.digester does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\JRBaseFactory.java:[27,34] package org.apache.commons.digester does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\JRBaseFactory.java:[34,47] cannot find symbol symbol: class ObjectCreationFactory public abstract class JRBaseFactory implements ObjectCreationFactory C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\JRBaseFactory.java:[41,21] cannot find symbol symbol : class Digester location: class net.sf.jasperreports.engine.xml.JRBaseFactory C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\JRBaseFactory.java:[47,8] cannot find symbol symbol : class Digester location: class 
net.sf.jasperreports.engine.xml.JRBaseFactory C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\JRBaseFactory.java:[56,25] cannot find symbol symbol : class Digester location: class net.sf.jasperreports.engine.xml.JRBaseFactory C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\Code39Component.java:[28,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\BarcodeComponent.java:[41,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\Code39Component.java:[66,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.Code39Component C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\BarcodeComponent.java:[179,29] cannot find symbol symbol : class HumanReadablePlacement location: class net.sf.jasperreports.components.barcode4j.BarcodeComponent C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\EAN128Component.java:[26,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\DataMatrixComponent.java:[26,45] package org.krysalis.barcode4j.impl.datamatrix does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\FourStateBarcodeComponent.java:[26,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\UPCAComponent.java:[28,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\UPCEComponent.java:[28,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\EAN13Component.java:[28,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\EAN8Component.java:[28,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\Interleaved2Of5Component.java:[28,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\EAN128Component.java:[57,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.EAN128Component C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\DataMatrixComponent.java:[62,22] cannot find symbol symbol : class SymbolShapeHint location: class net.sf.jasperreports.components.barcode4j.DataMatrixComponent C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\FourStateBarcodeComponent.java:[76,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.FourStateBarcodeComponent 
C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\UPCAComponent.java:[56,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.UPCAComponent C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\UPCEComponent.java:[56,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.UPCEComponent C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\EAN13Component.java:[56,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.EAN13Component C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\EAN8Component.java:[56,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.EAN8Component C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\Interleaved2Of5Component.java:[60,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.Interleaved2Of5Component C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRHibernateAbstractDataSource.java:[36,25] package org.hibernate.type does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[49,20] package org.hibernate does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[50,20] package org.hibernate does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[51,20] package org.hibernate does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[52,20] package org.hibernate does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[53,20] package org.hibernate does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[54,25] package org.hibernate.type does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRHibernateAbstractDataSource.java:[173,38] cannot find symbol symbol : class Type location: class net.sf.jasperreports.engine.data.JRHibernateAbstractDataSource C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[66,35] cannot find symbol symbol : class Type location: class net.sf.jasperreports.engine.query.JRHibernateQueryExecuter C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[89,9] cannot find symbol symbol : class Session location: class net.sf.jasperreports.engine.query.JRHibernateQueryExecuter C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[90,9] cannot find symbol symbol : class Query location: class net.sf.jasperreports.engine.query.JRHibernateQueryExecuter 
C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[92,9] cannot find symbol symbol : class ScrollableResults location: class net.sf.jasperreports.engine.query.JRHibernateQueryExecuter C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[359,8] cannot find symbol symbol : class Type location: class net.sf.jasperreports.engine.query.JRHibernateQueryExecuter C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[474,8] cannot find symbol symbol : class ScrollableResults location: class net.sf.jasperreports.engine.query.JRHibernateQueryExecuter C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barbecue\BarbecueFillComponent.java:[40,31] package net.sourceforge.barbecue does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[38,27] package org.apache.tools.ant does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[39,27] package org.apache.tools.ant does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[40,27] package org.apache.tools.ant does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[41,33] package org.apache.tools.ant.types does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[42,33] package org.apache.tools.ant.types does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[43,43] package org.apache.tools.ant.types.resources does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[44,32] package org.apache.tools.ant.util does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[45,32] package org.apache.tools.ant.util does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRBaseAntTask.java:[34,36] package org.apache.tools.ant.taskdefs does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRBaseAntTask.java:[41,35] cannot find symbol symbol: class MatchingTask public class JRBaseAntTask extends MatchingTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[74,9] cannot find symbol symbol : class Path location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[76,9] cannot find symbol symbol : class Path location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[86,23] cannot find symbol symbol : class Path location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[104,8] cannot find symbol symbol : class Path 
location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[131,8] cannot find symbol symbol : class Path location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[145,30] cannot find symbol symbol : class BuildException location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[183,41] cannot find symbol symbol : class BuildException location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[211,33] cannot find symbol symbol : class BuildException location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[276,32] cannot find symbol symbol : class BuildException location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\TransformedPropertyRule.java:[27,34] package org.apache.commons.digester does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\TransformedPropertyRule.java:[37,54] cannot find symbol symbol: class Rule public abstract class TransformedPropertyRule extends Rule C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\data\mondrian\MondrianDataAdapterService.java:[29,20] package mondrian.olap does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\data\mondrian\MondrianDataAdapterService.java:[30,20] package mondrian.olap does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\data\mondrian\MondrianDataAdapterService.java:[31,20] package mondrian.olap does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\data\mondrian\MondrianDataAdapterService.java:[45,9] cannot find symbol symbol : class Connection location: class net.sf.jasperreports.data.mondrian.MondrianDataAdapterService C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRXlsDataSource.java:[40,10] package jxl does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRXlsDataSource.java:[41,10] package jxl does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRXlsDataSource.java:[42,10] package jxl does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRXlsDataSource.java:[43,20] package jxl.read.biff does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRXlsDataSource.java:[66,9] cannot find symbol symbol : class Workbook location: class net.sf.jasperreports.engine.data.JRXlsDataSource C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRXlsDataSource.java:[83,24] cannot find symbol symbol : class Workbook location: class net.sf.jasperreports.engine.data.JRXlsDataSource 
C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\olap\xmla\JRXmlaMember.java:[26,20] package mondrian.olap does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\olap\result\JROlapMember.java:[26,20] package mondrian.olap does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\olap\xmla\JRXmlaMember.java:[89,8] cannot find symbol symbol : class Member location: class net.sf.jasperreports.olap.xmla.JRXmlaMember C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\olap\result\JROlapMember.java:[46,1] cannot find symbol symbol : class Member location: interface net.sf.jasperreports.olap.result.JROlapMember C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\web\actions\AbstractAction.java:[43,36] package org.codehaus.jackson.annotate does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\web\actions\AbstractAction.java:[49,1] cannot find symbol symbol: class JsonTypeInfo @JsonTypeInfo(use=JsonTypeInfo.Id.NAME, include=JsonTypeInfo.As.PROPERTY, property="actionName") C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[32,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[33,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[34,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[35,34] package org.krysalis.barcode4j.impl does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[36,42] package org.krysalis.barcode4j.impl.codabar does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[37,42] package org.krysalis.barcode4j.impl.code128 does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[38,42] package org.krysalis.barcode4j.impl.code128 does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[39,41] package org.krysalis.barcode4j.impl.code39 does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[40,45] package org.krysalis.barcode4j.impl.datamatrix does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[41,45] package org.krysalis.barcode4j.impl.datamatrix does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[42,44] package org.krysalis.barcode4j.impl.fourstate does not exist 
C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[43,44] package org.krysalis.barcode4j.impl.fourstate does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[44,44] package org.krysalis.barcode4j.impl.fourstate does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[45,42] package org.krysalis.barcode4j.impl.int2of5 does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[46,41] package org.krysalis.barcode4j.impl.pdf417 does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[47,42] package org.krysalis.barcode4j.impl.postnet does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[48,41] package org.krysalis.barcode4j.impl.upcean does not exist [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 17 seconds [INFO] Finished at: Fri Dec 07 11:46:28 EST 2012 [INFO] Final Memory: 27M/63M [INFO] ------------------------------------------------------------------------
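    Separately from the compilation errors above, a hedged sketch of checks for the 'cmd' problem itself (assuming Windows Vista or later, where the where utility is available): confirm that the shell can locate cmd.exe, then prepend the real System32 directory for the current session and retry.

    echo %ComSpec%
    where cmd
    rem Prepend System32 (note the backslashes) for this console session only.
    set "PATH=%SystemRoot%\System32;%PATH%"
    mvn jetty:run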

    Read the article

  • SNIReadSync executing between 120-500 ms for a simple query. What do I look for?

    - by Mike
    Hi Stackoverflow, I am executing a simple query against SQL Server 2005: protected static void InitConnection(IDbCommand cmd) { cmd.CommandText = "set transaction isolation level read uncommitted "; cmd.ExecuteNonQuery(); } Whenever I profile with dotTrace 3.1, it claims that the SNIReadSync method is taking between 100 and 500 ms. What sort of things do I need to be looking for in order to get this time down? Thanks!
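    A hedged sketch for narrowing this down (connection string and class names are illustrative): SNIReadSync is essentially the client waiting on the network for the server's reply, so timing the round trip directly shows whether the cost is network or server latency rather than client-side work.

    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;

    class Probe
    {
        static void Main()
        {
            using (var conn = new SqlConnection(
                "Server=.;Database=tempdb;Integrated Security=true"))
            {
                conn.Open(); // pay the connection cost up front
                using (var cmd = conn.CreateCommand())
                {
                    cmd.CommandText =
                        "set transaction isolation level read uncommitted";
                    var sw = Stopwatch.StartNew();
                    cmd.ExecuteNonQuery();
                    sw.Stop();
                    // If this stays well above a few ms for a no-op batch,
                    // suspect network latency or a busy/blocked server.
                    Console.WriteLine("Round trip: {0} ms", sw.ElapsedMilliseconds);
                }
            }
        }
    }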

    Read the article
