Search Results

Search found 21942 results on 878 pages for 'named query'.

Page 275/878 | < Previous Page | 271 272 273 274 275 276 277 278 279 280 281 282  | Next Page >

  • listing objects from ManyToManyField

    - by Noam Smadja
    i am trying to print a list of all the Conferences and for each conference, print its 3 Speakers. in my template i have: {% if conferences %} <ul> {% for conference in conferences %} <li>{{ conference.date }}</li> {% for speakers in conference.speakers %} <li>{{ conference.speakers }}</li> {% endfor %} {% endfor %} </ul> {% else %} <p>No Conferences</p> {% endif %} in my views.py file i have: from django.shortcuts import render_to_response from youthconf.conference.models import Conference def manageconf(request): conferences = Conference.objects.all().order_by('-date')[:5] return render_to_response('conference/manageconf.html', {'conferences': conferences}) there is a model named conference. which has a class named Conferences with a ManyToManyField named speakers i get the error: Caught an exception while rendering: 'ManyRelatedManager' object is not iterable with this line: {% for speakers in conference.speakers %}

    Read the article

  • How to synchronize two DataBase Schemas Oracle 10G?

    - by gnash-85
    Hi Masters, Good Day, I am new and very naive to Oracle DB. I am using Oracle 10G. Let me explain to you. I have one source database named ( DB1) and Target Database named (DB2). I have 2 schema's named dbs1 and dbs2 in the source database (DB1). I have exported both the database schemas in Source Database (DB1) and imported it successfully into the Target Database (DB2). Now I face a challenge in synchronizing these database schemas every time from Source DB (DB1) to Target DB (DB2). Can anyone please help in letting me know how can achieve this synchronization? It would a great help. Thanks Nash

    Read the article

  • Inner join 2 tables one to many 2 where clauses

    - by user2892350
    I'm a relative rookie at this,so please bear with me... I have 2 tables: OrderDetail and OrderMaster...both have a column named SalesOrder. OrderDetail table has multiple rows per unique SalesOrder. OrderMaster table has one row per unique SalesOrder. OrderDetail has a column named LineType. OrderMaster has a column named OrderStatus. I want to select all records from OrderDetail that have a LineType of "1" AND whose matching SalesOrder line in the OrderMaster table has a OrderStatus column value of "4". In plain English, orders with a Status 4 are open and ready to ship, LineType value of 1 means the Detail Line is a product code. How should this query be structured? It's going into VS 2008 (VB). Many thanks in advance!!! Mike

    Read the article

  • Java: How to get Unicode name of a character (or its type category)?

    - by java.is.for.desktop
    Hello, everyone! The Character class in Java defines methods which check a given char argument for equality with certain Unicode chars or for belonging to some type category. These chars and type categories are named. As stated in given javadoc, examples for named chars are HORIZONTAL TABULATION, FORM FEED, ...; example for named type categories are SPACE_SEPARATOR, PARAGRAPH_SEPARATOR, ... However, being byte or int values instead of enums, the name of these types are "hidden" at runtime. So, is there a possibility to get characters' and/or type categories' names at runtime?

    Read the article

  • Dynamically generated controls issue.

    - by Thomas
    I have a form with a panel docked in it. I then dynamically create 15 panels (named: panel_n) and 15 pictureboxes (named: picturebox_n) on the primary panel (named ContainerPanel). When dragging the any picturebox over a panel (panel_n) created using the relevant mouse events. I would like to get the panel's name that the picture box was dragged over. The mouse cursor seems to be captured. I have tried creating a IMessageFilter interface, but there are still no events that trigger when dragging one of the pictureboxes over any one of the panels. The ClientRectangle.IntersectsWith function also does not work as the co-ords are always 0,0. All I need is the panel name where the picturebox was dragged over (preferably on the mouseup event)

    Read the article

  • 2 DataBase Schema's syncronization in Oracle 10G

    - by gnash-85
    Hi Masters, Good Day, I am new very naive to oracle DB. I am using Oracle 10G. Let me explain to you. I have one source database named ( DB1) and Target Database named (DB2). I have 2 schema's named dbs1 and dbs2 in the source database (DB1). I have exported both the database schema in Source Database (DB1) and imported it successfully into the Target Database (DB2). Now I face a challenge in synchronizing these database schema's every time from Source DB (DB1) to Target DB (DB2). Can anyone please help in letting me know how can achieve this synchronization. It would a great help. Thanks Nash

    Read the article

  • Using inheritance with multiple files in Ruby

    - by Preethi Jain
    I am new to Ruby . I have a question with respect to using Inheritence in Ruby . I have a class called as Doggy inside a file named Doggy.rb class Doggy def bark puts "Vicky is barking" end end I have written another class named Puppy in another file named puppy.rb class Puppy < Doggy end puts Doggy.new.bark I am getting this Error: Puppy.rb:1:in `<main>': uninitialized constant Doggy (NameError) Is it mandatory to have these classes (Doggy and Puppy ) inside a single file only? Edited As per the suggestions , i have tried using require and require_relative as shown , but still i am getting below Error Puppy.rb:1:in `<main>': uninitialized constant Doggy (NameError) class Puppy < Doggy end require_relative 'Doggy.rb' puts Doggy.new.bark

    Read the article

  • AIX Checklist for stable obiee deployment

    - by user554629
    Common AIX configuration issues     ( last updated 27 Aug 2012 ) OBIEE is a complicated system with many moving parts and connection points.The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators. The information in this article is time sensitive, and updated as I discover new  issues or details. What makes OBIEE different? When Tech Support suggests AIX component upgrades to a stable, locked-down production AIX environment, it is common to get "push back".  "Why is this necessary?  We aren't we seeing issues with other software?"It's a fair question that I have often struggled to answer; here are the talking points: OBIEE is memory intensive.  It is the entire purpose of the software to trade memory for repetitive, more expensive database requests across a network. OBIEE is implemented in C++ and is very dependent on the C++ runtime to behave correctly. OBIEE is aggressively thread efficient;  if atomic operations on a particular architecture do not work correctly, the software crashes. OBIEE dynamically loads third-party database client libraries directly into the nqsserver process.  If the library is not thread-safe, or corrupts process memory the OBIEE crash happens in an unrelated part of the code.  These are extremely difficult bugs to find. OBIEE software uses 99% common source across multiple platforms:  Windows, Linux, AIX, Solaris and HPUX.  If a crash happens on only one platform, we begin to suspect other factors.  load intensity, system differences, configuration choices, hardware failures.  It is rare to have a single product require so many diverse technical skills.   My role in support is to understand system configurations, performance issues, and crashes.   An analyst trained in Business Analytics can't be expected to know AIX internals in the depth required to make configuration choices.  Here are some guidelines. AIX C++ Runtime must be at  version 11.1.0.4$ lslpp -L | grep xlC.aixobiee software will crash if xlC.aix.rte is downlevel;  this is not a "try it" suggestion.Nov 2011 11.1.0.4 version  is appropriate for all AIX versions ( 5, 6, 7 )Download from here:https://www-304.ibm.com/support/docview.wss?uid=swg24031426 No reboot is necessary to install, it can even be installed while applications are using the current version.Restart the apps, and they will pick up the latest version. AIX 5.3 Technology Level 12 is required when running on Power5,6,7 processorsAIX 6.1 was introduced with the newer Power chips, and we have seen no issues with 6.1 or 7.1 versions.Customers with an unstable deployment, dozens of unexplained crashes, became stable after the upgrade.If your AIX system is 5.3, the minimum TL level should be at or higher than this:$ oslevel -s  5300-12-03-1107IBM typically supports only the two latest versions of AIX ( 6.1 and 7.1, for example).  AIX 5.3 is still supported and popular running in an LPAR. obiee userid limits$ ulimit -Ha  ( hard limits )$ ulimit -a   ( default limits )core file size (blocks)     unlimiteddata seg size (kbytes)      unlimitedfile size (blocks)          unlimitedmax memory size (kbytes)    unlimitedopen files                  10240 cpu time (seconds)          unlimitedvirtual memory (kbytes)     unlimitedIt is best to establish the values in /etc/security/limitsroot user is needed to observe and modify this file.If you modify a limit, you will need to relog in to change it again.  
For example,$ ulimit -c 0$ ulimit -c 2097151cannot modify limit: Operation not permitted$ ulimit -c unlimited$ ulimit -c0There are only two meaningful values for ulimit -c ; zero or unlimited.Anything else is likely to produce a truncated core file that cannot be analyzed. Deploy 32-bit or 64-bit ?Early versions of OBIEE offered 32-bit or 64-bit choice to AIX customers.The 32-bit choice was needed if a database vendor did not supply a 64-bit client library.That's no longer an issue and beginning with OBIEE 11, 32-bit code is no longer shipped.A common error that leads to "out of memory" conditions to to accept the 32-bit memory configuration choices on 64-bit deployments.  The significant configuration choices are: Maximum process data (heap) size is in an AIX environment variableLDR_CNTRL=IGNOREUNLOAD@LOADPUBLIC@PREREAD_SHLIB@MAXDATA=0x... Two thread stack sizes are made in obiee NQSConfig.INI[ SERVER ]SERVER_THREAD_STACK_SIZE = 0;DB_GATEWAY_THREAD_STACK_SIZE = 0; Sort memory in NQSConfig.INI[ GENERAL ]SORT_MEMORY_SIZE = 4 MB ;SORT_BUFFER_INCREMENT_SIZE = 256 KB ; Choosing a value for MAXDATA:0x080000000  2GB Default maximum 32-bit heap size ( 8 with 7 zeros )0x100000000  4GB 64-bit breaking even with 32-bit ( 1 with 8 zeros )0x200000000  8GB 64-bit double 32-bit max0x400000000 16GB 64-bit safetyUsing 2GB heap size for a 64-bit process will almost certainly lead to an out-of-memory situation.Registers are twice as big ... consume twice as much memory in the heap.Upgrading to a 4GB heap for a 64-bit process is just "breaking even" with 32-bit.A 32-bit process is constrained by the 32-bit virtual addressing limits.  Heap memory is used for dynamic requirements of obiee software, thread stacks for each of the configured threads, and sometimes for shared libraries. 64-bit processes are not constrained in this way;  extra heap space can be configured for safety against a query that might create a sudden requirement for excessive storage.  If the storage is not available, this query might crash the whole server and disrupt existing users.There is no performance penalty on AIX for configuring more memory than required;  extra memory can be configured for safety.  If there are no other considerations, start with 8GB.Choosing a value for Thread Stack size:zero is the value documented to select an appropriate default for thread stack size.  My preference is to change this to an absolute value, even if you intend to use the documented default;  it provides better documentation and removes the "surprise" factor.There are two thread types that can be configured. GATEWAY is used by a thread pool to call a database client library to establish a DB connection.The default size is 256KB;  many customers raise this to 512KB ( no performance penalty for over-configuring ). This value must be set to 1 MB if Teradata connections are used. SERVER threads are used to run queries.  OBIEE uses recursive algorithms during the analysis of query structures which can consume significant thread stack storage.  It's difficult to provide guidance on a value that depends on data and complexity.  The general notion is to provide more space than you think you need,  "double down" and increase the value if you run out, otherwise inspect the query to understand why it is too complex for the thread stack.  There are protections built into the software to abort a single user query that is too complex, but the algorithms don't cover all situations.256 KB  The default 32-bit stack size.  
Many customers increased this to 512KB on 32-bit.  A 64-bit server is very likely to crash with this value;  the stack contains mostly register values, which are twice as big.512 KB  The documented 64-bit default.  Some early releases of obiee didn't set this correctly, resulting in 256KB stacks.1 MB  The recommended 64-bit setting.  If your system only ever uses 512KB of stack space, there is no performance penalty for using 1MB stack size.2 MB  Many large customers use this value for safety.  No performance penalty.nqscheduler does not use the NQSConfig.INI file to set thread stack size.If this process crashes because the thread stack is too small, use this to set 2MB:export OBI_BACKGROUND_STACK_SIZE=2048 Shared libraries are not (shared) When application libraries are loaded at run-time, AIX makes a decision on whether to load the libraries in a "public" memory segment.  If the filesystem library permissions do not have the "Read-Other" permission bit, AIX loads the library into private process memory with two significant side-effects:* The libraries reduce the heap storage available.      Might be significant in 32-bit processes;  irrelevant in 64-bit processes.* Library code is loaded into multiple real pages for execution;  one copy for each process.Multiple execution images is a significant issue for both 32- and 64-bit processes.The "real memory pages" saved by using public memory segments is a minor concern.  Today's machines typically have plenty of real memory.The real problem with private copies of libraries is that they consume processor cache blocks, which are limited.   The same library instructions executing in different real pages will cause memory delays as the i-cache ( instruction cache 128KB blocks) are refreshed from real memory.   Performance loss because instructions are delayed is something that is difficult to measure without access to low-level cache fault data.   The machine just appears to be running slowly for no observable reason.This is an easy problem to detect, and an easy problem to correct.Detection:  "genld -l" AIX command produces a list of the libraries used by each process and the AIX memory address where they are loaded.32-bit public segment is 13 ( "dxxxxxxx" ).   private segments are 2-a.64-bit public segment is 9 ( "9xxxxxxxxxxxxxxx") ; private segment is 8.genld -l | grep -v ' d| 9' | sort +2provides a list of privately loaded libraries. Repair: chmod o+r <libname>AIX shared libraries will have a suffix of ".so" or ".a".Another technique is to change all libraries in a selected directory to repair those that might not be currently loaded.   The usual directories that need repair are obiee code, httpd code and plugins, database client libraries and java.chmod o+r /shr/dir/*.a /shr/dir/*.so Configure your system for diagnosticsProduction systems shouldn't crash, and yet bad things happen to good software.If obiee software crashes and produces a core, you should configure your system for reliable transfer of the failing conditions to Oracle Tech Support.  Here's what we need to be able to diagnose a core file from your system.* fullcore enabled. chdev -lsys0 -a fullcore=true* core naming enabled. chcore -n on -d* ulimit must not truncate core. see item 3.* pstack.sh is used to capture core documentation.* obidoc is used to capture current AIX configuration.* snapcore  AIX utility captures core and libraries. Use the proper syntax. $ snapcore -r corename executable-fullpath   /tmp/snapcore will contain the .pax.Z output file.  
It is compressed.* If cores are directed to a common directory, ensure obiee userid can write to the directory.  ( chcore -p /cores -d ; chmod 777 /cores )The filesystem must have sufficient space to hold a crashing obiee application.Use:  df -k  Check the "Free" column ( not "% Used" )  8388608 is 8GB. Disable Oracle Client Library signal handlingThe Oracle DB Client Library is frequently distributed with the sqlplus development kit.By default, the library enables a signal handler, which will document a call stack if the application crashes.   The signal handler is not needed, and definitely disruptive to obiee diagnostics.   It needs to be disabled.   sqlnet.ora is typically located at:   $ORACLE_HOME/network/admin/sqlnet.oraAdd this line at the top of the file:   DIAG_SIGHANDLER_ENABLED=FALSE Disable async query in the RPD connection pool.This might be an obiee 10.1.3.4 issue only ( still checking  )."async query" must be disabled in the connection pools.It was designed to enable query cancellation to a database, and turned out to have too many edge conditions in normal communication that produced random corruption of data and crashes.  Please ensure it is turned off in the RPD. Check AIX error report (errpt).Errors external to obiee applications can trigger crashes.  $ /bin/errpt -aHardware errors ( firmware, adapters, disks ) should be reported to IBM support.All application core files are recorded by AIX;  the most recent ones are listed first. Reserved for something important to say.

    Read the article

  • Nashorn in the Twitterverse, Continued

    - by jlaskey
    After doing the Twitter example, it seemed reasonable to try graphing the result with JavaFX.  At this time the Nashorn project doesn't have an JavaFX shell, so we have to go through some hoops to create an JavaFX application.  I thought showing you some of those hoops might give you some idea about what you can do mixing Nashorn and Java (we'll add a JavaFX shell to the todo list.) First, let's look at the meat of the application.  Here is the repackaged version of the original twitter example. var twitter4j      = Packages.twitter4j; var TwitterFactory = twitter4j.TwitterFactory; var Query          = twitter4j.Query; function getTrendingData() {     var twitter = new TwitterFactory().instance;     var query   = new Query("nashorn OR nashornjs");     query.since("2012-11-21");     query.count = 100;     var data = {};     do {         var result = twitter.search(query);         var tweets = result.tweets;         for each (tweet in tweets) {             var date = tweet.createdAt;             var key = (1900 + date.year) + "/" +                       (1 + date.month) + "/" +                       date.date;             data[key] = (data[key] || 0) + 1;         }     } while (query = result.nextQuery());     return data; } Instead of just printing out tweets, getTrendingData tallies "tweets per date" during the sample period (since "2012-11-21", the date "New Project: Nashorn" was posted.)   getTrendingData then returns the resulting tally object. Next, use JavaFX BarChart to display that data. var javafx         = Packages.javafx; var Stage          = javafx.stage.Stage var Scene          = javafx.scene.Scene; var Group          = javafx.scene.Group; var Chart          = javafx.scene.chart.Chart; var FXCollections  = javafx.collections.FXCollections; var ObservableList = javafx.collections.ObservableList; var CategoryAxis   = javafx.scene.chart.CategoryAxis; var NumberAxis     = javafx.scene.chart.NumberAxis; var BarChart       = javafx.scene.chart.BarChart; var XYChart        = javafx.scene.chart.XYChart; var Series         = XYChart.Series; var Data           = XYChart.Data; function graph(stage, data) {     var root = new Group();     stage.scene = new Scene(root);     var dates = Object.keys(data);     var xAxis = new CategoryAxis();     xAxis.categories = FXCollections.observableArrayList(dates);     var yAxis = new NumberAxis("Tweets", 0.0, 200.0, 50.0);     var series = FXCollections.observableArrayList();     for (var date in data) {         series.add(new Data(date, data[date]));     }     var tweets = new Series("Tweets", series);     var barChartData = FXCollections.observableArrayList(tweets);     var chart = new BarChart(xAxis, yAxis, barChartData, 25.0);     root.children.add(chart); } I should point out that there is a lot of subtlety going on in the background.  For example; stage.scene = new Scene(root) is equivalent to stage.setScene(new Scene(root)). If Nashorn can't find a property (scene), then it searches (via Dynalink) for the Java Beans equivalent (setScene.)  Also note, that Nashorn is magically handling the generic class FXCollections.  Finally,  with the call to observableArrayList(dates), Nashorn is automatically converting the JavaScript array dates to a Java collection.  It really is hard to identify which objects are JavaScript and which are Java.  Does it really matter? Okay, with the meat out of the way, let's talk about the hoops. When working with JavaFX, you start with a main subclass of javafx.application.Application.  
This class handles the initialization of the JavaFX libraries and the event processing.  This is what I used for this example; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import javafx.application.Application; import javafx.stage.Stage; import javax.script.ScriptEngine; import javax.script.ScriptEngineManager; import javax.script.ScriptException; public class TrendingMain extends Application { private static final ScriptEngineManager MANAGER = new ScriptEngineManager(); private final ScriptEngine engine = MANAGER.getEngineByName("nashorn"); private Trending trending; public static void main(String[] args) { launch(args); } @Override public void start(Stage stage) throws Exception { trending = (Trending) load("Trending.js"); trending.start(stage); } @Override public void stop() throws Exception { trending.stop(); } private Object load(String script) throws IOException, ScriptException { try (final InputStream is = TrendingMain.class.getResourceAsStream(script)) { return engine.eval(new InputStreamReader(is, "utf-8")); } } } To initialize Nashorn, we use JSR-223's javax.script.  private static final ScriptEngineManager MANAGER = new ScriptEngineManager(); private final ScriptEngine engine = MANAGER.getEngineByName("nashorn"); This code sets up an instance of the Nashorn engine for evaluating scripts. The  load method reads a script into memory and then gets engine to eval that script.  Note, that load also returns the result of the eval. Now for the fun part.  There are several different approaches we could use to communicate between the Java main and the script.  In this example we'll use a Java interface.  The JavaFX main needs to do at least start and stop, so the following will suffice as an interface; public interface Trending {     public void start(Stage stage) throws Exception;     public void stop() throws Exception; } At the end of the example's script we add; (function newTrending() {     return new Packages.Trending() {         start: function(stage) {             var data = getTrendingData();             graph(stage, data);             stage.show();         },         stop: function() {         }     } })(); which instantiates a new subclass instance of Trending and overrides the start and stop methods.  The result of this function call is what is returned to main via the eval. trending = (Trending) load("Trending.js"); To recap, the script Trending.js contains functions getTrendingData, graph and newTrending, plus the call at the end to newTrending.  Back in the Java code, we cast the result of the eval (call to newTrending) to Trending, thus, we end up with an object that we can then use to call back into the script.  trending.start(stage); Voila. ?

    Read the article

  • Speed up SQL Server queries with PREFETCH

    - by Akshay Deep Lamba
    Problem The SAN data volume has a throughput capacity of 400MB/sec; however my query is still running slow and it is waiting on I/O (PAGEIOLATCH_SH). Windows Performance Monitor shows data volume speed of 4MB/sec. Where is the problem and how can I find the problem? Solution This is another summary of a great article published by R. Meyyappan at www.sqlworkshops.com.  In my opinion, this is the first article that highlights and explains with working examples how PREFETCH determines the performance of a Nested Loop join.  First of all, I just want to recall that Prefetch is a mechanism with which SQL Server can fire up many I/O requests in parallel for a Nested Loop join. When SQL Server executes a Nested Loop join, it may or may not enable Prefetch accordingly to the number of rows in the outer table. If the number of rows in the outer table is greater than 25 then SQL will enable and use Prefetch to speed up query performance, but it will not if it is less than 25 rows. In this section we are going to see different scenarios where prefetch is automatically enabled or disabled. These examples only use two tables RegionalOrder and Orders.  If you want to create the sample tables and sample data, please visit this site www.sqlworkshops.com. The breakdown of the data in the RegionalOrders table is shown below and the Orders table contains about 6 million rows. In this first example, I am creating a stored procedure against two tables and then execute the stored procedure.  Before running the stored proceudre, I am going to include the actual execution plan. --Example provided by www.sqlworkshops.com --Create procedure that pulls orders based on City --Do not forget to include the actual execution plan CREATE PROC RegionalOrdersProc @City CHAR(20) AS BEGIN DECLARE @OrderID INT, @OrderDetails CHAR(200) SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails       FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)       WHERE City = @City END GO SET STATISTICS time ON GO --Example provided by www.sqlworkshops.com --Execute the procedure with parameter SmallCity1 EXEC RegionalOrdersProc 'SmallCity1' GO After running the stored procedure, if we right click on the Clustered Index Scan and click Properties we can see the Estimated Numbers of Rows is 24.    If we right click on Nested Loops and click Properties we do not see Prefetch, because it is disabled. This behavior was expected, because the number of rows containing the value ‘SmallCity1’ in the outer table is less than 25.   Now, if I run the same procedure with parameter ‘BigCity’ will Prefetch be enabled? --Example provided by www.sqlworkshops.com --Execute the procedure with parameter BigCity --We are using cached plan EXEC RegionalOrdersProc 'BigCity' GO As we can see from the below screenshot, prefetch is not enabled and the query takes around 7 seconds to execute. This is because the query used the cached plan from ‘SmallCity1’ that had prefetch disabled. Please note that even if we have 999 rows for ‘BigCity’ the Estimated Numbers of Rows is still 24.   Finally, let’s clear the procedure cache to trigger a new optimization and execute the procedure again. DBCC freeproccache GO EXEC RegionalOrdersProc 'BigCity' GO This time, our procedure runs under a second, Prefetch is enabled and the Estimated Number of Rows is 999.   The RegionalOrdersProc can be optimized by using the below example where we are using an optimizer hint. I have also shown some other hints that could be used as well. 
--Example provided by www.sqlworkshops.com --You can fix the issue by using any of the following --hints --Create procedure that pulls orders based on City DROP PROC RegionalOrdersProc GO CREATE PROC RegionalOrdersProc @City CHAR(20) AS BEGIN DECLARE @OrderID INT, @OrderDetails CHAR(200) SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails       FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)       WHERE City = @City       --Hinting optimizer to use SmallCity2 for estimation       OPTION (optimize FOR (@City = 'SmallCity2'))       --Hinting optimizer to estimate for the currnet parameters       --option (recompile)       --Hinting optimize not to use histogram rather       --density for estimation (average of all 3 cities)       --option (optimize for (@City UNKNOWN))       --option (optimize for UNKNOWN) END GO Conclusion, this tip was mainly aimed at illustrating how Prefetch can speed up query execution and how the different number of rows can trigger this.

    Read the article

  • T-SQL in Chicago – the LobsterPot teams with DataEducation

    - by Rob Farley
    In May, I’ll be in the US. I have board meetings for PASS at the SQLRally event in Dallas, and then I’m going to be spending a bit of time in Chicago. The big news is that while I’m in Chicago (May 14-16), I’m going to teach my “Advanced T-SQL Querying and Reporting: Building Effectiveness” course. This is a course that I’ve been teaching since the 2005 days, and have modified over time for 2008 and 2012. It’s very much my most popular course, and I love teaching it. Let me tell you why. For years, I wrote queries and thought I was good at it. I was a developer. I’d written a lot of C (and other, more fun languages like Prolog and Lisp) at university, and then got into the ‘real world’ and coded in VB, PL/SQL, and so on through to C#, and saw SQL (whichever database system it was) as just a way of getting the data back. I could write a query to return just about whatever data I wanted, and that was good. I was better at it than the people around me, and that helped. (It didn’t help my progression into management, then it just became a frustration, but for the most part, it was good to know that I was good at this particular thing.) But then I discovered the other side of querying – the execution plan. I started to learn about the translation from what I’d written into the plan, and this impacted my query-writing significantly. I look back at the queries I wrote before I understood this, and shudder. I wrote queries that were correct, but often a long way from effective. I’d done query tuning, but had largely done it without considering the plan, just inferring what indexes would help. This is not a performance-tuning course. It’s focused on the T-SQL that you read and write. But performance is a significant and recurring theme. Effective T-SQL has to be about performance – it’s the biggest way that a query becomes effective. There are other aspects too though – such as using constructs better. For example – I can write code that modifies data nicely, but if I haven’t learned about the MERGE statement and the way that it can impact things, I’m missing a few tricks. If you’re going to do this course, a good place to be is the situation I was in a few years before I wrote this course. You’re probably comfortable with writing T-SQL queries. You know how to make a SELECT statement do what you need it to, but feel there has to be a better way. You can write JOINs easily, and understand how to use LEFT JOIN to make sure you don’t filter out rows from the first table, but you’re coding blind. The first module I cover is on Query Execution. Take a look at the Course Outline at Data Education’s website. The first part of the first module is on the components of a SELECT statement (where I make you think harder about GROUP BY than you probably have before), but then we jump straight into Execution Plans. Some stuff on indexes is in there too, as is simplification and SARGability. Some of this is stuff that you may have heard me present on at conferences, but here you have me for three days straight. I’m sure you can imagine that we revisit these topics throughout the rest of the course as well, and you’d be right. In the second and third modules we look at a bunch of other aspects, including some of the T-SQL constructs that lots of people don’t know, and various other things that can help your T-SQL be, well, more effective. I’ve had quite a lot of people do this course and be itching to get back to work even on the first day. 
That’s not a comment about the jokes I tell, but because people want to look at the queries they run. LobsterPot Solutions is thrilled to be partnering with Data Education to bring this training to Chicago. Visit their website to register for the course. @rob_farley

    Read the article

  • Looking under the hood of SSRS

    - by Jim Giercyk
    SSRS is a powerful tool, but there is very little available to measure it’s performance or view the SSRS execution log or catalog in detail.  Here are a few simple queries that will give you insight to the system that you never had before.   ACTIVE REPORTS:  Have you ever seen your SQL Server performance take a nose dive due to a long-running report?  If the SPID is executing under a generic Report ID, or it is a scheduled job, you may have no way to tell which report is killing your server.  Running this query will show you which reports are executing at a given time, and WHO is executing them.   USE ReportServerNative SELECT runningjobs.computername,             runningjobs.requestname,              runningjobs.startdate,             users.username,             Datediff(s,runningjobs.startdate, Getdate()) / 60 AS    'Active Minutes' FROM runningjobs INNER JOIN users ON runningjobs.userid = users.userid ORDER BY runningjobs.startdate               SSRS CATALOG:  We have all asked “What was the last thing that changed”, or better yet, “Who in the world did that!”.  Here is a query that will show all of the reports in your SSRS catalog, when they were created and changed, and by who.           USE ReportServerNative SELECT DISTINCT catalog.PATH,                            catalog.name,                            users.username AS [Created By],                             catalog.creationdate,                            users_1.username AS [Modified By],                            catalog.modifieddate FROM catalog         INNER JOIN users ON catalog.createdbyid = users.userid  INNER JOIN users AS users_1 ON catalog.modifiedbyid = users_1.userid INNER JOIN executionlogstorage ON catalog.itemid = executionlogstorage.reportid WHERE ( catalog.name <> '' )               SSRS EXECUTION LOG:  Sometimes we need to know what was happening on the SSRS report server at a given time in the past.  This query will help you do just that.  You will need to set the timestart and timeend in the WHERE clause to suit your needs.         USE ReportServerNative SELECT catalog.name AS report,        executionlogstorage.username AS [User],        executionlogstorage.timestart,        executionlogstorage.timeend,         Datediff(mi,e.timestart,e.timeend) AS ‘Time In Minutes',        catalog.modifieddate AS [Report Last Modified],        users.username FROM   catalog  (nolock)        INNER JOIN executionlogstorage e (nolock)          ON catalog.itemid = executionlogstorage.reportid        INNER JOIN users (nolock)          ON catalog.modifiedbyid = users.userid WHERE  executionlogstorage.timestart >= Dateadd(s, -1, '03/31/2012')        AND executionlogstorage.timeend <= Dateadd(DAY, 1, '04/02/2012')      LONG RUNNING REPORTS:  This query will show the longest running reports over a given time period.  Note that the “>5” in the WHERE clause sets the report threshold at 5 minutes, so anything that ran less than 5 minutes will not appear in the result set.  Adjust the threshold and start/end times to your liking.  With this information in hand, you can better optimize your system by tweaking the longest running reports first.         
USE ReportServerNative SELECT executionlogstorage.instancename,        catalog.PATH,        catalog.name,        executionlogstorage.username,        executionlogstorage.timestart,        executionlogstorage.timeend,        Datediff(mi, e.timestart, e.timeend) AS 'Minutes',        executionlogstorage.timedataretrieval,        executionlogstorage.timeprocessing,        executionlogstorage.timerendering,        executionlogstorage.[RowCount],        users_1.username        AS createdby,        CONVERT(VARCHAR(10), catalog.creationdate, 101)        AS 'Creation Date',        users.username        AS modifiedby,        CONVERT(VARCHAR(10), catalog.modifieddate, 101)        AS 'Modified Date' FROM   executionlogstorage e         INNER JOIN catalog          ON executionlogstorage.reportid = catalog.itemid        INNER JOIN users          ON catalog.modifiedbyid = users.userid        INNER JOIN users AS users_1          ON catalog.createdbyid = users_1.userid WHERE  ( e.timestart > '03/31/2012' )        AND ( e.timestart <= '04/02/2012' )        AND  Datediff(mi, e.timestart, e.timeend) > 5        AND catalog.name <> '' ORDER  BY 'Minutes' DESC        I have used these queries to build SSRS reports that I can refer to quickly, and export to Excel if I need to report or quantify my findings.  I encourage you to look at the data in the ReportServerNative database on your report server to understand the queries and create some of your own.  For instance, you may want a query to determine which reports are using which shared data sources.  Work smarter, not harder!

    Read the article

  • SEO non-English domain name advice

    - by Dominykas Mostauskis
    I'm starting a website, that is meant for a non-English region, using an alphabet that is a bit different than that of English. Current plan is as follows. The website name, and the domain name, will be in the local language (not English); however, domain name will be spelled in the English alphabet, while the website's title will be the same word(s), but spelled properly with accents. E.g.: 'www.litterat.fr' and 'Littérat'. Does the difference between domain name and website name character use influence the site's SEO? Is it better, SEO-wise, to choose a name that can be spelled the same way in the English alphabet? From my experience, when searching online, invariably, the English alphabet is used, no matter the language, so people will still be searching 'litterat' (without accents and such). Edit: To sum up: Things have been said about IDN (Internationalized domain name). To make it simple, they are second-level domain names that contain language specific characters (LSP)(e.g. www.café.fr). Here you can check what top-level domains support what LSPs. Check initall's answer for more info on using LSPs in paths and queries. To answer my question about how and if search engines relate keywords spelled with and without language specific characters: Google can potentially tell that series and séries is the same keyword. However, (most relevant for words that are spelled differently across languages and have different meanings, like séries), for Google to make the connection (or lack thereof) between e and é, it has to deduce two things: Language that you are searching in. Language of your query. You can specify it manually through Advanced search or it guesses it, sometimes. I presume it can guess it wrong too. The more keywords specific to this language you use the higher Google's chance to guess the language. Language of the crawled document, against which the ASCII version of the word will be compared (in this example – series). Again, check initall's answer for how to help Google in understanding what language your document is in. Once it has that it can tell whether or not these two spellings should be treated as the same keyword. Google has to understand that even though you're not using french (in this example) specific characters, you're searching in French. The reason why I used the french word séries in this example, is that it demonstrates this very well. You have it in French and you have it in English without the accent. So if your search query is ambiguous like our series, unless Google has something more to go on, it will presume that there's no correlation between your search and séries in French documents. If you augment your query to series romantiques (try it), Google will understand that you're searching in French and among your results you'll see séries as well. But this does not always work, you should test it out with your keywords first. For example, if you search series francaises, it will associate francaises with françaises, but it will not associate series with séries. It depends on the words. Note: worth stressing that this problem is very relevant to words that, written in plain ASCII, might have some other meanings in other languages, it is less relevant to words that can be, by a distinct margin, just some one language. Tip: I've noticed that sometimes even if my non-accented search query doesn't get associated with the properly spelled word in a document (especially if it's the title or an important keyword in the doc), it still comes up. 
I followed the link, did a Ctrl-F search for my non-accented search query and found nothing, then checked the meta-tags in the source and you had the page's title in both accented and non-accented forms. So if you have meta-tags that can be spelled with language specific characters and without, put in both. Footnote: I hope this helps. If you have anything to add or correct, go ahead.

    Read the article

  • PASS Summit 2011 &ndash; Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011.  I had really wanted to attend Gail Shaw’s (blog|twitter) and Grant Fritchey’s (blog|twitter) pre-conference seminar “All About Execution Plans” on Monday, but that would have meant flying out on Sunday which I couldn’t do.  On Tuesday, I attended Allan Hirt’s (blog|twitter) pre-conference seminar entitled “A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups”.  Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012.  Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD.  Hmpf! On Wednesday, I attended Gail Shaw’s “Bad Plan! Sit!”, Andrew Kelly’s (blog|twitter) “SQL 2008 Query Statistics”, Dan Jones’ (blog|twitter) “Improving your PowerShell Productivity”, and Brent Ozar’s (blog|twitter) “BLITZ! The SQL – More One Hour SQL Server Takeovers”.  In Gail’s session, she went over how to fix bad plans and bad query patterns.  Update your stale statistics! How to fix bad plans Use local variables – optimizer can’t sniff it, so it’ll optimize for “average” value Use RECOMPILE (at the query or stored procedure level) – CPU hit OPTIMIZE FOR hint – most common value you’ll pass How to fix bad query patterns Don’t use them – ha! Catch-all queries Use dynamic SQL OPTION (RECOMPILE) Multiple execution paths Split into multiple stored procedures OPTION (RECOMPILE) Modifying parameter values Use local variables Split into outer and inner procedure OPTION (RECOMPILE) She also went into “last resort” and “very last resort” options, but those are risky unless you know what you are doing.  For the average Joe, she wouldn’t recommend these.  Examples are query hints and plan guides. While I enjoyed Andrew’s session, I didn’t take any notes as it was familiar material.  Andrew is a great speaker though, and I’d highly recommend attending his sessions in the future. Next up was Dan’s PowerShell session.  I need to look into profiles, manifests, function modules, and function import scripts more as I just didn’t quite grasp these concepts.  I am attending a PowerShell training class at the end of November, so maybe that’ll help clear it up.  I really enjoyed the Excel integration demo.  It was very cool watching PowerShell build the spreadsheet in real-time.  I must look into this more!  On a side note, I am jealous of Dan’s hair.  Fabulous hair! Brent’s session showed us how to quickly gather information about a server that you will be taking over database administration duties for.  He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz.  I can’t wait to use this at my work even on systems where I’ve been the primary DBA for years, maybe there’s something I’ve overlooked.  We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out.  He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don’t have to constantly update it when he makes a change.  I think I’ll utilize his update code for some other challenges that we face at my work.

    Read the article

  • How views are changing in future versions of SQL

    - by Rob Farley
    April is here, and this weekend, SQL v11.0 (previous known as Denali, now known as SQL Server 2012) reaches general availability. And so I thought I’d share some news about what’s coming next. I didn’t hear this at the MVP Summit earlier this year (where there was lots of NDA information given, but I didn’t go), so I think I’m free to share it. I’ve written before about CTEs being query-scoped views. Well, the actual story goes a bit further, and will continue to develop in future versions. A CTE is a like a “temporary temporary view”, scoped to a single query. Due to globally-scoped temporary objects using a two-hashes naming style, and session-scoped (or ‘local’) temporary objects a one-hash naming style, this query-scoped temporary object uses a cunning zero-hash naming style. We see this implied in Books Online in the CREATE TABLE page, but as we know, temporary views are not yet supported in the SQL Server. However, in a breakaway from ANSI-SQL, Microsoft is moving towards consistency with their naming. We know that a CTE is a “common table expression” – this is proving to be a more strategic than you may have appreciated. Within the Microsoft product group, the term “Table Expression” is far more widely used than just CTEs. Anything that can be used in a FROM clause is referred to as a Table Expression, so long as it doesn’t actually store data (which would make it a Table, rather than a Table Expression). You can see this is not just restricted to the product group by doing an internet search for how the term is used without ‘common’. In the past, Books Online has referred to a view as a “virtual table” (but notice that there is no SQL 2012 version of this page). However, it was generally decided that “virtual table” was a poor name because it wasn’t completely accurate, and it’s typically accepted that virtualisation and SQL is frowned upon. That page I linked to says “or stored query”, which is slightly better, but when the SQL 2012 version of that page is actually published, the line will be changed to read: “A view is a stored table expression (STE)”. This change will be the first of many. During the SQL 2012 R2 release, the keyword VIEW will become deprecated (this will be SQL v11 SP1.5). Three versions later, in SQL 14.5, you will need to be in compatibility mode 140 to allow “CREATE VIEW” to work. Also consistent with Microsoft’s deprecation policy, the execution of any query that refers to an object created as a view (rather than the new “CREATE STE”), will cause a Deprecation Event to fire. This will all be in preparation for the introduction of Single-Column Table Expressions (to be introduced in SQL 17.3 SP6) which will finally shut up those people waiting for a decent implementation of Inline Scalar Functions. And of course, CTEs are “Common” because the Table Expression definition needs to be repeated over and over throughout a stored procedure. ...or so I think I heard at some point. Oh, and congratulations to all the new MVPs on this April 1st. @rob_farley

    Read the article

  • ClickOnce manifest problem

    - by TWith2Sugars
    We are currently deploying a WPF 4 app via click once and there is a scenario when the installation fails. If the user does not have .Net 4.0 Full install and attempts to install our app the framework installs fine but the app fails to install. If we re-run the installation again the app installs fine. Here is a copy of the log: PLATFORM VERSION INFO Windows : 6.1.7600.0 (Win32NT) Common Language Runtime : 2.0.50727.4927 System.Deployment.dll : 2.0.50727.4927 (NetFXspW7.050727-4900) mscorwks.dll : 2.0.50727.4927 (NetFXspW7.050727-4900) dfdll.dll : 2.0.50727.4927 (NetFXspW7.050727-4900) dfshim.dll : 4.0.31106.0 (Main.031106-0000) SOURCES Deployment url : [URL REMOVED] Server : Apache/2.0.54 Application url : [URL REMOVED] Server : Apache/2.0.54 IDENTITIES Deployment Identity : Graphicly.App.application, Version=0.3.2.0, Culture=neutral, PublicKeyToken=c982228345371fbc, processorArchitecture=msil Application Identity : Graphicly.App.exe, Version=0.3.2.0, Culture=neutral, PublicKeyToken=c982228345371fbc, processorArchitecture=msil, type=win32 APPLICATION SUMMARY * Installable application. ERROR SUMMARY Below is a summary of the errors, details of these errors are listed later in the log. * Dependency Graphicly.WCFClient.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Graphicly.WCFClient.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Microsoft.Surface.Presentation.Design.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Microsoft.Surface.Presentation.Design.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency GalaSoft.MvvmLight.WPF4.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file GalaSoft.MvvmLight.WPF4.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Graphicly.Infrastructure.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Graphicly.Infrastructure.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Graphicly.AutoUpdater.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Graphicly.AutoUpdater.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency System.Windows.Interactivity.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file System.Windows.Interactivity.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Microsoft.Surface.Presentation.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Microsoft.Surface.Presentation.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Graphicly.Fonts.dll cannot be processed for patching. 
Following failure messages were detected:
+ Exception occurred loading manifest from file Graphicly.Fonts.dll: the manifest may not be valid or the file could not be opened.
+ Cannot load internal manifest from component file.
* Dependency Graphicly.Reader.dll cannot be processed for patching. Following failure messages were detected:
+ Exception occurred loading manifest from file Graphicly.Reader.dll: the manifest may not be valid or the file could not be opened.
+ Cannot load internal manifest from component file.
* Dependency Microsoft.Surface.Presentation.Generic.dll cannot be processed for patching. Following failure messages were detected:
+ Exception occurred loading manifest from file Microsoft.Surface.Presentation.Generic.dll: the manifest may not be valid or the file could not be opened.
+ Cannot load internal manifest from component file.
* Dependency Graphicly.Controls.dll cannot be processed for patching. Following failure messages were detected:
+ Exception occurred loading manifest from file Graphicly.Controls.dll: the manifest may not be valid or the file could not be opened.
+ Cannot load internal manifest from component file.
* Dependency Graphicly.SocialNetwork.dll cannot be processed for patching. Following failure messages were detected:
+ Exception occurred loading manifest from file Graphicly.SocialNetwork.dll: the manifest may not be valid or the file could not be opened.
+ Cannot load internal manifest from component file.
* Dependency Graphicly.Archive.dll cannot be processed for patching. Following failure messages were detected:
+ Exception occurred loading manifest from file Graphicly.Archive.dll: the manifest may not be valid or the file could not be opened.
+ Cannot load internal manifest from component file.
* Dependency Graphicly.App.exe cannot be processed for patching. Following failure messages were detected:
+ Exception occurred loading manifest from file Graphicly.App.exe: the manifest may not be valid or the file could not be opened.
+ Cannot load internal manifest from component file.
* Dependency GalaSoft.MvvmLight.Extras.WPF4.dll cannot be processed for patching. Following failure messages were detected:
+ Exception occurred loading manifest from file GalaSoft.MvvmLight.Extras.WPF4.dll: the manifest may not be valid or the file could not be opened.
+ Cannot load internal manifest from component file.
* Activation of [URL REMOVED] resulted in exception. Following failure messages were detected:
+ Exception occurred loading manifest from file GalaSoft.MvvmLight.Extras.WPF4.dll: the manifest may not be valid or the file could not be opened.
+ Cannot load internal manifest from component file.

COMPONENT STORE TRANSACTION FAILURE SUMMARY
No transaction error was detected.

WARNINGS
* The file named Microsoft.Windows.Design.Extensibility.dll does not have a hash specified in the manifest. Hash validation will be ignored.
* The file named Ionic.Zip.Reduced.dll does not have a hash specified in the manifest. Hash validation will be ignored.
* The file named Newtonsoft.Json.dll does not have a hash specified in the manifest. Hash validation will be ignored.
* The file named Microsoft.WindowsAzure.StorageClient.dll does not have a hash specified in the manifest. Hash validation will be ignored.
* The file named Dimebrain.TweetSharp.dll does not have a hash specified in the manifest. Hash validation will be ignored.
* The file named Microsoft.Windows.Design.Interaction.dll does not have a hash specified in the manifest. Hash validation will be ignored.
* The file named HtmlAgilityPack.dll does not have a hash specified in the manifest. Hash validation will be ignored.
* The file named Facebook.dll does not have a hash specified in the manifest. Hash validation will be ignored.

OPERATION PROGRESS STATUS
* [20/05/2010 09:17:33] : Activation of [URL REMOVED] has started.
* [20/05/2010 09:17:38] : Processing of deployment manifest has successfully completed.
* [20/05/2010 09:17:38] : Installation of the application has started.
* [20/05/2010 09:17:39] : Processing of application manifest has successfully completed.
* [20/05/2010 09:17:40] : Request of trust and detection of platform is complete.

ERROR DETAILS
Following errors were detected during this operation.
* [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file Graphicly.WCFClient.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file Microsoft.Surface.Presentation.Design.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file GalaSoft.MvvmLight.WPF4.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file Graphicly.Infrastructure.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file Graphicly.AutoUpdater.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file System.Windows.Interactivity.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file Microsoft.Surface.Presentation.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file Graphicly.Fonts.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file Graphicly.Reader.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file Microsoft.Surface.Presentation.Generic.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:41] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file Graphicly.Controls.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:41] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file Graphicly.SocialNetwork.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:41] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file Graphicly.Archive.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:41] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file Graphicly.App.exe: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:41] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file GalaSoft.MvvmLight.Extras.WPF4.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:
* [20/05/2010 09:17:41] System.Deployment.Application.InvalidDeploymentException (ManifestLoad)
- Exception occurred loading manifest from file GalaSoft.MvvmLight.Extras.WPF4.dll: the manifest may not be valid or the file could not be opened.
- Source: System.Deployment
- Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.DownloadManager.ProcessDownloadedFile(Object sender, DownloadEventArgs e) at System.Deployment.Application.FileDownloader.DownloadModifiedEventHandler.Invoke(Object sender, DownloadEventArgs e) at System.Deployment.Application.FileDownloader.PatchSingleFile(DownloadQueueItem item, Hashtable dependencyTable) at System.Deployment.Application.FileDownloader.PatchFiles(SubscriptionState subState) at System.Deployment.Application.FileDownloader.Download(SubscriptionState subState) at System.Deployment.Application.DownloadManager.DownloadDependencies(SubscriptionState subState, AssemblyManifest deployManifest, AssemblyManifest appManifest, Uri sourceUriBase, String targetDirectory, String group, IDownloadNotification notification, DownloadOptions options) at System.Deployment.Application.ApplicationActivator.DownloadApplication(SubscriptionState subState, ActivationDescription actDesc, Int64 transactionId, TempDirectory& downloadTemp) at System.Deployment.Application.ApplicationActivator.InstallApplication(SubscriptionState& subState, ActivationDescription actDesc) at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl) at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)
--- Inner Exception ---
System.Deployment.Application.DeploymentException (InvalidManifest)
- Cannot load internal manifest from component file.
- Source:
- Stack trace:

COMPONENT STORE TRANSACTION DETAILS
No transaction information is available.

I'm baffled. Any ideas what this could be? Cheers Tony

    Read the article

  • cx_Oracle.DatabaseError: ORA-01036: illegal variable name/number

    - by Joao Figueiredo
    I have a cron-scheduled query that is failing with:

        File "./run_ora_query.py", line 69, in db_lookup
            cursor.execute(query, dict(time_key=time_key))
        cx_Oracle.DatabaseError: ORA-01036: illegal variable name/number

    where:

        >>> dict(time_key=time_key)
        {'time_key': '12/10/2012 19:12:00'}

    I'm using a .yaml file to update the last time_key after each run of the query; the relevant parameters are:

        {query: 'select session_mode, inst_id, user_name, schema_name, os_user, process_id, process_mb_use, process_name, to_char(datet,''dd-mm-yyyy hh24:mi'') as DATETIME from os_admin.mem_usage where data > TO_DATE(:time_key,''dd-mm-yyyy hh24:mi:ss'') order by datet, inst_id, os_user',
         time_key: '12/10/2012 19:12:00'}

    What is causing this error?

    Read the article

  • I get the exception org.hibernate.MappingException: No Dialect mapping for JDBC type: -9

    - by ramesh m
    I am using Hibernate and wrote a native SQL query. The same query executes fine at the SQL Server command prompt:

        try {
            session = HibernateUtil.getInstance().getSession();
            transaction = session.beginTransaction();
            SQLQuery query = session.createSQLQuery(
                "SELECT AP.PROJECT_NAME, AP.SKILLSET, PA.START_DATE, PA.END_DATE, RS.EMPLOYEE_ID, " +
                "RS.EMPLOYEE_NAME, RS.REPORTING_PM " +
                "FROM RESOURCE_MASTER RS, SHARED_PROPOSAL S, ACTUAL_PROPOSAL AP, PROJECT_APPROVED PA, PROJECT_ALLOCATION PL " +
                "WHERE RS.EMPLOYEE_ID = PL.EMPLOYEE_ID AND PA.PROJECT_ID = PL.PROJECT_ID " +
                "AND PA.SHARED_PROPOSAL_ID = S.SHARED_PROPOSAL_ID AND S.ACTUAL_PROPOSAL_ID = AP.ACTUAL_PROPOSAL_ID");
            // Each row of a native query comes back as an Object[] of the selected columns.
            // (The posted code iterated over an undefined 'arrayList' variable; iterating
            // the rows returned by query.list() is presumably what was intended.)
            List<Object[]> rows = query.list();
            for (Object[] row : rows) {
                System.out.println(row[0]); // e.g. PROJECT_NAME
            }
            logger.info("In find All searchDeveloper");
        } catch (Exception exception) {
            throw new PPAMException("Contact admin", "Problem retrieving resource master list", exception);
        }

    When I run it through Hibernate, I get the exception org.hibernate.MappingException: No Dialect mapping for JDBC type: -9. I mapped seven tables; if I remove the ACTUAL_PROPOSAL AP table, the query executes correctly. Please help me.

    Read the article

  • Use ASP.NET 4 Browser Definitions with ASP.NET 3.5

    - by Stephen Walther
    We updated the browser definition files included with ASP.NET 4 to include information on recent browsers and devices such as Google Chrome and the iPhone. You can use these browser definition files with earlier versions of ASP.NET such as ASP.NET 3.5. The updated browser definition files, and instructions for installing them, can be found here: http://aspnet.codeplex.com/releases/view/41420

    The changes in the browser definition files can cause backwards-compatibility issues when you upgrade an ASP.NET 3.5 web application to ASP.NET 4. If you encounter compatibility issues, you can install the old browser definition files in your ASP.NET 4 application. The old browser definition files are included in the download file referenced above.

    What's New in the ASP.NET 4 Browser Definition Files

    The complete set of browsers supported by the new ASP.NET 4 browser definition files is represented by the following figure:

    If you look carefully at the figure, you'll notice that we added browser definitions for several types of recent browsers such as Internet Explorer 8, Firefox 3.5, Google Chrome, Opera 10, and Safari 4. Furthermore, notice that we now include browser definitions for several of the most popular mobile devices: BlackBerry, IPhone, IPod, and Windows Mobile (IEMobile). The mobile devices appear in the figure with a purple background color. To improve performance, we removed a whole lot of outdated browser definitions for old cell phones and mobile devices. We also cleaned up the information contained in the browser files. Here are some of the browser features that you can detect:

    Are you a mobile device? <%=Request.Browser.IsMobileDevice %>
    Are you an IPhone? <%=Request.Browser.MobileDeviceModel == "IPhone" %>
    What version of JavaScript do you support? <%=Request.Browser["javascriptversion"] %>
    What layout engine do you use? <%=Request.Browser["layoutEngine"] %>

    Here's what you would get if you displayed the value of these properties using Internet Explorer 8:
    Here's what you get when you use Google Chrome:

    Testing Browser Settings

    When working with browser definition files, it is useful to have some way to test the capability information returned when you request a page with different browsers. You can use the following method to return the HttpBrowserCapabilities that corresponds to a particular user agent string and set of browser headers:

        public HttpBrowserCapabilities GetBrowserCapabilities(string userAgent, NameValueCollection headers)
        {
            HttpBrowserCapabilities browserCaps = new HttpBrowserCapabilities();
            Hashtable hashtable = new Hashtable(180, StringComparer.OrdinalIgnoreCase);
            hashtable[string.Empty] = userAgent; // The actual method uses client target
            browserCaps.Capabilities = hashtable;
            var capsFactory = new System.Web.Configuration.BrowserCapabilitiesFactory();
            capsFactory.ConfigureBrowserCapabilities(headers, browserCaps);
            capsFactory.ConfigureCustomCapabilities(headers, browserCaps);
            return browserCaps;
        }

    At the end of this blog entry, there is a link to download a simple Visual Studio 2008 project – named Browser Definition Test – that uses this method to display capability information for arbitrary user agent strings. For example, if you enter the user agent string for an iPhone then you get the results in the following figure:
    I got the iPhone user-agent string from the comments in the iphone.browser file.

    Enumerating Browser Definitions

    Someone asked in the comments whether or not there is a way to enumerate all of the browser definitions. You can do this if you are willing to use a little reflection and read a protected property. The browser definition files in the config\browsers folder get parsed into a class named BrowserCapabilitiesFactory. After you run the aspnet_regbrowsers tool, you can see the source for this class in the config\browsers folder by opening a file named BrowserCapsFactory.cs. The BrowserCapabilitiesFactoryBase class has a protected property named BrowserElements that represents a Hashtable of all of the browser definitions. Here's how you can read this protected property and display the ID for all of the browser definitions:

        var propInfo = typeof(BrowserCapabilitiesFactory).GetProperty("BrowserElements",
            BindingFlags.NonPublic | BindingFlags.Instance);
        Hashtable browserDefinitions = (Hashtable)propInfo.GetValue(new BrowserCapabilitiesFactory(), null);
        foreach (var key in browserDefinitions.Keys)
        {
            Response.Write("" + key);
        }

    If you run this code using Visual Studio 2008 then you get the following results: you get a huge number of outdated browsers and devices. In all, 449 browser definitions are listed. If you run this code using Visual Studio 2010 then you get the following results: all the old browsers and devices have been removed and you get only 19 browser definitions.

    Conclusion

    The updated browser definition files included in ASP.NET 4 provide more accurate information for recent browsers and devices. If you would like to test the new browser definitions with different user-agent strings then I recommend that you download the Browser Definition Test project: Browser Definition Test Project
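
    As a quick way to exercise the GetBrowserCapabilities helper from the "Testing Browser Settings" section above without the sample project, you can call it directly with a hard-coded user-agent string. A minimal sketch; the user-agent value is only a sample, and a page context (Response) is assumed:

        // Sketch: probe the ASP.NET browser definitions for a sample iPhone user agent.
        var headers = new System.Collections.Specialized.NameValueCollection();
        string userAgent = "Mozilla/5.0 (iPhone; U; CPU iPhone OS 3_0 like Mac OS X) AppleWebKit/528.18"; // sample value
        HttpBrowserCapabilities caps = GetBrowserCapabilities(userAgent, headers);
        Response.Write("IsMobileDevice: " + caps.IsMobileDevice);
        Response.Write(", MobileDeviceModel: " + caps.MobileDeviceModel);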

    Read the article

  • ASP.NET MVC Html.BeginRouteForm renders action with problem

    - by Sadegh
    Hi, I defined this route:

        context.MapRoute("SearchEngineWebSearch",
            "{culture}/{style}/search/web/{query}/{index}/{size}",
            new { controller = "search", action = "web", query = "", index = 0, size = 5 },
            new { index = new UInt32RouteConstraint(), size = new UInt32RouteConstraint() });

    and a form to post parameters to it:

        <% using (Html.BeginRouteForm("SearchEngineWebSearch", FormMethod.Post)) { %>
            <input name="query" type="text" value="<%: ViewData["Query"] %>" class="search-field" />
            <input type="submit" value="Search" class="search-button" />
        <% } %>

    but the form is rendered incorrectly. Why? Thanks in advance ;)
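
    One hedged observation (the question does not say exactly what is wrong with the rendered form): the route defines no defaults for the {culture} and {style} segments, so BeginRouteForm cannot build an action URL unless those values come from the current request or are supplied explicitly. A sketch of passing them by hand, using the BeginRouteForm overload that accepts route values; the segment values shown are placeholders:

        <% using (Html.BeginRouteForm("SearchEngineWebSearch",
               new { culture = "en-US", style = "default" }, // placeholder segment values
               FormMethod.Post)) { %>
            <input name="query" type="text" class="search-field" />
            <input type="submit" value="Search" class="search-button" />
        <% } %>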

    Read the article

  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
    In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations.  Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects.  By introducing parallelism to our declarative model, we add some extra complexity.  This, in turn, adds some extra requirements that must be addressed. In order to illustrate the main differences, and why they exist, let's begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ. LINQ to Objects is mainly built upon a single class: Enumerable.  The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>.  Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent style interface.  This is what allows us to write statements that chain together, and lead to the nice declarative programming model of LINQ:

        double min = collection
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    Other LINQ variants work in a similar fashion.  For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database. PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable.  When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel().  This method takes any IEnumerable<T>, and converts it into a ParallelQuery<T>, the core class for PLINQ.  There is a similar ParallelQuery class for working with non-generic IEnumerable implementations. This brings us to our first subtle, but important difference between PLINQ and LINQ – PLINQ always works upon specific types, which must be explicitly created. Typically, the type you'll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>.  Instead of dealing with an interface, implemented by an unknown class, we're dealing with a specific class type.  This works seamlessly from a usage standpoint – ParallelQuery<T> implements IEnumerable<T>, so you can always "switch back" to an IEnumerable<T>.  The difference only arises at the beginning of our parallelization.  When we're using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().
    There is an important consideration here – AsParallel() does not need to be called on your specific collection, but rather any IEnumerable<T>.  This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning.  This can be useful if you have an operation which will not parallelize well or is not thread safe.  For example, the following is perfectly valid, and similar to our previous examples:

        double min = collection
            .AsParallel()
            .Select(item => item.SomeOperation())
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    However, if SomeOperation() is not thread safe, we could just as easily do:

        double min = collection
            .Select(item => item.SomeOperation())
            .AsParallel()
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    In this case, we're using standard LINQ to Objects for the Select(…) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel. PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects.  If SomeOperation() was thread-safe, but PerformComputation() was not thread-safe, we would need to handle this by using the AsEnumerable() method:

        double min = collection
            .AsParallel()
            .Select(item => item.SomeOperation())
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .AsEnumerable()
            .Min(item => item.PerformComputation());

    Here, we're converting our collection into a ParallelQuery<T>, doing our map operation (the Select(…) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially. This could also be written as two statements, as well, which would allow us to use the language integrated syntax for the first portion:

        var tempCollection = from item in collection.AsParallel()
                             let e = item.SomeOperation()
                             where (e.SomeProperty > 6 && e.SomeProperty < 24)
                             select e;
        double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation());

    This allows us to use the standard LINQ style language integrated query syntax, but control whether it's performed in parallel or serial by adding AsParallel() and AsEnumerable() appropriately. The second important difference between PLINQ and LINQ deals with order preservation.  PLINQ, by default, does not preserve the order of the source collection. This is by design.  In order to process a collection in parallel, the system needs to naturally deal with multiple elements at the same time.  Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary.  Therefore, by default, the system is allowed to completely change the order of your sequence during processing.  If you are doing a standard query operation, this is usually not an issue.  However, there are times when keeping a specific ordering in place is important.  If this is required, you can explicitly request the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method.  This will cause our sequence ordering to be preserved. For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements.
    In LINQ to Objects, our code might look something like:

        // Using IEnumerable<SourceClass> collection
        IEnumerable<ResultClass> results = collection
            .Select(e => e.CreateResult())
            .Take(100);

    If we just converted this to a parallel query naively, like so:

        IEnumerable<ResultClass> results = collection
            .AsParallel()
            .Select(e => e.CreateResult())
            .Take(100);

    We could very easily get a very different, and non-reproducible, set of results, since the ordering of elements in the input collection is not preserved.  To get the same results as our original query, we need to use:

        IEnumerable<ResultClass> results = collection
            .AsParallel()
            .AsOrdered()
            .Select(e => e.CreateResult())
            .Take(100);

    This requests that PLINQ process our sequence in a way that verifies that our resulting collection is ordered as if it were processed serially.  This will cause our query to run slower, since there is overhead involved in maintaining the ordering.  However, in this case, it is required, since the ordering is required for correctness. PLINQ is incredibly useful.  It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we've used previously.  There are some important differences in operation that must be considered, however – it is not a free pass to parallelize everything.  When using PLINQ in order to parallelize your routines declaratively, the same guideline I mentioned before still applies: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
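
    To make the ordering point concrete, here is a small self-contained sketch (the data and workload are made up for illustration, not taken from the post):

        using System;
        using System.Linq;

        class OrderingDemo
        {
            static void Main()
            {
                var source = Enumerable.Range(0, 1000);

                // Without AsOrdered(), PLINQ may interleave partitions, so the
                // first 10 results are not guaranteed to be 0, 2, 4, ..., 18.
                var unordered = source.AsParallel()
                                      .Select(n => n * 2)
                                      .Take(10)
                                      .ToArray();

                // With AsOrdered(), PLINQ preserves the source ordering, so this
                // always yields 0, 2, 4, ..., 18 (at some extra bookkeeping cost).
                var ordered = source.AsParallel()
                                    .AsOrdered()
                                    .Select(n => n * 2)
                                    .Take(10)
                                    .ToArray();

                Console.WriteLine(string.Join(", ", unordered));
                Console.WriteLine(string.Join(", ", ordered));
            }
        }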

    Read the article

  • apache tomcat sermyadmin deployment error

    - by lepricon123
    I am getting the following error when I try to start the application:

        Mar 23, 2010 7:51:09 PM org.apache.catalina.core.ApplicationContext log
        INFO: Set web app root system property: 'sermyadmin-production-2.0.1' = [/usr/local/tomcat6/webapps/sermyadmin/]
        Mar 23, 2010 7:51:09 PM org.apache.catalina.core.ApplicationContext log
        INFO: Initializing log4j from [/usr/local/tomcat6/webapps/sermyadmin/WEB-INF/classes/log4j.properties]
        Mar 23, 2010 7:51:09 PM org.apache.catalina.core.ApplicationContext log
        INFO: Initializing Spring root WebApplicationContext
        Mar 23, 2010 7:51:23 PM org.apache.catalina.core.StandardContext listenerStart
        SEVERE: Exception sending context initialized event to listener instance of class org.codehaus.groovy.grails.web.context.GrailsContextLoaderListener
        org.springframework.beans.factory.access.BootstrapException: Error executing bootstraps; nested exception is org.codehaus.groovy.runtime.InvokerInvocationException: org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query; nested exception is org.hibernate.exception.SQLGrammarException: could not execute query
            at org.codehaus.groovy.grails.web.context.GrailsContextLoader.createWebApplicationContext(GrailsContextLoader.java:66)
            at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199)
            at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45)
            at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3843)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:4350)
            at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
            at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
            at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525)
            at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:829)
            at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:718)
            at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:490)
            at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1147)
            at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
            at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
            at org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
            at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
            at org.apache.catalina.core.StandardService.start(StandardService.java:516)
            at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
            at org.apache.catalina.startup.Catalina.start(Catalina.java:578)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
        Caused by: org.codehaus.groovy.runtime.InvokerInvocationException: org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query; nested exception is org.hibernate.exception.SQLGrammarException: could not execute query
            at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3843)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:4350)
            at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
            at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
            at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525)
            at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:829)
            at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:718)
            at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:490)
            at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1147)
            at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
            at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
            at org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
            at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
            at org.apache.catalina.core.StandardService.start(StandardService.java:516)
            at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
            at org.apache.catalina.startup.Catalina.start(Catalina.java:578)
            ... 2 more
        Caused by: org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query; nested exception is org.hibernate.exception.SQLGrammarException: could not execute query
            at BootStrap$_closure1.doCall(BootStrap.groovy:7)
            ... 20 more
        Caused by: org.hibernate.exception.SQLGrammarException: could not execute query
            ... 21 more
        Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'opensips.jsec_role' doesn't exist
            at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)

    Read the article

  • IQueryable<T> Extension Method not working

    - by Micah
    How can I make an extension method that will work like this?

        public static class Extensions<T>
        {
            public static IQueryable<T> Sort(this IQueryable<T> query, string sortField, SortDirection direction)
            {
                //System.Type dataSourceType = query.GetType();
                //System.Type dataItemType = typeof(object);
                //if (dataSourceType.HasElementType)
                //{
                //    dataItemType = dataSourceType.GetElementType();
                //}
                //else if (dataSourceType.IsGenericType)
                //{
                //    dataItemType = dataSourceType.GetGenericArguments()[0];
                //}
                //var fieldType = dataItemType.GetProperty(sortField);
                if (direction == SortDirection.Ascending)
                    return query.OrderBy(s => s.GetType().GetProperty(sortField)); // note: this orders by a PropertyInfo, not by the property's value
                return query.OrderByDescending(s => s.GetType().GetProperty(sortField));
            }
        }

    Currently that says "Extension methods must be defined in a non-generic static class". How do I do this?
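
    A common way around that compiler error is to make the class non-generic and the method generic, and to build the OrderBy call as an expression tree so the sort key is the property's value (which LINQ providers can also translate). A sketch; it assumes SortDirection is System.Web.UI.WebControls.SortDirection, as in the question, and that sortField names a public property of T:

        using System;
        using System.Linq;
        using System.Linq.Expressions;
        using System.Web.UI.WebControls; // SortDirection, as used in the question

        public static class QueryableSortExtensions
        {
            // Generic method on a non-generic static class, as the compiler error demands.
            public static IQueryable<T> Sort<T>(this IQueryable<T> query, string sortField, SortDirection direction)
            {
                // Build the key selector s => s.<sortField> as an expression tree.
                ParameterExpression param = Expression.Parameter(typeof(T), "s");
                MemberExpression property = Expression.Property(param, sortField);
                LambdaExpression keySelector = Expression.Lambda(property, param);

                string methodName = direction == SortDirection.Ascending ? "OrderBy" : "OrderByDescending";

                // Compose Queryable.OrderBy/OrderByDescending over the existing query expression.
                MethodCallExpression call = Expression.Call(
                    typeof(Queryable), methodName,
                    new[] { typeof(T), property.Type },
                    query.Expression, Expression.Quote(keySelector));

                return query.Provider.CreateQuery<T>(call);
            }
        }

    Usage would then look like query.Sort("Name", SortDirection.Ascending).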

    Read the article

  • SL3/SL4 - Ado.Net Data Services Error during new DataServiceCollection<T>(queryResponse)

    - by Soulhuntre
    Hey all, I have two functions in a SL project (VS2010) that do almost exactly the same thing, yet one throws an error and the other does not. It seems to be related to the projections, but I am unsure about the best way to resolve it. The function that works is:

        public void LoadAllChunksExpandAll(DataHelperReturnHandler handler, string orderby)
        {
            DataServiceCollection<CmsChunk> data = null;
            DataServiceQuery<CmsChunk> theQuery = _dataservice
                .CmsChunks
                .Expand("CmsItemState")
                .AddQueryOption("$orderby", orderby);
            theQuery.BeginExecute(
                delegate(IAsyncResult asyncResult)
                {
                    _callback_dispatcher.BeginInvoke(
                        () =>
                        {
                            try
                            {
                                DataServiceQuery<CmsChunk> query = asyncResult.AsyncState as DataServiceQuery<CmsChunk>;
                                if (query != null)
                                {
                                    // create a tracked DataServiceCollection from the result of the asynchronous query
                                    QueryOperationResponse<CmsChunk> queryResponse = query.EndExecute(asyncResult) as QueryOperationResponse<CmsChunk>;
                                    data = new DataServiceCollection<CmsChunk>(queryResponse);
                                    handler(data);
                                }
                            }
                            catch
                            {
                                handler(data);
                            }
                        });
                },
                theQuery);
        }

    This compiles and runs as expected. A very, very similar function (shown below) fails:

        public void LoadAllPagesExpandAll(DataHelperReturnHandler handler, string orderby)
        {
            DataServiceCollection<CmsPage> data = null;
            DataServiceQuery<CmsPage> theQuery = _dataservice
                .CmsPages
                .Expand("CmsChildPages")
                .Expand("CmsParentPage")
                .Expand("CmsItemState")
                .AddQueryOption("$orderby", orderby);
            theQuery.BeginExecute(
                delegate(IAsyncResult asyncResult)
                {
                    _callback_dispatcher.BeginInvoke(
                        () =>
                        {
                            try
                            {
                                DataServiceQuery<CmsPage> query = asyncResult.AsyncState as DataServiceQuery<CmsPage>;
                                if (query != null)
                                {
                                    // create a tracked DataServiceCollection from the result of the asynchronous query
                                    QueryOperationResponse<CmsPage> queryResponse = query.EndExecute(asyncResult) as QueryOperationResponse<CmsPage>;
                                    data = new DataServiceCollection<CmsPage>(queryResponse);
                                    handler(data);
                                }
                            }
                            catch
                            {
                                handler(data);
                            }
                        });
                },
                theQuery);
        }

    Clearly the issue is the Expand projections that involve a self-referencing relationship (pages can contain other pages). This is under SL4 or SL3 using ADONETDataServices SL3 Update CTP3. I am open to any workaround or pointers to good information; a Google search for the error returns two hits, neither of which I can decipher as particularly helpful. The short error is "An item could not be added to the collection. When items in a DataServiceCollection are tracked by the DataServiceContext, new items cannot be added before items have been loaded into the collection." The full error is:

        System.Reflection.TargetInvocationException was caught
        Message=Exception has been thrown by the target of an invocation.
        StackTrace:
            at System.RuntimeMethodHandle.InvokeMethodFast(IRuntimeMethodInfo method, Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeType typeOwner)
            at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
            at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
            at System.Reflection.MethodBase.Invoke(Object obj, Object[] parameters)
            at System.Data.Services.Client.ClientType.ClientProperty.SetValue(Object instance, Object value, String propertyName, Boolean allowAdd)
            at System.Data.Services.Client.AtomMaterializer.ApplyItemsToCollection(AtomEntry entry, ClientProperty property, IEnumerable items, Uri nextLink, ProjectionPlan continuationPlan)
            at System.Data.Services.Client.AtomMaterializer.ApplyFeedToCollection(AtomEntry entry, ClientProperty property, AtomFeed feed, Boolean includeLinks)
            at System.Data.Services.Client.AtomMaterializer.MaterializeResolvedEntry(AtomEntry entry, Boolean includeLinks)
            at System.Data.Services.Client.AtomMaterializer.Materialize(AtomEntry entry, Type expectedEntryType, Boolean includeLinks)
            at System.Data.Services.Client.AtomMaterializer.DirectMaterializePlan(AtomMaterializer materializer, AtomEntry entry, Type expectedEntryType)
            at System.Data.Services.Client.AtomMaterializerInvoker.DirectMaterializePlan(Object materializer, Object entry, Type expectedEntryType)
            at System.Data.Services.Client.ProjectionPlan.Run(AtomMaterializer materializer, AtomEntry entry, Type expectedType)
            at System.Data.Services.Client.AtomMaterializer.Read()
            at System.Data.Services.Client.MaterializeAtom.MoveNextInternal()
            at System.Data.Services.Client.MaterializeAtom.MoveNext()
            at System.Linq.Enumerable.d_b11.MoveNext()
            at System.Data.Services.Client.DataServiceCollection`1.InternalLoadCollection(IEnumerable`1 items)
            at System.Data.Services.Client.DataServiceCollection`1.StartTracking(DataServiceContext context, IEnumerable`1 items, String entitySet, Func`2 entityChanged, Func`2 collectionChanged)
            at System.Data.Services.Client.DataServiceCollection`1..ctor(DataServiceContext context, IEnumerable`1 items, TrackingMode trackingMode, String entitySetName, Func`2 entityChangedCallback, Func`2 collectionChangedCallback)
            at System.Data.Services.Client.DataServiceCollection`1..ctor(IEnumerable`1 items)
            at Phinli.Dashboard.Silverlight.Helpers.DataHelper.<>c__DisplayClass44.<>c__DisplayClass46.<LoadAllPagesExpandAll>b__43()
        InnerException: System.InvalidOperationException
        Message=An item could not be added to the collection. When items in a DataServiceCollection are tracked by the DataServiceContext, new items cannot be added before items have been loaded into the collection.
        StackTrace:
            at System.Data.Services.Client.DataServiceCollection`1.InsertItem(Int32 index, T item)
            at System.Collections.ObjectModel.Collection`1.Add(T item)
        InnerException:

    Thanks for any help!
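
    One workaround that may be worth trying (a hedged sketch, not a confirmed fix for the self-referencing Expand): the stack trace dies inside StartTracking, so constructing the collection with change tracking disabled, via the DataServiceCollection<T>(IEnumerable<T>, TrackingMode) constructor, sidesteps that code path:

        // Sketch: inside the BeginInvoke callback of LoadAllPagesExpandAll,
        // replace the tracked constructor with a non-tracking one.
        QueryOperationResponse<CmsPage> queryResponse =
            query.EndExecute(asyncResult) as QueryOperationResponse<CmsPage>;
        data = new DataServiceCollection<CmsPage>(queryResponse, TrackingMode.None);
        handler(data);

    Note this gives up automatic change tracking on the returned pages, so any updates would need to be reported to the DataServiceContext manually.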

    Read the article

  • WCF security via message headers

    - by exalted
    I'm trying to implement "some sort of" server-client and zero-config security for a WCF service. The best (and, to me, easiest) solution I found on the web is the one described at http://www.dotnetjack.com/post/Automate-passing-valuable-information-in-WCF-headers.aspx (client side) and http://www.dotnetjack.com/post/Processing-custom-WCF-header-values-at-server-side.aspx (the corresponding server side). Below is my implementation of RequestAuth (described in the first link above):

        using System;
        using System.Diagnostics;
        using System.ServiceModel;
        using System.ServiceModel.Configuration;
        using System.ServiceModel.Dispatcher;
        using System.ServiceModel.Description;
        using System.ServiceModel.Channels;

        namespace AuthLibrary
        {
            /// <summary>
            /// Ref: http://www.dotnetjack.com/post/Automate-passing-valuable-information-in-WCF-headers.aspx
            /// </summary>
            public class RequestAuth : BehaviorExtensionElement, IClientMessageInspector, IEndpointBehavior
            {
                [DebuggerBrowsable(DebuggerBrowsableState.Never)]
                private string headerName = "AuthKey";

                [DebuggerBrowsable(DebuggerBrowsableState.Never)]
                private string headerNamespace = "http://some.url";

                public override Type BehaviorType
                {
                    get { return typeof(RequestAuth); }
                }

                protected override object CreateBehavior()
                {
                    return new RequestAuth();
                }

                #region IClientMessageInspector Members

                // Keeping in mind that I am SENDING something to the server,
                // I only need to implement the BeforeSendRequest method
                public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
                {
                    throw new NotImplementedException();
                }

                public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel)
                {
                    MessageHeader<string> header = new MessageHeader<string>();
                    header.Actor = "Anyone";
                    header.Content = "TopSecretKey";

                    // Creating an untyped header to add to the WCF context
                    MessageHeader unTypedHeader = header.GetUntypedHeader(headerName, headerNamespace);
                    // Add the header to the current request
                    request.Headers.Add(unTypedHeader);
                    return null;
                }

                #endregion

                #region IEndpointBehavior Members

                public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
                {
                    throw new NotImplementedException();
                }

                public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
                {
                    clientRuntime.MessageInspectors.Add(this);
                }

                public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
                {
                    throw new NotImplementedException();
                }

                public void Validate(ServiceEndpoint endpoint)
                {
                    throw new NotImplementedException();
                }

                #endregion
            }
        }

    So first I put this code in my client WinForms application, but then I had problems signing it, because I also had to sign all the third-party references, even though http://msdn.microsoft.com/en-us/library/h4fa028b(v=VS.80).aspx, in the section "What Should Not Be Strong-Named", states:

        In general, you should avoid strong-naming application EXE assemblies. A strongly named application or component cannot reference a weak-named component, so strong-naming an EXE prevents the EXE from referencing weak-named DLLs that are deployed with the application. For this reason, the Visual Studio project system does not strong-name application EXEs. Instead, it strong-names the Application manifest, which internally points to the weak-named application EXE.
    I expected VS to avoid this problem, but I had no luck there; it complained about all the unsigned references. So I created a separate "WCF Service Library" project inside my solution containing only the code above and signed that one. At this point the entire solution compiled fine. And here's my problem: when I fire up the WCF Service Configuration Editor, I can add a new behavior element extension (say, "AuthExtension"), but when I try to add that extension to my endpoint behavior it gives me: Exception has been thrown by the target of an invocation. So I'm stuck here. Any ideas?
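
    For what it's worth, a hedged guess based only on the code above (the inner exception isn't shown): "Exception has been thrown by the target of an invocation" is a reflection TargetInvocationException, and the configuration editor instantiates the behavior via reflection, so any behavior member that throws NotImplementedException will surface exactly this message. A minimal sketch of the change would be to leave the members you don't need as no-ops instead of throwing:

        // Sketch, assuming the RequestAuth class above: members invoked by the
        // WCF runtime and config tooling should be empty rather than throw.
        public void AfterReceiveReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
        {
            // no-op: this inspector only adds a header to outgoing requests
        }

        public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
        {
            // no-op: no binding parameters to contribute
        }

        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
        {
            // no-op: client-side behavior only
        }

        public void Validate(ServiceEndpoint endpoint)
        {
            // no-op: nothing to validate
        }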

    Read the article
