Search Results

Search found 5382 results on 216 pages for 'bloom filter'.

  • ArrayAdapter and android:textFilterEnabled

    - by TiGer
    Hi there, I have created a layout which includes a ListView. The data shown in this ListView isn't complicated: it's mostly an array passed in when the extended Activity is started. Each row consists of an icon and a text, i.e. an ImageView and a TextView. To fill the ListView I use an ArrayAdapter, simply because an array containing all the text items to show is passed in. Now I'd like to be able to filter those items, so I found the android:textFilterEnabled parameter to add to the ListView XML declaration. A search field is shown nicely, but when I enter some letters it doesn't filter; it simply empties the whole list. I found out that this is because the text filter has no idea what it should filter on. So my question is: I know I need to tell the text filter what to filter, and I still have my array filled with the text that should get filtered, so how do I couple the two? I have seen examples extending a CursorAdapter, but again, I don't have a Cursor and I don't want to make calls to a DB. I want to reuse my array of data, and obviously the ArrayAdapter itself, so that the data is still presented decently on screen (i.e. with my ImageView and TextView layout). Thanks in advance for any help/pointers/tips/code.

  • Authorization and jQuery dialog problem

    - by bbrepols
    Hi, I have a little problem with a jQuery dialog for an action that requires a role. In my example, the user can click on a delete button and must confirm the action. In my controller, the Delete action requires a role; if the user is in the required role, the object is deleted. The problem: how do I alert the user if

    * the element was deleted (redirect to the Index view)
    * there was an error (alert with the message)
    * he doesn't have the rights to delete (alert with the message)

    Before using the authorize filter, the delete action returned JSON with a Boolean that indicates whether there was an error, a URL to redirect to on success, and a message to alert on error. As I can't return JSON from my filter, I created another method with the authorize filter that returns a partial view with the confirm content. If the user doesn't have the rights, the filter returns a partial view with an unauthorized exception content. The problem: how to distinguish which partial view was returned. When I create the dialog, I need to know for the buttons function. Thanks!
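
    A common approach here (a minimal sketch, assuming ASP.NET MVC 2 or later and that the dialog posts the delete via AJAX; the class name and the success/message payload shape are illustrative, not from the original post) is to subclass AuthorizeAttribute so that unauthorized AJAX requests receive JSON the dialog can branch on, instead of a partial view:

        // Sketch: let the authorization filter answer AJAX calls with JSON,
        // so the client can distinguish "deleted", "error" and "not allowed".
        public class JsonAuthorizeAttribute : AuthorizeAttribute
        {
            protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
            {
                if (filterContext.HttpContext.Request.IsAjaxRequest())
                {
                    // Hypothetical payload shape; match whatever the dialog expects.
                    filterContext.Result = new JsonResult
                    {
                        Data = new { success = false, message = "You are not allowed to delete this item." },
                        JsonRequestBehavior = JsonRequestBehavior.AllowGet
                    };
                }
                else
                {
                    base.HandleUnauthorizedRequest(filterContext);
                }
            }
        }

    With that in place the Delete action can keep returning its original JSON (success flag, redirect URL, message), and the dialog's success callback only ever has to inspect one JSON shape rather than guessing which partial view came back.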

  • Misalignment in the output Bitmap created from a byte array

    - by Daniel
    I am trying to understand why I have trouble creating a Bitmap from a byte array. I post this after a careful scrutiny of the existing posts about Bitmap creation from byte arrays, like the following: Creating a bitmap from a byte[], Working with Image and Bitmap in c#?, C#: Bitmap Creation using bytes array. My code executes a filter on an 8bppIndexed digital image, writing the pixel values to a byte[] buffer to be converted again (after some processing to manage grey levels) into an 8bppIndexed Bitmap. My input image is a trivial image created by means of specific Perl code: https://www.box.com/shared/zqt46c4pcvmxhc92i7ct. Of course, after executing the filter the output image has lost the first and last rows and the first and last columns, due to the way the filter manages borders, so from the original 256 x 256 image I get a 254 x 254 image. Just to stay focused on the issue, I have commented out the code responsible for executing the filter, so that the operation really performed is an obvious:

        ComputedPixel = InputImage.GetPixel(myColumn, myRow).R;

    I know I should use LockBits and UnlockBits, but I prefer one headache at a time. Anyway, this code should amount to an identity transform, and at the end I use:

        private unsafe void FillOutputImage()
        {
            OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed);

            ColorPalette ncp = OutputImage.Palette;
            for (int i = 0; i < 256; i++)
                ncp.Entries[i] = Color.FromArgb(255, i, i, i);
            OutputImage.Palette = ncp;

            Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows);
            var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
            Marshal.Copy(byteBuffer, 0, data.Scan0, byteBuffer.Length);
            OutputImage.UnlockBits(data);
        }

    The output image I get is the following: https://www.box.com/shared/p6tubyi6dsf7cyregg9e. It is quite clear that I am losing a pixel per row, but I cannot understand why. I have carefully controlled all the parameters: OutputImageCols, OutputImageRows and the byte[] byteBuffer length and content, even writing known values as a way to test. The code is nearly identical to other code posted on Stack Overflow and elsewhere. Could someone help me identify where the problem is? Thanks a lot
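
    The symptom (one pixel of drift per row on a 254-wide image) is exactly what a stride mismatch produces: GDI+ pads each row of an 8bppIndexed bitmap up to a 4-byte boundary, so a 254-pixel row occupies 256 bytes in the locked buffer, while a tightly packed byteBuffer advances only 254 bytes per row. A minimal sketch of a stride-aware copy, assuming byteBuffer holds the rows packed end to end:

        // Copy row by row: source rows are OutputImageCols bytes wide,
        // destination rows start every data.Stride bytes (padded to 4 bytes).
        var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
        for (int row = 0; row < OutputImageRows; row++)
        {
            IntPtr destRow = new IntPtr(data.Scan0.ToInt64() + (long)row * data.Stride);
            Marshal.Copy(byteBuffer,
                         row * OutputImageCols, // start of the packed source row
                         destRow,               // start of the padded destination row
                         OutputImageCols);      // copy only the real pixels
        }
        OutputImage.UnlockBits(data);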

  • Database and logic layer for ASP.NET MVC application

    - by Ismail
    I'm going to start a new project which is going to be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go for MySQL as the database for some reasons, but I'm worried about a few things. I have a good few years of experience working on SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easier to use once you are familiar with it. The first thing is that accessing data should be easy. So I thought I should use LINQ to MySQL, but somewhere I read that it is not directly supported; instead, the MySQL .NET connector adds support for the Entity Framework. I don't know what the pros and cons of that are. I would love to implement the repository pattern, as it allows applying filters in the logic layer rather than in the data access layer. Will that be possible if I use the Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything and use LINQ to SQL on SQL Server. I'm also concerned about performance: someone told me that if we use the Entity Framework it fetches a lot of data and then filters it. Is that right? So the questions basically are:

    - Is LINQ to MySQL possible? If yes, where can I get more details on it?
    - What are the pros and cons of using the Entity Framework with MySQL?
    - Will it be easy to access data using the Entity Framework with MySQL?
    - Will I be able to implement the repository pattern, which allows applying filters in the logic layer rather than the data access layer (when I use the Entity Framework with MySQL)?
    - Does it fetch a lot of data from the database and then apply the filter on it?

    If this sounds like too many questions, then just let me know what you would do (with a considerable reason) in this situation as an experienced person in this area; that should answer my question.
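
    On the performance question in particular: with the Entity Framework (over SQL Server or MySQL alike), a repository that exposes IQueryable<T> lets the logic layer compose filters that are translated into the SQL WHERE clause, so the filtering happens in the database rather than in memory. It only degrades to "fetch everything, then filter" if the query is materialized too early. A minimal sketch (type names are placeholders, not from a specific MySQL provider):

        // Generic repository exposing a composable, deferred query.
        public class Repository<T> where T : class
        {
            private readonly DbContext context;
            public Repository(DbContext context) { this.context = context; }

            // Nothing hits the database until the query is enumerated.
            public IQueryable<T> Query() { return context.Set<T>(); }
        }

        // Logic layer: the Where is translated to SQL and filtered in the DB.
        var cheap = repository.Query().Where(p => p.Price < 10m).ToList();

        // Anti-pattern: ToList() first pulls the whole table, then filters in memory.
        var wasteful = repository.Query().ToList().Where(p => p.Price < 10m).ToList();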

  • emulator browser not connecting to internet

    - by vnshetty
    My emulator browser is not connecting to the internet. How do I do the settings?

        <application android:icon="@drawable/icon" android:label="@string/app_name">
            <activity android:name=".reader" android:label="@string/app_name">
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                    <category android:name="android.intent.category.BROWSABLE" />
                </intent-filter>
                <intent-filter>
                    <action android:name="android.intent.action.VIEW" />
                    <category android:name="android.intent.category.DEFAULT" />
                </intent-filter>
            </activity>
        </application>
        <uses-permission android:name="android.permission.INTERNET" />
        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
        <supports-screens android:smallScreens="true" android:normalScreens="true" android:largeScreens="true" android:anyDensity="true" />
    </manifest>

  • EF Many-to-many dbset.Include in DAL on GenericRepository

    - by Bryant
    I can't get the QueryObjectGraph to add INCLUDE child tables if my life depended on it... what am I missing? Stuck for a third day on something that should be simple :-/

    DAL:

        public abstract class RepositoryBase<T> where T : class
        {
            private MyLPL2Context dataContext;
            private readonly IDbSet<T> dbset;

            protected RepositoryBase(IDatabaseFactory databaseFactory)
            {
                DatabaseFactory = databaseFactory;
                dbset = DataContext.Set<T>();
                DataContext.Configuration.LazyLoadingEnabled = true;
            }

            protected IDatabaseFactory DatabaseFactory { get; private set; }

            protected MyLPL2Context DataContext
            {
                get { return dataContext ?? (dataContext = DatabaseFactory.Get()); }
            }

            public IQueryable<T> QueryObjectGraph(Expression<Func<T, bool>> filter, params string[] children)
            {
                foreach (var child in children)
                {
                    dbset.Include(child);
                }
                return dbset.Where(filter);
            }
            ...

    DAL repositories:

        public interface IBreed_TranslatedSqlRepository : ISqlRepository<Breed_Translated> { }

        public class Breed_TranslatedSqlRepository : RepositoryBase<Breed_Translated>, IBreed_TranslatedSqlRepository
        {
            public Breed_TranslatedSqlRepository(IDatabaseFactory databaseFactory) : base(databaseFactory) { }
        }

    BLL repo:

        public IQueryable<Breed_Translated> QueryObjectGraph(Expression<Func<Breed_Translated, bool>> filter, params string[] children)
        {
            return _r.QueryObjectGraph(filter, children);
        }

    Controller:

        var breeds1 = _breedTranslatedRepository
            .QueryObjectGraph(b => b.Culture == culture, new string[] { "AnimalType_Breed" })
            .ToList();

    I can't get to Breed.AnimalType_Breed.AnimalTypeId. I can drill as far as Breed.AnimalType_Breed, then IntelliSense expects an expression? Clues, if any — DB tables (bold is many-to-many): Breed, Breed_Translated, AnimalType_Breed, AnimalType, ...
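
    The likely culprit is that Include does not modify the query it is called on; it returns a new IQueryable, and the loop above discards that return value, so no child tables are ever included. A hedged rewrite of the same method (string-based Include on IQueryable<T> needs the System.Data.Entity namespace in EF 4.1+):

        public IQueryable<T> QueryObjectGraph(Expression<Func<T, bool>> filter, params string[] children)
        {
            // Include() returns a new query; keep reassigning or the call is lost.
            IQueryable<T> query = dbset;
            foreach (var child in children)
            {
                query = query.Include(child);
            }
            return query.Where(filter);
        }

    Separately, if AnimalType_Breed is a collection navigation property, you can't drill into it directly in IntelliSense; you have to pick an element first (e.g. breed.AnimalType_Breed.First().AnimalTypeId), which would explain why it "expects an expression".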

  • Force sending a user to custom QuerySet.

    - by Jack M.
    I'm trying to secure an application so that users can only see objects which are assigned to them. I've got a custom QuerySet which works for this, but I'm trying to find a way to force the use of this additional functionality. Here is my model:

        class Inquiry(models.Model):
            ts = models.DateTimeField(auto_now_add=True)
            assigned_to_user = models.ForeignKey(User, blank=True, null=True, related_name="assigned_inquiries")
            objects = CustomQuerySetManager()

            class QuerySet(QuerySet):
                def for_user(self, user):
                    return self.filter(assigned_to_user=user)

    (The CustomQuerySetManager is documented over here, if it is important.) I'm trying to force everything to use this filtering, so that other methods will raise an exception. For example:

        Inquiry.objects.all()                                 ## Should raise an exception.
        Inquiry.objects.filter(pk=69)                         ## Should raise an exception.
        Inquiry.objects.for_user(request.user).filter(pk=69)  ## Should work.
        inqs = Inquiry.objects.for_user(request.user)         ## Should work.
        inqs.filter(pk=69)                                    ## Should work.

    It seems to me that there should be a way to force the security of these objects by allowing only certain users to access them. I am not concerned with how this might impact the admin interface.

  • Problem in filtering records using DataView (C# 3.0)

    - by Newbie
    I have a data table which is basically populated from an Excel sheet, and there are many Excel sheets, so I have written a utility method to do the population. In some of the Excel sheets there are date columns, and in some there are not (only text/string). My function populates the values properly into the DataTable from the Excel sheet. But there are many blank rows in the Excel sheets; some are filled with NULL, some with " ". So I need to filter those records (which are NULL or " ") out before further processing. What I'm after is to use a DataView and apply the filter there:

        DataView dv = dataTable.DefaultView;
        dv.RowFilter = ColumnName + " <> ''";

    By using metadata (GetOleDbSchemaTable(OleDbSchemaGuid.Columns, restriction)) I was able to get the column names from the Excel sheet, so getting the column names is not an issue. But the problem is, as I said, that in some Excel sheets there are date fields and in some there are not, so the filter condition of the DataView needs to be right. If I apply the above logic and it encounters a date field, it throws the error: Cannot perform '<>' operation on System.DateTime and System.String. Could you please help me out? I need to filter columns (not known at compile time, nor their data types) which can have NULL and " ". I am using C# 3.0. Thanks
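
    One hedged way out (a sketch; worth testing against your own sheets) is to make the comparison type-proof by converting the column to a string inside the filter expression, which DataColumn expression syntax allows regardless of the underlying type:

        DataView dv = dataTable.DefaultView;
        // Keep rows that are neither NULL nor blank; CONVERT makes the
        // string comparison legal even when the column is System.DateTime.
        dv.RowFilter = string.Format(
            "NOT ({0} IS NULL) AND TRIM(CONVERT({0}, 'System.String')) <> ''",
            ColumnName);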

  • Project Euler problem 3 in Haskell

    - by shk
    I'm new to Haskell and am trying to solve problem 3 from http://projecteuler.net/: "The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143?" My solution:

        import Data.List

        getD :: Int -> Int
        getD x =
            -- find divisors
            let deriveList = filter (\y -> (x `mod` y) == 0) [1 .. x]
                filteredList = filter isSimpleNumber deriveList
            in maximum filteredList

        -- check whether a number is prime
        isSimpleNumber :: Int -> Bool
        isSimpleNumber x =
            let deriveList = map (\y -> (x `mod` y)) [1 .. x]
                filterLength = length (filter (\z -> z == 0) deriveList)
            in case filterLength of
                2 -> True
                _ -> False

    I try to run, for example:

        getD 13195
        > 29

    But when I try getD 600851475143 I get the error: Exception: Prelude.maximum: empty list. Why? Thank you @Barry Brown, I think I must use:

        getD :: Integer -> Integer

    But then I get the error:

        Couldn't match expected type `Int' with actual type `Integer'
        Expected type: [Int]
          Actual type: [Integer]
        In the second argument of `filter', namely `deriveList'
        In the expression: filter isSimpleNumber deriveList

    Thank you.

  • What language is this???

    - by Misha Koshelev
    Dear All: I thought this was JavaScript... but the parser is giving me trouble. Any ideas? Thank you! Misha

        for (;;); {"error":0,"errorSummary":"","errorDescription":"","errorIsWarning":false,"silentError":0,"payload":{"collections":[
        {"name":"bcm","type":"flp","filter":"flp_662923563701","value":"662923563701","editable":true,"deletable":true,"members":["1319651388","539562714","710814793","569071038","1553739575","2413243"]},
        {"name":"mstp","type":"flp","filter":"flp_715806870131","value":"715806870131","editable":true,"deletable":true,"members":["1263807225","1159429816","508447486","508005223","1234906348","642723993","552875889","23401888","10701320","8302901","7931988","3007490","1286522890","1128447272","1076062553","775679867","737520202","640799498","224400055","224400048","14211567","7909445","3005965","2404364","218216","660037273","224400089","73306230","9603033","1111694","1034418884","775680513","704526828","518753881","512477182","224400016","24904610","19000876","5403952","3005641","2100348","100000421128298","1445411167","691445174","1020020100","795471177","683724539","682441089","532450522","224400129","224400005","3006522","2246813","1302265","7197","1900494","100000474978266","2533582","1205125","1384091677","1260996959","710814793","514951289","224400164","224400156","173601800","13304723","7938844","3004783","3001379","302817","716739950","706849","1418109424","562676898","82501644","3007569","13173"]},
        {"name":"mystery","type":"flp","filter":"flp_687949656211","value":"687949656211","editable":true,"deletable":true,"members":["100001286464748","508123007","100001161894460","1148567030","1048974191","769992391","831734347","15347","1297180076","756692945","3005266","733396195","34410910","100000940154241","748426280","569417581","1318922027","100000164920046","1475269609","1436536592","100000162108385","754095305","100000421128298","537833189","100000692471928","7920231","673753496","3006217","1221878","8365333","1128447272","224400133","218216","505457123","1421958541","1838295926","2349408","1622810085","1201640391","510959992","23429895","542118016","1017385668","586225579","625100539","100000474886633","26404148","1384091677","224400156","806908635","843187401","400435","768261176","7901808","748496482","1541469473","2511982","25401573","503715506","1226000844","559195162","41400094","1099436201","409816","1584400985","1577092523","100000349351880","199301581"]},
        {"name":"SMS Subscriptions","type":"ms","filter":"ms","value":"3","editable":true,"deletable":false,"members":[]}],"current_view":false}}

  • OpenMeetings + Python + Suds

    - by user366774
    Trying to integrate OpenMeetings with a Django website, but I can't understand how to properly configure the ImportDoctor (here :// is replaced with __ because of spam protection):

        print url
        http://sovershenstvo.com.ua:5080/openmeetings/services/UserService?wsdl

        imp = Import('http__schemas.xmlsoap.org/soap/encoding/')
        imp.filter.add('http__services.axis.openmeetings.org')
        imp.filter.add('http__basic.beans.hibernate.app.openmeetings.org/xsd')
        imp.filter.add('http__basic.beans.data.app.openmeetings.org/xsd')
        imp.filter.add('http__services.axis.openmeetings.org')
        d = ImportDoctor(imp)
        client = Client(url, doctor = d)
        client.service.getSession()

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/usr/lib/python2.6/site-packages/suds/client.py", line 539, in __call__
            return client.invoke(args, kwargs)
          File "/usr/lib/python2.6/site-packages/suds/client.py", line 598, in invoke
            result = self.send(msg)
          File "/usr/lib/python2.6/site-packages/suds/client.py", line 627, in send
            result = self.succeeded(binding, reply.message)
          File "/usr/lib/python2.6/site-packages/suds/client.py", line 659, in succeeded
            r, p = binding.get_reply(self.method, reply)
          File "/usr/lib/python2.6/site-packages/suds/bindings/binding.py", line 159, in get_reply
            resolved = rtypes[0].resolve(nobuiltin=True)
          File "/usr/lib/python2.6/site-packages/suds/xsd/sxbasic.py", line 63, in resolve
            raise TypeNotFound(qref)
        suds.TypeNotFound: Type not found: '(Sessiondata, http__basic.beans.hibernate.app.openmeetings.org/xsd, )'

    What am I doing wrong? Please help, and sorry for my English, but you are my last chance to save the position :( need webinars by morning (2.26 am now)

  • HTTP Compression problems on IIS7

    - by Jonathan Wood
    I've spent quite a bit of time on this but seem to be going nowhere. I have a large page that I really want to speed up. The obvious place to start seems to be HTTP compression, but I just can't seem to get it to work for me. After considerable searching, I've tried several variations of the code below. It kind of works, but after refreshing the browser the results fall apart: they were turning to garbage when the page used caching. If I turn off caching, the page seems right, but I lose my CSS formatting (stored in a separate file) and get an error that an included JS file contains invalid characters. Most of the resources I've found on the web were either very old or focused on accessing IIS directly. My page is running on a shared hosting account and I do not have direct access to IIS7, which it's running on.

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // Implement HTTP compression
            if (Request["HTTP_X_MICROSOFTAJAX"] == null) // Avoid compressing AJAX calls
            {
                // Retrieve accepted encodings
                string encodings = Request.Headers.Get("Accept-Encoding");
                if (encodings != null)
                {
                    // Verify support for deflate or gzip (deflate takes preference)
                    encodings = encodings.ToLower();
                    if (encodings.Contains("gzip") || encodings == "*")
                    {
                        Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "gzip");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                    else if (encodings.Contains("deflate"))
                    {
                        Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "deflate");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                }
            }
        }

    Is anyone having better success with this?
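
    A hedged guess at the garbage-after-refresh: output caching stores the already-compressed body, and unless the cache also varies by the response's content encoding, those gzip bytes can be replayed to a client without the matching Content-Encoding header (or get compressed a second time). If the site targets .NET 3.5 or later, asking the output cache to keep one copy per encoding is one sketch of a fix, and skipping the filter for static CSS/JS requests routed through ASP.NET matches the other symptom:

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // Skip static files; compressing them here corrupts cached CSS/JS.
            string path = Request.Path.ToLowerInvariant();
            if (path.EndsWith(".css") || path.EndsWith(".js"))
                return;

            // ... existing compression logic from above ...

            // Cache one copy per encoding so cached gzip bytes are never
            // served to a client that asked for deflate or identity (.NET 3.5+).
            Response.Cache.VaryByContentEncodings["gzip"] = true;
            Response.Cache.VaryByContentEncodings["deflate"] = true;
        }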

  • django join-like expansion of queryset

    - by jimbob
    I have a list of Persons, each with multiple fields that I usually filter on, using the object_list generic view. Each person can have multiple Comments attached to them, each with a datetime and a text string. What I ultimately want to do is have the option to filter comments based on dates.

        class Person(models.Model):
            name = models.CharField("Name", max_length=30)
            ## has ~30 other fields, usually filtered on as well

        class Comment(models.Model):
            date = models.DateTimeField()
            person = models.ForeignKey(Person)
            comment = models.TextField("Comment Text", max_length=1023)

    What I want to do is get a queryset like

        Person.objects.filter(comment__date__gt=date(2011,1,1)).order_by('comment__date')

    send that queryset to object_list, and be able to see only the comments, ordered by date, with only so many objects on a page. E.g., if "Person A" has comments on 12/3/11, 1/2/11 and 1/5/11, "Person B" has no comments, and Person C has a comment on 1/3, I would see:

        "Person A", 1/2 - comment
        "Person C", 1/3 - comment
        "Person A", 1/5 - comment

    I would strongly prefer not to switch to filtering based on Comments.objects.filter(), as that would make me repeat large sections of code in both the view and the template. Right now, if I execute the command above, I get a queryset returning (PersonA, PersonC, PersonA), but if I try rendering that in a template, each person's comment_set will contain all their comments, even if they aren't in the date range. Ideally there would be some sort of functionality where I could expand a Person queryset's comment_set into a larger queryset that can be sorted and ordered by comment and put into an object_list generic view. This is normally fairly simple to do in SQL with a JOIN, but I don't want to abandon the ORM, which I use everywhere else.

  • ASP.NET MVC action method causing null exception on parameter

    - by David Liddle
    I am trying to add filter functionality that sends a GET request to the same page to filter records in my table. The problem I am having is that the textbox complains about a null object reference when the parameter is not passed. For example, when the person first views the page the URL is '/mycontroller/myaction/'. Then, when they apply a filter and submit the form, it should be something like 'mycontroller/myaction?name=...'. Obviously the problem stems from the fact that when the name value has not been passed (null), it is still being bound to the 'name' textbox. Any suggestions on what I should do about this? UPDATE: I have tried setting the DefaultValue attribute, but I am assuming this is only for route values and not query string values.

        ActionResult MyAction([DefaultValue("")] string name)

        // Action - /mycontroler/myaction
        ActionResult MyAction(string name)
        {
            ...do stuff
        }

        // View
        <div id="filter">
            <% Html.BeginForm("myaction", "mycontroller", FormMethod.Get); %>
            Name: <%= Html.TextBox("name") %>
            ....
        </div>
        <table>...my list of filtered data
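
    A hedged sketch of the usual workaround: coalesce the parameter inside the action (or use a C# 4 optional parameter), and give Html.TextBox an explicit value so it never has to dig the missing "name" entry out of ViewData/ModelState:

        public ActionResult MyAction(string name)
        {
            name = name ?? string.Empty;   // first view: no query string value
            ViewData["name"] = name;
            // ... filter the records using 'name' ...
            return View();
        }

        // View: supply the value explicitly instead of relying on implicit lookup
        // <%= Html.TextBox("name", ViewData["name"]) %>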

  • jQuery validate plugin radio with optional text

    - by timborden
    I'm trying to figure out how to validate a form element with a mix of radio inputs and a text input:

        <label>Question?</label>
        <input type="radio" class="mandatory" name="questions[1][]" value="1" />answer 1<br/>
        <input type="radio" class="mandatory" name="questions[1][]" value="2" />answer 2<br/>
        <input class="ignore" type="radio" id="questions[1][]" />Other (please specify)<br/>
        <input class="optional mandatory" type="text" name="questions[1][]" value="" />

    I've figured out how to get the form to behave as expected (select and unselect) with the following code:

        $("input.optional").focus(function () {
            var this_name = $(this).attr("name");
            $("input:radio").filter(function() { return $(this).attr('name') == this_name; }).attr('checked', false);
            $("input").filter(function() { return $(this).attr('id') == this_name; }).attr('checked', true);
        });

        $(':radio').click(function () {
            var this_name = $(this).attr("name");
            $("input").filter(function() { return $(this).attr('id') == this_name; }).attr('checked', false);
            $("input.optional").filter(function() { return $(this).attr('name') == this_name; }).val('');
        });

    I was hoping I could use the class "mandatory" to validate the mix of radio and text inputs:

        $("form .manditory").each(function () {
            $(this).rules("add", { required: true });
        });

    But it's not working as expected. With the radio (id="questions[1][]") selected and the text input containing content, the form element is still flagged as invalid. Suggestions... maybe a better approach? Thanks in advance. UPDATE: Sorry, I should have clarified that I'm using the validate plugin:

        $("form").validate({ ... });

  • Filtering null values with Pig

    - by arianp
    It looks like a silly problem, but I can't find a way to filter null values from my rows. This is the result when I dump the object geoinfo:

        DUMP geoinfo;
        ([longitude#70.95853,latitude#30.9773])
        ([longitude#-9.37944507,latitude#38.91780853])
        (null)
        (null)
        (null)
        ([longitude#-92.64416,latitude#16.73326])
        (null)
        (null)
        ([longitude#-9.15199849,latitude#38.71179122])
        ([longitude#-9.15210796,latitude#38.71195131])

    Here is the description:

        DESCRIBE geoinfo;
        geoinfo: {geoLocation: bytearray}

    What I'm trying to do is to filter null values like this:

        geoinfo_no_nulls = FILTER geoinfo BY geoLocation is not null;

    but the result remains the same; nothing is filtered. I also tried something like this:

        geoinfo_no_nulls = FILTER geoinfo BY geoLocation != 'null';

    and I got an error:

        org.apache.pig.backend.executionengine.ExecException: ERROR 1071: Cannot convert a map to a String

    What am I doing wrong? Details: running on Ubuntu, hadoop-1.0.3 with Pig 0.9.3.

        pig -version
        Apache Pig version 0.9.3-SNAPSHOT (rexported)
        compiled Oct 24 2012, 19:04:03

        java version "1.6.0_24"
        OpenJDK Runtime Environment (IcedTea6 1.11.4) (6b24-1.11.4-1ubuntu0.12.04.1)
        OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)

  • Convert SVG image with filters to PNG/PDF

    - by user1599669
    I want to convert the following SVG image to a PNG image and a PDF at 300 DPI:

        <svg width="640" height="480" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
            <!-- Created with SVG-edit - http://svg-edit.googlecode.com/ -->
            <defs>
                <filter height="200%" width="200%" y="-50%" x="-50%" id="svg_1_blur">
                    <feGaussianBlur stdDeviation="10" in="SourceGraphic"/>
                </filter>
            </defs>
            <g>
                <title>Layer 1</title>
                <image filter="url(#svg_1_blur)" xlink:href="images/logo.png" id="svg_1" height="162.999996" width="223.999992" y="99" x="185"/>
                <text xml:space="preserve" text-anchor="middle" font-family="serif" font-size="24" id="svg_2" y="210" x="289" stroke-width="0" stroke="#000000" fill="#000000">sdfdsdsfsdf</text>
            </g>
        </svg>

    I want to do this using PHP. I have applied a blur filter to the image and I want to retain that. I also have a problem viewing this image in IE, because it doesn't show the blur effect in IE9. Any suggestions?

  • Distinct() to return List<> returning Duplicates

    - by KDM
    I have a list of Filters that are passed into a web service. I iterate over the collection, run a LINQ query, and add the results to a list of products, but when I do a GroupBy and Distinct() it doesn't remove the duplicates. I am using an IEnumerable because when you use Distinct it converts to IEnumerable. If you know how to construct this better and make my function return a List<Product>, that would be appreciated. Here is my code in C#:

        if (Tab == "All-Items")
        {
            List<Product> temp = new List<Product>();
            List<Product> Products2 = new List<Product>();

            foreach (Filter filter in Filters)
            {
                List<Product> products = (from p in db.Products
                                          where p.Discontinued == false && p.DepartmentId == qDepartment.Id
                                          join f in db.Filters on p.Id equals f.ProductId
                                          join x in db.ProductImages on p.Id equals x.ProductId
                                          where x.Dimension == "180X180" && f.Name == filter.Name /* Filter */
                                          select new Product
                                          {
                                              Id = p.Id,
                                              Title = p.Title,
                                              ShortDescription = p.ShortDescription,
                                              Brand = p.Brand,
                                              Model = p.Model,
                                              Image = x.Path,
                                              FriendlyUrl = p.FriendlyUrl,
                                              SellPrice = p.SellPrice,
                                              DiscountPercentage = p.DiscountPercentage,
                                              Votes = p.Votes,
                                              TotalRating = p.TotalRating
                                          }).ToList<Product>();

                foreach (Product p in products)
                {
                    temp.Add(p);
                }

                IEnumerable temp2 = temp.GroupBy(x => x.Id).Distinct();
                IEnumerator e = temp.GetEnumerator();
                while (e.MoveNext())
                {
                    Product c = e.Current as Product;
                    Products2.Add(c);
                }
            }
            pf.Products = Products2; // return type must be List<Product>
        }
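
    Two details in the code above would explain the duplicates: GroupBy produces IGrouping<int, Product> elements, so calling Distinct() on the groups does nothing useful, and the while loop then enumerates temp, not temp2, so the grouping result is never read. A hedged rewrite that keeps one product per Id, applied once after the foreach over Filters:

        // After collecting all matches into temp, keep one product per Id.
        List<Product> products2 = temp
            .GroupBy(p => p.Id)      // one group per product Id
            .Select(g => g.First())  // pick a representative of each group
            .ToList();
        pf.Products = products2;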

  • Strange DataTable behaviour

    - by DocSnuggles
    There's a strange behaviour in my code which I can't explain... I have worked around it, but I'd like to know the reason for this behaviour. My code:

        If dataset.Tables.Count > 0 Then
            Dim rows() As DataRow = dataset.Tables(0).Select("FileID = " + fileid.ToString)
            For Each row As DataRow In rows...

    The one and only table in the dataset is:

        18    1    1    testpara42
        19    1    2    Para2
        23    2    1    Para1
        24    2    2    Para42

    where the columns are: ParamID - FileID - ParaPos - ParaValue. I want to filter it so that the only rows I get are the ones with the correct FileID. The result when the selection string is "FileID = 1" is an array with 2 rows... everything is fine... but the filter string "FileID = 2" returns 0 rows. A filter like "FileID <> 1" returns 2 rows correctly. I really can't explain this odd behaviour; maybe someone in the community can. The table does have more entries, but the filter string only works with the "FileID = 1" selection. Excuse my bad English. Please explain my mistake to me.
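
    For what it's worth, a first thing to check (C# sketch; the column and value come from the post) is the FileID column's runtime type: if the table was filled from text, FileID may be a string column, in which case "FileID = 2" only matches the exact string "2", and values like "2 " or "02" silently yield 0 rows while "FileID <> 1" still matches them:

        DataTable table = dataset.Tables[0];
        Console.WriteLine(table.Columns["FileID"].DataType); // String instead of Int32?

        // If it is a string column, trim before comparing:
        DataRow[] rows = table.Select("TRIM(FileID) = '2'");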

  • Drop down menu going under #main div background in IE 7/8/9

    - by Adrift
    Here is the link to the domain I am working on: http://linenwoods.com. I am going to fit the navigation list items on the header, but when the drop-down menu is implemented I'm pretty sure it'll go under the #main div like you see currently. Is there an easy fix for this? I couldn't find anything related to this in a Google search and was hoping someone could help me out. Below is the relevant CSS. I tried playing around with z-index with no luck, as I was told IE8 renders it strangely. If you have the time, please follow the link with IE and leave a response. I am trying to be as cross-browser compatible as possible and am already off to a pretty pathetic start. Any help would be appreciated :)

        body {
            background-image: url('Background1.jpg');
            background-position: center;
            height: 100%;
            margin: 0;
            padding: 0;
            opacity: 0.8;
            filter: alpha(opacity=80);
        }

        #main {
            width: 1010px;
            height: 1315px;
            background-color: white;
            margin-top: 15px;
            filter: alpha(opacity=80);
        }

        header {
            width: 1010px;
            height: 230px;
            background-color: white;
            margin: 0 auto;
            margin-top: 15px;
            filter: alpha(opacity=80);
        }

        footer {
            width: 1010px;
            height: 230px;
            background-color: white;
            margin: 15px 0 15px 0;
            filter: alpha(opacity=80);
        }

  • Creating packages in code - Package Configurations

    Continuing my theme of building various types of packages in code, this example shows how to build a package with package configurations. Incidentally, it shows you how to add a variable and a connection too. It covers the five most common configurations:

    - Configuration File
    - Indirect Configuration File
    - SQL Server
    - Indirect SQL Server
    - Environment Variable

    For a general overview try the SQL Server Books Online Package Configurations topic. The sample uses a simple helper function, ApplyConfig, to create or update a configuration, although in the example we will only ever create. The most useful knowledge is the configuration string (Configuration.ConfigurationString) that you need to set:

    - Configuration File: The full path and file name of an XML configuration file. The file can contain one or more configurations and includes the target path and new value to set.
    - Indirect Configuration File: An environment variable, the value of which contains the full path and file name of an XML configuration file as per the Configuration File type described above.
    - SQL Server: A three-part configuration string, with each part being quote-delimited and separated by a semi-colon.
      -- The first part is the connection manager name. The connection tells you which server and database to look for the configuration table.
      -- The second part is the name of the configuration table. The table is of a standard format; use the Package Configuration Wizard to help create an example, or see the sample script files below. The table contains one or more rows or configuration items, each with a target path and new value.
      -- The third and final part is the optional filter name. A configuration table can contain multiple configurations, and the filter is a literal value that can be used to group items together and act as a filter clause when configurations are being read. If you do not need a filter, just leave the value empty.
    - Indirect SQL Server: An environment variable, the value of which is the three-part configuration string as per the SQL Server type described above.
    - Environment Variable: An environment variable, the value of which is the value to set in the package. This is slightly different to the other examples, as the configuration definition in the package also includes the target information. In our ApplyConfig function this is the only example that actually supplies a target value for the Configuration.PackagePath property. The path is an XPath-style path for the target property, \Package.Variables[User::Variable].Properties[Value], with the object being our variable called Variable, and the property to set being the Value property of that variable object.

    The sample code creates the package, adds a variable and connection manager, enables configurations, and then adds our example configurations. The package is then saved to disk, useful for checking the package and testing, before finally executing, just to prove it is valid. There are some external resources used here, namely some environment variables and a table; see below for more details.

        namespace Konesans.Dts.Samples
        {
            using System;
            using Microsoft.SqlServer.Dts.Runtime;

            public class PackageConfigurations
            {
                public void CreatePackage()
                {
                    // Create a new package
                    Package package = new Package();
                    package.Name = "ConfigurationSample";

                    // Add a variable, the target for our configurations
                    package.Variables.Add("Variable", false, "User", 0);

                    // Add a connection, for SQL configurations
                    // Add the SQL OLE-DB connection
                    ConnectionManager connectionManagerOleDb = package.Connections.Add("OLEDB");
                    connectionManagerOleDb.Name = "SQLConnection";
                    connectionManagerOleDb.ConnectionString =
                        "Provider=SQLOLEDB.1;Data Source=(local);Initial Catalog=master;Integrated Security=SSPI;";

                    // Add our example configurations, first must enable package setting
                    package.EnableConfigurations = true;

                    // Direct configuration file, see sample file
                    this.ApplyConfig(package, "Configuration File", DTSConfigurationType.ConfigFile,
                        "C:\\Temp\\XmlConfig.dtsConfig", string.Empty);

                    // Indirect configuration file, the environment variable XmlConfigFileEnvironmentVariable
                    // contains the path to the configuration file, e.g. C:\Temp\XmlConfig.dtsConfig
                    this.ApplyConfig(package, "Indirect Configuration File", DTSConfigurationType.IConfigFile,
                        "XmlConfigFileEnvironmentVariable", string.Empty);

                    // Direct SQL Server configuration, uses the SQLConnection package connection to read
                    // configurations from the [dbo].[SSIS Configurations] table, with a filter of "SampleFilter"
                    this.ApplyConfig(package, "SQL Server", DTSConfigurationType.SqlServer,
                        "\"SQLConnection\";\"[dbo].[SSIS Configurations]\";\"SampleFilter\";", string.Empty);

                    // Indirect SQL Server configuration, the environment variable "SQLServerEnvironmentVariable"
                    // contains the configuration string e.g. "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter";
                    this.ApplyConfig(package, "Indirect SQL Server", DTSConfigurationType.ISqlServer,
                        "SQLServerEnvironmentVariable", string.Empty);

                    // Direct environment variable, the value of the EnvironmentVariable environment variable is
                    // applied to the target property, the value of the "User::Variable" package variable
                    this.ApplyConfig(package, "EnvironmentVariable", DTSConfigurationType.EnvVariable,
                        "EnvironmentVariable", "\\Package.Variables[User::Variable].Properties[Value]");

        #if DEBUG
                    // Save package to disk, DEBUG only
                    new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
                    Console.WriteLine(@"C:\Temp\{0}.dtsx", package.Name);
        #endif

                    // Execute package
                    package.Execute();

                    // Basic check for warnings
                    foreach (DtsWarning warning in package.Warnings)
                    {
                        Console.WriteLine("WarningCode : {0}", warning.WarningCode);
                        Console.WriteLine(" SubComponent : {0}", warning.SubComponent);
                        Console.WriteLine(" Description : {0}", warning.Description);
                        Console.WriteLine();
                    }

                    // Basic check for errors
                    foreach (DtsError error in package.Errors)
                    {
                        Console.WriteLine("ErrorCode : {0}", error.ErrorCode);
                        Console.WriteLine(" SubComponent : {0}", error.SubComponent);
                        Console.WriteLine(" Description : {0}", error.Description);
                        Console.WriteLine();
                    }

                    package.Dispose();
                }

                /// <summary>
                /// Add or update a package configuration.
                /// </summary>
                /// <param name="package">The package.</param>
                /// <param name="name">The configuration name.</param>
                /// <param name="type">The type of configuration</param>
                /// <param name="setting">The configuration setting.</param>
                /// <param name="target">The target of the configuration, leave blank if not required.</param>
                internal void ApplyConfig(Package package, string name, DTSConfigurationType type, string setting, string target)
                {
                    Configurations configurations = package.Configurations;
                    Configuration configuration;

                    if (configurations.Contains(name))
                    {
                        configuration = configurations[name];
                    }
                    else
                    {
                        configuration = configurations.Add();
                    }

                    configuration.Name = name;
                    configuration.ConfigurationType = type;
                    configuration.ConfigurationString = setting;
                    configuration.PackagePath = target;
                }
            }
        }

    The following table lists the environment variables required for the full example to work, along with some sample values.

        Variable                          Sample value
        EnvironmentVariable               1
        SQLServerEnvironmentVariable      "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter";
        XmlConfigFileEnvironmentVariable  C:\Temp\XmlConfig.dtsConfig

    Sample code, package and configuration file: ConfigurationApplication.cs, ConfigurationSample.dtsx, XmlConfig.dtsConfig

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    - by rajbk
    The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperative ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services) was the first Microsoft technology to support the Open Data Protocol, in Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010. Many other applications are in the works.*

    This post walks you through how to create an OData feed, define a shape for the data, and pre-filter the data using Visual Studio 2010, WCF Data Services and the Entity Framework. A sample project is attached at the bottom of Part 2 of this post: Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2.

    Create the Web Application

    File –› New –› Project, select "ASP.NET Empty Web Application".

    Add the Entity Data Model

    Right click on the Web Application in the Solution Explorer and select "Add New Item...". Select "ADO.NET Entity Data Model" under "Data". Name the model "Northwind" and click "Add". In the "Choose Model Contents" dialog, select "Generate Model From Database" and click "Next". Define a connection to your database containing the Northwind database in the next screen. We are going to expose the Products table through our OData feed, so select "Products" in the "Choose your Database Objects" screen. Click "Finish". We are done creating our Entity Data Model. Save the Northwind.edmx file created.

    Add the WCF Data Service

    Right click on the Web Application in the Solution Explorer and select "Add New Item...". Select "WCF Data Service" from the list and call the service "DataService" (creative, huh?). Click "Add".

    Enable Access to the Data Service

    Open the DataService.svc.cs class. The class is well commented and instructs us on the next steps.

        public class DataService : DataService< /* TODO: put your data source class name here */ >
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(DataServiceConfiguration config)
            {
                // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
                // Examples:
                // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
                // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Replace the comment that starts with "/* TODO:" with "NorthwindEntities" (the entity container name of the model we created earlier). WCF Data Services is initially locked down by default, FTW! No data is exposed without you explicitly setting it. You have to explicitly specify which entity sets you wish to expose, and what rights are allowed, by using SetEntitySetAccessRule. SetServiceOperationAccessRule, on the other hand, sets rules for a specified operation. Let us define an access rule to expose the Products entity set we created earlier. We use EntitySetRights.AllRead since we want to give read-only access. Our modified code is shown below.
        public class DataService : DataService<NorthwindEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    We are done setting up our OData feed! Compile your project. Right click on DataService.svc and select "View in Browser" to see the OData feed. To view the feed in IE, you must make sure that "Feed Reading View" is turned off. You set this under Tools –› Internet Options –› Content tab. If you navigate to "Products", you should see the Products feed. Note also that URIs are case sensitive, i.e. Products works but products doesn't.

    Filtering our data

    OData has a set of system query operations you can use to perform common operations against data exposed by the model. For example, to see only Products in CategoryID 2, we can use the following request:

        /DataService.svc/Products?$filter=CategoryID eq 2

    At the time of this writing, the supported operations are $orderby, $top, $skip, $filter, $expand, $format†, $select and $inlinecount.

    Pre-filtering our data using Query Interceptors

    The Products feed currently returns all Products. We want to change that so that it contains only Products that have not been discontinued. WCF introduces the concept of interceptors, which allow us to inject custom validation/policy logic into the request/response pipeline of a WCF data service. We will use a QueryInterceptor to pre-filter the data so that it returns only Products that are not discontinued. To create a QueryInterceptor, write a method that returns an Expression<Func<T, bool>> and mark it with the QueryInterceptor attribute, as shown below.

        [QueryInterceptor("Products")]
        public Expression<Func<Product, bool>> OnReadProducts()
        {
            return o => o.Discontinued == false;
        }

    Viewing the feed after compilation will only show products that have not been discontinued. We can also confirm this by looking at the WHERE clause in the SQL generated by the Entity Framework:

        SELECT [Extent1].[ProductID] AS [ProductID],
        ...
        [Extent1].[Discontinued] AS [Discontinued]
        FROM [dbo].[Products] AS [Extent1]
        WHERE 0 = [Extent1].[Discontinued]

    Other examples of Query/Change interceptors can be seen here, including an example to filter data based on the identity of the authenticated user. We are done pre-filtering our data. In the next part of this post, we will see how to shape our data: Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    Foot Notes

    * http://msdn.microsoft.com/en-us/data/aa937697.aspx
    † $format did not work for me. The way to get a JSON response is to include the following in the request header when making the request: "Accept: application/json, text/javascript, */*". This is easily done with most JavaScript libraries.

  • Cannot install or remove packages

    - by Nuno
    I tried to install the openoffice.org package but it gave an error. Now I cannot repair, remove or install anything. The Software Center of Xubuntu gives this error log. Nothing seems to solve this. I tried apt-get install -f but it does not solve the problem either. Any suggestions?

        installArchives() failed: (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100%
        (Reading database ... 184189 files and directories currently installed.)
        Unpacking libreoffice-common (from .../libreoffice-common_1%3a3.5.4-0ubuntu1.1_all.deb) ...
        dpkg: error processing /var/cache/apt/archives/libreoffice-common_1%3a3.5.4-0ubuntu1.1_all.deb (--unpack):
         trying to overwrite '/usr/bin/soffice', which is also in package openoffice.org-debian-menus 3.2-9502
        No apport report written because MaxReports is reached already
        rmdir: failed to remove /var/lib/libreoffice/share/prereg/: No such file or directory
        rmdir: failed to remove /var/lib/libreoffice/share/: Directory not empty
        rmdir: failed to remove /var/lib/libreoffice/program/: No such file or directory
        rmdir: failed to remove /var/lib/libreoffice: Directory not empty
        rmdir: failed to remove /var/lib/libreoffice: Directory not empty
        Processing triggers for desktop-file-utils ...
        Processing triggers for shared-mime-info ...
        Processing triggers for gnome-icon-theme ...
        Processing triggers for hicolor-icon-theme ...
        Processing triggers for man-db ...
        Errors were encountered while processing:
         /var/cache/apt/archives/libreoffice-common_1%3a3.5.4-0ubuntu1.1_all.deb
        dpkg: dependency problems prevent configuration of libreoffice-java-common:
         libreoffice-java-common depends on libreoffice-common; however:
          Package libreoffice-common is not installed.
        dpkg: error processing libreoffice-java-common (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-filter-mobiledev:
         libreoffice-filter-mobiledev depends on libreoffice-java-common; however:
          Package libreoffice-java-common is not configured yet.
        dpkg: error processing libreoffice-filter-mobiledev (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-base:
         libreoffice-base depends on libreoffice-java-common (>= 1:3.5.4~); however:
          Package libreoffice-java-common is not configured yet.
        dpkg: error processing libreoffice-base (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-core:
         libreoffice-core depends on libreoffice-common (>> 1:3.5.4); however:
          Package libreoffice-common is not installed.
        dpkg: error processing libreoffice-core (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-style-human:
         libreoffice-style-human depends on libreoffice-core; however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-style-human (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-math:
         libreoffice-math depends on libreoffice-core (= 1:3.5.4-0ubuntu1.1); however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-math (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-impress:
         libreoffice-impress depends on libreoffice-core (= 1:3.5.4-0ubuntu1.1); however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-impress (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-style-tango:
         libreoffice-style-tango depends on libreoffice-core; however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-style-tango (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice:
         libreoffice depends on libreoffice-core (= 1:3.5.4-0ubuntu1.1); however:
          Package libreoffice-core is not configured yet.
         libreoffice depends on libreoffice-impress; however:
          Package libreoffice-impress is not configured yet.
         libreoffice depends on libreoffice-math; however:
          Package libreoffice-math is not configured yet.
         libreoffice depends on libreoffice-base; however:
          Package libreoffice-base is not configured yet.
         libreoffice depends on libreoffice-filter-mobiledev; however:
          Package libreoffice-filter-mobiledev is not configured yet.
         libreoffice depends on libreoffice-java-common (>= 1:3.5.4~); however:
          Package libreoffice-java-common is not configured yet.
        dpkg: error processing libreoffice (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-writer:
         libreoffice-writer depends on libreoffice-core (= 1:3.5.4-0ubuntu1.1); however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-writer (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of mythes-en-us:
         mythes-en-us depends on libreoffice-core | openoffice.org-core (>= 1.9) | language-support-writing-en; however:
          Package libreoffice-core is not configured yet.
          Package openoffice.org-core is not installed.
          Package language-support-writing-en is not installed.
        dpkg: error processing mythes-en-us (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-help-zh-cn:
         libreoffice-help-zh-cn depends on libreoffice-writer | language-support-translations-zh; however:
          Package libreoffice-writer is not configured yet.
          Package language-support-translations-zh is not installed.
        dpkg: error processing libreoffice-help-zh-cn (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-base-core:
         libreoffice-base-core depends on libreoffice-core (= 1:3.5.4-0ubuntu1.1); however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-base-core (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-gnome:
         libreoffice-gnome depends on libreoffice-core (= 1:3.5.4-0ubuntu1.1); however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-gnome (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-help-pt-br:
         libreoffice-help-pt-br depends on libreoffice-writer | language-support-translations-pt; however:
          Package libreoffice-writer is not configured yet.
          Package language-support-translations-pt is not installed.
        dpkg: error processing libreoffice-help-pt-br (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-emailmerge:
         libreoffice-emailmerge depends on libreoffice-core; however:
          Package libreoffice-core is not configured yet.
        dpkg: error processing libreoffice-emailmerge (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libreoffice-help-en-gb:
         libreoffice-help-en-gb depends on libreoffice-writer | language-support-translations-en; however:
          Package libreoffice-writer is not configured yet.
          Package language-support-translations-en is not installed.
        dpkg: error processing libreoffice-help-en-gb (--configure):
         dependency problems - leaving unconfigured

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries. For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is:

        SELECT p.Name
        FROM Production.Product AS p
        JOIN Production.TransactionHistory AS th
            ON p.ProductID = th.ProductID
        GROUP BY p.ProductID, p.Name
        HAVING COUNT_BIG(*) < 10;

    That query correctly returns 23 rows (execution plan and data sample shown below). The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join. The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This 'fully-optimized' plan has an estimated cost of around 0.33 units. The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too? To answer that, let's look at some other ways to formulate this query. This being SQL, there are any number of ways to write logically-equivalent query specifications, so we'll just look at a couple of interesting ones. The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above. It joins the result of pre-aggregating the history table to the Product table before filtering:

        SELECT p.Name
        FROM
        (
            SELECT th.ProductID, cnt = COUNT_BIG(*)
            FROM Production.TransactionHistory AS th
            GROUP BY th.ProductID
        ) AS q1
        JOIN Production.Product AS p
            ON p.ProductID = q1.ProductID
        WHERE q1.cnt < 10;

    Perhaps a little surprisingly, we get a slightly different execution plan. The results are the same (23 rows), but this time the Filter is pushed below the join! The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join. In yet another variation, the < 10 predicate can be 'manually pushed' by specifying it in a HAVING clause in the "q1" sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation, is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as "list the products where there exists fewer than ten correlated rows in the history table":

        SELECT p.Name
        FROM Production.Product AS p
        WHERE EXISTS
        (
            SELECT *
            FROM Production.TransactionHistory AS th
            WHERE th.ProductID = p.ProductID
            HAVING COUNT_BIG(*) < 10
        );

    Unfortunately, this query produces an incorrect result (86 rows): it lists products with no history rows, though the reasons are interesting. The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause), and scalar aggregates always produce a value, even when the input is an empty set.
    Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”:

    SELECT p.Name
    FROM Production.Product AS p
    WHERE EXISTS (
        SELECT *
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = p.ProductID
        HAVING COUNT_BIG(*) < 10
    );

    Unfortunately, this query produces an incorrect result (86 rows): it lists products with no history rows at all, though the reasons are interesting. The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause), and scalar aggregates always produce a value, even when the input is an empty set. In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL). To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist:

    -- Scalar aggregate
    SELECT COUNT_BIG(*)
    FROM Production.TransactionHistory AS th
    WHERE th.ProductID = 709;

    -- Vector aggregate
    SELECT COUNT_BIG(*)
    FROM Production.TransactionHistory AS th
    WHERE th.ProductID = 709
    GROUP BY th.ProductID;

    The estimated execution plans for these two statements are almost identical. You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case: the query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate. This is something I would like to see exposed in showplan, so I suggested it on Connect. Anyway, the results of running the two queries show the difference at runtime: the scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.

    Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero:

    SELECT p.Name
    FROM Production.Product AS p
    WHERE EXISTS (
        SELECT *
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = p.ProductID
        HAVING COUNT_BIG(*) BETWEEN 1 AND 9
    );

    The query now returns the correct 23 rows. Unfortunately, the execution plan is less efficient – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans. Let’s try adding a redundant GROUP BY instead of changing the HAVING clause:

    SELECT p.Name
    FROM Production.Product AS p
    WHERE EXISTS (
        SELECT *
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = p.ProductID
        GROUP BY th.ProductID
        HAVING COUNT_BIG(*) < 10
    );

    Not only do we now get correct results (23 rows), the resulting execution plan is one I like to compare to quantum physics: if you don’t find it shocking, you haven’t understood it properly :) The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier. What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set:

    SELECT p.Name
    FROM Production.Product AS p
    WHERE EXISTS (
        SELECT *
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = p.ProductID
        GROUP BY ()
        HAVING COUNT_BIG(*) < 10
    );

    I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too). Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless of which ProductID is logically being evaluated).
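    To make that last point concrete, here is a quick check of my own (assuming the standard AdventureWorks schema, as in the rest of the post): run the broken EXISTS predicate against product 709 alone.

    -- Product 709 has no history rows, yet the scalar-aggregate form accepts it:
    -- COUNT_BIG(*) over the empty set returns 0, and 0 < 10 is TRUE.
    SELECT p.Name
    FROM Production.Product AS p
    WHERE p.ProductID = 709
    AND EXISTS (
        SELECT *
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = p.ProductID
        HAVING COUNT_BIG(*) < 10
    );
    -- Returns one row. Add GROUP BY th.ProductID inside the sub-query and it
    -- returns no rows, because the vector aggregate produces no row to test.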
    The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms):

    WHERE Clause

    SELECT p.Name
    FROM Production.Product AS p
    WHERE (
        SELECT COUNT_BIG(*)
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = p.ProductID
        GROUP BY ()
    ) < 10;

    APPLY

    SELECT p.Name
    FROM Production.Product AS p
    CROSS APPLY (
        SELECT NULL
        FROM Production.TransactionHistory AS th
        WHERE th.ProductID = p.ProductID
        GROUP BY ()
        HAVING COUNT_BIG(*) < 10
    ) AS ca (dummy);

    FROM Clause

    SELECT q1.Name
    FROM (
        SELECT p.Name,
               cnt = (
                   SELECT COUNT_BIG(*)
                   FROM Production.TransactionHistory AS th
                   WHERE th.ProductID = p.ProductID
                   GROUP BY ()
               )
        FROM Production.Product AS p
    ) AS q1
    WHERE q1.cnt < 10;

    This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :)

    SELECT q.Name
    FROM (
        SELECT p.Name,
               cnt = (
                   SELECT SUM(1)
                   FROM Production.TransactionHistory AS th
                   WHERE th.ProductID = p.ProductID
               )
        FROM Production.Product AS p
    ) AS q
    WHERE q.cnt < 10;
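    For readers who want to check their answer to that last point: SUM aggregated over an empty set returns NULL (COUNT is the only standard aggregate that returns zero there), and NULL < 10 evaluates to UNKNOWN, so products with no history are rejected by the outer WHERE even without a GROUP BY. A quick demonstration of mine, not from the original post:

    -- SUM over an empty set yields NULL rather than zero:
    SELECT SUM(1)
    FROM Production.TransactionHistory AS th
    WHERE th.ProductID = 709;   -- NULL (COUNT_BIG(*) would return 0)
    -- NULL < 10 is UNKNOWN, so the outer WHERE filters the product out.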
    The semantics of SQL aggregates are rather odd in places. It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates. As we have seen, query plans do not show in which ‘mode’ an aggregate is running, and getting it wrong can cause poor performance, wrong results, or both.

    © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

< Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >