Search Results

Search found 14378 results on 576 pages for 'record count'.

  • Limiting the Number of Tags for Acts as Taggable On

    - by bob
    Hello, I am wondering how to limit the number of tags that the tag_cloud function returns for this plugin: http://github.com/collectiveidea/acts-as-taggable-on Also, I would like to know how to order the tags by the highest count, so that the most popular are at the top. I tried @tags = Post.tag_counts_on(:tags, :limit => 5) but that did not work. Controller: class PostController < ApplicationController def tag_cloud @tags = Post.tag_counts_on(:tags) end end View: <% tag_cloud @tags, %w(css1 css2 css3 css4) do |tag, css_class| %> <%= link_to tag.name, { :action => :tag, :id => tag.name }, :class => css_class %> <% end %> Thanks!

  • Grails Testing Hiccups

    - by egervari
    I have two testing questions. Both are probably easily answered. The first is that I wrote this unit test in Grails: void testCount() { mockDomain(UserAccount) new UserAccount(firstName: "Ken").save() new UserAccount(firstName: "Bob").save() new UserAccount(firstName: "Dave").save() assertEquals(3, UserAccount.count()) } For some reason, I get 0 returned back. Did I forget to do something? The second question is for those who use IDEA. What should I be running - IDEA's JUnit tests, or Grails targets? I have two options. Also, why does IDEA say that my tests pass and give a green light even though the test above actually fails? This will really drive me nuts if I have to check the test reports in HTML every time I run my tests... Help?

  • Error when using SharpDevelop

    - by Sebastian
    I have some code: Outlook.Application outLookApp = new Outlook.Application(); Outlook.Inspector inspector = outLookApp.ActiveInspector(); Outlook.NameSpace nameSpace = outLookApp.GetNamespace("MAPI"); Outlook.MAPIFolder inbox = nameSpace.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderInbox); String sCriteria = "[SenderEmailAddress] = '[email protected]'"; Outlook.Items filteredItems = inbox.Items.Restrict(sCriteria); // totally sure that count > 0; Outlook.MailItem item = filteredItems[1]; On the last line I get the error: "Cannot implicitly convert type 'object' to 'Microsoft.Office.Interop.Outlook.MailItem'. An explicit conversion exists (are you missing a cast?)". I don't know why. Previously I used Visual Studio 2010, but my trial has expired. Is there any hope of running this on SharpDevelop?

  • How do I set the current point in a CG graphics context?

    - by Joe
    When running the code below in the iPhone Simulator I get the error: CGContextClosePath: no current point. Why is the current point not being set? Or is the context not set to the correct state? CGContextBeginPath(ctx); CGMutablePathRef pathHolder; pathHolder = CGPathCreateMutable(); //move to point for the initial point NSLog(@"Drawing a state point %f, %f", [[holder.points objectAtIndex:0] floatValue], [[holder.points objectAtIndex:1] floatValue]); CGPathMoveToPoint(pathHolder, NULL, [[holder.points objectAtIndex:0] floatValue], [[holder.points objectAtIndex:1] floatValue]); for(int x = 2; x < [holder.points count] - 1; x += 2) { NSLog(@"Drawing a state point %f, %f", [[holder.points objectAtIndex:x] floatValue], [[holder.points objectAtIndex:(x+1)] floatValue]); CGPathAddLineToPoint(pathHolder, NULL, [[holder.points objectAtIndex:x] floatValue], [[holder.points objectAtIndex:(x+1)] floatValue]); } CGContextClosePath(ctx); CGContextFillPath(ctx);

  • Should I throw my own ArgumentOutOfRangeException or let one bubble up from below?

    - by Neil N
    I have a class that wraps a List<RenderedImageInfo>, and I have a GetValue-by-index method: public RenderedImageInfo GetValue(int index) { list[index].LastRetrieved = DateTime.Now; return list[index]; } If the user requests an index that is out of range, this will throw an ArgumentOutOfRangeException. Should I just let this happen or check for it and throw my own? i.e. public RenderedImageInfo GetValue(int index) { if (index >= list.Count) { throw new ArgumentOutOfRangeException("index"); } list[index].LastRetrieved = DateTime.Now; return list[index]; } In the first scenario, the user would get an exception from the internal list, which breaks my OOP goal of the user not needing to know about the underlying objects. But in the second scenario, I feel as though I am adding redundant code. Edit: And now that I think of it, what about a 3rd scenario, where I catch the internal exception, modify it, and rethrow it?

  • SQL query: Delete an entry which is not present in a join table?

    - by Mestika
    Hi, I'm trying to delete all users that have no subscription, but I seem to run into problems each time I try to detect those users. My schemas look like this: Users = {userid, name} Subscriptionoffering = {userid, subscriptionname} Now, what I want to do is delete all users in the user table that have a count of zero in the subscriptionoffering table. Or, said in other words: all users whose userid is not present in the subscriptionoffering table. I've tried different queries but with no result. I've tried to say where user.userid <> subscriptionoffering.userid, but that doesn't seem to work. Does anyone know how to create the correct query? Thanks Mestika

  • Code Profiling in the Windows Sidebar Environment

    - by Matt
    Does anyone know of a way I can code-profile my Windows Sidebar Gadget? I've played around with the code-profiling tool in IE8's "Developer Tools" and the code-profiling included in Visual Studio 2010, but I can't find a way to include the System.* API, which my gadget relies on (as it is standard in the Sidebar environment). The gadget also relies on cross-domain AJAX requests, which are normally permitted in the Sidebar environment. By code-profiling I primarily mean function call counts and function execution times. Any ideas would be much appreciated. Regards, Matt

  • window.focus() not working in Firefox

    - by Nisanth
    Hi all, I am developing a chat application... I have multiple chat windows... I want to know which window contains a new message... I have the following code: function getCount() { $.ajax({ type: "POST", url: baseUrl + '/Chat/count', data: "chat_id=" + document.ajax.chat_id.value, success: function(msg){ if(msg == 'new1') { self.focus(); } } }); } This code works fine in IE but not in Firefox... please give me a solution. The above code is in the same HTML.

  • Python duration of a file object in an argument list

    - by msw
    In the pickle module documentation there is a snippet of example code: reader = pickle.load(open('save.p', 'rb')) which upon first read looked like it would allocate a system file descriptor, read its contents and then "leak" the open descriptor for there isn't any handle accessible to call close() upon. This got me wondering if there was any hidden magic that takes care of this case. Diving into the source, I found in Modules/_fileio.c that file descriptors are closed by the fileio_dealloc() destructor which led to the real question. What is the duration of the file object returned by the example code above? After that statement executes does the object indeed become unreferenced and therefore will the fd be subject to a real close(2) call at some future garbage collection sweep? If so, is the example line good practice, or should one not count on the fd being released thus risking kernel per-process descriptor table exhaustion?
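    For reference, a minimal sketch (not from the original snippet) of the deterministic alternative: binding the file object, or using a with-block, so the descriptor is closed as soon as the load finishes rather than whenever the object happens to be garbage collected.

        import pickle

        # the with-block guarantees f.close() runs when the block exits,
        # so the descriptor's lifetime no longer depends on garbage collection
        with open('save.p', 'rb') as f:
            reader = pickle.load(f)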

  • How to find NSOutlineView row index when using NSTreeController

    - by velocityb0y
    I'm using an NSTreeController to manage nodes for an NSOutlineView. When the user adds a new item, I create a new object and insert it: EntityViewEntityNode *newNode = [EntityViewEntityNode nodeWithName:@"New entity" entity:newObject]; // Insert at end of group // NSIndexPath *insertAt = [pathOfGroupNode indexPathByAddingIndex:[selected.children count]]; [entityCollectionTreeController insertObject:newNode atArrangedObjectIndexPath:insertAt]; Now I'd like to open the table column for edit so the user can name the new item. This seems logical: NSInteger row = [entityCollectionOutlineView rowForItem:newNode]; [entityCollectionOutlineView editColumn:0 row:row withEvent:nil select:YES]; However, row is always -1 indicating the object isn't found. Poking around reveals that the tree controller is not actually putting my objects directly in the tree, but is wrapping them in a node object of its own. Anyone have insight into how I would go about getting a row index relative to the outline view, so I can do this (without, hopefully, enumerating everything in the outline view and figuring out the mapping back to my node?)

  • Check that the integers in a list are sequential and not duplicated

    - by prosseek
    testGroupList is a list of integers. I need to check that the numbers in testGroupList are sequential (i.e., 1-2-3-4...) with no duplicates. Negative integers should be ignored. I implemented it as follows, and it's pretty ugly. Is there any clever way to do this? buff = filter(lambda x: x > 0, testGroupList) maxval = max(buff) for i in range(maxval): id = i+1 val = buff.count(id) if val == 1: print id, elif val >= 2: print "(Test Group %d duplicated %d times)" % (id, val), elif val == 0: print "(Test Group %d missing)" % id,
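    A possibly tidier version is sketched below (not the asker's code; it assumes Python 3 print syntax and that group ids should run from 1 up to the largest positive value present; the function and variable names are made up for illustration):

        from collections import Counter

        def report_groups(test_group_list):
            # count only the positive group ids, ignoring negatives
            counts = Counter(x for x in test_group_list if x > 0)
            if not counts:
                return
            for group_id in range(1, max(counts) + 1):
                n = counts[group_id]
                if n == 1:
                    print(group_id, end=" ")
                elif n == 0:
                    print("(Test Group %d missing)" % group_id, end=" ")
                else:
                    print("(Test Group %d duplicated %d times)" % (group_id, n), end=" ")

        report_groups([1, 2, 2, 4, -3])
        # -> 1 (Test Group 2 duplicated 2 times) (Test Group 3 missing) 4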

  • Is it a good idea to create an STL iterator which is noncopyable?

    - by BillyONeal
    Most of the time, STL iterators are CopyConstructible, because several STL algorithms require this to improve performance, such as std::sort. However, I've been working on a pet project to wrap the FindXFile API (previously asked about), but the problem is it's impossible to implement a copyable iterator around this API. A find handle cannot be duplicated by any means -- DuplicateHandle specifically forbids passing handles to it. And if you just maintain a reference count to the find handle, then a single increment by any copy results in an increment of all copies -- clearly that is not what a copy constructed iterator is supposed to do. Since I can't satisfy the traditional copy constructible requirement for iterators here, is it even worth trying to create an "STL style" iterator? On one hand, creating some other enumeration method is not going to fall into normal STL conventions, but on the other, following STL conventions is going to confuse users of this iterator if they try to CopyConstruct it later. Which is the lesser of two evils?

  • SQL Server 2005 RIGHT OUTER JOIN not working

    - by CheeseConQueso
    I'm looking up access logs for specific courses. I need to show all the courses even if they don't exist in the logs table. Hence the outer join.... but after trying (presumably) all of the variations of LEFT OUTER, RIGHT OUTER, INNER and placement of the tables within the SQL code, I couldn't get my result. Here's what I am running: SELECT (a.first_name+' '+a.last_name) instructor, c.course_id, COUNT(l.access_date) course_logins, a.logins system_logins, MAX(l.access_date) last_course_login, a.last_login last_system_login FROM lsn_logs l RIGHT OUTER JOIN courses c ON l.course_id = c.course_id, accounts a WHERE l.object_id = 'LOGIN' AND c.course_type = 'COURSE' AND c.course_id NOT LIKE '%TEST%' AND a.account_rights > 2 AND l.user_id = a.username AND ((a.first_name+' '+a.last_name) = c.instructor) GROUP BY c.course_id, a.first_name, a.last_name, a.last_login, a.logins, c.instructor ORDER BY a.last_name, a.first_name, c.course_id, course_logins DESC Is it something in the WHERE clause that's preventing me from getting course_id's that don't exist in lsn_logs? Is it the way I'm joining the tables? Again, in short, I want all course_id's regardless of their existence in lsn_logs.

  • Ideas for a multiplatform encrypted Java mobile storage system

    - by Fernando Miguélez
    Objective I am currently designing the API for a multiplatform storage system that would offer the same interface and capabilities across the following supported mobile Java platforms: J2ME. Minimum configuration/profile CLDC 1.1/MIDP 2.0 with support for some necessary JSRs (JSR-75 for file storage). Android. No minimum platform version decided yet, but it could rather likely be API level 7. Blackberry. It would use the same base source as J2ME but take advantage of some advanced capabilities of the platform. No minimum configuration decided yet (maybe 4.6 because of the 64 KB limitation for RMS on 4.5). Basically the API would sport three kinds of stores: Files. These would allow standard directory/file manipulation (read/write through streams, create, mkdir, etc.). Preferences. This is a special store that handles properties accessed through keys (similar to a plain old Java properties file, but supporting some improvements such as different value data types, as SharedPreferences does on the Android platform). Local Message Queues. This store would offer basic message queue functionality. Considerations Inspired by JSR-75, all types of stores would be accessed in a uniform way by means of a URL following RFC 1738 conventions, but with custom-defined prefixes (i.e. "file://" for files, "prefs://" for preferences or "queue://" for message queues). The address would refer to a virtual location that would be mapped to a physical storage object by each mobile platform implementation. Only files would allow hierarchical storage (folders) and access to external storage memory cards (by means of a unit name, the same way as in JSR-75, but that would not change regardless of the underlying platform). The other types would only support flat storage. The system should also support a secure version of all basic types. The user would indicate it by prefixing "s" to the URL (i.e. "sfile://" instead of "file://"). The API would only require one PIN (introduced only once) to access any kind of secure object type. Implementation issues For the implementation of both plaintext and encrypted stores, I would use the functionality available on the underlying platforms: Files. These are available on all platforms (J2ME only with JSR-75, but it is mandatory for our needs). The abstract File to actual File mapping is straightforward except for addressing issues. RMS. This type of store, available on the J2ME (and Blackberry) platforms, is convenient for Preferences and maybe Message Queues (though depending on performance or size requirements these could be implemented by means of normal files). SharedPreferences. This type of storage, only available on Android, would match the Preferences needs. SQLite databases. These could be used for message queues on Android (and maybe Blackberry). When it comes to encryption, some requirements should be met: To ease the implementation, it will be carried out on a per-read/write-operation basis on streams (for files), RMS records, SharedPreferences key-value pairs, and SQLite database columns. Every underlying storage object should use the same encryption key. Handling of encrypted stores should be the same as for the unencrypted counterpart. The only difference (from the user's point of view) in accessing an encrypted store would be the addressing. The user PIN provides access to any secure storage object, but changing it would not require decrypting/re-encrypting all the encrypted data.
Cryptographic capabilities of the underlying platform should be used whenever possible, so we would use: J2ME: SATSA-CRYPTO if it is available (not mandatory) or the lightweight BouncyCastle cryptographic framework for J2ME. Blackberry: RIM Cryptographic API or BouncyCastle. Android: JCE with an integrated cryptographic provider (BouncyCastle?). Doubts Having reached this point I was struck by some doubts about which solution would be more convenient, taking into account the limitations of the platforms. These are some of my doubts: Encryption Algorithm for data. Would AES-128 be strong and fast enough? What alternatives for such a scenario would you suggest? Encryption Mode. I have read about the weakness of ECB encryption versus CBC, but in this case the first would have the advantage of random access to blocks, which is interesting for seek functionality on files. What type of encryption mode would you choose instead? Is stream encryption suitable for this case? Key generation. There could be one key generated for each storage object (file, RMS RecordStore, etc.) or just one used for all the objects of the same type. The first seems "safer", though it would require some extra space on the device. In your opinion, what would be the trade-offs of each? Key storage. For this case a standard JKS (or PKCS#12) KeyStore file could be suitable for storing encryption keys, but I could also define a smaller structure (encryption-transformation / key data / checksum) that could be attached to each store (i.e. using additional files with the same name and a special extension for plain files, or embedded inside other types of objects such as RMS Record Stores). What approach would you prefer? And when it comes to using a standard KeyStore with multiple-key generation (given this is your preference), would it be better to use a record store per storage object or just a global KeyStore keeping all keys (i.e. using the URL identifier of the abstract storage object as the alias)? Master key. The use of a master key seems obvious. This key should be protected by the user PIN (introduced only once) and would allow access to the rest of the encryption keys (they would be encrypted by means of this master key). Changing the PIN would only require re-encrypting this key and not all the encrypted data. Where would you keep it, taking into account that if it got lost all data would no longer be accessible? What further considerations should I take into account? Platform cryptography support. Do SATSA-CRYPTO-enabled J2ME phones really take advantage of some dedicated hardware acceleration (or another advantage I have not foreseen), and would this approach be preferred (whenever possible) over a plain BouncyCastle implementation? For the same reason, is the RIM Cryptographic API worth the license cost over BouncyCastle? Any comments, critiques, further considerations or different approaches are welcome.
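A rough, illustrative sketch of the master-key idea above, written in Python rather than the target J2ME/Android code and assuming the third-party cryptography package (all names are made up for illustration): each store gets its own data key, data keys are stored wrapped by a master key, and the master key is itself wrapped by a key derived from the user PIN, so changing the PIN only re-wraps the master key and leaves the stored data and data keys untouched.

    import base64, hashlib, os
    from cryptography.fernet import Fernet

    def key_from_pin(pin: str, salt: bytes) -> bytes:
        raw = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
        return base64.urlsafe_b64encode(raw)      # Fernet expects a base64-encoded 32-byte key

    salt = os.urandom(16)
    master_key = Fernet.generate_key()            # protects all data keys
    wrapped_master = Fernet(key_from_pin("1234", salt)).encrypt(master_key)

    data_key = Fernet.generate_key()              # one per storage object (or per store type)
    wrapped_data_key = Fernet(master_key).encrypt(data_key)
    ciphertext = Fernet(data_key).encrypt(b"secret preferences blob")

    # PIN change: unwrap and re-wrap only the master key; nothing else is re-encrypted
    master_key = Fernet(key_from_pin("1234", salt)).decrypt(wrapped_master)
    wrapped_master = Fernet(key_from_pin("9876", salt)).encrypt(master_key)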

  • SQL query performance optimization (TimesTen)

    - by Sergey Mikhanov
    Hi community, I need some help with TimesTen DB query optimization. I made some measures with Java profiler and found the code section that takes most of the time (this code section executes the SQL query). What is strange that this query becomes expensive only for some specific input data. Here’s the example. We have two tables that we are querying, one represents the objects we want to fetch (T_PROFILEGROUP), another represents the many-to-many link from some other table (T_PROFILECONTEXT_PROFILEGROUPS). We are not querying linked table. These are the queries that I executed with DB profiler running (they are the same except for the ID): Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; < 1169655247309537280 > < 1169655249792565248 > < 1464837997699399681 > 3 rows found. Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; < 1169655247309537280 > 1 row found. This is what I have in the profiler: 12:14:31.147 1 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272 12:14:31.147 2 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:47) cmdType:100, cmdNum:1146695. 12:14:31.147 3 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.147 4 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.148 5 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.148 6 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.228 7 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.228 8 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:35.243 9 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928 12:14:35.243 10 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:44) cmdType:100, cmdNum:1146697. 
12:14:35.243 11 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; 12:14:35.243 12 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; 12:14:35.243 13 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; 12:14:35.243 14 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; It’s clear that the first query took almost 100ms, while the second was executed instantly. It’s not about queries precompilation (the first one is precompiled too, as same queries happened earlier). We have DB indices for all columns used here: T_PROFILEGROUP.M_ID, T_PROFILECONTEXT_PROFILEGROUPS.M_ID_OID and T_PROFILECONTEXT_PROFILEGROUPS.M_ID_EID. My questions are: Why querying the same set of tables yields such a different performance for different parameters? Which indices are involved here? Is there any way to improve this simple query and/or the DB to make it faster? UPDATE: to give the feeling of size: Command> select count(*) from T_PROFILEGROUP; < 183840 > 1 row found. Command> select count(*) from T_PROFILECONTEXT_PROFILEGROUPS; < 2279104 > 1 row found.

  • Proper NSArray initialization for ivar data in a method

    - by Joost Schuur
    I'm new to Objective-C and iPhone development and have been using Apress' Beginning iPhone 3 Programming book as my main guide for a few weeks now. In a few cases as part of a viewDidLoad: method, ivars like a breadTypes NSArray are initialized like below, with an intermediate array defined and then ultimately set to the actual array like this: NSArray *breadArray = [[NSArray alloc] initWithObjects:@"White", @"Whole Wheat", @"Rye", @"Sourdough", @"Seven Grain", nil]; self.breadTypes = breadArray; [breadArray release]; Why is it done this way, instead of simply like this: self.breadTypes = [[NSArray alloc] initWithObjects:@"White", @"Whole Wheat", @"Rye", @"Sourdough", @"Seven Grain", nil]; Both seem to work when I compile and run it. Is the 2nd method above not doing proper memory management? I assume initWithObjects: returns an array with a retain count of 1 and I eventually release breadTypes again in the dealloc: method, so that wraps things up nicely. I'm guessing 'self.breadTypes = ...' copies the data to the new array, which is why the original array can be safely released, correct?

  • Getting started in Mac Development; No Mac.... What Do?!

    - by Andrew Bolster
    Hi folks, With all the RDF et al around King Jobs' newest glass beermat, I'm getting the feeling that it's a good time to add the 'iPhone/iPad developer' string to my bow. One problem (two if you count not being in a financial position to slap down $1000+ for a Mac): I am a PC head, specifically a Linux-head, and have no regular access to a Mac platform to develop on. I cannot see any way of getting my hands on the iSDK without being on a Mac (but I have been known to be wrong). How hopeless is my plight and where can I go from here?

  • Complex queries using Rails query language

    - by Daniel Johnson
    I have a query used for statistical purposes. It breaks down the number of users that have logged in a given number of times. User has_many installations and installation has a login_count.

        select total_login as 'logins', count(*) as `users`
        from (select u.user_id, sum(login_count) as total_login
              from user u inner join installation i on u.user_id = i.user_id
              group by u.user_id) g
        group by total_login;

        +--------+-------+
        | logins | users |
        +--------+-------+
        |      2 |     3 |
        |      6 |     7 |
        |     10 |     2 |
        |     19 |     1 |
        +--------+-------+

    Is there some elegant ActiveRecord-style find to obtain this same information? Ideally as a hash collection of logins and users: { 2=>3, 6=>7, ... I know I can use SQL directly but wanted to know how this could be solved in Rails 3.

  • Spring.NET. How actively is it being developed/supported?

    - by Bubba88
    I've made my choice for the development components on the .NET platform: Spring.NET for IoC and NHibernate for data access. But how safe is it? I've heard here on SO (don't remember which post exactly) that Spring.NET is on its way down, because, for example, the original Java version is developed by a large number of people, but (they say that) the sole person behind the .NET version is Mr. Mark Pollack. Is that so? And, if so, does it still make sense to count on Spring.NET for my production applications? If you have similar information about NHibernate, I'll appreciate it; though it seems that the latter is actively supported.

  • Compiled LINQ Queries with Built-in SQL functions

    - by Brandi
    I have a query that I am executing in C# that is taking way too much time: string Query = "SELECT COUNT(HISTORYID) FROM HISTORY WHERE YEAR(CREATEDATE) = YEAR(GETDATE()) "; Query += "AND MONTH(CREATEDATE) = MONTH(GETDATE()) AND DAY(CREATEDATE) = DAY(GETDATE()) AND USERID = '" + EmployeeID + "' "; Query += "AND TYPE = '5'"; I then use SqlCommand Command = new SqlCommand(Query, Connection) and SqlDataReader Reader = Command.ExecuteReader() to read in the data. This is taking over a minute to execute from C#, but is much quicker in SSMS. I see from google searching you can do something with CompiledQuery, but I'm confused whether I can still use the built in SQL functions YEAR, MONTH, DAY, and GETDATE. If anyone can show me an example of how to create and call a compiled query using the built in functions, I will be very grateful! Thanks in advance.

  • PredicateBuilder "And" Method not working

    - by mikemurf22
    I have downloaded the predicate builder and am having a difficult time getting it to work with the entity framework. Here is my code: v_OrderDetail is the entity var context = new OrdersEntities(); Expression<Func<v_OrderDetail,bool>> whereClause = w => true; var predicate = PredicateBuilder.True<v_OrderDetail>(); predicate.And(w => w.Status == "Work"); var results = context.v_OrderDetail.AsExpandable().Where(predicate); When I look at the results I get back every order. The And predicate doesn't seem to take. When I look at the predicate.parameters.count it only shows 1. I'm not sure, but I would expect it to show 2 after I add the second one. Any help is greatly appreciated.

  • set Image to Button

    - by Ivan
    Hello all, Could somebody help me a little bit with my issue below? When I call myFunction, the images that I want to set on the buttons appear simultaneously after 2 seconds, not one by one with a delay of 0.5 sec. More info: generatedNumbers is an array with four elements of NSNumber (4,1,3,2); the buttons are set in a UIView via IB and are tagged (1,2,3,4). -(IBAction) myFunction:(id) sender { int i, value; for (i = 0; i<[generatedNumbers count]; i++) { value = [[generatedNumbers objectAtIndex:i] intValue]; UIButton *button = (UIButton *)[self.view viewWithTag:i+1]; UIImage *img = [UIImage imageNamed:[NSString stringWithFormat:@"%d.png",value]]; [button setImage:img forState:UIControlStateNormal]; [img release]; usleep(500000); } }

  • Calling Linux utilities with options from within a Bash script.

    - by Kyle
    This is my first Bash script so forgive me if this question is trivial. I need to count the number of files within a specified directory $HOME/.junk. I thought this would be simple and assumed the following would work: numfiles= find $HOME/.junk -type f | wc -l echo "There are $numfiles files in the .junk directory." Typing find $HOME/.junk -type f | wc -l at the command line works exactly how I expected it to, simply returning the number of files. Why is this not working when it is entered within my script? Am I missing some special notation when it comes to passing options to the utilities? Thank you very much for your time and help.

  • What statistics can be maintained for a set of numerical data without iterating?

    - by Dan Tao
    Update Just for future reference, I'm going to list all of the statistics that I'm aware of that can be maintained in a rolling collection, recalculated as an O(1) operation on every addition/removal (this is really how I should've worded the question from the beginning): Obvious Count Sum Mean Max* Min* Median** Less Obvious Variance Standard Deviation Skewness Kurtosis Mode*** Weighted Average Weighted Moving Average**** OK, so to put it more accurately: these are not "all" of the statistics I'm aware of. They're just the ones that I can remember off the top of my head right now. *Can be recalculated in O(1) for additions only, or for additions and removals if the collection is sorted (but in this case, insertion is not O(1)). Removals potentially incur an O(n) recalculation for non-sorted collections. **Recalculated in O(1) for a sorted, indexed collection only. ***Requires a fairly complex data structure to recalculate in O(1). ****This can certainly be achieved in O(1) for additions and removals when the weights are assigned in a linearly descending fashion. In other scenarios, I'm not sure. Original Question Say I maintain a collection of numerical data -- let's say, just a bunch of numbers. For this data, there are loads of calculated values that might be of interest; one example would be the sum. To get the sum of all this data, I could... Option 1: Iterate through the collection, adding all the values: double sum = 0.0; for (int i = 0; i < values.Count; i++) sum += values[i]; Option 2: Maintain the sum, eliminating the need to ever iterate over the collection just to find the sum: void Add(double value) { values.Add(value); sum += value; } void Remove(double value) { values.Remove(value); sum -= value; } EDIT: To put this question in more relatable terms, let's compare the two options above to a (sort of) real-world situation: Suppose I start listing numbers out loud and ask you to keep them in your head. I start by saying, "11, 16, 13, 12." If you've just been remembering the numbers themselves and nothing more, and then I say, "What's the sum?", you'd have to think to yourself, "OK, what's 11 + 16 + 13 + 12?" before responding, "52." If, on the other hand, you had been keeping track of the sum yourself while I was listing the numbers (i.e., when I said, "11" you thought "11", when I said "16", you thought, "27," and so on), you could answer "52" right away. Then if I say, "OK, now forget the number 16," if you've been keeping track of the sum inside your head you can simply take 16 away from 52 and know that the new sum is 36, rather than taking 16 off the list and them summing up 11 + 13 + 12. So my question is, what other calculations, other than the obvious ones like sum and average, are like this? SECOND EDIT: As an arbitrary example of a statistic that (I'm almost certain) does require iteration -- and therefore cannot be maintained as simply as a sum or average -- consider if I asked you, "how many numbers in this collection are divisible by the min?" Let's say the numbers are 5, 15, 19, 20, 21, 25, and 30. The min of this set is 5, which divides into 5, 15, 20, 25, and 30 (but not 19 or 21), so the answer is 5. Now if I remove 5 from the collection and ask the same question, the answer is now 2, since only 15 and 30 are divisible by the new min of 15; but, as far as I can tell, you cannot know this without going through the collection again. 
So I think this gets to the heart of my question: if we can divide kinds of statistics into these categories, those that are maintainable (my own term, maybe there's a more official one somewhere) versus those that require iteration to compute any time a collection is changed, what are all the maintainable ones? What I am asking about is not strictly the same as an online algorithm (though I sincerely thank those of you who introduced me to that concept). An online algorithm can begin its work without having even seen all of the input data; the maintainable statistics I am seeking will certainly have seen all the data, they just don't need to reiterate through it over and over again whenever it changes.
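    To make the "maintainable" category concrete, here is a rough sketch (in Python, with illustrative names, not tied to any code in the question) of count, sum, mean and variance kept up to date in O(1) per addition or removal by maintaining the count, the running sum and the running sum of squares:

        class RollingStats:
            def __init__(self):
                self.n = 0
                self.total = 0.0
                self.total_sq = 0.0

            def add(self, x):
                self.n += 1
                self.total += x
                self.total_sq += x * x

            def remove(self, x):
                self.n -= 1
                self.total -= x
                self.total_sq -= x * x

            @property
            def mean(self):
                return self.total / self.n

            @property
            def variance(self):
                # population variance: E[x^2] - (E[x])^2
                return self.total_sq / self.n - self.mean ** 2

        s = RollingStats()
        for v in (11, 16, 13, 12):
            s.add(v)
        s.remove(16)      # mean is now (11 + 13 + 12) / 3 = 12.0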

  • DBNull in LINQ query causing problems

    - by nat
    Hi there, I am doing a query thus: int numberInInterval = (from dsStatistics.J2RespondentRow item in jStats.J2Respondent where item.EndTime > dtIntervalLower && item.EndTime <= dtIntervalUpper select item).Count(); There appear to be some DBNulls in the EndTime column... any way I can avoid these? I tried adding && item.EndTime != null to the where clause, and even != DBNull.Value. Do I have to do a second (first) query to grab all the rows that aren't null and then run the above one? I'm sure it's a super simple fix, but I'm still missing it. Thanks, nat
