Search Results

Search found 28896 results on 1156 pages for 'simple greeter'.


  • Encryption is hard: AES encryption to Hex

    - by Rob Cameron
    So, I've got an app at work that encrypts a string using ColdFusion. ColdFusion's built-in encryption helpers make it pretty simple: encrypt('string_to_encrypt','key','AES','HEX') What I'm trying to do is use Ruby to create the same encrypted string as this ColdFusion script is creating. Unfortunately encryption is the most confusing computer science subject known to man. I found a couple of helper methods that use the openssl library and give you a really simple encryption/decryption method. Here's the resulting string: "\370\354D\020\357A\227\377\261G\333\314\204\361\277\250" Which looks unicode-ish to me. I've tried several libraries to convert this to hex but they all say it contains invalid characters. Trying to unpack it results in this: string = "\370\354D\020\357A\227\377\261G\333\314\204\361\277\250" string.unpack('U') ArgumentError: malformed UTF-8 character from (irb):19:in `unpack' from (irb):19 At the end of the day it's supposed to look like this (the output of the ColdFusion encrypt method): F8E91A689565ED24541D2A0109F201EF Of course that's assuming that all the padding, initialization vectors, salts, cipher types and a million other possible differences all line up. Here's the simple script I'm using to encrypt/decrypt: def aes(m,k,t) (aes = OpenSSL::Cipher::Cipher.new('aes-256-cbc').send(m)).key = Digest::SHA256.digest(k) aes.update(t) << aes.final end def encrypt(key, text) aes(:encrypt, key, text) end def decrypt(key, text) aes(:decrypt, key, text) end Any help? Maybe just a simple option I can pass to OpenSSL::Cipher::Cipher that will tell it to hex-encode the final string?
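    A minimal sketch of one way to get hex output, assuming the key derivation and cipher mode stay exactly as in the script above: hex-encode the raw cipher bytes with String#unpack. Matching ColdFusion byte-for-byte would still require identical key handling, IV, and padding on both sides.

      require 'openssl'
      require 'digest'

      def encrypt_hex(key, text)
        aes = OpenSSL::Cipher.new('aes-256-cbc')
        aes.encrypt
        aes.key = Digest::SHA256.digest(key)
        # unpack('H*') turns the binary ciphertext into its hex representation.
        (aes.update(text) << aes.final).unpack('H*').first.upcase
      end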

    Read the article

  • Microsoft Expression Blend 3 - Show / Hide a popup

    - by imayam
    Hi, I'm using this for the very first time to do some quick prototyping (using SketchFlow). I have a simple dialog window that I want to show when a button is pressed and then hide when a button (in the dialog, like an "OK" button) is pressed. If someone could just point me in the direction of a simple tutorial on how to do this I'd be happy, or even if you have a simple example you can post here that would be great (I've been trying to google this forever!). I can tell you what I have tried (although obviously it doesn't work): Created a user control, called it "MyDialog". That user control is a simple box which is the bit of GUI I want to overlay when the user clicks a button. In that user control I gave it two states, "Show" and "Hide". The "Hide" state has the visibility of all the elements in this user control set to none and "Show" shows everything. Created a button in my main screen. I gave that button a behaviour, "ActivateStateAction". In the properties of that behaviour I set the TargetScreen to be "MyDialog" and the TargetState to be "Show". (I also set the target screen to be MyprojectName.MyProjectNameScreens.MyDialog; that doesn't work either.)

    Read the article

  • Error in WCF service - Silverlight client communication.

    - by David
    I created a WCF service and I planned to consume this in a Silverlight application. So I created the WCF service in the Website host project. The service is a simple WCF service that only returns a number - something like a Hello World WCF-SL. So after adding a service reference in the Silverlight client project to the Service URI, and after calling the service method asynchronously (by using the generated proxy), I get the following exception in the callback method: An error occurred while trying to make a request to URI 'http://localhost:4566/SLService.svc'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details. I only created a HelloWorld WCF service with nothing else but a simple method that returns a dumb number, and it's hosted locally on my machine. Must I have clientaccesspolicy.xml or crossdomain.xml? I access my service locally. Every time I create a new simple/dumb WCF-SL solution, I get this error. I use VS2010 and Silverlight 4. I cannot get a simple/dumb WCF-SL solution working locally. Is there something wrong with the configuration? On another machine in the same network, it does work properly, so I assume something is misconfigured. Any thoughts?
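    One thing worth checking, shown here as a deliberately permissive sketch for local testing only: Silverlight treats a different port on localhost as a different domain, so a clientaccesspolicy.xml placed at the root of the site that hosts the service (not the client) is often what's missing. This is the standard Silverlight policy file format; tighten the allowed domains before using it anywhere real.

      <?xml version="1.0" encoding="utf-8"?>
      <access-policy>
        <cross-domain-access>
          <policy>
            <!-- Allow SOAP calls from any origin; suitable for local testing only. -->
            <allow-from http-request-headers="SOAPAction">
              <domain uri="*"/>
            </allow-from>
            <grant-to>
              <resource path="/" include-subpaths="true"/>
            </grant-to>
          </policy>
        </cross-domain-access>
      </access-policy>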

    Read the article

  • Why does my REST request return garbage data?

    - by Alienfluid
    I am trying to use LWP::Simple to make a GET request to a REST service. Here's the simple code: use LWP::Simple; $uri = "http://api.stackoverflow.com/0.8/questions/tagged/php"; $jsonresponse= get $uri; print $jsonresponse; On my local machine, running Ubuntu 10.04 and Perl version 5.10.1: farhan@farhan-lnx:~$ perl --version This is perl, v5.10.1 (*) built for x86_64-linux-gnu-thread-multi I can get the correct response and have it printed on the screen. E.g.: farhan@farhan-lnx:~$ head -10 output.txt { "total": 1000, "page": 1, "pagesize": 30, "questions": [ { "tags": [ "php", "arrays", "coding-style" (... snipped ...) But on my host's machine, which I SSH into, I get garbage printed on the screen for the same exact code. I am assuming it has something to do with the encoding, but the REST service does not return the character set type in the response, so how do I force LWP::Simple to use the correct encoding? Any ideas what may be going on here? Here's the version of Perl on my host's machine: [dredd]$ perl --version This is perl, v5.8.8 built for x86_64-linux-gnu-thread-multi
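    A hedged guess at the cause, with a sketch: the API may be returning gzip-compressed output that the older LWP stack on the host isn't decoding. Dropping from LWP::Simple to LWP::UserAgent and printing decoded_content(), which honours Content-Encoding and the response charset (assuming the installed libwww-perl provides it), is one quick way to check.

      use strict;
      use warnings;
      use LWP::UserAgent;

      my $ua  = LWP::UserAgent->new;
      my $res = $ua->get('http://api.stackoverflow.com/0.8/questions/tagged/php');
      die $res->status_line unless $res->is_success;

      # decoded_content() decompresses gzip/deflate bodies and decodes the
      # charset, unlike the raw string returned by LWP::Simple's get().
      print $res->decoded_content;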

    Read the article

  • [Repost-ish] Impossibly slow queries, Tables indexed, How can I speed it up?

    - by colorfulgrayscale
    Hi guys, I posted a little earlier on here at http://stackoverflow.com/questions/2656837/query-results-taking-too-long-on-200k-database-speed-up-tips asking about slow-executing SQL queries. I was told to index the columns; I did, and it's still slow (slow as in, I never see the results; both MySQL and SQLite freeze up on the query). Help would be greatly appreciated. Here is the SQL: SELECT equipment.`unitID` AS `equipment_unitID`, equipment.`fleetCode` AS `equipment_fleetCode`, equipment.type AS equipment_type, equipment.tiremap AS equipment_tiremap, tiremap.`TireID` AS `tiremap_TireID`, tiremap.`WorkMap` AS `tiremap_WorkMap`, tiremap.`Position` AS `tiremap_Position`, tiremap.`DepthMap` AS `tiremap_DepthMap`, tiremap.timestamp AS tiremap_timestamp, workreference.`aMap` AS `workreference_aMap`, workreference.`bMap` AS `workreference_bMap`, tirework.`RO` AS `tirework_RO`, tirework.location AS tirework_location, tirework.mileage AS tirework_mileage, tirework.`mechanicCode` AS `tirework_mechanicCode`, tirework.`partNumber` AS `tirework_partNumber`, tirework.`historyID` AS `tirework_historyID`, tirework.workmap AS tirework_workmap, tirework.timestamp AS tirework_timestamp FROM equipment, tiremap, workreference, tirework WHERE equipment.tiremap = tiremap.`TireID` AND tiremap.`WorkMap` = workreference.`aMap` AND workreference.`bMap` = tirework.workmap LIMIT 5 and here is the EXPLAIN for it: id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE equipment ALL tiremap 14079 1 SIMPLE tiremap ref PRIMARY,WorkMap,TireID,WorkMap_2 PRIMARY 52 tire.equipment.tiremap 3 1 SIMPLE workreference ref aMap,bMap aMap 52 tire.tiremap.WorkMap 1 1 SIMPLE tirework eq_ref NewIndex1 NewIndex1 52 tire.workreference.bMap 1
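    A hedged diagnostic sketch rather than a guaranteed fix: the EXPLAIN shows a full scan of equipment (type ALL, ~14k rows) even though an index on tiremap is listed as a possible key, which often means the optimizer can't use it because the two sides of the join differ in type or collation. Worth checking before anything else:

      -- Compare the definitions of the join columns on both sides.
      SHOW CREATE TABLE equipment;
      SHOW CREATE TABLE tiremap;

      -- If equipment.tiremap and tiremap.TireID match but the former isn't
      -- indexed, add the index (index names here are illustrative).
      ALTER TABLE equipment ADD INDEX idx_equipment_tiremap (tiremap);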

    Read the article

  • Why won't this DOM element disappear?

    - by George Edison
    I have a page that uses jQuery with a small glitch. I managed to get this down to a simple example that demonstrates the problem: <html> <head> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript"> function hideIt() { $('#hideme').fadeOut('slow', function() { $(this).remove(); } ); } </script> </head> <body> <div id='#hideme'>Hide me!</div> <button onclick='hideIt();'>Hide</button> </body> </html> As you would expect, the problem is simple: the caption doesn't disappear. What simple thing did I overlook? (Or if it's not a simple thing, what complicated thing did I miss?)
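    For anyone reading along, the likely culprit, shown as a corrected sketch: the div's id attribute includes the hash character (id='#hideme'), so the selector $('#hideme') looks for an element whose id is "hideme" and never matches. The hash belongs in the selector, not in the attribute.

      <!-- Corrected markup: drop the '#' from the id attribute. -->
      <div id='hideme'>Hide me!</div>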

    Read the article

  • NSPredicate as a constraint solver?

    - by Felixyz
    I'm working on a project which includes some slightly more complex dynamic layout of interface elements than what I'm used to. I always feel stupid writing complex code that checks if so-and-so is close to such-and-such and in that case move it x% in some direction, etc. That's just not how programming should be done. Programming should be as declarative as possible! Precisely because what I'm going to do is fairly simple, I thought it would be a good opportunity to try something new, and I thought of using NSPredicate as a simple constraints solver. I've only used NSPredicate for very simple tasks so far, but I know that it is capable of much more. Are there any ideas, experiences, examples, warnings, insights that could be useful here? I'll give a very simple example so there will be something concrete to answer. How could I use NSPredicate to solve the following constraints: viewB.xmid = (viewB.leftEdge + viewB.width) / 2 viewB.xmid = max(300, viewA.rightEdge + 20 + viewB.width/2) ("viewB should be horizontally centered on coordinate 300, unless its left edge gets within 20 pixels of viewA's right edge, in which case viewB's left edge should stay fixed at 20 pixels to the right of viewA's right edge and viewB's horizontal center gets pushed to the right.") viewA.rightEdge and viewB.width can vary, and those are the 'input variables'. EDIT: Any solution would probably have to use the NSExpression method -(id)expressionValueWithObject:(id)object context:(NSMutableDictionary *)context. This answer is relevant.

    Read the article

  • Alternate widgets and logic for ManyToManyField with Django forms

    - by Jaearess
    In my Django project, I have a simple ticket system. When creating a ticket, certain users have the ability to assign the ticket to other users, and to email the ticket to other users as well (this is used as an FYI for those users, so they're aware of the ticket, even though it's not assigned to them.) At the moment, the form for adding a ticket is simply the default Django form, with the "assigned_to" and "email_to" fields being ManyToManyFields, and therefore displayed as MultipleSelect widgets, each with a list of all users. Due to the relatively large number of users, and the general awkwardness of the MultipleSelect widget, an alternate layout is now required. The desired layout is a pair of simple Select widgets side-by-side. The first has the option of "Assign to" or "Email to" and the second is a list of the users. Essentially, like this: [Assign to] [John Doe] [Email to] [Jane Roe] [Jack Smith], etc. Of course, since an arbitrary number of users can be assigned or emailed a ticket, there's a simple button that runs some Javascript to add another set of widgets, to allow the user to assign and email as many people as they need to. So far all of that is fairly simple and straightforward. However, the problem I have is using this widget setup/logic with Django forms. Instead of lists of users to assign to and email, we're getting back pairs of information: one a user, and the other which list that user should be placed in. What I'm looking for, but have yet to find, is a way to offload the translation between how the user uses the form and how Django understands the model onto the form itself, so I don't have to manually do the processing of the data before passing it to the form in each place this form is used. Additionally, there's a review screen with the option to go back and change the form before submitting it, so a way to have the form translate both to and from this format would be extremely helpful.
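    One hedged way to keep that translation in the form layer (the names below are hypothetical, not from the poster's project): model each row of widgets as a small (action, user) form, wrap it in a formset, and let a helper split the rows back into the two many-to-many lists when the ticket is saved. Going the other way, initial data for the formset can be built from the ticket's existing assigned_to/email_to lists for the review screen.

      from django import forms
      from django.contrib.auth.models import User
      from django.forms.formsets import formset_factory

      ACTION_CHOICES = (("assign", "Assign to"), ("email", "Email to"))

      class RecipientRowForm(forms.Form):
          # One row = one pair of Select widgets; the "add another" button just
          # appends one more form to the formset on the client side.
          action = forms.ChoiceField(choices=ACTION_CHOICES)
          user = forms.ModelChoiceField(queryset=User.objects.all())

      RecipientFormSet = formset_factory(RecipientRowForm, extra=1)

      def split_recipients(formset):
          """Translate (action, user) rows into the two M2M lists."""
          assigned, emailed = [], []
          for row in formset.cleaned_data:
              if not row:
                  continue
              (assigned if row["action"] == "assign" else emailed).append(row["user"])
          return assigned, emailed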

    Read the article

  • What do you use to play sound in iPhone games?

    - by zoul
    Hello! I have a performance-intensive iPhone game I would like to add sounds to. There seem to be about three main choices: (1) AVAudioPlayer, (2) Audio Queues and (3) OpenAL. I'd hate to write pages of low-level code just to play a sample, so I would like to use AVAudioPlayer. The problem is that it seems to kill the performance – I've done some simple measuring using CFAbsoluteTimeGetCurrent and the play message seems to take somewhere from 9 to 30 ms to finish. That's quite miserable, considering that 25 ms == 40 fps. Of course there is the prepareToPlay method that should speed things up. That's why I wrote a simple class that keeps several AVAudioPlayers at its disposal, prepares them beforehand and then plays the sample using the prepared player. No cigar, it still takes the ~20 ms I mentioned above. Such performance is unusable for games, so what do you use to play sounds with decent performance on iPhone? Am I doing something wrong with the AVAudioPlayer? Do you play sounds with Audio Queues? (I've written something akin to AVAudioPlayer before 2.2 came out and I would love to spare myself that experience.) Do you use OpenAL? If yes, is there a simple way to play sounds with OpenAL, or do you have to write pages of code? Update: Yes, playing sounds with OpenAL is fairly simple.

    Read the article

  • PHP: tips/resources/patterns for learning to implement a basic ORM

    - by BoltClock
    I've seen various MVC frameworks as well as standalone ORM frameworks for PHP, as well as other ORM questions here; however, most of the questions ask for existing frameworks to get started with, which is not what I'm looking for. (I have also read this SO question but I'm not sure what to make of it, and the answers are vague.) Instead, I figured I'd learn best by getting my hands dirty and actually writing my own ORM, even a simple one. Except I don't really know how to get started, especially since the code I see in other ORMs is so complicated. With my PHP 5.2.x (this is important) MVC framework I have a basic custom database abstraction layer, that has: Very simple methods like connect($host, $user, $pass, $base), query($sql, $binds), etc Subclasses for each DBMS that it supports A class (and respective subclasses) to represent SQL result sets But does not have: Active Record functionality, which I assume is an ORM thing (correct me if I'm wrong) I've read up a little about ORM, and from my understanding they provide a means to further abstract data models from the database itself by representing data as nothing more than PHP-based classes/objects; again, correct me if I am wrong or have missed out in any way. Still, I'd like some simple tips from anyone else who's dabbled more or less with ORM frameworks. Is there anything else I need to take note of, simple example code for me to refer to, or resources I can read? Thanks a lot in advance!

    Read the article

  • Creating ODT and PDF files as end result

    - by Bill Zimmerman
    Hello, I've been working on an app to create various document formats for a while now, and I've had limited success. Ideally, I'd like to dynamically create a fairly simple ODT/PDF/DOC file. I've been focusing my efforts on ODT, because it is editable, and open enough that there are several tools which will convert it to any of the other formats I need. The problem is that the ODT XML files are NOT simple, and there aren't any good-quality API's I could find (especially in python). So far, I've had the most success creating a template ODT file, and then manipulating the DOM in python as needed. This is ok generally, but is quickly becoming inadequate and requires too much tweaking every single time I need to alter one of the templates. The requirements are: 1) Produce a simple document that will have lists, paragraphs, and the ability to draw simple graphics on the page (boxes, circles, etc...) 2) The ability to specify page size, and the different formats should generally print the exact same output when sent to a printer My questions: 1) Are there any other ways I can produce ODT/PDF/DOC files? 2) Would LaTeX be acceptable? I've never really used it, does anyone have experience converting LaTeX files into other formats? 3) Would it be possible to use HTML? There are a lot of converters online. Technically you can specify dimensions in mm/cm, etc..., but I am worried that the printed output will differ between browsers/converters.... Any other ideas?

    Read the article

  • Java plugin framework choice

    - by Marcus
    We're trying to determine how to implement a simple plugin framework for a service we are implementing that allows different types of calculators to be "plugged-in". After reading a number of posts about Java plugin frameworks, it seems like the most common options are: OSGI "Rolling your own" plugin framework The Java Plugin Framework (JPF) The Java Simple Plugin Framework (JSPF) OSGI seems to be more than we need. "Rolling your own" is ok but it would be nice to reuse a common library. So we're down to the JPF and JSPF. JPF seems to not be in active development right now. JSPF seems very simple and really all we need. However I haven't heard much about it. I've only seen one post on StackOverflow about it. Does anyone else have any experience with JSPF? Or any other comments on this design choice? Update: There isn't necessarily a correct answer to this.. however we're going to go with Pavol's idea as we need just a really, really simple solution. Thanks EoH for the nice guide.
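    For the "rolling your own" option, a minimal sketch using java.util.ServiceLoader (built into JDK 6+) is worth weighing against JPF/JSPF before adding a dependency; the Calculator interface and class names below are hypothetical, not from any of those frameworks.

      import java.util.ServiceLoader;

      // Hypothetical plugin contract; each plugin jar lists its implementation
      // class in META-INF/services/<fully-qualified-interface-name>.
      interface Calculator {
          double calculate(double input);
      }

      public class PluginHost {
          public static void main(String[] args) {
              // Discovers every Calculator implementation on the classpath.
              ServiceLoader<Calculator> plugins = ServiceLoader.load(Calculator.class);
              for (Calculator plugin : plugins) {
                  System.out.println(plugin.getClass().getName() + " -> " + plugin.calculate(42.0));
              }
          }
      }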

    Read the article

  • How to fix: unrecognized selector sent to instance

    - by Matt
    I am having a problem that may be simple to fix, but is not simple for me to debug. I simply have a button inside a view in IB; the file's owner is set to the view controller class I am using, and when making the connections everything seems fine, i.e. the connector is finding the method I am trying to call, etc. However, I am receiving this error: Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[UIApplication getStarted:]: unrecognized selector sent to instance 0x3d19130' My code is as follows: RootViewController.h @interface RootViewController : UIViewController { IBOutlet UIButton* getStartedButton; } @property (nonatomic, retain) UIButton* getStartedButton; - (IBAction) getStarted: (id)sender; @end RootViewController.m #import "RootViewController.h" #import "SimpleDrillDownAppDelegate.h" @implementation RootViewController @synthesize getStartedButton; - (void)viewDidLoad { [super viewDidLoad]; } - (IBAction) getStarted: (id)sender { NSLog(@"Button Tapped!"); //[self.view removeFromSuperview]; } - (void)dealloc { [getStartedButton release]; [super dealloc]; } @end Seems simple enough... any thoughts?

    Read the article

  • Removing "Using temporary; Using filesort" from this MySQL select+join+group by query

    - by claytontstanley
    I have the following query: select t.Chunk as LeftChunk, t.ChunkHash as LeftChunkHash, q.Chunk as RightChunk, q.ChunkHash as RightChunkHash, count(t.ChunkHash) as ChunkCount from chunksubset as t join chunksubset as q on t.ID = q.ID group by LeftChunkHash, RightChunkHash And the following explain table: id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE subsets ref PRIMARY,IDIndex,SubsetIndex SubsetIndex 767 const 522014 "Using where; Using temporary; Using filesort" 1 SIMPLE subsets eq_ref PRIMARY,IDIndex,SubsetIndex PRIMARY 771 sotero.subsets.Id,const 1 "Using where; Using index" 1 SIMPLE c ref IDIndex IDIndex 4 sotero.subsets.Id 12 "Using where" 1 SIMPLE c ref IDIndex IDIndex 4 sotero.subsets.Id 12 note the "using temporary; using filesort". When this query is run, I quickly run out of RAM (presumably b/c of the temp table), and then the HDD kicks in, and the query slows to a halt. I thought it might be an index issue, so I started adding a few that sort of made sense: Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment chunks 0 PRIMARY 1 ChunkId A 17796190 NULL NULL BTREE chunks 1 ChunkHashIndex 1 ChunkHash A 243783 NULL NULL BTREE chunks 1 IDIndex 1 Id A 1483015 NULL NULL BTREE chunks 1 ChunkIndex 1 Chunk A 243783 NULL NULL BTREE chunks 1 ChunkTypeIndex 1 ChunkType A 2 NULL NULL BTREE chunks 1 chunkHashByChunkIDIndex 1 ChunkHash A 243783 NULL NULL BTREE chunks 1 chunkHashByChunkIDIndex 2 ChunkId A 17796190 NULL NULL BTREE chunks 1 chunkHashByChunkTypeIndex 1 ChunkHash A 243783 NULL NULL BTREE chunks 1 chunkHashByChunkTypeIndex 2 ChunkType A 261708 NULL NULL BTREE chunks 1 chunkHashByIDIndex 1 ChunkHash A 243783 NULL NULL BTREE chunks 1 chunkHashByIDIndex 2 Id A 17796190 NULL NULL BTREE But still using the temporary table. The db engine is MyISAM. How can I get rid of the using temporary; using filesort in this query? Just changing to InnoDB w/o explaining the underlying cause is not a particularly satisfying answer. Besides, if the solution is to just add the proper index, then that's much easier than migrating to another db engine.

    Read the article

  • How to insert several thousand columns into sqlite3?

    - by user291071
    Similar to my last question, but I ran into a problem. Let's say I have a simple dictionary like the one below, but it's big. When I try inserting a big dictionary using the methods below, I get an OperationalError from c.execute(schema) for having too many columns, so what should my alternate method be to populate an SQL database's columns? Using the ALTER TABLE command and adding each one individually? import sqlite3 con = sqlite3.connect('simple.db') c = con.cursor() dic = { 'x1':{'y1':1.0,'y2':0.0}, 'x2':{'y1':0.0,'y2':2.0,'joe bla':1.5}, 'x3':{'y2':2.0,'y3 45 etc':1.5} } # 1. Find the unique column names. columns = set() for _, cols in dic.items(): for key, _ in cols.items(): columns.add(key) # 2. Create the schema. col_defs = [ # Start with the column for our key name '"row_name" VARCHAR(2) NOT NULL PRIMARY KEY' ] for column in columns: col_defs.append('"%s" REAL NULL' % column) schema = "CREATE TABLE simple (%s);" % ",".join(col_defs) c.execute(schema) # 3. Loop through each row for row_name, cols in dic.items(): # Compile the data we have for this row. col_names = cols.keys() col_values = [str(val) for val in cols.values()] # Insert it. sql = 'INSERT INTO simple ("row_name", "%s") VALUES ("%s", "%s");' % ( '","'.join(col_names), row_name, '","'.join(col_values) )
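    A hedged alternative sketch that sidesteps the column limit entirely: store one (row_name, column, value) triple per cell instead of one physical column per key, so the schema stays fixed no matter how many keys the dictionary grows.

      import sqlite3

      con = sqlite3.connect('simple.db')
      c = con.cursor()
      # Fixed three-column schema: one row per (row_name, column, value) cell.
      c.execute('CREATE TABLE IF NOT EXISTS simple_kv '
                '(row_name TEXT, col_name TEXT, value REAL)')

      dic = {
          'x1': {'y1': 1.0, 'y2': 0.0},
          'x2': {'y1': 0.0, 'y2': 2.0, 'joe bla': 1.5},
          'x3': {'y2': 2.0, 'y3 45 etc': 1.5},
      }

      rows = [(row, col, val)
              for row, cols in dic.items()
              for col, val in cols.items()]
      c.executemany('INSERT INTO simple_kv VALUES (?, ?, ?)', rows)
      con.commit()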

    Read the article

  • MySql product\tag query optimisation - please help!

    - by Nige
    Hi there, I have an SQL query I am struggling to optimise. It basically is used to pull back products for a shopping cart. The products each have tags attached using a many-to-many table, product_tag, and I also pull back a store name from a separate store table. I'm using GROUP_CONCAT to get a list of tags for the display (this is why I have the strange GROUP BY/ORDER BY clauses at the bottom) and I need to order by dateadded, showing the latest scheduled product first. Here is the query.... SELECT products.*, stores.name, GROUP_CONCAT(tags.taglabel ORDER BY tags.id ASC SEPARATOR " ") taglist FROM (products) JOIN product_tag ON products.id=product_tag.productid JOIN tags ON tags.id=product_tag.tagid JOIN stores ON products.cid=stores.siteid WHERE dateadded < '2010-05-28 07:55:41' GROUP BY products.id ASC ORDER BY products.dateadded DESC LIMIT 2 Unfortunately even with a small set of data (3 tags and about 12 products) the query is taking 00.0034 seconds to run. Eventually I want to have about 2000 products and 50 tags in this system (I'm guessing this will be very slooooow). Here is the ExplainSql... id|select_type|table|type|possible_keys|key|key_len|ref|rows|Extra 1|SIMPLE|tags|ALL|PRIMARY|NULL|NULL|NULL|4|Using temporary; Using filesort 1|SIMPLE|product_tag|ref|tagid,productid|tagid|4|cs_final.tags.id|2| 1|SIMPLE|products|eq_ref|PRIMARY,cid|PRIMARY|4|cs_final.product_tag.productid|1|Using where 1|SIMPLE|stores|ALL|siteid|NULL|NULL|NULL|7|Using where; Using join buffer Can anyone help?

    Read the article

  • Inexplicably slow query in MySQL

    - by Brandon M.
    Given this result-set: mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c -> JOIN slip s ON s.cust_id = c.cust_id -> JOIN line l ON l.slip_id = s.slip_id -> JOIN vendor v ON v.vend_id = l.vend_id WHERE v.vend_name = 'blahblah' -> GROUP BY c.cust_name -> HAVING SUM(l.line_subtotal) > 49999 -> ORDER BY c.cust_name; +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+ | 1 | SIMPLE | v | ref | PRIMARY,idx_vend_name | idx_vend_name | 12 | const | 1 | Using where; Using temporary; Using filesort | | 1 | SIMPLE | l | ref | idx_vend_id | idx_vend_id | 4 | csv_import.v.vend_id | 446 | | | 1 | SIMPLE | s | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY | 4 | csv_import.l.slip_id | 1 | | | 1 | SIMPLE | c | eq_ref | PRIMARY,cIndex | PRIMARY | 4 | csv_import.s.cust_id | 1 | | +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+ 4 rows in set (0.04 sec) I'm a bit baffled as to why the query referenced by this EXPLAIN statement is still taking about a minute to execute. Isn't it true that this query only has to search through 449 rows? Anyone have any idea as to what could be slowing it down so much?

    Read the article

  • Making firefox refresh images faster

    - by Earlz
    I have a thing I'm doing where I need a webpage to stream a series of images from the local client computer. I have a very simple run here: http://jsbin.com/idowi/34 The code is extremely simple: setTimeout ( "refreshImage()", 100 ); function refreshImage(){ var date = new Date() var ticks = date.getTime() $('#image').attr('src','http://127.0.0.1:2723/signature?'+ticks.toString()); setTimeout ("refreshImage()", 100 ); } Basically I have a signature pad being used on the client machine. We want the signature to show up in the web page and for them to see themselves signing it within the web page (the pad does not have an LCD to show it to them right there). So I set up a simple local HTTP server which grabs an image of what the current state of the signature pad looks like and it gets sent to the browser. This has no problems in any browser (tested in IE7, 8, and Chrome) except Firefox, where it is extremely laggy and jumpy and doesn't keep up with the 10 FPS rate. Does anyone have any ideas on how to fix this? I've tried creating very simple double buffering in JavaScript but that just made things worse. Also, for a bit more information, it seems that Firefox is executing the JavaScript at the correct frame rate, as the requests are arriving at the server at a constant speed. But the images are only refreshed inconsistently, ranging from 5 times per second all the way down to 0 times per second (taking 2 seconds to do a refresh). Also I have tried using different image formats, all with the same results. The formats I've tried include bitmaps, PNGs, and GIFs (GIFs caused a minor problem in Chrome with flicker though). Could it be possible that Firefox is somehow caching my images, causing a slight lag? I send these headers though: Pragma-directive: no-cache Cache-directive: no-cache Cache-control: no-cache Pragma: no-cache Expires: 0
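    A hedged sketch of one way to smooth this out: preload each frame into an off-screen Image and only swap the visible src once the download has finished, scheduling the next request from the onload handler so a slow response can't pile up behind the timer. The endpoint URL is the one from the question.

      function refreshImage() {
        var buffer = new Image();
        buffer.onload = function () {
          // Swap in the fully loaded frame, then ask for the next one.
          document.getElementById('image').src = buffer.src;
          setTimeout(refreshImage, 100);
        };
        buffer.onerror = function () { setTimeout(refreshImage, 100); };
        buffer.src = 'http://127.0.0.1:2723/signature?' + new Date().getTime();
      }
      setTimeout(refreshImage, 100);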

    Read the article

  • Using SimpleDB (with SimpleSavant) with POCO / existing entities, not attributes on my classes

    - by alex
    I'm trying to use Simple Savant within my application to use SimpleDB. I currently have (for example): public class Person { public Guid Id { get; set; } public string Name { get; set; } public string Description { get; set; } public DateTime DateOfBirth { get; set; } } To use this with Simple Savant, I'd have to put attributes above the class declaration and properties - [DomainName("Person")] above the class, and [ItemName] above the Id property. I have all my entities in a separate assembly. I also have my data access classes in a separate assembly, and a class factory selects, based on config, the IRepository (in this case, IRepository). I want to be able to use my existing simple class - without having attributes on the properties etc. - in case I switch from SimpleDB to something else; then I only need to create a different implementation of IRepository. Should I create a "DTO" type class to map the two together? Is there a better way?
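    A hedged sketch of the DTO approach (PersonItem and its mapping methods are made-up names for illustration; DomainName and ItemName are the attributes Simple Savant already expects): keep the POCO untouched, give the SimpleDB-backed repository its own attributed class, and map between the two inside that repository so nothing outside it ever sees the attributes.

      [DomainName("Person")]
      internal class PersonItem
      {
          [ItemName]
          public Guid Id { get; set; }
          public string Name { get; set; }
          public string Description { get; set; }
          public DateTime DateOfBirth { get; set; }

          // Mapping lives next to the persistence class, so the rest of the
          // application only works with the plain Person POCO.
          public static PersonItem FromPerson(Person person)
          {
              return new PersonItem
              {
                  Id = person.Id,
                  Name = person.Name,
                  Description = person.Description,
                  DateOfBirth = person.DateOfBirth
              };
          }

          public Person ToPerson()
          {
              return new Person
              {
                  Id = Id,
                  Name = Name,
                  Description = Description,
                  DateOfBirth = DateOfBirth
              };
          }
      }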

    Read the article

  • Voice Recognition Google API

    - by user2966744
    Thanks for reading. I'm creating a simple web-based drawing app that uses speech recognition. I have created a simple page; the project is on GitHub here: https://github.com/a5hton/speechdraw It has a 16x16 pixel grid. I would like to be able to draw on this grid by using simple words. For example, if you say "right", the pixel to the right will be colored black. If you say "down", the pixel below the last one will be colored black. You can say up, down, left or right and the corresponding pixels will be colored. Saying "erase" will switch to erase mode, colouring the pixels back to their original color. Saying "lift" will lift the pen off the page. Saying "draw" will enable the draw mode. Could you please help me work out how to make this happen? Please see the simple page to get an understanding. Thank you! Cheers, Michael
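    A hedged starting point (Chrome-only at the time of writing): the webkitSpeechRecognition API can feed recognized words into the grid code. The movePixel and setMode functions below are hypothetical hooks into the existing page, not part of any library.

      // Listen continuously and route each recognized word to the drawing logic.
      var recognition = new webkitSpeechRecognition();
      recognition.continuous = true;
      recognition.interimResults = false;

      recognition.onresult = function (event) {
        var last = event.results[event.results.length - 1];
        var word = last[0].transcript.trim().toLowerCase();

        if (word === 'up' || word === 'down' || word === 'left' || word === 'right') {
          movePixel(word);   // hypothetical: move the cursor and paint the pixel
        } else if (word === 'draw' || word === 'erase' || word === 'lift') {
          setMode(word);     // hypothetical: switch pen mode
        }
      };

      recognition.start();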

    Read the article

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
    In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking.  The results were exactly what I would have hoped, that the ConcurrentQueue was faster with multi-threading for most all situations.  In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified. So I set out to see what the improvements would be for the ConcurrentDictionary, would it have the same performance benefits as the ConcurrentQueue did?  Well, after running some tests and multiple tweaks and tunes, I have good and bad news. But first, let's look at the tests.  Obviously there's many things we can do with a dictionary.  One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache.  So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor.  This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit").  However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss").  So here's the Accessor that will run the tests: 1: internal class Accessor 2: { 3: public int Hits { get; set; } 4: public int Misses { get; set; } 5: public Func<int, string> GetDelegate { get; set; } 6: public Action<int, string> AddDelegate { get; set; } 7: public int Iterations { get; set; } 8: public int MaxRange { get; set; } 9: public int Seed { get; set; } 10:  11: public void Access() 12: { 13: var randomGenerator = new Random(Seed); 14:  15: for (int i=0; i<Iterations; i++) 16: { 17: // give a wide spread so will have some duplicates and some unique 18: var target = randomGenerator.Next(1, MaxRange); 19:  20: // attempt to grab the item from the cache 21: var result = GetDelegate(target); 22:  23: // if the item doesn't exist, add it 24: if(result == null) 25: { 26: AddDelegate(target, target.ToString()); 27: Misses++; 28: } 29: else 30: { 31: Hits++; 32: } 33: } 34: } 35: } Note that so I could test different implementations, I defined a GetDelegate and AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test: Dictionary with mutex - Just your standard generic Dictionary with a simple lock construct on an internal object. Dictionary with ReaderWriterLockSlim - Same Dictionary, but now using a lock designed to let multiple readers access simultaneously and then locked when a writer needs access. ConcurrentDictionary - The new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely. So the approach to each of these is also fairly straight-forward.  Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock: 1: var addDelegate = (key,val) => 2: { 3: lock (_mutex) 4: { 5: _dictionary[key] = val; 6: } 7: }; 8: var getDelegate = (key) => 9: { 10: lock (_mutex) 11: { 12: string val; 13: return _dictionary.TryGetValue(key, out val) ? val : null; 14: } 15: }; Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary. 
Now, for the Dictionary with ReadWriteLockSlim it's a little more complex: 1: var addDelegate = (key,val) => 2: { 3: _readerWriterLock.EnterWriteLock(); 4: _dictionary[key] = val; 5: _readerWriterLock.ExitWriteLock(); 6: }; 7: var getDelegate = (key) => 8: { 9: string val; 10: _readerWriterLock.EnterReadLock(); 11: if(!_dictionary.TryGetValue(key, out val)) 12: { 13: val = null; 14: } 15: _readerWriterLock.ExitReadLock(); 16: return val; 17: }; And finally, the ConcurrentDictionary, which since it does all it's own concurrency control, is remarkably elegant and simple: 1: var addDelegate = (key,val) => 2: { 3: _concurrentDictionary[key] = val; 4: }; 5: var getDelegate = (key) => 6: { 7: string s; 8: return _concurrentDictionary.TryGetValue(key, out s) ? s : null; 9: };                    Then, I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to Access the cache (as specified in Accessor.Access() above) and then let them fly and see how long it took them all to complete.  Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances.  All times are in milliseconds. 1: Dictionary with Mutex Locking 2: --------------------------------------------------- 3: Accessors Mostly Misses Mostly Hits 4: 1 7916 3285 5: 10 8293 3481 6: 100 8799 3532 7: 1000 8815 3584 8:  9:  10: Dictionary with ReaderWriterLockSlim Locking 11: --------------------------------------------------- 12: Accessors Mostly Misses Mostly Hits 13: 1 8445 3624 14: 10 11002 4119 15: 100 11076 3992 16: 1000 14794 4861 17:  18:  19: Concurrent Dictionary 20: --------------------------------------------------- 21: Accessors Mostly Misses Mostly Hits 22: 1 17443 3726 23: 10 14181 1897 24: 100 15141 1994 25: 1000 17209 2128 The first test I did across the board is the Mostly Misses category.  The mostly misses (more adds because data requested was not in the dictionary) shows an interesting trend.  In both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution.  But this got me thinking, and a little research seemed to confirm it, maybe the ConcurrentDictionary is more optimized to concurrent "gets" than "adds".  So since the ratio of misses to hits were 2 to 1, I decided to reverse that and see the results. So I tweaked the data so that the number of keys were much smaller than the number of iterations to give me about a 2 to 1 ration of hits to misses (twice as likely to already find the item in the cache than to need to add it).  And yes, indeed here we see that the ConcurrentDictionary is indeed faster than the standard Dictionary here.  I have a strong feeling that as the ration of hits-to-misses gets higher and higher these number gets even better as well.  This makes sense since the ConcurrentDictionary is read-optimized. Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement, I think this is largely because on the 10,000,000 hit test it quickly ramped up to the correct capacity and concurrency and thus the impact was limited to the first few milliseconds of the run. So what does this tell us?  Well, as in all things, ConcurrentDictionary is not a panacea.  It won't solve all your woes and it shouldn't be the only Dictionary you ever use.  So when should we use each? Use System.Collections.Generic.Dictionary when: You need a single-threaded Dictionary (no locking needed). 
You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed). You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed). And use System.Collections.Concurrent.ConcurrentDictionary when: You need a multi-threaded Dictionary where reads are far more prevalent than writes. You need to be able to iterate over the collection without locking it even if it's being modified. Both Dictionaries have their strong suits; I have a feeling this is just one of those cases where you need to know from the design what you hope to use it for and make your decision based on those criteria.

    Read the article

  • CodePlex Daily Summary for Thursday, March 11, 2010

    CodePlex Daily Summary for Thursday, March 11, 2010New ProjectsASP.NET Wiki Control: This ASP.NET user control allows you to embed a very useful wiki directly into your already existing ASP.NET website taking advantage of the popula...BabyLog: Log baby daily activity.buddyHome: buddyHome is a project that can make your home smarter. as good as your buddy. Cloud Community: Cloud Community makes it easier for organizations to have a simple to use community platform. Our mission is to create an easy to use community pl...Community Connectors for Microsoft CRM 4.0: Community Connectors for Microsoft CRM 4.0 allows Microsoft CRM 4.0 customers and partners to monitor and analyze customers’ interaction from their...Console Highlighter: Hightlights Microsoft Windows Command prompt (cmd.exe) by outputting ANSI VT100 Control sequences to color the output. These sequences are not hand...Cornell Store: This is IN NO WAY officially affiliated or related to the Cornell University store. Instead, this is a project that I am doing for a class. Ther...DevUtilities: This project is for creating some utility tools, and they will be useful during the development.DotNetNuke® Skin Maple: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by DyNNamite.co.uk. The package includes 4 color variations and sev...HRNet: HRNetIIS Web Site Monitoring: A software for monitor a particular web site on IIS, even if its IP is sharing between different web site.Iowa Code Camp: The source code for the Iowa Code Camp website.Leonidas: Leonidas is a virtual tutorLunch 'n Learn: The Lunch 'n Learn web application is an open source ASP.NET MVC application that allows you to setup lunch 'n learn presentations for your team, c...MNT Cryptography: A very simple cryptography classMooiNooi MVC2LINQ2SQL Web Databinder: mvc2linq2sql is a databinder for ASP.NET MVC that make able developer to clean bind object from HTML FORMS to Linq entities. Even 1 to N relations ...MoqBot: MoqBot is an auto mocking library for Moq and Ninject.mtExperience1: hoiMvcPager: MvcPager is a free paging component for ASP.NET MVC web application, it exposes a series of extension methods for using in ASP.NET MVC applications...OCal: OCal is based on object calisthenics to identify code smellsPex Custom Arithmetic Solver: Pex Custom Arithmetic Solver contains a collection of meta-heuristic search algorithms. The goal is to improve Pex's code coverage for code involvi...SetControls: Расширеные контролы для ASP.NET приложений. 
Полная информация ближе к релизу...shadowrage1597: CTC 195 Game Design classSharePoint Team-Mailer: A SharePoint 2007 solution that defines a generic CustomList for sending e-mails to SharePoint Groups.Sql Share: SQL Share is a collaboration tool used within the science to allow database engineers to work tightly with domain scientists.TechCalendar: Tech Events Calendar ASP.NET project.ZLYScript: A very simple script language compiler.New ReleasesALGLIB: ALGLIB 2.4.0: New ALGLIB release contains: improved versions of several linear algebra algorithms: QR decomposition, matrix inversion, condition number estimatio...AmiBroker Plug-Ins with C#: AmiBroker Plug-Ins v0.0.2: Source codes and a binaryAppFabric Caching UI Admin Tool: AppFabric Caching Beta 2 UI Admin Tool: System Requirements:.NET 4.0 RC AppFabric Caching Beta2 Test On:Win 7 (64x)Autodocs - WCF REST Automatic API Documentation Generator: Autodocs.ServiceModel.Web: This archive contains the reference DLL, instructions and license.Compact Plugs & Compact Injection: Compact Injection and Compact Plugs 1.1 Beta: First release of Compact Plugs (CP). The solution includes a simple example project of CP, called "TestCompactPlugs1". Also some fixes where made ...Console Highlighter: Console Highlighter 0.9 (preview release): Preliminary release.Encrypted Notes: Encrypted Notes 1.3: This is the latest version of Encrypted Notes (1.3). It has an installer - it will create a directory 'CPascoe' in My Documents. The last one was ...Family Tree Analyzer: Version 1.0.2: Family Tree Analyzer Version 1.0.2 This early beta version implements loading a gedcom file and displaying some basic reports. These reports inclu...FRC1103 - FRC Dashboard viewer: 2010 Documentation v0.1: This is my current version of the control system documentation for 2010. It isn't complete, but it has the information required for a custom dashbo...jQuery.cssLess: jQuery.cssLess 0.5 (Even less release): NEW - support for nested special CSS classes (like :hover) MAIN RELEASE This release, code "Even less", is the one that will interpret cssLess wit...MooiNooi MVC2LINQ2SQL Web Databinder: MooiNooi MVC2LINQ2SQL DataBinder: I didn't try this... I just took it off from my project. Please, tell me any problem implementing in your own development and I'll be pleased to h...MvcPager: MvcPager 1.2 for ASP.NET MVC 1.0: MvcPager 1.2 for ASP.NET MVC 1.0Mytrip.Mvc: Mytrip 1.0 preview 1: Article Manager Blog Manager L2S Membership(.NET Framework 3.5) EF Membership(.NET Framework 4) User Manager File Manager Localization Captcha ...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.117: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's NewThis version adds ...Pex Custom Arithmetic Solver: PexCustomArithmeticSolver: This is the alpha release containing the Alternating Variable Method and Evolution Strategies to try and solve constraints over floating point vari...Scrum Sprint Monitor: v1.0.0.44877: What is new in this release? 
Major performance increase in animations (up to 50 fps from 2 fps) by replacing DropShadow effect with png bitmaps; ...sELedit: sELedit v1.0b: + Added support for empty strings / wstrings + Fixed: critical bug in configuration files (list 53)sPWadmin: pwAdmin v0.9_nightly: + Fixed: XML editor can now open and save character templates + Added: PWI item name database + Added: Plugin SupportTechCalendar: Events Calendar v.1.0: Initial release.The Silverlight Hyper Video Player [http://slhvp.com]: Beta 2: Beta 2.0 Some fixes from Beta 1, and a couple small enhancements. Intensive testing continues, and I will continue to update the code at least ever...ThreadSafe.Caching: 2010.03.10.1: Updates to the scavanging behaviour since last release. Scavenging will now occur every 30 seconds by default and all objects in the cache will be ...VCC: Latest build, v2.1.30310.0: Automatic drop of latest buildVisual Studio DSite: Email Sender (C++): The same Email Sender program that I but made in visual c plus plus 2008 instead of visual basic 2008.Web Forms MVP: Web Forms MVP CTP7: The release can be considered stable, and is in use behind several high traffic, public websites. It has been marked as a CTP release as it is not ...White Tiger: 0.0.3.1: Now you can load or create files with whatever root element you want *check f or sets file permisionsMost Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesASP.NET Ajax LibraryMost Active ProjectsUmbraco CMSRawrSDS: Scientific DataSet library and toolsN2 CMSFasterflect - A Fast and Simple Reflection APIjQuery Library for SharePoint Web ServicesBlogEngine.NETFarseer Physics Enginepatterns & practices – Enterprise LibraryCaliburn: An Application Framework for WPF and Silverlight

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API, the calls across these APIs generally mimic the Berkeley DB C-API which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node, even the most sophisticated databases only had simple fail-over two node solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions or to write your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM" mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage. Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheep, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law, processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated both in business and the personal markets. In addition to the new hardware entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes cause a massive explosion of data and a need to analyze and understand that data. Taken together this resulted in an entirely different landscape for database storage, new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. There were designed to address massive amounts of data, millions of requests per second, and scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record, it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple, key/value look-ups or simple range queries with only a few queries that required more complex joins. 
They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and it's design, inspired the next generation of Non-SQL, distributed database solutions including Cassandra, Riak and Voldemort. The problem their designers set out to solve was, "reliability at massive scale" so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database; either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions and one, Voldemort, choose Berkeley DB Java Edition for it's node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only log-based on-disk storage similar type of storage as Berkeley DB except that it is based on a hash table which must reside entirely in-memory rather than a btree which can live in-memory or on disk. Berkeley DB evolved too, we added high availability (HA) and a replication manager that makes it easy to setup replica groups. Berkeley DB's replication doesn't partitioned the data, every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget allowing Berkeley DB to eventually become consistent. Berkeley DB's HA scales-out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group and some allow writes as well as reads at any node, Berkeley DB HA does not. So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases when you boil them down to a single node you still have to store the data to some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM; the distributed, sometimes-consistent, partitioned, scale-out storage that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to come to appreciate the sophisticated features found in Berkeley DB, even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. 
Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing and jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...

    Read the article

  • Using Table-Valued Parameters in SQL Server

    - by Jesse
    I work with stored procedures in SQL Server pretty frequently and have often found myself with a need to pass in a list of values at run-time. Quite often this list contains a set of ids on which the stored procedure needs to operate the size and contents of which are not known at design time. In the past I’ve taken the collection of ids (which are usually integers), converted them to a string representation where each value is separated by a comma and passed that string into a VARCHAR parameter of a stored procedure. The body of the stored procedure would then need to parse that string into a table variable which could be easily consumed with set-based logic within the rest of the stored procedure. This approach works pretty well but the VARCHAR variable has always felt like an un-wanted “middle man” in this scenario. Of course, I could use a BULK INSERT operation to load the list of ids into a temporary table that the stored procedure could use, but that approach seems heavy-handed in situations where the list of values is usually going to contain only a few dozen values. Fortunately SQL Server 2008 introduced the concept of table-valued parameters which effectively eliminates the need for the clumsy middle man VARCHAR parameter. Example: Customer Transaction Summary Report Let’s say we have a report that can summarize the the transactions that we’ve conducted with customers over a period of time. The report returns a pretty simple dataset containing one row per customer with some key metrics about how much business that customer has conducted over the date range for which the report is being run. Sometimes the report is run for a single customer, sometimes it’s run for all customers, and sometimes it’s run for a handful of customers (i.e. a salesman runs it for the customers that fall into his sales territory). This report can be invoked from a website on-demand, or it can be scheduled for periodic delivery to certain users via SQL Server Reporting Services. Because the report can be created from different places and the query to generate the report is complex it’s been packed into a stored procedure that accepts three parameters: @startDate – The beginning of the date range for which the report should be run. @endDate – The end of the date range for which the report should be run. @customerIds – The customer Ids for which the report should be run. Obviously, the @startDate and @endDate parameters are DATETIME variables. The @customerIds parameter, however, needs to contain a list of the identity values (primary key) from the Customers table representing the customers that were selected for this particular run of the report. In prior versions of SQL Server we might have made this parameter a VARCHAR variable, but with SQL Server 2008 we can make it into a table-valued parameter. Defining And Using The Table Type In order to use a table-valued parameter, we first need to tell SQL Server about what the table will look like. We do this by creating a user defined type. For the purposes of this stored procedure we need a very simple type to model a table variable with a single integer column. We can create a generic type called ‘IntegerListTableType’ like this: CREATE TYPE IntegerListTableType AS TABLE (Value INT NOT NULL) Once defined, we can use this new type to define the @customerIds parameter in the signature of our stored procedure. 
The parameter list for the stored procedure definition might look like: 1: CREATE PROCEDURE dbo.rpt_CustomerTransactionSummary 2: @startDate datetime, 3: @endDate datetime, 4: @customerIds IntegerListTableType READONLY   Note the ‘READONLY’ statement following the declaration of the @customerIds parameter. SQL Server requires any table-valued parameter be marked as ‘READONLY’ and no DML (INSERT/UPDATE/DELETE) statements can be performed on a table-valued parameter within the routine in which it’s used. Aside from the DML restriction, however, you can do pretty much anything with a table-valued parameter that you could with a normal TABLE variable. With the user defined type and stored procedure defined as above, we could invoke it like this: 1: DECLARE @customerIdList IntegerListTableType 2: INSERT @customerIdList VALUES (1) 3: INSERT @customerIdList VALUES (2) 4: INSERT @customerIdList VALUES (3) 5:  6: EXEC dbo.rpt_CustomerTransactionSummary 7: @startDate = '2012-05-01', 8: @endDate = '2012-06-01', 9: @customerIds = @customerIdList   Note that we can simply declare a variable of type ‘IntegerListTableType’ just like any other normal variable and insert values into it just like a TABLE variable. We could also populate the variable with a SELECT … INTO or INSERT … SELECT statement if desired. Using The Table-Valued Parameter With ADO .NET Invoking a stored procedure with a table-valued parameter from ADO .NET is as simple as building a DataTable and passing it in as the Value of a SqlParameter. Here’s some example code for how we would construct the SqlParameter for the @customerIds parameter in our stored procedure: 1: var customerIdsParameter = new SqlParameter(); 2: customerIdsParameter.Direction = ParameterDirection.Input; 3: customerIdsParameter.TypeName = "IntegerListTableType"; 4: customerIdsParameter.Value = selectedCustomerIds.ToIntegerListDataTable("Value");   All we’re doing here is new’ing up an instance of SqlParameter, setting the parameter’s direction, specifying the name of the User Defined Type that this parameter uses, and setting its value. We’re assuming here that we have an IEnumerable<int> variable called ‘selectedCustomerIds’ containing all of the customer Ids for which the report should be run. The ‘ToIntegerListDataTable’ method is an extension method of the IEnumerable<int> type that looks like this: 1: public static DataTable ToIntegerListDataTable(this IEnumerable<int> intValues, string columnName) 2: { 3: var integerListDataTable = new DataTable(); 4: integerListDataTable.Columns.Add(columnName); 5: foreach(var intValue in intValues) 6: { 7: var nextRow = integerListDataTable.NewRow(); 8: nextRow[columnName] = intValue; 9: integerListDataTable.Rows.Add(nextRow); 10: } 11:  12: return integerListDataTable; 13: }   Since the ‘IntegerListTableType’ has a single int column called ‘Value’, we pass that in for the ‘columnName’ parameter to the extension method. The method creates a new single-columned DataTable using the provided column name, then iterates over the items in the IEnumerable<int> instance, adding one row for each value. We can then use this SqlParameter instance when invoking the stored procedure just like we would use any other parameter. Advanced Functionality Passing a list of integers into a stored procedure is a very simple usage scenario for the table-valued parameters feature, but I’ve found that it covers the majority of situations where I’ve needed to pass a collection of data for use in a query at run-time.
I should note that the BULK INSERT feature still makes sense for passing large amounts of data to SQL Server for processing. MSDN seems to suggest that 1000 rows of data is the tipping point where the overhead of a BULK INSERT operation can pay dividends. I should also note here that table-valued parameters can be used to deal with more complex data structures than single-column tables of integers. A user defined type that backs a table-valued parameter can use things like identities and computed columns. That said, using some of these more advanced features might require the use of the SqlDataRecord and SqlMetaData classes instead of a simple DataTable. Erland Sommarskog has a great article on his website that describes when and how to use these classes for table-valued parameters.

What About Reporting Services?

Earlier in the post I referenced the fact that our example stored procedure would be called from both a web application and a SQL Server Reporting Services report. Unfortunately, using table-valued parameters from SSRS reports can be a bit tricky and warrants its own blog post, which I’ll be putting together and posting sometime in the near future.
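Returning briefly to the note above about more complex data structures: a user defined table type is not limited to a single integer column. A minimal sketch of a richer type might look like the following; the type name, columns, and threshold are hypothetical and chosen only to illustrate the idea:

    CREATE TYPE CustomerOrderSummaryTableType AS TABLE
    (
        CustomerId   INT NOT NULL PRIMARY KEY,  -- unnamed inline constraints are allowed in table types
        OrderCount   INT NOT NULL,
        -- computed column derived from another column in the type
        IsHighVolume AS (CASE WHEN OrderCount > 100 THEN 1 ELSE 0 END)
    )

A stored procedure parameter declared with a type like this would still be marked READONLY, but each row can now carry several related values instead of a single id.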

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #052

    - by Pinal Dave
    Let us continue with the final episode of the Memory Lane Series. Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

2007

Set Server Level FILLFACTOR Using T-SQL Script
Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page during index creation or alteration. fillfactor must be an integer value from 1 to 100. The default is 0.

Limitation of Online Index Rebuild Operation
Online operation means that while online operations are happening, the database remains in a normal operational condition, and the processes participating in the online operations do not require exclusive access to the database.

Get Permissions of My Username / Userlogin on Server / Database
A few days ago, I was invited to one of the largest database companies. I was asked to review the database schema and propose changes to it. A special username or user login was created for me so I could review their database. I was very much interested to know what kind of permissions I was assigned at the server level and the database level. I did not feel like asking the Sr. DBA the question about permissions.

Simple Example of WHILE Loop With CONTINUE and BREAK Keywords
This question is one of those questions which is very simple and most of the users get it correct, however a few users find it confusing the first time. I have tried to explain the usage of a simple WHILE loop in the first example. The BREAK keyword will stop the WHILE loop and control is moved to the next statement after the loop. The CONTINUE keyword skips all the statements after its execution and control is sent back to the first statement of the WHILE loop.

Forced Parameterization and Simple Parameterization – T-SQL and SSMS
When the PARAMETERIZATION option is set to FORCED, any literal value that appears in a SELECT, INSERT, UPDATE or DELETE statement is converted to a parameter during query compilation. When the PARAMETERIZATION database option is set to SIMPLE, the SQL Server query optimizer may choose to parameterize the queries.

2008

Transaction and Local Variables – Swap Variables – Update All At Once Concept
Summary: Transactions have no effect on memory variables. When an UPDATE statement is applied to any table (physical or memory), all the updates are applied together at one time when the statement is committed. First of all, I suggest that you read the article listed above about the effect of transactions on local variables. As seen there, local variables are independent of any transaction effect.

Simulate INNER JOIN using LEFT JOIN statement – Performance Analysis
Just a day ago, while I was working with JOINs, I found one interesting observation, which prompted me to create the following example. Before we continue further, let me make it very clear that INNER JOIN should be used wherever it can be used, and simulating INNER JOIN using any other JOIN will degrade performance. If there is scope to convert any OUTER JOIN to an INNER JOIN, it should be done with priority.
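To illustrate the simulation described in that last item, here is a minimal sketch; the Customers and Orders tables and their columns are hypothetical:

    -- Regular INNER JOIN: returns only customers that have at least one order
    SELECT c.CustomerID, o.OrderID
    FROM Customers c
    INNER JOIN Orders o ON o.CustomerID = c.CustomerID;

    -- The same result simulated with a LEFT JOIN by filtering out the non-matches;
    -- logically equivalent here, but as noted above it can degrade performance
    SELECT c.CustomerID, o.OrderID
    FROM Customers c
    LEFT JOIN Orders o ON o.CustomerID = c.CustomerID
    WHERE o.OrderID IS NOT NULL;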
2009

Introduction to Business Intelligence – Important Terms & Definitions
Business intelligence (BI) is a broad category of application programs and technologies for gathering, storing, analyzing, and providing access to data from various data sources, thus providing enterprise users with reliable and timely information and analysis for improved decision making.

Difference Between Candidate Keys and Primary Key
Candidate Key – A Candidate Key can be any column or combination of columns that can qualify as a unique key in the database. There can be multiple Candidate Keys in one table. Each Candidate Key can qualify as the Primary Key. Primary Key – A Primary Key is a column or a combination of columns that uniquely identifies a record. Only one Candidate Key can be the Primary Key.

2010

Taking Multiple Backup of Database in Single Command – Mirrored Database Backup
I recently had a very interesting experience. In one of my recent consultancy engagements, I was told by our client that they were going to take a backup of the database and also make a copy of it at the same time. I said that it was certainly possible if they were going to use a mirror command. In addition, they told me that whenever they take two copies of the database, the size of the database is always reduced. This was not clear to me; I said it was not possible, and so I asked them to show me the script.

Corrupted Backup File and Unsuccessful Restore
The CTO, who was also present at the location, got very upset with this situation. He then asked when the last successful restore test was done. As expected, the answer was NEVER. There had been no successful restore tests done before. During that time, I was present and I could clearly see the stress, confusion, carelessness and anger around me. I did not appreciate the feeling, and I was pretty sure that no one there wanted that kind of atmosphere either.

2011

TRACEWRITE – Wait Type – Wait Related to Buffer and Resolution
SQL Trace is a SQL Server database engine technology which monitors specific events generated when various actions occur in the database engine. When any event is fired it goes through various stages as well as various routes. One of the routes is the Trace I/O Provider, which sends data to its final destination either as a file or a rowset.

DATEDIFF – Accuracy of Various Dateparts
If you want to have accuracy in seconds, you need to use a different approach. In the first example, the accurate method is to find the number of seconds first and then divide it by 60 to convert it to minutes.

Dedicated Access Control for SQL Server Express Edition
http://www.youtube.com/watch?v=1k00z82u4OI

Book Signing at SQLPASS

2012

Who I Am And How I Got Here – True Story as Blog Post
If there was a shortcut to success – I want to know. I learnt SQL Server the hard way and I am still learning. There are so many things I have to learn. There is not enough time to learn everything which we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me in my journey to learn SQL Server – the more the merrier.

Vacation, Travel and Study – A New Concept
Even those who have advanced degrees and went to college for years, or even decades, find studying hard. There is a difference between studying for a career and studying for a certification. At least to get a degree there is a variety of subjects, with labs, exams, and practice problems to make things more interesting.
Order By Numeric Values Formatted as String
We have a table which has a column containing alphanumeric data. The data always has an integer as the first part and a string as the later part. The business need is to order the data based on the first part of the alphanumeric data, which is an integer. Now the problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example.

Resolving SQL Server Connection Errors – SQL in Sixty Seconds #030 – Video
One of the most famous errors related to SQL Server is about connecting to SQL Server itself. Here is how it goes: most of the time developers have worked with SQL Server and know pretty much every error which they face during development. However, they hardly ever install a fresh SQL Server. As the installation of SQL Server is a rare occasion – unless you are a DBA who is responsible for such an instance – the errors faced during installation are pretty rare as well.
http://www.youtube.com/watch?v=1k00z82u4OI

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
