Search Results

Search found 12435 results on 498 pages for 'pretty printer'.


  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (really a performance issue) in one of our systems, related to a particularly slow delete operation. Long story short: it seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case.

    We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this:

        Widgets
        |
        +--- Anvils (1:1)
        |    |
        |    +--- AnvilTestData (1:N)
        |
        +--- WidgetHistory (1:N)
             |
             +--- WidgetHistoryDetails (1:N)

    Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age.

    So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option. Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes.

    Getting to the point here, finally: for various reasons we only allow deleting one widget at a time, so a delete statement would look like this:

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Pretty simple, innocuous-looking delete... that takes over 2 minutes to run, for a widget with no data! After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following:

        DECLARE @AnvilID int
        SELECT @AnvilID = AnvilID
        FROM Anvils
        WHERE WidgetID = @WidgetID

        DELETE FROM AnvilTestData
        WHERE AnvilID = @AnvilID

        DELETE FROM WidgetHistory
        WHERE HistoryID IN (
            SELECT HistoryID
            FROM WidgetHistory
            WHERE WidgetID = @WidgetID)

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data. Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest; I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes.

    I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds. Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships. So, my questions are:

    - Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.)
    - If not, is there another explanation for the behaviour I'm seeing?
    - If it is a known issue, why is it an issue, and are there better workarounds I could be using?
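    For anyone wanting to reproduce the experiment, the FK change being described would look something like the sketch below. The constraint and column names here are invented for illustration - the question doesn't give the real ones - but the swap to ON DELETE NO ACTION is exactly what's described:

        -- Sketch only: replace a cascading FK with NO ACTION, keeping the
        -- referential integrity check itself intact. Names are illustrative.
        ALTER TABLE WidgetHistory DROP CONSTRAINT FK_WidgetHistory_Widgets;

        ALTER TABLE WidgetHistory ADD CONSTRAINT FK_WidgetHistory_Widgets
            FOREIGN KEY (WidgetID) REFERENCES Widgets (WidgetID)
            ON DELETE NO ACTION;

    After such a change the child rows must be deleted explicitly, child tables first, as in the script above.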


  • How to validate a Cyrillic email in Rails 3.1?

    - by iKeler
    Let's say I had an email address like putin-crab@?????????.?? How do I validate that address in Rails 3.1? My model (I use Mongoid):

        #encoding: utf-8
        class User
          include Mongoid::Document
          field :email, :type => String

          validates :email, :presence => true, :format => { :with => RFC822::EMAIL }
        end

    For the validation regexp I use the gem https://github.com/dim/rfc-822

    In the Rails console (normal email):

        ruby-1.9.2-p290 :001 > usr = User.new( :email => "[email protected]" )
         => #<User _id: 4ec627cf4934db7e4d000001, _type: nil, email: "[email protected]">
        ruby-1.9.2-p290 :002 > usr.valid?
         => true

    In the Rails console (fu@#ing email):

        ruby-1.9.2-p290 :003 > usr = User.new( :email => "putin-crab@?????????.??" )
         => #<User _id: 4ec627f44934db7e4d000002, _type: nil, email: "putin-crab@?????????.??">
        ruby-1.9.2-p290 :004 > usr.valid?
        Encoding::CompatibilityError: incompatible encoding regexp match (ASCII-8BIT regexp with UTF-8 string)
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/format.rb:9:in `=~'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/format.rb:9:in `!~'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/format.rb:9:in `validate_each'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validator.rb:153:in `block in validate'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validator.rb:150:in `each'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validator.rb:150:in `validate'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:302:in `_callback_before_13'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:404:in `_run_validate_callbacks'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:81:in `run_callbacks'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:42:in `block in run_callbacks'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:67:in `call'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:67:in `run_cascading_callbacks'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:41:in `run_callbacks'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations.rb:212:in `run_validations!'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/callbacks.rb:53:in `block in run_validations!'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:390:in `_run_validation_callbacks'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:81:in `run_callbacks'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:42:in `block in run_callbacks'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:67:in `call'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:67:in `run_cascading_callbacks'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:41:in `run_callbacks'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/callbacks.rb:53:in `run_validations!'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations.rb:179:in `valid?'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/validations.rb:70:in `valid?'
            from (irb):4
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/railties-3.1.1/lib/rails/commands/console.rb:45:in `start'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/railties-3.1.1/lib/rails/commands/console.rb:8:in `start'
            from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/railties-3.1.1/lib/rails/commands.rb:40:in `<top (required)>'
            from script/rails:6:in `require'
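    A sketch of one possible workaround (my suggestion, not part of the question): check String#ascii_only? before handing the value to the ASCII-8BIT regexp from the rfc-822 gem, so a non-ASCII address fails validation instead of raising Encoding::CompatibilityError. The validator method name here is invented for illustration:

        #encoding: utf-8
        class User
          include Mongoid::Document
          field :email, :type => String

          validates :email, :presence => true
          validate :email_format

          # Non-ASCII input blows up the gem's ASCII-8BIT regexp, so
          # reject it explicitly before matching.
          def email_format
            if email.to_s.ascii_only?
              errors.add(:email, "is invalid") unless email =~ RFC822::EMAIL
            else
              errors.add(:email, "must contain only ASCII characters")
            end
          end
        end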


  • How to sanely configure security policy in Tomcat 6

    - by Chas Emerick
    I'm using Tomcat 6.0.24, as packaged for Ubuntu Karmic. The default security policy of Ubuntu's Tomcat package is pretty stringent, but appears straightforward. In /var/lib/tomcat6/conf/policy.d, there are a variety of files that establish default policy. Worth noting at the start:

    - I've not changed the stock Tomcat install at all -- no new jars in its common lib directory(ies), no server.xml changes, etc. Putting the .war file in the webapps directory is the only deployment action.
    - The web application I'm deploying fails with thousands of access denials under this default policy (as reported to the log thanks to the -Djava.security.debug="access,stack,failure" system property).
    - Turning off the security manager entirely results in no errors whatsoever, and proper app functionality.

    What I'd like to do is add an application-specific security policy file to the policy.d directory, which seems to be the recommended practice. I added this to policy.d/100myapp.policy (as a starting point -- I would like to eventually trim back the granted permissions to only what the app actually needs):

        grant codeBase "file:${catalina.base}/webapps/ROOT.war" {
            permission java.security.AllPermission;
        };
        grant codeBase "file:${catalina.base}/webapps/ROOT/-" {
            permission java.security.AllPermission;
        };
        grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/-" {
            permission java.security.AllPermission;
        };
        grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/lib/-" {
            permission java.security.AllPermission;
        };
        grant codeBase "file:${catalina.base}/webapps/ROOT/WEB-INF/classes/-" {
            permission java.security.AllPermission;
        };

    Note the thrashing around attempting to find the right codeBase declaration. I think that's likely my fundamental problem. Anyway, the above (really only the first two grants appear to have any effect) almost works: the thousands of access denials are gone, and I'm left with just one. Relevant stack trace:

        java.security.AccessControlException: access denied (java.io.FilePermission /var/lib/tomcat6/webapps/ROOT/WEB-INF/classes/com/foo/some-file-here.txt read)
            java.security.AccessControlContext.checkPermission(AccessControlContext.java:323)
            java.security.AccessController.checkPermission(AccessController.java:546)
            java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
            java.lang.SecurityManager.checkRead(SecurityManager.java:871)
            java.io.File.exists(File.java:731)
            org.apache.naming.resources.FileDirContext.file(FileDirContext.java:785)
            org.apache.naming.resources.FileDirContext.lookup(FileDirContext.java:206)
            org.apache.naming.resources.ProxyDirContext.lookup(ProxyDirContext.java:299)
            org.apache.catalina.loader.WebappClassLoader.findResourceInternal(WebappClassLoader.java:1937)
            org.apache.catalina.loader.WebappClassLoader.findResource(WebappClassLoader.java:973)
            org.apache.catalina.loader.WebappClassLoader.getResource(WebappClassLoader.java:1108)
            java.lang.ClassLoader.getResource(ClassLoader.java:973)

    I'm pretty convinced that the actual file triggering the denial is irrelevant -- it's just some properties file that we check for optional configuration parameters. What's interesting is that:

    - it doesn't exist in this context
    - the fact that the file doesn't exist ends up throwing a security exception, rather than java.io.File.exists() simply returning false (although I suppose that's just a matter of the semantics of the read permission).

    Another workaround (besides just disabling the security manager in Tomcat) is to add an open-ended permission to my policy file:

        grant {
            permission java.security.AllPermission;
        };

    I presume this is functionally equivalent to turning off the security manager. I suppose I must be getting the codeBase declaration in my grants subtly wrong, but I'm not seeing it at the moment.
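    For comparison, a narrower grant covering just the failing read is sketched below. This is a hypothetical example, not a known fix - it assumes the remaining denial really is only about files under WEB-INF - but it shows the java.io.FilePermission form that avoids the open-ended AllPermission:

        // Sketch: grant the webapp read access to its own WEB-INF tree only.
        grant codeBase "file:${catalina.base}/webapps/ROOT/-" {
            permission java.io.FilePermission "${catalina.base}/webapps/ROOT/WEB-INF/-", "read";
        };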


  • Technology to communicate with someone with expressive aphasia?

    - by rascher
    A family member had a stroke a few years back and now has expressive aphasia. She understands what is said to her, is cognizant of what is going on, but cannot express herself. She is able to respond to yes/no questions (do you want to go shopping? are you looking for your earrings?). She is not, however, able to read (English is not her native language and she hasn't read Hindi for decades).

    I am the technologist in the family, and I intend to come up with something to help us communicate. The idea is to have some sort of picture book where she can point to what she wants.

    My first question: does some sort of assistive technology for people with expressive aphasia already exist, whether as hardware or software? If not, then such software doesn't seem difficult to write. My initial thought is to have an interface with pictures - maybe separated by category (food, shopping) - where she can point to an individual picture to indicate what she needs. We could easily add more items with such software, and we could have an interface where she (or we) could "flip pages". Which suggests that the best solution would use a touch screen rather than a mouse. It would be really difficult to train her to aim a mouse or find keys on a keyboard.

    We're thinking of maybe getting a tablet and writing some basic software. But tablet computers are expensive and fragile - I'm not sure one would stand up to spills or being knocked about in a nursing home. So my next question: what kind of tablet-like devices are out there which I can program on? I don't know anything about hardware, but if there is something then we could special-order it. What would be safe and durable for such a project? We could do something on an iPod or cell phone, but I feel like that interface would be too small.

    Finally, does anyone here have experience with this kind of assistive technology? Things I might not anticipate when designing such a system?

    edit: I've added a (pretty hefty!) bounty. I'd kinda like to open this question up to any suggestions, comments, and experiences that people might have. This is a pretty real and important project, so while we will (are working on) a solution, any insights would be particularly helpful. Right now the plan is to mount a screen in her room. We'll either teach her to use a trackball or use a touch-screen panel, after seeing what she is able to use with a simple prototype. Then software akin to an old "hypercard" deck:

        ----------------------------------------------------------------
        |  --------------    --------------                            |
        |  |  Clothes   |    |   Food     |   ...                      |
        |  --------------    --------------                            |
        |                                                              |
        |  Pic of item 1     Pic of item 2     Pic of item 3           |
        |                                                              |
        |                                                              |
        |  Pic of item 4     Pic of item 5     Pic of item 6           |
        |                                                              |
        |                                                              |
        |  <-Back                                          Next->      |
        ----------------------------------------------------------------


  • How do you stop scripters from slamming your website hundreds of times a second?

    - by davebug
    [update] I've accepted an answer, as lc deserves the bounty due to the well thought-out answer, but sadly, I believe we're stuck with our original worst-case scenario: CAPTCHA everyone on purchase attempts of the crap. Short explanation: caching / web farms make it impossible for us to actually track hits, and any workaround (sending a non-cached web beacon, writing to a unified table, etc.) slows the site down worse than the bots would. There is likely some pricey bit of hardware from Cisco or the like that can help at a high level, but it's hard to justify the cost if CAPTCHAing everyone is an alternative. I'll attempt a fuller explanation in here later, as well as cleaning this up for future searchers (though others are welcome to try, as it's community wiki).

    I've added a bounty to this question and attempted to explain why the current answers don't fit our needs. First, though, thanks to all of you who have thought about this; it's amazing to have this collective intelligence helping work through seemingly impossible problems.

    I'll be a little more clear than I was before: this is about the bag o' crap sales on woot.com. I'm the president of Woot Workshop, the subsidiary of Woot that does the design, writes the product descriptions, podcasts, and blog posts, and moderates the forums. I work in the css/html world and am only barely familiar with the rest of the developer world. I work closely with the developers and have talked through all of the answers here (and many other ideas we've had). Usability of the site is a massive part of my job, and making the site exciting and fun is most of the rest of it. That's where the three goals below derive. CAPTCHA harms usability, and bots steal the fun and excitement out of our crap sales.

    To set up the scenario a little more: bots are slamming our front page tens of times a second, screenscraping (and/or scanning our rss) for the Random Crap sale. The moment they see that, it triggers a second stage of the program that logs in, clicks I want One, fills out the form, and buys the crap.

    In current (2/6/2009) order of votes:

    lc: On stackoverflow and other sites that use this method, they're almost always dealing with authenticated (logged in) users, because the task being attempted requires that. On Woot, anonymous (non-logged-in) users can view our home page. In other words, the slamming bots can be non-authenticated (and essentially non-trackable except by IP address). So we're back to scanning for IPs, which a) is fairly useless in this age of cloud networking and spambot zombies and b) catches too many innocents given the number of businesses that come from one IP address (not to mention the issues with non-static IP ISPs and potential performance hits from trying to track this). Oh, and having people call us would be the worst possible scenario. Can we have them call you?

    BradC: Ned Batchelder's methods look pretty cool, but they're pretty firmly designed to defeat bots built for a network of sites. Our problem is bots are built specifically to defeat our site. Some of these methods could likely work for a short time, until the scripters evolved their bots to ignore the honeypot, screenscrape for nearby label names instead of form ids, and use a javascript-capable browser control.

    lc again: "Unless, of course, the hype is part of you


  • Pagination number elements do not work - jQuery

    - by ClarkSKent
    Hello, I am trying to get my pagination links to work. It seems that when I click any of the pagination number links to go to the next page, the new content does not load. Literally nothing happens, and when looking at the console in Firebug, nothing is sent or loaded.

    I have on the main page 3 links to filter the content and display it. When any of these links are clicked, the results are loaded and displayed along with the associated pagination numbers for that specific content. Here is the main page so you can see what the structure looks like for the jQuery:

        <?php include_once('generate_pagination.php'); ?>
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.1/jquery.min.js"></script>
        <script type="text/javascript" src="jquery_pagination.js"></script>

        <div id="loading" ></div>
        <div id="content" data-page="1"></div>

        <ul id="pagination">
        <?php generate_pagination($sql) ?>
        </ul>

        <br />
        <br />

        <a href="#" class="category" id="marketing">Marketing</a>
        <a href="#" class="category" id="automotive">Automotive</a>
        <a href="#" class="category" id="sports">Sports</a>

    The jQuery below is pretty simple and I don't think I need to explain what it does. I believe the problem may be with the $("#pagination li").click(function(){...}) handler, since the li elements are the numbers that do not work when clicked. Even when I try to fadeOut or hide the content on click, nothing happens. I'm pretty new to the jQuery structure, so I do not fully understand where the real problem is occurring; this is just from my observation. The jQuery file looks like this:

        $(document).ready(function(){

            //Display Loading Image
            function Display_Load()
            {
                $("#loading").fadeIn(900,0);
                $("#loading").html("<img src='bigLoader.gif' />");
            }

            //Hide Loading Image
            function Hide_Load()
            {
                $("#loading").fadeOut('slow');
            };

            //Default Starting Page Results
            $("#pagination li:first").css({'color' : '#FF0084'}).css({'border' : 'none'});
            Display_Load();
            $("#content").load("pagination_data.php?page=1", Hide_Load());

            //Pagination Click
            $("#pagination li").click(function(){
                Display_Load();

                //CSS Styles
                $("#pagination li")
                    .css({'border' : 'solid #dddddd 1px'})
                    .css({'color' : '#0063DC'});

                $(this)
                    .css({'color' : '#FF0084'})
                    .css({'border' : 'none'});

                //Loading Data
                var pageNum = this.id;
                $("#content").load("pagination_data.php?page=" + pageNum, function(){
                    $(this).attr('data-page', pageNum);
                    Hide_Load();
                });
            });

            // Editing below.

            // Sort content Marketing
            $("a.category").click(function() {
                Display_Load();
                var this_id = $(this).attr('id');

                $.get("pagination.php", { category: this.id }, function(data){
                    //Load your results into the page
                    var pageNum = $('#content').attr('data-page');
                    $("#pagination").load('generate_pagination.php?category=' + pageNum +'&ids='+ this_id );
                    $("#content").load("filter_marketing.php?page=" + pageNum +'&id='+ this_id, Hide_Load());
                });
            });
        });

    If anyone could help me on this that would be great. Thanks.

    EDIT: Here are the innards of <ul id="pagination">:

        <?php
        function generate_pagination($sql) {
            include_once('config.php');
            $per_page = 3;

            //Calculating no of pages
            $result = mysql_query($sql);
            $count = mysql_fetch_row($result);
            $pages = ceil($count[0]/$per_page);

            //Pagination Numbers
            for($i=1; $i<=$pages; $i++)
            {
                echo '<li class="page_numbers" id="'.$i.'">'.$i.'</li>';
            }
        }

        $ids=$_GET['ids'];
        generate_pagination("SELECT COUNT(*) FROM explore WHERE category='$ids'");
        ?>
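    One thing worth checking (a guess from the posted code, not a confirmed diagnosis): $("#pagination li").click(...) binds handlers only to the li elements that exist at bind time, and the category handler later replaces the contents of #pagination with $("#pagination").load(...), which discards those handlers. In jQuery 1.4 this is usually solved with .live(), which also matches elements inserted later. A minimal sketch:

        // Sketch (jQuery 1.4-era API): .live() registers the handler at the
        // document level, so <li> elements inserted later by .load() still
        // trigger it.
        $(".page_numbers").live("click", function(){
            Display_Load();
            var pageNum = this.id;
            $("#content").load("pagination_data.php?page=" + pageNum, function(){
                $(this).attr('data-page', pageNum);
                Hide_Load();
            });
        });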


  • surfaceDestroyed called out of turn

    - by Avasulthiris
    I'm currently developing against minimum SDK version 3 (Android 1.5, Cupcake) and I'm having a strange, unexplained issue that I have not been able to solve on my own. It is now becoming rather urgent, as I've already missed one deadline.

    I'm writing a high-level library to make long-term Android development easier and quicker. One specific module has to capture images for an application. I've gotten everything right so far over the last couple of months, except this one little thing:

    When I use the Camera object and implement a SurfaceHolder.Callback, the methods surfaceCreated() and surfaceChanged() are called one after the other. Then, when the activity finishes, surfaceDestroyed() is called. This is how it should be. But when I stick the exact same code in my library (a plain Java library that references the Android API - not in an activity), surfaceDestroyed() is called directly after created and changed. As a result, the camera object is closed before I can use it and the application force closes. What a pain. I can't do anything - this method call is controlled by the device. Why does the surface close for no reason, even when I post the work to run on the activity thread through my own invokeAndWait(Runnable) method, like I do for many other things?

    I have 5 different working examples of different ways and implementations of capturing images in Android, but I still get the same issue when I plug any of them into my library. I don't understand what the difference is. The code is pretty much the same, and I post all the related code to the UI thread, so it's not a thread-handling issue or anything like that. I've rewritten it about 20 times in different ways - same issue every time.

    The only other way to approach it that I know of is creating a new Camera and setting it to the VideoView. The Android source (C++ native code), however, provides no Camera constructor, only an open() method which automatically forwards the camera's state to 'prepared', but I can only set the camera to the VideoView from the 'initialized' state. Pretty silly, I know, but there is no way around it unless I modify the Android library source code, haha - not an option! The API does not allow for this method; you are expected to use it like my first example.

    So essentially: I just need to understand exactly why surfaceDestroyed() is called out of turn and whether there is anything I can do to avoid the surface closing - if I can just understand the exact logic behind it and how it works! The documentation isn't much help. Secondly, does anyone know of any alternative ways to do it, as in my second example, but hopefully one which the API actually allows for? haha

    Thanks guys. I would post code, but it's fairly complicated - a couple thousand lines for this specific class - and it would probably take a couple of days to explain with all the threading and event listeners and whatnot. I just need help with this one single thing. Please let me know if you have any questions.
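    For reference, here is a minimal sketch of the usual activity-hosted arrangement the poster says works: the camera's lifetime is tied directly to the surface callbacks, so a surfaceDestroyed() arriving early releases the camera cleanly instead of leaving a dangling reference. This illustrates the standard Android 1.5-era pattern, not the poster's library code:

        import java.io.IOException;

        import android.hardware.Camera;
        import android.view.SurfaceHolder;

        public class PreviewHolder implements SurfaceHolder.Callback {
            private Camera camera;

            public void surfaceCreated(SurfaceHolder holder) {
                camera = Camera.open();
                try {
                    camera.setPreviewDisplay(holder);
                } catch (IOException e) {
                    camera.release();   // could not attach the preview; give the camera back
                    camera = null;
                }
            }

            public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
                if (camera != null) {
                    camera.startPreview();
                }
            }

            public void surfaceDestroyed(SurfaceHolder holder) {
                // If this fires out of turn, the camera is at least released
                // cleanly rather than used after the surface is gone.
                if (camera != null) {
                    camera.stopPreview();
                    camera.release();
                    camera = null;
                }
            }
        }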


  • Why do I get a segmentation fault while redirecting sys.stdout to Tkinter.Text widget in Python?

    - by Brent Nash
    I'm in the process of building a GUI-based application with Python/Tkinter that builds on top of the existing Python bdb module. In this application, I want to silence all stdout/stderr from the console and redirect it to my GUI. To accomplish this, I've written a specialized Tkinter.Text object (code at the end of the post). The basic idea is that when something is written to sys.stdout, it shows up as a line in the Text widget in black. If something is written to sys.stderr, it shows up as a line in red. As soon as something is written, the Text always scrolls down to view the most recent line.

    I'm using Python 2.6.1 at the moment. On Mac OS X 10.5, this seems to work great; I have had zero problems with it. On Red Hat Enterprise Linux 5, however, I pretty reliably get a segmentation fault during the run of a script. The segmentation fault doesn't always occur in the same place, but it pretty much always occurs. If I comment out the sys.stdout= and sys.stderr= lines from my code, the segmentation faults seem to go away.

    I'm sure there are other ways around this that I will probably have to resort to, but can anyone see anything I'm doing blatantly wrong here that could be causing these segmentation faults? It's driving me nuts. Thanks!

    PS - I realize redirecting sys.stderr to the GUI might not be a great idea, but I still get segmentation faults even when I only redirect sys.stdout and not sys.stderr. I also realize that I'm allowing the Text to grow indefinitely at the moment.

        # Imports implied by the excerpt (not shown in the original post):
        import sys
        import threading
        import Tkinter as tk

        class ConsoleText(tk.Text):
            '''A Tkinter Text widget that provides a scrolling display of
            console stderr and stdout.'''

            class IORedirector(object):
                '''A general class for redirecting I/O to this Text widget.'''
                def __init__(self, text_area):
                    self.text_area = text_area

            class StdoutRedirector(IORedirector):
                '''A class for redirecting stdout to this Text widget.'''
                def write(self, str):
                    self.text_area.write(str, False)

            class StderrRedirector(IORedirector):
                '''A class for redirecting stderr to this Text widget.'''
                def write(self, str):
                    self.text_area.write(str, True)

            def __init__(self, master=None, cnf={}, **kw):
                '''See the __init__ for Tkinter.Text for most of this stuff.'''
                tk.Text.__init__(self, master, cnf, **kw)
                self.started = False
                self.write_lock = threading.Lock()
                self.tag_configure('STDOUT', background='white', foreground='black')
                self.tag_configure('STDERR', background='white', foreground='red')
                self.config(state=tk.DISABLED)

            def start(self):
                if self.started:
                    return
                self.started = True
                self.original_stdout = sys.stdout
                self.original_stderr = sys.stderr
                stdout_redirector = ConsoleText.StdoutRedirector(self)
                stderr_redirector = ConsoleText.StderrRedirector(self)
                sys.stdout = stdout_redirector
                sys.stderr = stderr_redirector

            def stop(self):
                if not self.started:
                    return
                self.started = False
                sys.stdout = self.original_stdout
                sys.stderr = self.original_stderr

            def write(self, val, is_stderr=False):
                # Fun Fact: The way Tkinter Text objects work is that if
                # they're disabled, you can't write into them AT ALL (via the
                # GUI or programatically). Since we want them disabled for the
                # user, we have to set them to NORMAL (a.k.a. ENABLED), write
                # to them, then set their state back to DISABLED.
                self.write_lock.acquire()
                self.config(state=tk.NORMAL)
                self.insert('end', val, 'STDERR' if is_stderr else 'STDOUT')
                self.see('end')
                self.config(state=tk.DISABLED)
                self.write_lock.release()
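    One mitigation worth trying (my suggestion, not from the question, and assuming the crashes come from background threads touching the widget): Tkinter is generally not safe to call from any thread other than the one running mainloop(), so a print from a background thread would drive this write() off the main thread. A sketch that routes writes through a Queue and drains it on the Tk thread with after() - the names here are invented for illustration:

        import Queue  # Python 2.x standard library

        class QueuedRedirector(object):
            '''Stand-in for sys.stdout/sys.stderr that never touches Tk:
            it only enqueues the text.'''
            def __init__(self, queue, is_stderr=False):
                self.queue = queue
                self.is_stderr = is_stderr

            def write(self, text):
                self.queue.put((text, self.is_stderr))

        def poll_queue(console_text, queue):
            '''Runs on the Tk thread: drain pending writes, then re-arm.'''
            try:
                while True:
                    text, is_stderr = queue.get_nowait()
                    console_text.write(text, is_stderr)
            except Queue.Empty:
                pass
            console_text.after(100, poll_queue, console_text, queue)

        # Usage sketch:
        #   q = Queue.Queue()
        #   sys.stdout = QueuedRedirector(q)
        #   sys.stderr = QueuedRedirector(q, True)
        #   poll_queue(console_widget, q)   # console_widget is a ConsoleText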


  • C - circular character buffer w/ pthreads

    - by Matt
    I have a homework assignment where I have to implement a circular buffer and add and remove chars with separate threads:

        #include <pthread.h>
        #include <stdio.h>

        #define QSIZE 10

        pthread_cond_t full,   /* count == QSIZE */
                       empty,  /* count == 0 */
                       ready;
        pthread_mutex_t m, n;  /* implements critical section */
        unsigned int iBuf,     /* tail of circular queue */
                     oBuf;     /* head of circular queue */
        int count;             /* count characters */
        char buf[QSIZE];       /* the circular queue */

        void Put(char s[]) { /* add "ch"; wait if full */
            pthread_mutex_lock(&m);
            int size = sizeof(s)/sizeof(char);
            printf("size: %d", size);
            int i;
            for (i = 0; i < size; i++) {
                while (count >= QSIZE)
                    pthread_cond_wait(&full, &m); /* is there empty slot? */
                buf[iBuf] = s[i];            /* store the character */
                iBuf = (iBuf+1) % QSIZE;     /* increment mod QSIZE */
                count++;
                if (count == 1)
                    pthread_cond_signal(&empty); /* new character available */
            }
            pthread_mutex_unlock(&m);
        }

        char Get() { /* remove "ch" from queue; wait if empty */
            char ch;
            pthread_mutex_lock(&m);
            while (count <= 0)
                pthread_cond_wait(&empty, &m); /* is a character present? */
            ch = buf[oBuf];                 /* retrieve from the head of the queue */
            oBuf = (oBuf+1) % QSIZE;
            count--;
            if (count == QSIZE-1)
                pthread_cond_signal(&full); /* signal existence of a slot */
            pthread_mutex_unlock(&m);
            return ch;
        }

        void * p1(void *arg) {
            int i;
            for (i = 0; i < 5; i++) {
                Put("hella");
            }
        }

        void * p2(void *arg) {
            int i;
            for (i = 0; i < 5; i++) {
                Put("goodby");
            }
        }

        int main() {
            pthread_t t1, t2;
            void *r1, *r2;
            oBuf = 0;
            iBuf = 0;
            count = 0; /* all slots are empty */
            pthread_cond_init(&full, NULL);
            pthread_cond_init(&empty, NULL);
            pthread_mutex_init(&m, NULL);
            pthread_create(&t1, NULL, p1, &r1);
            pthread_create(&t2, NULL, p2, &r2);
            printf("Main");
            char c;
            int i = 0;
            while (i < 55) {
                c = Get();
                printf("%c", c);
                i++;
            }
            pthread_join(t1, &r1);
            pthread_join(t2, &r2);
            return 0;
        }

    I shouldn't have to change the logic much at all; the requirements are pretty specific. I think my problem lies in the Put() method. I think the first thread is going in and blocking the critical section and causing a deadlock. I was thinking I should make a scheduling attribute? Of course I could be wrong. I am pretty new to pthreads and concurrent programming, so I could really use some help spotting my error.
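    One detail that jumps out (an observation about the posted code, not the whole assignment solution): in Put(), sizeof(s)/sizeof(char) does not give the string length, because an array parameter decays to a pointer, so sizeof(s) is the size of a char* (typically 4 or 8). Put() therefore copies the wrong number of characters per call, so either Get() runs out of characters or the producers never finish, and the program hangs - which looks just like a deadlock. strlen() gives the intended count. A standalone sketch of the difference:

        #include <stdio.h>
        #include <string.h>

        /* Inside a function, "char s[]" is really "char *s", so sizeof(s)
         * measures the pointer, not the string it points at. */
        static void measure(const char s[]) {
            printf("sizeof(s) = %zu, strlen(s) = %zu\n",
                   sizeof(s), strlen(s));
        }

        int main(void) {
            measure("hella");   /* e.g. prints: sizeof(s) = 8, strlen(s) = 5 */
            measure("goodby");  /* e.g. prints: sizeof(s) = 8, strlen(s) = 6 */
            return 0;
        }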


  • Recommended integration mechanism for bi-directional, authenticated, encrypted connection in C client

    - by rcampbell
    Let me first give an example. Imagine you have a single server running a JVM application. This server keeps a collection of N equations, one for each client:

        Client #1: 2x
        Client #2: 1 + y
        Client #3: z/4

    This server includes an HTTP interface so that random visitors can type https://www.acme.com/client/3 into their browsers and see the latest evaluated result of z/4. The tricky part is that either the client or the server may change the variable value at any time, informing the other party immediately. More specifically, Client #3 - a C app - can initially tell the server that z = 20. An hour later that same client informs the server that z = 23. Likewise the server can later inform the client that z = 28.

    As caf pointed out in the comments, there can be a race condition when values are changed by the client and server simultaneously. The solution would be for both client and server to send the operation performed in their message, which would need to be executed by the other party. To keep things simple, let's limit the operations to (commutative) addition, allowing us to disregard message ordering. For example, the client seeds the server with z = 20:

        server: z = 20, client: z = 20
        server sends {+3} message (so z = 23 locally) & client sends {-2} message (so z = 18 locally) at the exact same time
        server receives {-2} message at some point, adds it to his local copy, so z = 21
        client receives {+3} message at some point, adds it to his local copy, so z = 21

    As long as all messages are eventually evaluated by both parties, the correct answer will eventually be given to the users of the client and server, since we limited ourselves to commutative operations (the addition of 3 and -2). This does mean that both client and server can be returning incorrect answers in the time it takes for messages to be exchanged and processed. While undesirable, I believe this is unavoidable.

    Some possible implementations of this idea include:

    Open an encrypted, always-on TCP socket connection for communication
        Pros: no additional infrastructure needed; client and server know immediately if there is a problem (disconnect) with the other party; fairly straightforward (except the encryption); native support from both JVM and C platforms
        Cons: pretty low-level, so you end up writing a lot yourself (protocol, delivery verification, retry-on-failure logic); probably a lot of firewall headaches during client app installation

    Asynchronous messaging (ex: ActiveMQ)
        Pros: transactional; both C & Java integration; frees the client and server apps from needing retry logic or delivery verification; pretty straightforward encryption; easy extensibility via message filters/routers/etc.
        Cons: needs additional infrastructure (a message server) which must never fail

    Database or file system as asynchronous integration point
        Same pros/cons as above, but messier

    RESTful Web Service
        Pros: simple; possible reuse of the server's existing REST API; SSL figures out the encryption problem for you (maybe use an RSA key a la GitHub for authentication?)
        Cons: the client now needs to run a C HTTP REST server w/SSL; client and server need retry logic. Axis2 has both a Java and a C version, but you may be limited to SOAP.

    What other techniques should I be evaluating? What real-world experiences have you had with these mechanisms? Which do you recommend for this problem, and why?


  • Looking for best practices for writing a serial device communication app in C#

    - by cdotlister
    I am pretty new to serial comms, but would like advice on how best to achieve a robust application which speaks to and listens to a serial device. I have managed to make use of System.IO.Ports.SerialPort, and successfully connected to, sent data to and received data from my device.

    The way things work is this: my application connects to the COM port and opens the port. I then connect my device to the COM port; it detects a connection to the PC, so it sends a bit of text. It's really just copyright info, as well as the version of the firmware. I don't do anything with that, except display it in my 'activity' window. The device then waits.

    I can then query information by sending a command such as 'QUERY PARAMETER1'. It then replies with something like:

        'QUERY PARAMETER1\r\n\r\n76767\r\n\r\n'

    I then process that. I can then update it by sending 'SET PARAMETER1 12345', and it will reply with 'QUERY PARAMETER1\r\n\r\n12345\r\n\r\n'. All pretty basic.

    So, what I have done is created a communication class. This class runs in its own thread, and sends data back to the main form... and also allows me to send messages to it. Sending data is easy. Receiving is a bit more tricky. I have employed the DataReceived event, and whenever data comes in, I echo that to my screen.

    My problem is this: when I send a command, I feel I am being very dodgy in my handling. What I am doing is, let's say I am sending 'QUERY PARAMETER1'. I send the command to the device, I then put 'PARAMETER1' into a global variable, and I do a Thread.Sleep(100). In the DataReceived handler, I then have a bit of logic that checks the incoming data and sees if the string CONTAINS the value in the global variable. As the reply may be 'QUERY PARAMETER1\r\n\r\n76767\r\n\r\n', it sees that it contains my parameter, parses the string, and returns the value I am looking for, placing it into another global variable.

    My sending method was sleeping for 100 ms. It then wakes and checks the returned global variable. If it has data... then I'm happy, and I process the data. Problem is... if the sleep is too short, it will fail. And I feel it's flaky... putting stuff into variables... then waiting...

    The other option is to use ReadLine instead, but that's very blocking. So I remove the DataReceived handler and instead... just send the data... then call ReadLine(). That may give me better results. There's no time, except when we connect initially, that data comes from the device without me requesting it. So maybe ReadLine will be simpler and safer? Is this known as 'blocking' reads? Also, can I set a timeout? Hopefully someone can guide me.
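    To make the ReadLine-with-timeout idea concrete, here is a minimal sketch (my illustration; the class name, port name and baud rate are invented, not from the question). SerialPort.ReadTimeout makes ReadLine() throw a TimeoutException instead of blocking forever, and setting NewLine to "\r\n" matches the device's reply format:

        using System;
        using System.IO.Ports;

        class DeviceLink
        {
            private readonly SerialPort port;

            public DeviceLink(string portName)
            {
                port = new SerialPort(portName, 9600);
                port.NewLine = "\r\n";      // device terminates lines with CRLF
                port.ReadTimeout = 2000;    // ms; ReadLine() throws TimeoutException after this
                port.Open();
            }

            // Send e.g. "QUERY PARAMETER1" and return the value line of the
            // reply, or null if the device does not answer in time.
            public string Query(string command)
            {
                port.WriteLine(command);
                try
                {
                    string line = port.ReadLine();              // echoed command
                    while (line.Trim().Length == 0 || line.StartsWith(command))
                    {
                        line = port.ReadLine();                 // skip echo and blank lines
                    }
                    return line;                                // e.g. "76767"
                }
                catch (TimeoutException)
                {
                    return null;
                }
            }
        }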


  • What are the principles of developing web-applications with action-based java frameworks?

    - by Roman
    Background: I'm going to develop a new web application in Java. It's not very big or very complex, and I have enough time until it "officially" starts. I have some JSF/Facelets development background (about half a year), and I also have some experience with JSP+JSTL. For self-educational purposes (and also in order to find the best solution) I want to prototype the new project with one of the action-based frameworks. Actually, I will choose between Spring MVC and Stripes.

    Problem: In order to get a correct impression of action-based frameworks (in comparison with JSF) I want to be sure that I use them correctly (to a greater or lesser extent). So, here I list some of the most frequent tasks (at least for me) and describe how I solve them with JSF. I want to know how they should be solved with an action-based framework (or separately with Spring MVC and Stripes if there is any difference for a concrete task).

    - Rendering content: I can apply a ready-to-use component from the standard JSF libraries (core and html) or from 3rd-party libs (like RichFaces). I can combine simple components, and I can easily create my own components based on standard components.
    - Rendering data (primitive or reference types) in the correct format: each component allows specifying a converter for transforming data in both directions (to render and to send to the server). A converter is, as usual, a simple class with 2 small methods (a minimal example follows this list).
    - Site navigation: I specify a set of navigation cases in faces-config.xml. Then I specify the action attribute of a link (or a button), which should match one or more of the navigation cases. The best match is chosen by JSF.
    - Implementing flow (multiform wizards, for example): I'm using JSF 1.2, so I use Apache Orchestra for the flow (conversation) scope.
    - Form processing: I have a pretty standard java bean (backing bean in JSF terms) with some scope. I 'map' form fields onto this bean's properties. If everything goes well (no exceptions and validation is passed), then all these properties are set with values from the form fields. Then I can call one method (specified in the button's action attribute) to execute some logic and return a string which should match one of my navigation cases to go to the next screen.
    - Form validation: I can create a custom validator (or choose from existing ones) and add it to almost every component. 3rd-party libraries have sets of custom ajax validators. Standard validators work only after the page is submitted. Actually, I don't like how validation in JSF works: too much magic there. Many standard components (or maybe all of them) have predefined validation, and it's impossible to disable it (maybe not always, but I've met many problems with it).
    - Ajax support: many 3rd-party libraries (MyFaces, IceFaces, OpenFaces, AnotherPrefixFaces...) have strong ajax support, and it works pretty well. Until you meet a problem. Too much magic there as well. It's very difficult to make it work when it doesn't, even though you've done everything as described in the manual.
    - User-friendly URLs: people say there are some libraries for that. It can be done with filters as well, but I've never tried. It seems too complex at first glance.

    Thanks in advance for explaining how these items (or some of them) can be done with an action-based framework.
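    As a concrete reference point for the converter item above, here is a minimal sketch of the kind of two-method JSF converter class being described (the class name and parsing logic are invented for illustration):

        import javax.faces.component.UIComponent;
        import javax.faces.context.FacesContext;
        import javax.faces.convert.Converter;

        public class PercentConverter implements Converter {

            // Submitted form value -> model value
            public Object getAsObject(FacesContext ctx, UIComponent comp, String value) {
                return Integer.valueOf(value.replace("%", "").trim());
            }

            // Model value -> rendered text
            public String getAsString(FacesContext ctx, UIComponent comp, Object value) {
                return value + "%";
            }
        }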


  • How can I keep my MVC Views, models, and model binders as clean as possible?

    - by MBonig
    I'm rather new to MVC, and as I'm getting into the whole framework more and more, I'm finding that the model binders are becoming tough to maintain. Let me explain...

    I am writing a basic CRUD-over-database app. My domain models are going to be very rich. In an attempt to keep my controllers as thin as possible, I've set it up so that on Create/Edit commands the parameter for the action is a richly populated instance of my domain model. To do this I've implemented a custom model binder. As a result, though, this custom model binder is very specific to the view and the model.

    I've decided to just override the DefaultModelBinder that ships with MVC 2. In the case where the field being bound to my model is just a textbox (or something as simple), I just delegate to the base method. However, when I'm working with a dropdown or something more complex (the UI dictates that date and time are separate data-entry fields, but for the model it is one property), I have to perform some checks and some manual data munging. The end result is that I have some pretty tight ties between the view and the binder. I'm architecturally fine with this, but from a code-maintenance standpoint it's a nightmare.

    For example, the model I'm binding here is of type Log (this is the object I will get as a parameter on my action). ServiceStartTime is a property on Log. The form values "log.ServiceStartDate" and "log.ServiceStartTime" are totally arbitrary and come from two textboxes on the form (Html.TextBox("log.ServiceStartTime",...)):

        protected override object GetPropertyValue(ControllerContext controllerContext,
            ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor,
            IModelBinder propertyBinder)
        {
            if (propertyDescriptor.Name == "ServiceStartTime")
            {
                string date = bindingContext.ValueProvider.GetValue("log.ServiceStartDate").ConvertTo(typeof(string)) as string;
                string time = bindingContext.ValueProvider.GetValue("log.ServiceStartTime").ConvertTo(typeof(string)) as string;
                DateTime dateTime = DateTime.Parse(date + " " + time);
                return dateTime;
            }
            if (propertyDescriptor.Name == "ServiceEndTime")
            {
                string date = bindingContext.ValueProvider.GetValue("log.ServiceEndDate").ConvertTo(typeof(string)) as string;
                string time = bindingContext.ValueProvider.GetValue("log.ServiceEndTime").ConvertTo(typeof(string)) as string;
                DateTime dateTime = DateTime.Parse(date + " " + time);
                return dateTime;
            }

    The Log.ServiceEndTime handling is a similar field. This doesn't feel very DRY to me. First, if I rename ServiceStartTime or ServiceEndTime, the text strings may get missed (although my refactoring tool of choice, R#, is pretty good at this sort of thing, it wouldn't cause a build-time failure and would only get caught by manual testing). Second, if I decided to arbitrarily change the descriptors "log.ServiceStartDate" and "log.ServiceStartTime", I would run into the same problem. To me, silent runtime errors are the worst kind of error out there.

    So, I see a couple of options to help here and would love to get some input from people who have come across some of these issues:

    1. Refactor any text strings shared between the view and model binders out into const strings attached to the ViewModel object I pass from controller to the aspx/ascx view. This pollutes the ViewModel object, though.
    2. Provide unit tests around all of the interactions. I'm a big proponent of unit tests and haven't started fleshing this option out, but I've got a gut feeling that it won't save me from foot-shootings.

    If it matters, the Log and other entities in the system are persisted to the database using Fluent NHibernate. I really want to keep my controllers as thin as possible. So, any suggestions here are greatly welcomed! Thanks
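    A small sketch of what option 1 above could look like (entirely illustrative; the class and its placement are invented): the view and the binder both reference the same constants, so a rename only has to happen in one place:

        // Shared between the view and the model binder.
        public static class LogFormFields
        {
            public const string ServiceStartDate = "log.ServiceStartDate";
            public const string ServiceStartTime = "log.ServiceStartTime";
            public const string ServiceEndDate   = "log.ServiceEndDate";
            public const string ServiceEndTime   = "log.ServiceEndTime";
        }

        // In the binder:
        //   bindingContext.ValueProvider.GetValue(LogFormFields.ServiceStartDate) ...
        // In the view:
        //   Html.TextBox(LogFormFields.ServiceStartTime, ...)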


  • Inheriting XML files and modifying values

    - by Veehmot
    This is a question about concept. I have an XML file; let's call it the base:

        <base id="default">
          <tags>
            <tag>tag_one</tag>
            <tag>tag_two</tag>
            <tag>tag_three</tag>
          </tags>
          <data>
            <data_a>blue</data_a>
            <data_b>3</data_b>
          </data>
        </base>

    What I want to do is to be able to extend this XML in another file, modifying individual properties. For example, I want to inherit that file and make a new one with a different data/data_a node:

        <base id="green" import="default">
          <data>
            <data_a>green</data_a>
          </data>
        </base>

    So far it's pretty simple: it replaces the old data/data_a with the new one. I can even add a new node:

        <base id="ext" import="default">
          <moredata>
            <data>extended version</data>
          </moredata>
        </base>

    And still it's pretty simple. The problem comes when I want to delete a node or deal with XML lists (like the tags node). How should I reference a particular index in a list? I was thinking of doing something like:

        <base id="diffList" import="default">
          <tags>
            <tag index="1">this is not anymore tag_two</tag>
          </tags>
        </base>

    And for deleting a node / array index:

        <base id="deleting" import="default">
          <tags>
            <tag index="2"/>
          </tags>
          <data/>
        </base>

        <!-- This will result in an XML containing these values: -->
        <base>
          <tag>tag_one</tag>
          <tag>tag_two</tag>
        </base>

    But I'm not happy with my solutions. I don't know anything about XSLT or other XML transformation tools, but I think someone must have done this before. The key goal I'm looking for is ease of writing the XML by hand (both the base and the "extended" files). I'm open to solutions besides XML, if they are easy to write manually. Thanks for reading.
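    To make the merge semantics concrete, here is the fully resolved document the id="green" example above would produce, assuming replacement-by-path as described (this expansion is implied by the question, not stated in it):

        <base id="green">
          <tags>
            <tag>tag_one</tag>
            <tag>tag_two</tag>
            <tag>tag_three</tag>
          </tags>
          <data>
            <data_a>green</data_a>
            <data_b>3</data_b>
          </data>
        </base>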


  • I'm trying to make lots of the same object appear randomly on the screen subject to conditions and k

    - by Katsideswide
    Hi! A good friend recommended this site to me; it looks really useful! I'm a bit of a shameless noob at ActionScript, and after 3 days of tutorials and advice I've hit a brick wall.

    I've managed to get a sensor attached to an Arduino talking to Flash using something called AS3Glue. It works: when I set up a trace("leaf") for the condition that the sensor reads 0, I get a printout of the word "leaf". However, I want the program to make a graphic appear on the screen when this condition is met, not just trace something. I'm trying to get the program to generate a library object called "Enemy" at a random position on the screen each time the conditions are met. It's called Enemy because I was following a game tutorial... actually it's a drawing of a leaf. Here's the bit of the code which is causing me problems:

        var army:Array;
        var enemy:Enemy;

        function AvoiderGame() {
            army = new Array();
            var newEnemy = new Enemy( 100, 100 );
            army.push( newEnemy );
            addChild( newEnemy );
        }

        function timerEvent(event:Event):void {
            if (a.getAnalogData(0) == 0 && a.getAnalogData(0) != this.lastposition) {
                trace("leaf");
                var randomX:Number = (Math.random() * 200) + 100;
                var randomY:Number = (Math.random() * 150) + 50;
                var newEnemy = new Enemy( randomX, randomY );
                army.push( newEnemy );
                addChild( newEnemy );
            } else if (a.getAnalogData(0) == 0) {
                //don't trace anything
            } else {
                //don't trace anything
            }
            this.lastposition = a.getAnalogData(0); //afterwards, set the position to be the new lastposition and repeat.
        }

    I've imported "import flash.display.MovieClip;" and the code for the Enemy class looks like this:

        package {
            import flash.display.MovieClip;

            public class Enemy extends MovieClip {
                public function Enemy( startX:Number, startY:Number ) {
                    x = startX;
                    y = startY;
                }
            }
        }

    Here's my error. I've tried googling; it seems like a pretty general error:

        TypeError: Error #1009: Cannot access a property or method of a null object reference.
            at as3glue_program_fla::MainTimeline/timerEvent()
            at flash.utils::Timer/_timerDispatch()
            at flash.utils::Timer/tick()

    I've made sure that the "Enemy" object is exported for AS3. I'm going for something like this, which is what it looked like when it was programmed in AS2:

        leafCounter = 0; //set the counter to 0
        counter.swapDepths(1000); //puts the counter on top of pretty much anything, unless you make more than 1000 leaves!
        counter.textbox.text = 0; //shows "0" in the text box in the "counter" movie clip
        this.onMouseDown = function() { //triggers when the mouse is clicked
            this.attachMovie("Leaf","Leaf"+leafCounter,leafCounter,{_x:Math.random()*Stage.width,_y:Math.random()*Stage.height,_rotation:Math.random()*360}); //adds a leaf to the stage with a random position and random rotation
            leafCounter++; //adds 1 to the leaf counter
            counter.textbox.text = leafCounter; //shows that number in the text box
        }

    I'm sure it must be a simple error; I can get the logic working when it just traces something, but I can't get it to generate an "Enemy". Any help or hints would be really useful! I know this is a bit of a ham-fisted job of altering existing code.
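    One pattern in the posted code that is consistent with an Error #1009 (a guess, not a confirmed diagnosis): army is only assigned inside AvoiderGame(), so if timerEvent() ever fires before AvoiderGame() runs, army.push(newEnemy) dereferences a null array. A minimal defensive sketch:

        // Sketch: initialise the array at its declaration so the timer
        // handler can never see it as null, even if AvoiderGame() has
        // not been called yet.
        var army:Array = new Array();

        function timerEvent(event:Event):void {
            if (a.getAnalogData(0) == 0 && a.getAnalogData(0) != this.lastposition) {
                var newEnemy:Enemy = new Enemy((Math.random() * 200) + 100,
                                               (Math.random() * 150) + 50);
                army.push(newEnemy);
                addChild(newEnemy);
            }
            this.lastposition = a.getAnalogData(0);
        }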


  • Opinions on Copy Protection / Software Licensing via phoning home?

    - by Jakobud
    I'm developing some software that I'm going to eventually sell. I've been thinking about different copy-protection mechanisms, both custom and 3rd-party. I know that no copy protection is 100% foolproof, but I need to at least try. So I'm looking for some opinions on the approach I'm thinking about.

    One method I'm considering is just having my software connect to a remote server when it starts up, in order to verify the license based off the MAC address of the ethernet port. I'm not sure if the server would be running a MySQL database that retrieves the license information, or what... Is there a simpler way? Maybe some type of encrypted file that is read? I would make the software still work if it can't connect to the server. I don't want to lock someone out just because they don't have internet access at that moment in time. In case you are wondering, the software I'm developing is extremely internet/network dependent, so it's actually quite unlikely that the user wouldn't have internet access when using it. Actually, it's pretty useless without internet/network access.

    Anyone know what I would do about computers that have multiple MAC addresses? A lot of motherboards these days have 2 ethernet ports, and most laptops have 1 ethernet, 1 wifi and a Bluetooth MAC address. I suppose I could just pick one port and run with it. Not sure if it really matters.

    A smart and tricky user could determine the server that the software is connecting to and perhaps add it to their hosts file so that it always tries to connect to localhost. How likely do you think this is? And do you think it's possible for the software to check if this is being done? I guess parsing the hosts file could always work: look for your server address in there and see if it's mapped to localhost or something.

    I've considered dongles, but I'm trying to avoid them just because I know they are a pain to work with. Keeping them updated and possibly requiring the customer to run their own license server is a bit too much for me. I've experienced that, and it's a bit of a pain that I wouldn't want to put my customers through. Also, I'm trying to avoid the extra overhead cost of 3rd-party dongles.

    Also, I'm leaning toward connecting to a remote server to verify authentication, as opposed to just sending the user some sort of license file, because what happens when the user buys a new computer? I have to send them a replacement license file that will work with their new computer, but they will still be able to use the software on their old computer as well. There is no way for me to 'de-authorize' their old computer without asking them to run some program on it or something.

    Also, one important note: I would make it very clear to the user in the EULA that the software connects to a remote server to verify licensing and that no personal information is sent. I know I don't care much for software that does that kind of stuff without me knowing.

    Anyway, I'm just looking for opinions from people who have maybe gone down this kind of road. It seems like remote-server-dependent software would be one of the most effective copy-protection mechanisms, not just because of the difficulty of circumventing it, but also because it could be pretty easy to manage the licenses on the developer's end.

  • Developer Training – Various Options for Maximum Benefit – Part 4

    - by pinaldave
    Developer Training - Importance and Significance - Part 1
    Developer Training - Employee Morals and Ethics - Part 2
    Developer Training - Difficult Questions and Alternative Perspective - Part 3
    Developer Training - Various Options for Developer Training - Part 4
    Developer Training - A Conclusive Summary - Part 5

    If you have been reading this series, by now you are aware of all the pros and cons that can come along with training. We've asked and answered hard questions, and investigated the "whys" and "hows" of training. Now it is time to talk about all the different kinds of training that are out there!

    On the Job Training

    The most common type of training is on the job training. Everyone receives this kind of education - even experts who come in to consult have to be taught where the printer, pens, and copy machines are. If you are thinking about more concrete topics, though, on the job training can be some of the easiest to come across. Picture this: someone in the company whom you really admire is hard at work on a project. You come up to them and ask to help them out - if they are a busy developer, the odds are that they will say "yes, please!" If you phrase your question as an offer of help, you can receive training without ever putting someone in the awkward position of acting as a mentor. However, some people may want the task of being a mentor. It can never hurt to ask. Most people will be more than willing to pass their knowledge along.

    Extreme Programming

    If your company and coworkers are willing, you can even investigate Extreme Programming. This is a type of programming that allows small teams to quickly develop code and products that are released with almost immediate user feedback. You can find more information at http://www.extremeprogramming.org/. If this is something your company could use, suggest it to your supervisor. Even if they say no, it will make it clear that you are a go-getter who is interested in new and exciting projects. If the answer is yes, then you have the opportunity to get some of the best on the job training around.

    In Person Training

    When you say the word "training," most people's minds go back to the classroom, an image they are familiar with. While training doesn't always have to be in a traditional setting, its very familiarity can also make it the most valuable type of training. There are many ways to get training through a live instructor. Some companies may be willing to send a representative to you, where employees will get training, sometimes food and coffee, and a live instructor who can answer questions immediately. Sometimes these trainers are also able to do consultations at the same time, which can be invaluable to a company. If you are the one who asks your supervisor for a training session that can also be turned into a consultation, you may stick in their minds as an incredibly dedicated employee. If you can't find a representative, local colleges can also be a good resource for free or cheap classes - or they may have representatives coming who are willing to take on a few more students.

    Benefits of On Demand Developer Training

    Of course, you can often get the best of all these types of training with online or On Demand training. You get the benefit of a live instructor who is willing to answer questions (although in this case, usually through e-mail or other online venues), there are often real-world examples to follow along with - like on the job training - and best of all, you can learn whenever you have the time or the need. Did a problem with your server come up at midnight when all your supervisors are safe at home and probably in bed? No problem! On Demand training is especially useful if you need to slow down, pause, or rewind a training session. Not even a real-life instructor can do that!

    When I was writing this blog post, I felt that each of the subjects I have covered could be a blog post of its own. However, I wanted to keep the blog post concise, so I touched on three major training aspects: 1) On the Job Training, 2) In Person Training, and 3) Online Training.

    Here is the question for you: are there any other training methods available which are effective and which one should consider? If yes, what are they? I may write a follow-up blog post on the same subject next week.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Stand-Up Desk 2012 Update

    - by BuckWoody
    One of the more popular topics here on my technical blog doesn't have to do with technology, per se - it's about the choice I made to go to a stand-up desk work environment. If you're interested in the history, check here: Stand-Up Desk Part One, Stand-Up Desk Part Two. I have made some changes and I was asked to post those here.

Yes, I'm still standing - I think the experiment has worked well, so I'm continuing to work this way. I've become so used to it that I notice when I sit for a long time. If I'm flying, or driving a long way, or have long meetings, I take breaks to stand up and move around. That being said, I don't stand as much as I did. I started out by standing the entire day - which did not end well. As you can read in my second post, I found that sitting down for a few minutes each hour worked out much better. Over time I would say that I now stand about 70-80% of the day, depending on the day. Some days I don't even notice I'm standing, so I don't sit as often. Other days I find that I tire quickly - so I sit more often. But in both cases, I stand more than I sit.

In the first post you can read about how I used a simple coffee table from Ikea to elevate my desktop to the right height. I then adjusted the height where I stand by using a small plastic square and some carpet. Over time I found this did not work as well as I'd like. The primary reason is that the fronts of both are at the same depth - so my knees would hit the desk or table when I sat down. Also, the desk was at a fixed height, and I had to adjust to it, rather than the other way around. I also like a lot of surface area on top of a desk - almost more of a table. Routing cables and wiring was a pain, and of course moving it was out of the question.

So I've changed what I use. I found a perfect solution for what I was looking for - industrial wire shelving. I bought one unit, built only half of it (for the right height I wanted) and arranged the shelves the way I wanted. I then got a 5'x4' piece of wood from Lowes and mounted it so that the top was balanced but had an overhang I could get my knees under easily. My wife sewed a piece of fake leather for the top. This arrangement provides the following benefits:

- Very strong
- Rolls easily; the wheels can lock to prevent rolling
- Long, wide shelves
- The wire frame allows me to route any kind of wiring and other things all over the desk

I plugged in my UPS and ran its longer power cable to the wall outlet. I then ran the router's LAN connection along that wire, and covered both with a large insulation sleeve. I then plugged everything into the UPS and routed all the wiring. I can now roll the desk almost anywhere in the room so that I can record, look out the window, get closer to or farther away from the door and more. I put a few boxes on the shelves as "drawers" and tidied that part up. Even my printer fits on a shelf. (Laser-dog not included - some assembly required.)

In the second post you can read about the bar stool I purchased from Target for the desk. I cheaped out on this one, and it proved to be a bad choice. Because I had to raise it so high, and was constantly sitting on it and then standing up, the gas cylinder in it just gave out, so it became a very short stool that I ended up getting rid of. In the end, one from Ikea proved to be a better choice.

And so this arrangement is working out perfectly. I'm finding myself VERY productive this way. Although I was skeptical at first, I've found it to be a very healthy, easy way to code, design and especially present over a web-cam. It's natural to stand when you're presenting, and it feels more energetic than sitting down to talk to others. I hope these posts help you if you decide to try working at a stand-up desk.

    Read the article

  • Adding a Network Loopback Adapter to Windows 8

    - by Greg Low
    I have to say that I continue to be frustrated with finding out how to do things in Windows 8. Here's another one, and it's recorded so it might help someone else. I've also documented what I tried so that if anyone from the product group ever reads this, they'll understand how I searched for it and might try to make it easier.

I wanted to add a network loopback adapter, to have a fixed IP address to work with when using an "internal" network with Hyper-V. (The fact that I even need to do this is also painful. I don't know why Hyper-V can't make it as easy to work with host system folders, etc. as I can with VirtualPC, VirtualBox, etc., but that's a topic for another day.) In the end, what I needed was a known IP address on the same network that my guest OS was using, via the internal network (which allows connectivity from the host OS to/from guest OSs).

I started by looking in the network adapters area, but there is no "add" functionality there. Realising that this was likely to be another unexpected challenge, I resorted to searching for info on doing this. I found KB article 2777200, entitled "Installing the Microsoft Loopback Adapter in Windows 8 and Windows Server 2012". Aha, I thought, that's what I'd need. It describes the symptom as "You are trying to install the Microsoft Loopback Adapter, but are unable to find it," and that certainly sounded like me. There's a certain irony in documenting that something's hard to find instead of making it easier to find. Anyway, you'd hope that in that article they'd then provide a step-by-step example of how to do it, but what they supply is this: The Microsoft Loopback Adapter was renamed in Windows 8 and Windows Server 2012. The new name is "Microsoft KM-TEST Loopback Adapter". When using the Add Hardware Wizard to manually add a network adapter, choose Manufacturer "Microsoft" and choose network adapter "Microsoft KM-TEST Loopback Adapter".

The trick with this, of course, is finding the "Add Hardware Wizard". In Control Panel -> Hardware and Sound, there are options to "Add a device" and for "Device Manager". I tried the "Add a device" wizard (seemed logical to me), but after that wizard tries its best, it just tells you that there isn't any hardware that it thinks it needs to install. It offers a link for when you can't find what you're looking for, but that leads to a generic help page that tells you how to do things like turning on your printer. In Device Manager, I checked the options in the program menus, and nothing useful was present. I even tried right-clicking "Network adapters", hoping that would lead to an option to add one, also to no avail.

So back to the search engine I went, to try to find out where the "Add Hardware Wizard" is. It turns out I was in the right place in Device Manager, but I needed to right-click the computer's name and choose "Add Legacy Hardware". No doubt that hasn't changed location lately, but it's a while since I needed to add one, so I'd forgotten. Regardless, I'm left wondering why it couldn't be in the menu as well. Anyway, for a step-by-step list, you need to do the following:

1. From Control Panel, select "Device Manager" under the "Devices and Printers" section of the "Hardware and Sound" tab.
2. Right-click the name of the computer at the top of the tree, and choose "Add Legacy Hardware".
3. In the "Welcome to the Add Hardware Wizard" window, click Next.
4. In the "The wizard can help you install other hardware" window, choose the "Install the hardware that I manually select from a list" option and click Next.
5. In the "The wizard did not find any new hardware on your computer" window, click Next.
6. In the "From the list below, select the type of hardware you are installing" window, select "Network Adapters" from the list, and click Next.
7. In the "Select Network Adapter" window, from the Manufacturer list choose Microsoft, then in the Network Adapter list choose "Microsoft KM-TEST Loopback Adapter", and click Next.
8. In the "The wizard is ready to install your hardware" window, click Next.
9. In the "Completing the Add Hardware Wizard" window, click Finish.

Then you need to continue to set the IP address, etc.

10. Back in Control Panel, select the "Network and Internet" tab, and click "View Network Status and Tasks".
11. In the "View your basic network information and set up connections" window, click "Change adapter settings".
12. Right-click the new adapter that has been added (find it in the list by checking for the device name "Microsoft KM-TEST Loopback Adapter"), and click Properties.
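As an aside that the article above doesn't cover, and assuming you have devcon.exe available (it ships with the Windows Driver Kit), the same adapter can reportedly be installed from an elevated command prompt in a single line, which is handy when you need to script it:

devcon.exe install %WINDIR%\Inf\netloop.inf *MSLOOP

The netloop.inf file and the *MSLOOP hardware ID are what the Add Hardware Wizard uses under the covers; verify both on your own system before relying on this.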

    Read the article

  • Control HelpButton, HelpRequested, HelpButtonClicked - Instant help for windows Dialog Form components

    Instant help for Windows dialog components is a great feature, and it has been around since Windows 98, but I have seen many people who are not aware of it and who query Google to get help. The "help button" on dialogs lets you or your customers get help instantly. Every dialog window has a help icon if that dialog was coded to enable it, and it really helps you learn the functionality of the components quickly. For example, I was trying to print a document from Acrobat Reader and opened the printer properties to print the content front and back of the paper. If you observe, there is a help button before the close button. To get help on the options of "Print on Both Sides", you would need to click on the help button first and then click on the area for which you want to see the help. The picture above shows the help text for the options of "Print on Both Sides". If you would like to get the help using the keyboard, you can use the F1 key. The Help button can be displayed only if the minimize button and maximize button are both hidden, unless you want to go with custom buttons. Below is how to get the Help button for Windows Forms.

In this sample demo I want to have a checkbox and show help when I press F1 on the checkbox, so I created a form with a country checkbox and a help label, as shown in the adjacent picture. Below is the code for your code-behind file.

using System;
using System.Windows.Forms;

namespace WindowsFormsApplication1
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            this.Text = "Help Button Demo Form";
            lblHelp.Text = "Press F1 on any component to get Instant Help";
            this.HelpButton = true;
            this.MaximizeBox = false;
            this.MinimizeBox = false;
            chkCountry.Tag = "Check or Uncheck Country Check Box";
            chkCountry.HelpRequested += new HelpEventHandler(chkCountry_HelpRequested);
            chkCountry.MouseLeave += new EventHandler(chkCountry_MouseLeave);
        }

        void chkCountry_HelpRequested(object sender, HelpEventArgs hlpevent)
        {
            // Show the help text stored in the control's Tag property.
            Control requestingControl = (Control)sender;
            lblHelp.Text = (string)requestingControl.Tag;
            hlpevent.Handled = true;
        }

        void chkCountry_MouseLeave(object sender, EventArgs e)
        {
            lblHelp.Text = "Press F1 on any component to get Instant Help";
        }
    }
}

In the above code, "HelpRequested" is the event that fires when you press F1 while the Country checkbox has focus. I stored the help information in the checkbox property called "Tag". You might also maintain a properties file to keep the help text for each component separately. If you press F1 when focus is on the main form rather than an individual component, a separate help window generally opens. This can be done using the "Form.HelpRequested" event to open the help window, as in the code below.

this.HelpRequested += new HelpEventHandler(Form1_HelpRequested);

void Form1_HelpRequested(object sender, HelpEventArgs hlpevent)
{
    frmHelp.Show();
}
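The title also mentions the HelpButtonClicked event, which the demo above doesn't use. As a small addition, here is a minimal sketch of handling it: this event fires when the user clicks the "?" button in the title bar, before the form enters help mode, and setting e.Cancel to true lets you replace the built-in help-cursor behaviour with your own. The lblHelp label is carried over from the demo above.

using System.ComponentModel; // for CancelEventArgs

// In Form1_Load:
this.HelpButtonClicked += new CancelEventHandler(Form1_HelpButtonClicked);

void Form1_HelpButtonClicked(object sender, CancelEventArgs e)
{
    lblHelp.Text = "Help mode: now click any control to see its help text.";
    e.Cancel = false; // set to true to suppress the built-in help cursor
}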

    Read the article

  • Quick guide to Oracle IRM 11g: Server configuration

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

Welcome to the second article in this quick guide to Oracle IRM 11g. Hopefully you've just finished the first article, which takes you through deploying the software onto a Linux server. This article walks you through the configuration of this new service; it contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information.

Contents
- Introduction
- Create IRM WebLogic Domain
- Starting the Admin Server and initial configuration

Introduction
In the previous article the database was prepared, the WebLogic Application Server installed and the files required for an IRM server installed. But we don't actually have a configured system yet. We now need to create a WebLogic Domain in which the IRM server will run, then configure some of the settings and cryptography so that we can create a context and be ready to seal some content and test that it all works. This article doesn't cover the configuration of SSL communication from client to server; this is quite a big topic and a separate article has been dedicated to this area. In these articles I also use the hostname irm.company.internal to reference the IRM server, and later on use the hostname irm.company.com in reference to the public-facing service.

Create IRM WebLogic Domain
The first step is creating the WebLogic domain. In a console, switch to the newly created IRM installation folder as shown below, and we will run the domain configuration wizard.

[oracle@irm /]$ cd /oracle/middleware/Oracle_IRM/common/bin
[oracle@irm bin]$ ./config.sh

The first thing the wizard will ask is whether you wish to create a new domain or extend an existing one. This guide is creating a standalone system, so you should select to create a new domain. The next step is to choose which technologies from the Oracle ECM Suite you wish this domain to host. You are only interested in selecting the option "Oracle Information Rights Management". When you select this check box you will notice that it also selects "Oracle Enterprise Manager" and "Oracle JRF", as these are dependencies of the IRM server. You then need to specify where you wish to place the domain files. I usually just change the domain name from base_domain to irm_domain and leave the others with their defaults. The domain will have a single user initially, and by default this user is called "weblogic". I usually change this account name to "sysadmin" or "administrator", but in this guide let's just accept the default. With respect to the next dialog, again for eval or dev purposes, leave the server startup mode as development. The JDK should also be automatically detected. We now need to provide details of the database. This guide is using the Oracle 11gR2 database, and the settings I used can be seen in the image to the right. There is a lot of configuration that can now be done for the admin server, any managed servers and where the deployments reside. In this guide I am leaving all of these at their defaults, so do not check any of the boxes. However, I will later detail on this blog how you can go back and set up things such as automated startup of an IRM server, which requires changes to these default settings. But for now, let's leave it all alone and just click next. Now we are ready to install.
Note that from this dialog you can scroll the left window and see that two servers are going to be created from the defaults: the AdminServer, which is where you modify settings for the WebLogic Server and which also hosts the Oracle Enterprise Manager for IRM (allowing you to monitor the IRM service performance and make service-related settings, which we do shortly below), and IRM_server1, which hosts the actual IRM services themselves. So go right ahead and hit create; the process is pretty quick, usually under 10 minutes. When the domain creation ends, it will give you the URL to the admin server. It's worth noting this down; the URL is usually: http://irm.company.internal:7001

Starting the Admin Server and initial configuration
The first thing to do is start the WebLogic Admin server and review the initial IRM server settings. In this guide we are going to run the Admin server and IRM server in console windows; in another article I will discuss running these as background services. So for now, start a console and run the Admin server by doing the following.

cd /oracle/middleware/user_projects/domains/irm_domain/
./startWebLogic.sh

Wait for the server to start; you are looking for the following line to be reported in the console window.

<BEA-00360><Server started in RUNNING mode>

The first step is configuring the IRM service via Enterprise Manager. Now that the Admin server is running, you can point a browser at http://irm.company.internal:7001/em. Login with the username and password you supplied when you created the domain. In Enterprise Manager the IRM service administrator is able to make server-wide configuration. However, finding where to access the pages with these settings can be a bit of a challenge. After logging in, on the left you'll see a tree containing elements of the Enterprise Manager farm, Farm_irm_domain. Open up Content Management, then Information Rights Management, and finally select the IRM node. On the right then select the IRM menu item, navigate to the Administration section, and now we have four options; for now, we are just going to look at General Settings. The image on the right proves that a picture is worth a thousand words (or 113 in this case). The General Settings page allows you to set the cryptographic algorithms used for protecting sealed content. Unless you have a burning need to increase the key lengths, or you need to comply with a regulation or government mandate, AES192 is a good start. You can change this later on without worry. The most important setting here is the Server URL. In this blog article I go over why this URL is so important; basically, every single piece of content you protect with Oracle IRM is going to have this URL embedded in it, so if it's wrong or unresolvable, then nobody can open the secured documents. Note that in our environment we have yet to do any SSL configuration of the service. If you intend to build a server without SSL, then use http as the protocol instead of https. But I would recommend using SSL, and setting this up is described in the next article. I would also probably up the device count from 1 to 3. This means that any user can retrieve rights to access content on up to 3 computers at any one time. The default of 1 doesn't really make sense in development, evaluation or even production environments, and my experience is that 3 is a better number. The next step is to create the keystore for the IRM server.
When a classification (called a context) is created, Oracle IRM generates a unique set of symmetric keys which are used to secure the content itself. These keys are then encrypted with a set of "wrapper" asymmetric cryptography keys which are stored externally to the server, either in a Java Key Store or an HSM. These keys need to be generated, and the following shows my commands and the resulting output. I have greyed out the responses from the commands so you can see the input a little more easily.

[oracle@irmsrv ~]$ cd /oracle/middleware/wlserver_10.3/server/bin/
[oracle@irmsrv bin]$ ./setWLSEnv.sh
CLASSPATH=/oracle/middleware/patch_wls1033/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/oracle/middleware/patch_ocp353/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/usr/java/jdk1.6.0_18/lib/tools.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic_sp.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic.jar:/oracle/middleware/modules/features/weblogic.server.modules_10.3.3.0.jar:/oracle/middleware/wlserver_10.3/server/lib/webservices.jar:/oracle/middleware/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/oracle/middleware/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar:
PATH=/oracle/middleware/wlserver_10.3/server/bin:/oracle/middleware/modules/org.apache.ant_1.7.1/bin:/usr/java/jdk1.6.0_18/jre/bin:/usr/java/jdk1.6.0_18/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin
Your environment has been set.
[oracle@irmsrv bin]$ cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
[oracle@irmsrv fmwconfig]$ keytool -genkeypair -alias oracle.irm.wrap -keyalg RSA -keysize 2048 -keystore irm.jks
Enter keystore password:
Re-enter new password:
What is your first and last name? [Unknown]: Simon Thorpe
What is the name of your organizational unit? [Unknown]: Oracle
What is the name of your organization? [Unknown]: Oracle
What is the name of your City or Locality? [Unknown]: San Francisco
What is the name of your State or Province? [Unknown]: CA
What is the two-letter country code for this unit? [Unknown]: US
Is CN=Simon Thorpe, OU=Oracle, O=Oracle, L=San Francisco, ST=CA, C=US correct? [no]: yes
Enter key password for (RETURN if same as keystore password):

At this point we now have an irm.jks in the directory /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig. The reason we store it here is that this folder would be backed up as part of a domain backup. As with any cryptographic technology, DO NOT LOSE THESE KEYS OR THIS KEY STORE. Once you've sealed content against a context, the content keys will be wrapped with these keys; lose them, and you can't get access to any secured content. Pretty important. Now that we've got the keys created, we need to go back to the IRM Enterprise Manager and set the location of the key store. Going back to the General Settings page in Enterprise Manager, scroll down to Keystore Settings. Leave the type as JKS but change the location to /oracle/Middleware/user_projects/domains/irm_domain/config/fmwconfig/irm.jks and hit Apply. The final step with regard to the key store is that we need to tell the server what the password is for the Java Key Store so that it can be opened and the keys accessed. Once more fire up a console window and run these commands (again I've greyed out the clutter so you can see the commands more easily). You will see dummy passed into the commands; this is because the command asks for a username, but in this instance we don't use one, hence the value dummy is passed and it isn't used.
[oracle@irmsrv fmwconfig]$ cd /oracle/middleware/Oracle_IRM/common/bin/
[oracle@irmsrv bin]$ ./wlst.sh
... lots of settings fly by...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
wls:/offline>connect('weblogic','password','t3://irmsrv.us.oracle.com:7001')
Connecting to t3://irmsrv.us.oracle.com:7001 with userid weblogic ...
Successfully connected to Admin Server 'AdminServer' that belongs to domain 'irm_domain'.
Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.
wls:/irm_domain/serverConfig>createCred("IRM","keystore:irm.jks","dummy","password")
Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root. For more help, use help(domainRuntime)
wls:/irm_domain/serverConfig>createCred("IRM","key:irm.jks:oracle.irm.wrap","dummy","password")
Already in Domain Runtime Tree
wls:/irm_domain/serverConfig>

At last we are ready to fire up the IRM server itself. The domain creation created a managed server called IRM_server1, and we need to start it. Use the following commands in a new console window.

cd /oracle/middleware/user_projects/domains/irm_domain/bin/
./startManagedWebLogic.sh IRM_server1

This will start the server in the console. Unlike the Admin server, you need to provide the username and password for the service to start; enter your weblogic username and password when prompted. You can change this behavior by putting the password into a boot.properties file; read more about this in the WebLogic Server documentation. Once running, wait until you see the line:

<Notice><WebLogicServer><BEA-000360><Server started in RUNNING mode>

At this point we can login to the Oracle IRM Management Website at the URL http://irm.company.internal:1600/irm_rights/. The server is just configured for HTTP at the moment, no SSL involved; we just want to ensure we can get a working system up and running. You should now see a login page like the image on the right, and you can login using your weblogic username and password. The next article in this guide goes over adding SSL and then testing your server by actually adding a few users, sealing some content and opening this content as a user.
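One small check worth doing before you move on (my suggestion, not part of the original walkthrough): confirm that the wrapper key pair actually landed in the key store. The standard JDK keytool can list it, assuming keytool is on your path and you use the keystore password you set earlier:

keytool -list -v -keystore irm.jks -alias oracle.irm.wrap

You should see a private-key entry for the 2048-bit RSA pair generated above; if the alias is missing, repeat the -genkeypair step before wiring the keystore into Enterprise Manager.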

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be "one table for every entity persistent class." This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both "is a" and "has a" relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don't support type inheritance, and even when it's available, it's usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:

- Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information.
- Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
- Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.

I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and will learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario.

Inheritance Mapping with Entity Framework Code First
All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I'm a big fan of the EF Code First approach, and I'm pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, not only does Code First fully support all the strategies, it also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise in comparison to CTP4.

A Note for Those Who Follow Other Entity Framework Approaches
If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series since, although the implementation is Code First specific, the explanations around each of the strategies apply perfectly to all approaches, be it Code First or others.

A Note for Those Who Are New to Entity Framework and Code First
If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and also that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First, like Complex Types and Shared Primary Key Associations.

A Top-Down Development Scenario
These posts take a top-down approach; they assume that you're starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C# and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you're working bottom up, starting with existing database tables.
I'll show some tricks along the way that help you deal with imperfect table layouts. Let's start with the mapping of entity inheritance.

The Domain Model
In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We allow various billing types and represent them as subclasses of the BillingDetail class. For now, we support CreditCard and BankAccount.

Implement the Object Model with Code First
As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class, which is BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention.

public abstract class BillingDetail
{
    public int BillingDetailId { get; set; }
    public string Owner { get; set; }
    public string Number { get; set; }
}

public class BankAccount : BillingDetail
{
    public string BankName { get; set; }
    public string Swift { get; set; }
}

public class CreditCard : BillingDetail
{
    public int CardType { get; set; }
    public string ExpiryMonth { get; set; }
    public string ExpiryYear { get; set; }
}

public class InheritanceMappingContext : DbContext
{
    public DbSet<BillingDetail> BillingDetails { get; set; }
}

This object model is all that is needed to enable inheritance with Code First. If you put this in your application you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.

Polymorphic Queries
LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries – that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query:

IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b;
List<BillingDetail> billingDetails = linqQuery.ToList();

Or the same query in EntitySQL:

string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b";
ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext
                                                                         .CreateQuery<BillingDetail>(eSqlQuery);
List<BillingDetail> billingDetails = objectQuery.ToList();

linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class, but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.

Non-polymorphic Queries
All LINQ to Entities and EntitySQL queries are polymorphic, returning not only instances of the specific entity class to which they refer, but all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and which only return instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
For example, the following query returns only instances of BankAccount:

IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b;

EntitySQL has an OFTYPE operator that does the same thing:

string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b";

In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators:

string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)
                     FROM BillingDetails AS b
                     WHERE b IS OF(Model.BankAccount)";

(Note that in the above query, Model.BankAccount is the fully qualified name for the BankAccount class. You need to change "Model" to your own namespace name.)

Table per Class Hierarchy (TPH)
An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don't have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy. This mapping strategy is a winner in terms of both performance and simplicity. It's the best-performing way to represent polymorphism – both polymorphic and non-polymorphic queries perform well – and it's even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward.

Discriminator Column
As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn't a property of the persistent class in our object model; it's used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names – in this case, "BankAccount" or "CreditCard". EF Code First automatically sets and retrieves the discriminator values.

TPH Requires Properties in Subclasses to Be Nullable in the Database
TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property in a persistent class. But in this case, since a BankAccount instance won't have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.

TPH Violates the Third Normal Form
Another important issue is normalization. We've created functional dependencies between non-key columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
Generated SQL Query
Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:

SELECT
 [Extent1].[Discriminator] AS [Discriminator],
 [Extent1].[BillingDetailId] AS [BillingDetailId],
 [Extent1].[Owner] AS [Owner],
 [Extent1].[Number] AS [Number],
 [Extent1].[BankName] AS [BankName],
 [Extent1].[Swift] AS [Swift],
 [Extent1].[CardType] AS [CardType],
 [Extent1].[ExpiryMonth] AS [ExpiryMonth],
 [Extent1].[ExpiryYear] AS [ExpiryYear]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')

And the non-polymorphic query for the BankAccount subclass generates this SQL statement:

SELECT
 [Extent1].[BillingDetailId] AS [BillingDetailId],
 [Extent1].[Owner] AS [Owner],
 [Extent1].[Number] AS [Number],
 [Extent1].[BankName] AS [BankName],
 [Extent1].[Swift] AS [Swift]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] = 'BankAccount'

Note how Code First adds a restriction on the discriminator column and also how it only selects those columns that belong to the BankAccount entity.

Change Discriminator Column Data Type and Values with Fluent API
Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:

protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
{
    modelBuilder.Entity<BillingDetail>()
                .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
}

Changing the data type of the discriminator column is also interesting. In the above code, we passed strings to the HasValue method, but this method has been defined to accept an object:

public void HasValue(object value);

Therefore, if, for example, we pass a value of type int to it, then Code First not only uses our desired values (i.e. 1 & 2) in the discriminator column but also changes the column type to be (INT, NOT NULL):

modelBuilder.Entity<BillingDetail>()
            .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
            .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));

Summary
In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design – after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn't expose you to this problem.

References: ADO.NET team blog; Java Persistence with Hibernate book
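To round this out, here is a small usage sketch (my own illustration, not from the original post) showing TPH in action: both concrete types are saved through the single BillingDetails set and come back from one polymorphic query, with the discriminator column handled entirely by Code First. The context and entity classes are the ones defined above; the sample values are made up.

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        using (var context = new InheritanceMappingContext())
        {
            // Both rows land in the single BillingDetails table,
            // distinguished only by the Discriminator column.
            context.BillingDetails.Add(new BankAccount { Owner = "Morteza", Number = "1", BankName = "X", Swift = "XXXX" });
            context.BillingDetails.Add(new CreditCard { Owner = "Morteza", Number = "2", CardType = 1, ExpiryMonth = "01", ExpiryYear = "2013" });
            context.SaveChanges();

            // Polymorphic query: returns BankAccount and CreditCard instances.
            foreach (BillingDetail detail in context.BillingDetails)
                Console.WriteLine("{0}: {1}", detail.GetType().Name, detail.Number);
        }
    }
}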

    Read the article

  • Windows Phone 7 development: reading RSS feeds

    - by DigiMortal
    One limitation of Windows Phone 7 is related to the System.Net namespace classes. There is no convenient way to read data from the web: there is no WebClient class, and there is no GetResponse() method – we have to do it all asynchronously, because the compact framework has a limited set of classes we can use in our applications to communicate with the internet. In this posting I will show you how to read RSS feeds on Windows Phone 7. NB! This is my draft code and it may contain some design flaws and some questionable solutions. This code is intended as a test drive for the Windows Phone 7 CTP developer tools, and I don't suppose you are going to use this code in a production environment.

Current state of my RSS reader
Currently my RSS reader for Windows Phone 7 is very simple and primitive, and it uses almost all the defaults that come out of the box with the Windows Phone 7 CTP developer tools. My first goal, before going on with a nicer user interface design, was making RSS reading work, because instead of convenient classes from the .NET Framework we have to use a very limited set of classes from .NET Framework CE. This is why I took the reading of RSS feeds as my first task. There are currently more things to solve regarding the user interface. As I am pretty new to all this Silverlight stuff, I am not very sure whether I can modify the default controls easily or should write my own controls that look better and work faster. The image on the right shows how my RSS reader looks right now. The upper part of the screen is filled with a list that shows headlines from this blog. The bottom part of the screen is used to show the description of the selected posting. You can click on the image to see it in original size. In my next posting I will show you some improvements to my RSS reader user interface that make it look nicer. But currently it is good enough to make sure that RSS feeds are read correctly.

FeedItem class
As this is the most straightforward part of the following code, I will show you the RSS feed item class first. It is worth stopping on because it is a simple one.

public class FeedItem
{
    public string Title { get; set; }
    public string Description { get; set; }
    public DateTime PublishDate { get; set; }
    public List<string> Categories { get; set; }
    public string Link { get; set; }

    public FeedItem()
    {
        Categories = new List<string>();
    }
}

RssClient
RssClient takes a feed URL and, when asked, loads all items from the feed and gives them back to the caller through the ItemsReceived event. Why does it work this way? Because we can get responses only by using asynchronous methods. I will show you in the next section how to use this class. Although the code here is not perfect, it works as expected. I will refactor this code later because it needs some more effort and investigation. But let's hope I find an excellent solution. :)
public class RssClient
{
    private readonly string _rssUrl;

    public delegate void ItemsReceivedDelegate(RssClient client, IList<FeedItem> items);
    public event ItemsReceivedDelegate ItemsReceived;

    public RssClient(string rssUrl)
    {
        _rssUrl = rssUrl;
    }

    public void LoadItems()
    {
        var request = (HttpWebRequest)WebRequest.Create(_rssUrl);
        var result = (IAsyncResult)request.BeginGetResponse(ResponseCallback, request);
    }

    void ResponseCallback(IAsyncResult result)
    {
        var request = (HttpWebRequest)result.AsyncState;
        var response = request.EndGetResponse(result);

        var stream = response.GetResponseStream();
        var reader = XmlReader.Create(stream);
        var items = new List<FeedItem>(50);

        FeedItem item = null;
        var pointerMoved = false;

        while (!reader.EOF)
        {
            if (pointerMoved)
            {
                pointerMoved = false;
            }
            else
            {
                if (!reader.Read())
                    break;
            }

            var nodeName = reader.Name;
            var nodeType = reader.NodeType;

            if (nodeName == "item")
            {
                if (nodeType == XmlNodeType.Element)
                    item = new FeedItem();
                else if (nodeType == XmlNodeType.EndElement)
                    if (item != null)
                    {
                        items.Add(item);
                        item = null;
                    }

                continue;
            }

            if (nodeType != XmlNodeType.Element)
                continue;

            if (item == null)
                continue;

            reader.MoveToContent();
            var nodeValue = reader.ReadElementContentAsString();
            // we just moved the internal pointer
            pointerMoved = true;

            if (nodeName == "title")
                item.Title = nodeValue;
            else if (nodeName == "description")
                item.Description = Regex.Replace(nodeValue, @"<(.|\n)*?>", string.Empty);
            else if (nodeName == "feedburner:origLink")
                item.Link = nodeValue;
            else if (nodeName == "pubDate")
            {
                if (!string.IsNullOrEmpty(nodeValue))
                    item.PublishDate = DateTime.Parse(nodeValue);
            }
            else if (nodeName == "category")
                item.Categories.Add(nodeValue);
        }

        if (ItemsReceived != null)
            ItemsReceived(this, items);
    }
}

This method is pretty long, but it works. Now let's try to use it in a Windows Phone 7 application.

Using RssClient
The following is the fragment of code behind the main page of my application's start screen. You can see how RssClient is initialized and how items are bound to the list that shows them.
public MainPage()
{
    InitializeComponent();

    SupportedOrientations = SupportedPageOrientation.Portrait | SupportedPageOrientation.Landscape;
    listBox1.Width = Width;

    var rssClient = new RssClient("http://feedproxy.google.com/gunnarpeipman");
    rssClient.ItemsReceived += new RssClient.ItemsReceivedDelegate(rssClient_ItemsReceived);
    rssClient.LoadItems();
}

void rssClient_ItemsReceived(RssClient client, IList<FeedItem> items)
{
    // Marshal back to the UI thread before touching UI elements.
    Dispatcher.BeginInvoke(delegate()
    {
        listBox1.ItemsSource = items;
    });
}

Conclusion
As you can see, it was not a very hard task to read an RSS feed and populate a list with feed entries. Although we are not able to use the more powerful classes that are part of the full version of the .NET Framework, we can live with the limited set of classes that .NET Framework CE provides.

    Read the article

  • The Execute SQL Task

    In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it applies just as well to SQL Server 2008). We will be covering all the essentials that you will need to know to use this task effectively and make it as flexible as possible. The things we will be looking at are as follows:

- A tour of the Task.
- The properties of the Task.

After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task:

- Returning a single value from a SQL query with two input parameters.
- Returning a rowset from a SQL query.
- Executing a stored procedure and retrieving a rowset, a return value and an output parameter value, and passing in an input parameter.
- Passing in the SQL Statement from a variable.
- Passing in the SQL Statement from a file.

Tour of the Task
Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package, expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image, a validation error appears telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. Only certain types of connection manager are compatible with this task, so we cannot just create any connection manager; these are detailed a few graphics further down. Double click on the task itself to take a look at the custom user interface provided for this task. The task will open on the General tab as shown below. Take a bit of time to have a look around here, as throughout this article we will be revisiting this page many times. Whilst on the General tab, drop down the combobox next to the ConnectionType property. In here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the ResultSet property. The major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property you will see the ways in which you can pass a SQL Statement into the task itself. We will have examples of each of these later on, but certainly when we saw these for the first time we were very excited. Next to the SQLStatement property, if you click in the empty box beside it, you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively, after you have specified a connection manager for the task, you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent. Once you've finished looking around the General tab, move on to the next tab, which is the Parameter Mapping tab. We shall, again, be visiting this tab throughout the article, but to give you an initial heads up, this is where you define the input, output and return values for your task. Note this is not where you specify the resultset. If, however, you now move on to the Result Set tab, this is where you define what variable will receive the output from your SQL Statement in whatever form that is.
Property Expressions are one of the most amazing things to happen in SSIS, and they will not be covered here as they deserve a whole article to themselves. Watch out for these, as their usefulness will astound you. For a more detailed discussion of what the parameter markers in the SQL Statements on the General tab should be, and how to map them to variables on the Parameter Mapping tab, see Working with Parameters and Return Codes in the Execute SQL Task.

Task Properties
There are two places where you can specify the properties for your task. One is in the task UI itself, and the other is in the property pane which will appear if you right-click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later, so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks, as well as the package itself, because of everything's base structure, the Container.

- BypassPrepare: Should the statement be prepared before sending to the connection manager destination (True/False)?
- Connection: This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package.
- DelayValidation: A really interesting property; it tells the task not to validate until it actually executes. A usage for this may be that you are operating on a table yet to be created, but at runtime you know the table will be there.
- Description: Very simply, the description of your Task.
- Disable: Should the task be enabled or not? You can also set this through a context menu by right-clicking on the task itself.
- DisableEventHandlers: As a result of events that happen in the task, should the event handlers for the container fire?
- ExecValueVariable: The variable assigned here will get or set the execution value of the task.
- Expressions: Expressions, as we mentioned earlier, are a really powerful tool in SSIS, and the graphic below shows a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right, causing the value to be dynamically changed at runtime. One of the most obvious uses of this is that the property value can be built dynamically from within the package, allowing you a great deal of flexibility.
- FailPackageOnFailure: If this task fails, does the package?
- FailParentOnFailure: If this task fails, does the parent container? A task can be hosted inside another container, i.e. the For Each Loop Container, and this would then be the parent.
- ForcedExecutionValue: This property allows you to hard-code an execution value for the task.
- ForcedExecutionValueType: What is the datatype of the ForcedExecutionValue?
- ForceExecutionResult: Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion.
- ForceExecutionValue: Should we force the execution result?
- IsolationLevel: This is the transaction isolation level of the task.
- IsStoredProcedure: Certain optimisations are made by the task if it knows that the query is a Stored Procedure invocation. The docs say this will always be false unless the connection is an ADO connection.
- LocaleID: Gets or sets the LocaleID of the container.
- LoggingMode: Should we log for this container, and what settings should we use? The value choices are UseParentSetting, Enabled and Disabled.
- MaximumErrorCount: How many times can the task fail before we call it a day?
- Name: Very simply, the name of the task.
- ResultSetType: How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML.
- SqlStatementSource: Your Query/SQL Statement.
- SqlStatementSourceType: The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables.
- TimeOut: How long should the task wait to receive results?
- TransactionOption: How should the task handle being asked to join a transaction?

Usage Examples
As we move through the examples we will only cover what we think you must know and what we think you should see. This means that some of the more elementary steps, like setting up variables, will be covered in the early examples but skipped and simply referred to in later ones. All these examples use the AdventureWorks database that comes with SQL Server 2005.

Returning a Single Value, Passing in Two Input Parameters
So the first thing we are going to do is add some variables to our package. The graphic below shows those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query, and EndDate and StartDate will be used as input parameters. As you can see, all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope, and remember a package is a container as well. Variable values of the parent container can be seen in child containers, but cannot be passed back up to the parent from a child. Our following graphic has had a number of changes made. The first of those changes is that we have created and assigned an OLEDB connection manager to this task, ExecuteSQL Task Connection. The next thing is we have made sure that the SQLSourceType property is set to Direct Input, as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The statement we typed in was:

SELECT COUNT(*) AS CountOfEmployees
FROM HumanResources.Employee
WHERE (HireDate BETWEEN ? AND ?)

Moving on now to the Parameter Mapping tab, this is where we are going to tell the task about our input parameters. We add them to the window, specifying their direction and datatype. A quick word here about the structure of the variable name. As you can see, SSIS has preceded the variable with the word "User". This is a default namespace for variables, but you can create your own. When defining your variables, if you look at the variables window title bar you will see some icons. If you hover over the last one on the right you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options, and one of them is Namespace. After checking this you will see where you can define your own namespace. The next tab, Result Set, is where we need to get back the value(s) returned from our statement and assign them to a variable, which in our case is CountOfEmployees, so we can use it later perhaps. Because we are only returning a single value, if you remember from earlier, we are allowed to assign a name to the resultset, but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Studio being hosted by Visual Studio is that we get breakpoint support for free.
In our package we set a breakpoint so we can break the package and look, in a watch window, at the variable values as they appear to our task, and at the value of our resultset variable after the task has done the assignment. Here's that window now. As you can see, the count of employees that matched the date range was 2.

Returning a Rowset
In this example we are going to return a resultset back to a variable after the task has executed, not just a single-row, single value. There are no input parameters required, so the variables window is nice and straightforward: one variable of type Object. Here is the statement that will form the source for our resultset:

SELECT p.ProductNumber, p.Name, pc.Name AS ProductCategoryName
FROM Production.ProductCategory pc
JOIN Production.ProductSubCategory psc
ON pc.ProductCategoryID = psc.ProductCategoryID
JOIN Production.Product p
ON psc.ProductSubCategoryID = p.ProductSubCategoryID

We need to make sure that we have selected "Full result set" as the ResultSet, as shown below on the task's General tab. Because there are no input parameters we can skip the Parameter Mapping tab and move straight to the Result Set tab. Here we need to add our variable defined earlier and map it to the result name of 0 (remember, we covered this earlier). Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object, and in later articles that is exactly what we shall be doing.

Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure
This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet to be a full result set, so we will not cover it again here. For this example we are going to need four variables: one for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that, and we were amazed at how much easier it is than in DTS 2000.

Passing in the SQL Statement from a Variable
SSIS, as we have mentioned, is hugely more flexible than its predecessor, and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The Execute SQL Task is no different. It will allow us to pass in a string variable as the SQL Statement. This variable value could have been set earlier on from inside the package, or it could have been populated from outside using a configuration. The ResultSet property is set to single row, and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General tab again. Looking at the variables we have in this package, you can see we have only two: one for the return value from the statement and one which is obviously for the statement itself. Again we need to map the Result Name to our variable, and this can be a named Result Name (the column name or alias returned by the query) and not 0.
The expected result in our variable should be the number of rows in the Person.Contact table, and if we look in the watch window we see that it is.

Passing in the SQL Statement from a File
The final example we are going to show is a really interesting one. We are going to pass the SQL statement to the task by using a file connection manager; the file itself contains the statement to run. The first thing we are going to need to do is create our file connection manager to point to our file. Click in the connections tray at the bottom of the designer, right-click and choose "New File Connection". As you can see in the graphic below, we have chosen to use an existing file and have passed in the name as well. Have a look around at the other "Usage Type" values available whilst you are here. Having set that up, we can now see in the connection manager tray our file connection manager sitting alongside the OLE-DB connection we have been using for the rest of these examples. Now we can go back to the familiar General tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for the other examples, depending on the options chosen, so we will not cover them again here.

We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available, but once you have configured it a few times you get to learn what needs to go where. We hope you have found this article useful.
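As a small addendum to the "Returning a Rowset" example above (our own sketch, not from the original walkthrough): once the full result set is sitting in the Object variable, one of the "pretty interesting things" you can do with the underlying ADO recordset is shuffle it into a DataTable inside a Script Task. The variable name User::ProductList is hypothetical; substitute whichever Object variable you mapped on the Result Set tab, and make sure it is listed in the Script Task's ReadOnlyVariables.

// Inside the Script Task's Main() method (C#, available from SQL Server 2008;
// the 2005 designer offers VB.NET, where the equivalent code applies).
using System.Data;
using System.Data.OleDb;

DataTable table = new DataTable();
OleDbDataAdapter adapter = new OleDbDataAdapter();

// Fill accepts the ADO recordset stored in the SSIS Object variable.
adapter.Fill(table, Dts.Variables["User::ProductList"].Value);

foreach (DataRow row in table.Rows)
{
    // Do something with each row, e.g. inspect ProductNumber.
    System.Diagnostics.Debug.WriteLine(row["ProductNumber"]);
}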

    Read the article
