Search Results

Search found 14958 results on 599 pages for 'non technical'.


  • What would you suggest as a high school first language?

    - by ldigas
    Edit by OA: After reading some answers I'll update the question a little. At first I put it rather bluntly, but some of the answers gave me good arguments that have to be taken into consideration when making a stand on this one (these are mostly picked up from the comments and answers below). A few things to take into account:

    - To many pupils this is a first programming language. At this stage most of them have trouble grasping the difference between data types, variable passing, and so on, let alone pointers and similar "low level stuff". :)
    - They will all have to pass this to get into the next grade (the big majority of them, anyway).
    - Not all of them have computers at home, and not all of them are willing to learn this, let alone interested in it, so the concepts have to be taught on a finite time scale in school hours (as well as practiced on computers).
    - Free literature is a bonus. The teacher will make some scripts and handouts, but still, I wouldn't like to burden the parents with buying expensive literature. (Also, English is not a native language here, and although the pupils are all learning it, their ability to read it fluently is somewhat questionable.)
    - Somebody gave the argument "a language which does not get in the way of ideas" - a good one.
    - Accessibility on different platforms is not especially important at this point, although most of the suggested languages are available on Windows as well as Linux. There are not many Macs in this part of Europe (their prices are sky high for anything but specialised usage).
    - I will check what the licensing issues are on the MS Express editions regarding using them at scale in high schools for purposes like this. If someone has any info about this, please, do not be shy with it. :)

    A friend of mine, an informatics teacher (in the EU this comes to something like a junior CS teacher) at a local high school, asked me what I thought should be the first language pupils are taught. It is a technical school (a little more oriented towards mathematics than the gymnasium, but not totally computer oriented). So I'm asking you: what do you think should be the first language pupils are exposed to in high school?

    They have been teaching Pascal so far, but she's not sure that's a good course. She thought about switching to C, which I resented; not all pupils are interested in programming to start with, and they should be taught something higher level while they are just gripping the idea of a loop and such. I suggested Python or Ruby (preferably Python, since it handles all the paradigms). What is your opinion on this one? I looked, but didn't find a similar question on SO, so if there is one, please just point me towards it.

    Edit: The assumption is that none of the pupils have been exposed to any programming in junior school.

    See also: What is the best way to teach young kids some basic programming concepts? | Best ways to teach a beginner to program | How and when do you teach a kid to code | What is the easiest language to start with?


  • How do you combine "Revision Control" with "WorkFlow" for R?

    - by Tal Galili
    Hello all, I remember coming across R users writing that they use "revision control" (e.g. "source control"), and I am curious to know: how do you combine revision control with your statistical analysis workflow? Two (very) interesting discussions talk about how to deal with the workflow, but neither of them refers to the revision control element: http://stackoverflow.com/questions/1266279/how-to-organize-large-r-programs http://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing

    A long update to the question: Following some of the answers, and Dirk's question in the comments, I would like to direct my question a bit more. After reading the Wiki article about revision control (which I was previously not familiar with), it was clear to me that when using revision control, what one does is build a development structure of one's code. This structure either leads to a "final product" or to several branches. When building something like, let's say, a website, there is usually one end product you work towards (the website), with some prototypes along the way. But when doing a statistical analysis, the work (in my view) is different. Sometimes you know where you want to get to, but more often you explore: explore cleaning the dataset, explore different methods for statistical analysis, and ask various questions of your data (and I am writing this knowing how Frank Harrell and other experienced statisticians feel about data dredging). That is why the workflow question with statistical programming is (in my view) a serious and deep question, raising many issues. The simpler ones are technical:

    - Which revision control software do you use (and why)?
    - Which IDE do you use (and why)?

    The more interesting questions are about the work process:

    - How do you structure your files? What do you keep as a separate file and what as a revision? Or, asking it a different way: what should be a "branch" and what should be a "sub project" in your code? For example: when starting to explore your data, should a plot be created and then erased because it didn't lead anywhere (but kept as a revision), or should there be a backup file of that path? How you resolve this tension was my initial curiosity.
    - The second question is "what might I be missing?". What rules of thumb should one follow so as to avoid common pitfalls when doing statistical programming with version control?

    In my intuition, I feel that statistical programming is inherently different than software development (I am writing this without being a real expert in statistical programming, and even less so in software development). That's why I am unsure which of the lessons I have read here about version control would be applicable. Thanks a lot, Tal
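    To make the "branch vs. backup file" tension above concrete, here is a minimal sketch assuming git as the revision control tool (the file names are hypothetical):

        # put the analysis scripts under version control
        git init analysis-project
        cd analysis-project
        git add clean-data.R explore-plots.R model.R report.Rnw
        git commit -m "Initial data cleaning and exploration scripts"

        # a speculative exploration lives on its own branch rather than in a
        # copy of the file; if it leads nowhere it can simply be abandoned,
        # yet it remains recoverable from the repository history
        git checkout -b explore-spline-fit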


  • PHP PDO connection scope

    - by Scarface
    Hey guys, I have a connection class I found for PDO. I am calling the connection method on the page that the file is included on. The problem is that within functions the $conn variable is not defined, even though I stated the method was public (bear with me, I am very new to OOP), and I was wondering if anyone had an elegant solution other than using global in every function. Any suggestions are greatly appreciated.

    CONNECTION

        class PDOConnectionFactory{
            // receives the connection
            public $con = null;
            // switch database?
            public $dbType = "mysql";
            // connection parameters
            // when not necessary, leave blank (only the double quotation marks "")
            public $host = "localhost";
            public $user = "user";
            public $senha = "password";
            public $db = "database";
            // persistence of the connection
            public $persistent = false;

            // new PDOConnectionFactory( true )  <--- persistent connection
            // new PDOConnectionFactory()        <--- no persistent connection
            public function PDOConnectionFactory( $persistent=false ){
                // it verifies the persistence of the connection
                if( $persistent != false){ $this->persistent = true; }
            }

            public function getConnection(){
                try{
                    // it carries through the connection
                    $this->con = new PDO($this->dbType.":host=".$this->host.";dbname=".$this->db, $this->user, $this->senha,
                        array( PDO::ATTR_PERSISTENT => $this->persistent ) );
                    // carried through successfully, it returns connected
                    return $this->con;
                // in case an error occurs, it returns the error
                }catch ( PDOException $ex ){
                    echo "We are currently experiencing technical difficulties. We have a bunch of monkies working really hard to fix the problem. Check back soon: ".$ex->getMessage();
                }
            }

            // close connection
            public function Close(){
                if( $this->con != null ) $this->con = null;
            }
        }

    PAGE USED ON

        include("includes/connection.php");
        $db = new PDOConnectionFactory();
        $conn = $db->getConnection();

        function test(){
            try{
                $sql = 'SELECT * FROM topic';
                $stmt = $conn->prepare($sql);
                $result=$stmt->execute();
            } catch(PDOException $e){
                echo $e->getMessage();
            }
        }
        test();
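    One common fix, sketched minimally against the class above (other approaches, such as a wrapper class or dependency injection, work too): pass the connection into the function as a parameter instead of relying on the including page's scope.

        // Sketch: hand the PDO connection to the function explicitly, so it no
        // longer depends on a variable defined outside the function's scope.
        include("includes/connection.php");

        function test(PDO $conn){
            try{
                $stmt = $conn->prepare('SELECT * FROM topic');
                $stmt->execute();
            } catch(PDOException $e){
                echo $e->getMessage();
            }
        }

        $db = new PDOConnectionFactory();
        test($db->getConnection());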


  • Objective-C basic class-related question: retaining the value of a specific object using a class file

    - by von steiner
    Members, scholars, code gurus. My background is far from any computer programming, thus my question may seem basic and somewhat trivial to you. Nevertheless, it seems that I can't put my head around it. I have googled and searched for the answer, just to get myself confused even more. With that, I would kindly ask for a simple explanation suitable for a non-technical person such as myself, and for others alike arriving at this thread. I have left a comment with the text "Here is the issue" below, referring to my question.

        // character.h
        #import <Foundation/Foundation.h>

        @interface character : NSObject {
            NSString *name;
            int hitPoints;
            int armorClass;
        }
        @property (nonatomic,retain) NSString *name;
        @property int hitPoints,armorClass;
        -(void)giveCharacterInfo;
        @end

        // character.m
        #import "character.h"

        @implementation character
        @synthesize name,hitPoints,armorClass;

        -(void)giveCharacterInfo{
            NSLog(@"name:%@ HP:%i AC:%i",name,hitPoints,armorClass);
        }
        @end

        // ClassAtLastViewController.h
        #import <UIKit/UIKit.h>

        @interface ClassAtLastViewController : UIViewController {
        }
        -(void)callAgain;
        @end

        // ClassAtLastViewController.m
        #import "ClassAtLastViewController.h"
        #import "character.h"

        @implementation ClassAtLastViewController

        - (void)viewDidLoad {
            //[super viewDidLoad];
            character *player = [[character alloc]init];
            player.name = @"Minsc";
            player.hitPoints = 140;
            player.armorClass = 10;
            [player giveCharacterInfo];
            [player release];
            // Up until here, all peachy!
            [self performSelector:@selector(callAgain) withObject:nil afterDelay:2.0];
        }

        -(void)callAgain{
            // Here is the issue. I assume that since I init the player again, I lose everything.
            // Q1. I lose all the data I set above; where is it then?
            // Q2. What is the proper way to implement this?
            character *player = [[character alloc]init];
            [player giveCharacterInfo];
        }
        @end

    Many thanks in advance. Kindly remember that my background is more related to salmon breeding than to computer code, so try and lower your answer to my level if it's all the same to you.
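    A minimal sketch of one common answer to Q2 (manual retain/release era, matching the code above): keep the instance in an instance variable owned by the view controller, so the data set in viewDidLoad is still there when callAgain runs. The second alloc/init in the question creates a brand-new, empty object; the first one, together with its data, was destroyed by the release in viewDidLoad.

        // ClassAtLastViewController.h (sketch)
        #import <UIKit/UIKit.h>
        #import "character.h"

        @interface ClassAtLastViewController : UIViewController {
            character *player;   // the controller owns the object between method calls
        }
        -(void)callAgain;
        @end

        // ClassAtLastViewController.m (sketch)
        - (void)viewDidLoad {
            [super viewDidLoad];
            player = [[character alloc] init];   // no release here; the ivar keeps it alive
            player.name = @"Minsc";
            player.hitPoints = 140;
            player.armorClass = 10;
            [self performSelector:@selector(callAgain) withObject:nil afterDelay:2.0];
        }

        -(void)callAgain{
            [player giveCharacterInfo];   // same instance, values intact
            [player release];             // release once the object is no longer needed
            player = nil;
        }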


  • Mixing .NET versions between website and virtual directories and the "Server Application Unavailable" error message

    - by Doug Chamberlain
    Backstory: Last month our development team created a new ASP.NET 3.5 application to place out on our production website. Once we had the work completed, we requested that the group that manages our server copy the app out to our production site and configure the virtual directory as a new application. On 12/27/2010, two public "guinea pigs" were selected to use the app, and it worked great. On 12/30/2010, we received notification from internal staff that when one staff member (the business process owner) tried to access the application, they received the "Server Application Unavailable" message. When I called the group that does our server support, I was told that it probably failed because I didn't close the connections in my code. However, the same group then went in and created a separate app pool for this Extension Request application, and it has had no issues since. I did a little googling, since I do not like being blamed for things, and found that the "Server Application Unavailable" message will also appear when you have multiple applications using different frameworks and you do not put them in different application pools.

    Technical details - tree of our website structure:

        Main Website <-- ASP Classic
        +- Virtual Directory (ExtensionRequest) <-- ASP.NET 3.5

    From our server support group: "Reviewed server logs and website setup in IIS. Had to reset the application pool as it was not working properly. This corrected the website and it is now back online. We went ahead and created an application pool for the extension web so it is isolated from the main site pool. In the past we have seen other applications do this when there is a connection being left open and the pool fills up. Would recommend reviewing site code to make sure no connections are being left open."

    The real question: What really caused the failure? Isn't the connection-being-left-open issue an ASP Classic issue? Wouldn't the ExtensionRequest application have to be used (more than twice) in the first place to have connections left open? Is it more likely the failure was caused by them not bothering to set up the new application in its own app pool in the first place? Sorry for the long-windedness.


  • Recommended ASP.NET Shared Hosting

    - by coffeeaddict
    Ok, I have to admit I'm getting fed up with www.discountasp.net's pricing model, and this annoyance has built up over the past 8 years or so. I've been with them for years and absolutely love them on the technical side; however, it's getting ridiculously expensive for how little you get. I mean, here's my scenario:

    1) I am running 2 SQL Server databases, which cost me $10/ea per month, so that's $20/month for the two, and I only get 500 MB disk space, which is horrible.
    2) I am paying $10/mo just for the hosting itself, for which I only get 1 gig of disk space! I mean, come on!
    3) I am simply running 2 small apps (ScrewTurn Wiki & Subtext blog), so I don't really care if it's up 99% or not; it's not worth paying a total of $300 just to keep these 2 apps running at discountasp.net.

    Anyone else feel the same? Yes, I know they have great support, and probably have great servers running behind this, but in the end I really don't care as long as my site is up 95% or better. Yes, the hosting toolset rocks, but you know, I bet I can find a similar set somewhere else. I like how I can totally control IIS 7 at discountasp and can manage my own app pool, etc. That's very powerful and essential. But does anyone have any good alternatives to discountasp that give me close to the same at a much more reasonable cost point? I mean, http://www.m6.net/prices.aspx gives you 10 SQL databases for $7 and 200 gigs disk space! I don't know about their tools or support, but just looking at those numbers and some other hosts I've seen, I feel that discountasp.net is way out of line. They don't even offer any purchasing discounts; for example, it would be nice if my 2nd SQL Server database were only $5/month, not $10 - stuff like this, to make it much more realistic and fair. Opinions (people who have discountasp.net, people who have left them, or people who have another host they like)? But geez, $300 just to host a couple of DBs and lightweight open source apps? Not worth the price they are charging. I'm almost at a price point that enables me to get a decent dedicated server! I really don't care about beta support. Not a big deal to me.


  • How to dynamically change the content of a facet in a custom component?

    - by romaintaz
    Hello, let's consider that I want to extend an existing JSF component, such as <rich:dataTable/>. My main requirement is to dynamically modify the content of a <f:facet>, to change its content. What is the best way to achieve that? Or where is the best place in the code to achieve that?

    In my faces-config.xml, I have the following declaration:

        <faces-config>
            ...
            <component>
                <component-type>my.component.dataTable</component-type>
                <component-class>my.project.component.table.MyHtmlDataTable</component-class>
            </component>
            ...
            <render-kit>
                <render-kit-id>HTML_BASIC</render-kit-id>
                <renderer>
                    <component-family>org.richfaces.DataTable</component-family>
                    <renderer-type>my.renderkit.dataTable</renderer-type>
                    <renderer-class>my.project.component.table.MyDataTableRenderer</renderer-class>
                </renderer>
            ...

    Also, my my-project.taglib.xml file (as I use Facelets) looks like:

        <facelet-taglib>
            <namespace>http://my.project/jsf</namespace>
            <tag>
                <tag-name>dataTable</tag-name>
                <component>
                    <component-type>my.component.dataTable</component-type>
                    <renderer-type>my.renderkit.dataTable</renderer-type>
                </component>
            </tag>

    So as you can see, I have two classes in my project for my custom data table: MyHtmlDataTable and MyDataTableRenderer. One idea is to modify the content of the <f:facet> directly in the doEncodeBegin() method of my renderer. This is working (in fact almost working), but I don't really think that's the best place for my modification. What do you think?

    Technical information: JSF 1.2, Facelets, RichFaces 3.3.2, Java 1.6
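    For reference, a minimal sketch of what the facet swap itself might look like inside the renderer's doEncodeBegin() (the signature follows the RichFaces 3.x renderer base class; HtmlOutputText stands in for whatever real content is needed):

        import java.io.IOException;

        import javax.faces.component.UIComponent;
        import javax.faces.component.html.HtmlOutputText;
        import javax.faces.context.FacesContext;
        import javax.faces.context.ResponseWriter;

        // inside MyDataTableRenderer
        @Override
        protected void doEncodeBegin(ResponseWriter writer, FacesContext context,
                UIComponent component) throws IOException {
            // build the replacement content and install it as the "header" facet
            // before the table renders itself
            HtmlOutputText header = new HtmlOutputText();
            header.setValue("Dynamically generated header");
            component.getFacets().put("header", header);
            super.doEncodeBegin(writer, context, component);
        }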


  • Planning to create PDF files in Ruby on Rails

    - by deau
    Hi there, a Ruby on Rails app will have access to a number of images and fonts. The images are components of a visual layout which will be stored separately as a set of rules. The rules specify document dimensions along with which images are used and where. The app needs to take these rules, fetch the images, and generate a PDF that is ready for local printing or emailing.

    The fonts will also be important: the user needs to customize the layout by inputting text which will be included in the PDF. The PDF must therefore also contain the desired font so that the document renders identically across different machines. Each PDF may have many pages. Each page may have different dimensions, but this is not essential. Either way, the ability to manipulate the dimensions and margins of the PDF is essential. The only thing that needs to be regularly changed is the text. If this takes too much development, then the app can store the layouts in 3rd-party PDFs and edit the textual content directly. Eventually, though, this will prove too restrictive on the app's intended functionality, so I would prefer the app to generate the PDFs itself.

    I have never worked with PDFs before and, for the most part, I've never had to output anything to the user outside their monitor. A printed medium could require a very different approach to get the best results. If anyone has any advice on how to model the PDF format, it would be really appreciated. The technical aspects of printing such as bleed, resolution and colour have already been factored into the layouts and images. I am aware that PDF is a proprietary file format and I want to use free or open source software. I have seen a number of Ruby libraries for generating PDF files, but because I am new on this scene I have no way to reliably compare them and too little time to implement and test them all. I also have the option of using C to handle this feature, and if this is process intensive then that might be preferred. What should I be thinking about, and how should I be planning to implement this?
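    As one data point for that comparison, pure-Ruby libraries such as Prawn cover the requirements listed above (page dimensions, embedded TTF fonts, positioned images). A minimal sketch, with hypothetical file names:

        require "prawn"

        # page_size is given in PDF points; the font and image are embedded
        # into the generated file, so it renders the same on any machine
        Prawn::Document.generate("layout.pdf", :page_size => [595, 842]) do
          font "fonts/custom.ttf"                        # embed the layout's font
          image "images/background.png", :at => [0, 700] # place a layout image
          text "User-supplied text goes here"
        end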


  • ERROR: Linux route add command failed: external program exited with error status: 4

    - by JohnMerlino
    A remote machine running Fedora uses OpenVPN, and multiple developers were successfully able to connect to it via their OpenVPN clients. However, I am running Ubuntu 12.04 and I am having trouble connecting to the server via VPN. I copied ca.crt, home.key, and home.crt from the server to the /etc/openvpn folder on my local machine. My client.conf file looks like this:

        ##############################################
        # Sample client-side OpenVPN 2.0 config file #
        # for connecting to multi-client server.     #
        #                                            #
        # This configuration can be used by multiple #
        # clients, however each client should have   #
        # its own cert and key files.                #
        #                                            #
        # On Windows, you might want to rename this  #
        # file so it has a .ovpn extension           #
        ##############################################

        # Specify that we are a client and that we
        # will be pulling certain config file directives
        # from the server.
        client

        # Use the same setting as you are using on the server.
        # On most systems, the VPN will not function
        # unless you partially or fully disable
        # the firewall for the TUN/TAP interface.
        ;dev tap
        dev tun

        # Windows needs the TAP-Win32 adapter name
        # from the Network Connections panel
        # if you have more than one. On XP SP2,
        # you may need to disable the firewall
        # for the TAP adapter.
        ;dev-node MyTap

        # Are we connecting to a TCP or UDP server?
        # Use the same setting as on the server.
        ;proto tcp
        proto udp

        # The hostname/IP and port of the server.
        # You can have multiple remote entries
        # to load balance between the servers.
        remote xx.xxx.xx.130 1194
        ;remote my-server-2 1194

        # Choose a random host from the remote
        # list for load-balancing. Otherwise
        # try hosts in the order specified.
        ;remote-random

        # Keep trying indefinitely to resolve the
        # host name of the OpenVPN server. Very useful
        # on machines which are not permanently connected
        # to the internet such as laptops.
        resolv-retry infinite

        # Most clients don't need to bind to
        # a specific local port number.
        nobind

        # Downgrade privileges after initialization (non-Windows only)
        ;user nobody
        ;group nogroup

        # Try to preserve some state across restarts.
        persist-key
        persist-tun

        # If you are connecting through an
        # HTTP proxy to reach the actual OpenVPN
        # server, put the proxy server/IP and
        # port number here. See the man page
        # if your proxy server requires authentication.
        ;http-proxy-retry # retry on connection failures
        ;http-proxy [proxy server] [proxy port #]

        # Wireless networks often produce a lot
        # of duplicate packets. Set this flag
        # to silence duplicate packet warnings.
        ;mute-replay-warnings

        # SSL/TLS parms.
        # See the server config file for more description.
        # It's best to use a separate .crt/.key file pair
        # for each client. A single ca
        # file can be used for all clients.
        ca ca.crt
        cert home.crt
        key home.key

        # Verify server certificate by checking
        # that the certicate has the nsCertType
        # field set to "server". This is an
        # important precaution to protect against
        # a potential attack discussed here:
        # http://openvpn.net/howto.html#mitm
        #
        # To use this feature, you will need to generate
        # your server certificates with the nsCertType
        # field set to "server". The build-key-server
        # script in the easy-rsa folder will do this.
        ns-cert-type server

        # If a tls-auth key is used on the server
        # then every client must also have the key.
        ;tls-auth ta.key 1

        # Select a cryptographic cipher.
        # If the cipher option is used on the server
        # then you must also specify it here.
        ;cipher x

        # Enable compression on the VPN link.
        # Don't enable this unless it is also
        # enabled in the server config file.
        comp-lzo

        # Set log file verbosity.
        verb 3

        # Silence repeating messages
        ;mute 20

    But when I start the OpenVPN service and look in /var/log/syslog, I notice the following error:

        May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.252 gw 10.27.12.37
        May 27 22:13:51 myuser ovpn-client[5626]: ERROR: Linux route add command failed: external program exited with error status: 4
        May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 172.27.12.0 netmask 255.255.255.0 gw 10.27.12.37
        May 27 22:13:51 myuser ovpn-client[5626]: /sbin/route add -net 10.27.12.1 netmask 255.255.255.255 gw 10.27.12.37

    And I am unable to connect to the server over the VPN:

        $ ssh [email protected]
        ssh: connect to host xxx.xx.xx.130 port 22: No route to host

    What may I be doing wrong?


  • AGENT: The World's Smartest Watch

    - by Rob Chartier
    AGENT: The World's Smartest Watch, by Secret Labs + House of Horology

    Disclaimer: Most if not all of this content has been gleaned from the comments on the Kickstarter project page and comments section. For any discrepancies between this post and documentation on agentwatches.com, kickstarter.com, etc., those official sites take precedence.

    Overview
    The next-generation smartwatch with brand-new technology: world-class developer tools, unparalleled battery life, Qi wireless charging. Kickstarter page and comments; funding period: May 21, 2013 - Jun 20, 2013; MSRP: $249.

    Other URLs
    http://www.agentwatches.com/
    https://www.facebook.com/agentwatches
    http://twitter.com/agentwatches
    http://pinterest.com/agentwatches/
    http://paper.li/robchartier/1371234640

    Developer Story
    The first official launch of the preview SDK and emulator will happen on 20-Jun-2013. All development will be done in Visual Studio 2012, using the .NET Micro Framework SDK 2.3. The SDK will ship with the first round of the expected developer API, along with an emulator. With that said, there is no need to wait for the SDK: you can download the tooling now and get started with apps and faces immediately. The only thing you will not be able to work with is the API; for watch faces, for example, you can start building the basic face rendering with the Bitmap graphics drawing in the .NET Micro Framework (a face-drawing sketch appears at the end of this post).

    Does it look good?
    Before we dig into any more of the gory details, a few notes on the currently available prototype models: the watch on the tiny Qi charger; if you wander too far away from your phone, your watch will let you know with a vibration and a message (all but one button will dismiss the message); an app showing the premium weather data; nice stitching on the straps.

    On to those gory details...

    Hardware Specs
    - Processor: 120MHz ARM Cortex-M4 (ATSAM4SD32) with secondary AVR co-processor.
    - Flash & RAM: 2MB of onboard flash and 160KB of RAM; 1/4 of the onboard flash will be used by the OS; the flash is permanent (non-volatile) storage.
    - Bluetooth: Bluetooth 4.0 BD/EDR + LE. Bluetooth 4.0 is backwards compatible with Bluetooth 2.1, so classic Bluetooth functions (BD/EDR, SPP/AVRCP/PBAP/etc.) will work fine.
    - Sensors: 3D accelerometer (motion) ST LSM303DLHC, ambient light sensor, hardware power metering, vibration motor (you can pulse it to create vibration patterns; not sure about the vibration strength - driven with PWM). No piezo/speaker or microphone.
    - Other: Qi wireless charging (no NFC, no wall adapter included), custom LED backlight, no GPS in the watch (it uses the GPS in your phone). AGENT watch apps are deployed and debugged wirelessly from your PC via Bluetooth. RoHS, Pb-free.

    Battery
    Expected to use a CR2430-sized rechargeable battery, replaceable (Mouser, Amazon). Estimated charging time from empty is 2 hours with the provided charger. Battery life is 7 days typical with Bluetooth on, 30 days with Bluetooth off (watch-face-only mode). The battery should last at least 2 years, with 100s of charge cycles.

    Physical dimensions
    Roughly 38mm top-to-bottom and 35mm left-to-right on the front face, around 12mm in depth; 22mm strap; two ~1/16" hex screws attach the watch pin. The top watchcase material candidates are PVD stainless steel, brushed matte ceramic, and high-quality polycarbonate (TBD). The glass lens is mineral glass, anti-glare.

    Strap options
    Leather and silicone straps will be available, expected in three lengths (short, regular, long).

    Display
    1.28" Sharp Memory Display, 128x128 pixels; the display stays on 100% of the time.

    Buttons
    Custom "pusher" buttons; they will not make noise like a mouse click and are very durable. The top-left button activates the backlight; the bottom-left changes apps; the three buttons on the right are up/select/down and can be used for custom purposes by apps. The backup reset procedure is currently activated by holding the home/menu button and the top-right user button for about ten seconds.

    Device Support
    Android 2.3 or newer, iPhone 4S or newer, Windows Phone 8 or newer. For heart rate monitors, Bluetooth SPP or Bluetooth LE (GATT) is what you'll want the monitor to support. Almost limitless Bluetooth device support!

    Internationalization & Localization
    Full UTF-8 support from the ground up. AGENT's user interface is in English; your content (caller ID, music tracks, notifications) will be in your native language. There is a plan to cover most major character sets, with Latin characters pre-loaded on the watch. Simplified Chinese will be available.

    Feature overview
    Phone-lost alert, caller ID, music control (possibly volume control), wireless charging, timer, stopwatch, vibrating alarm (possibly custom vibrations for caller ID), a few default watch faces, airplane mode (on demand or on low power), can be turned off completely, and customizable 3rd-party watch faces and applications which can be loaded over Bluetooth. Sample apps that may be installed: Weather. Sample apps not installed: Exercise App.

    Other
    Possible Skype integration over Bluetooth. They will provide an AGENT app for your smartphone (iPhone, Android, Windows Phone); you'll be able to use it to load apps onto the watch. You will be able to cancel phone calls, and with compatible phones you can also answer, end, etc. They are adopting the standard hands-free profile to provide these features and caller ID.
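    As a taste of the Bitmap-based face rendering mentioned in the Developer Story, here is a rough sketch using the stock .NET Micro Framework drawing API (the AGENT-specific SDK was not yet released at the time, so everything beyond the standard Bitmap/Colors types is an assumption):

        using Microsoft.SPOT;
        using Microsoft.SPOT.Presentation.Media;

        public class FaceSketch
        {
            public static void Main()
            {
                // Draw into an off-screen buffer sized for the 128x128
                // Sharp Memory Display, then push it to the screen.
                Bitmap face = new Bitmap(128, 128);
                face.DrawLine(Colors.White, 1, 64, 64, 64, 20);   // stand-in "hour hand"
                face.DrawLine(Colors.White, 1, 64, 64, 100, 64);  // stand-in "minute hand"
                face.Flush();                                     // blit the buffer to the display
            }
        }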


  • Configuring Novell iPrint client on Ubuntu 13.10

    - by Mahdi Sadeghi
    Recently I have struggled a lot to make the Novell iPrint client work on my laptop. I need it to use the Follow Me printers at our university (you can collect your print from any printer). Using this tutorial from Novell, I tried to convert the RPM package and install it on Ubuntu 13.04 and 13.10. The post-install script of the generated deb package had a typo, which I saw in the post-install messages and fixed. Now I have the client running. To see the client UI I installed the Cinnamon desktop (because Unity does not have a system tray, and the old whitelisting workarounds didn't work for the Novell client). I have the iPrint plugin installed in Firefox as well (I copied the shared object files to the plugin directories).

    I try installing printers from the provided IPP URL (which lists the available printers on the server) with no success. I get various errors after clicking the printer name. Formerly Firefox used to ask for my network username/password when installing the SSL printer, but now it returns:

        iPrint Printer - The printer is currently not available.

    However, I can install the non-SSL version, but the printer location is either empty or points to file:///dev/null, and even if I change it to the exact address which I see on working machines, it still prints nothing. I have tried the Novell command line tool, iprntcmd, to print. It is installed at /opt/novell/iprint/bin/:

        msadeghi@werkstatt:/opt/novell/iprint/bin$ ./iprntcmd --addprinter ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me\ -\ IPP
        iprntcmd v05.04.00
        Adding printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP.
        Added printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP successfully.

    It adds the printer with an empty location, and again no print. What I found interesting is the log file at ~/.iprint/errors.txt, with strange errors which I hope somebody here can understand. When I try to install the SSL printer I receive these logs (note that HP is my local printer and has nothing to do with iPrint):

        Thu Oct 31 11:02:03 2013
        Trace Info: iprint.c, line 6690
        Group Info: IPRINT-lib
        Error Code: 4096 (0x1000)
        User ID: 1000
        Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
        Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file

        Thu Oct 31 11:02:03 2013
        Trace Info: iprint.c, line 6800
        Group Info: IPRINT-lib
        Error Code: 4096 (0x1000)
        User ID: 1000
        Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
        Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified

        Thu Oct 31 11:02:05 2013
        Trace Info: iprint.c, line 6690
        Group Info: IPRINT-lib
        Error Code: 4096 (0x1000)
        User ID: 1000
        Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
        Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file

        Thu Oct 31 11:02:05 2013
        Trace Info: iprint.c, line 6800
        Group Info: IPRINT-lib
        Error Code: 4096 (0x1000)
        User ID: 1000
        Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
        Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified

        Thu Oct 31 11:02:06 2013
        Trace Info: mydoreq.c, line 676
        Group Info: CLIB
        Error Code: 0 (0x0)
        User ID: 1000
        Error Msg: Success
        Debug Msg: MyCupsDoFileRequest - httpReconnect failed (0)

        Thu Oct 31 11:02:06 2013
        Trace Info: mydoreq.c, line 1293
        Group Info: CUPS-IPP
        Error Code: 1282 (0x502)
        User ID: 1000
        Error Msg: iPrint Printer - The printer is currently not available.
        Debug Msg: MyCupsDoFileRequest - IPP SERVICE UNAVAILABLE

        Thu Oct 31 11:02:06 2013
        Trace Info: iprint.c, line 6690
        Group Info: IPRINT-lib
        Error Code: 4096 (0x1000)
        User ID: 1000
        Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
        Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file

        Thu Oct 31 11:02:06 2013
        Trace Info: iprint.c, line 6800
        Group Info: IPRINT-lib
        Error Code: 4096 (0x1000)
        User ID: 1000
        Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
        Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified

        Thu Oct 31 11:02:08 2013
        Trace Info: iprint.c, line 6690
        Group Info: IPRINT-lib
        Error Code: 4096 (0x1000)
        User ID: 1000
        Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
        Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file

        Thu Oct 31 11:02:08 2013
        Trace Info: iprint.c, line 6800
        Group Info: IPRINT-lib
        Error Code: 4096 (0x1000)
        User ID: 1000
        Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:).
        Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified

    I should say that a friend of mine can print using the same instructions on CrunchBang easily, and another person on 12.04 LTS can too, though with more struggle. It also worked for me on Linux Mint Maya with my old laptop. Is there anybody out there who can help me solve these problems? I am really disappointed with Novell and our university support.

    PS: I had the same problem with 13.04. No matter whether I am within the network or connect over VPN, I have the same issues.

  • Developer Dashboard in SharePoint 2010

    - by jcortez
    Introducing the Developer Dashboard
    As a SharePoint developer (or IT professional), how many times have you had the pleasure of figuring out why a particular page on your site is taking too long to render? I'm sure one of the techniques you have employed in troubleshooting is the process of elimination: removing individual web parts from the page, hoping to identify which web part is misbehaving. One of the new features of SharePoint 2010 is the Developer Dashboard. This dashboard provides tracing and performance information that can be useful when you are trying to troubleshoot pages that load too slowly. The Developer Dashboard is turned off by default, and I'll go over three different ways to display it.

    When displayed at the bottom of the page, the left side shows the different events that fired during the page processing pipeline and how long these events took. This is where you will see individual web parts being processed and how long each took to complete (obviously the kind of processing depends on what the web part does). The right side shows the different database calls issued through the SharePoint object model to process the page. You will notice that each of these database queries is actually a hyperlink; clicking on it displays a pop-up window that shows the actual SQL query text, the call stack that triggered the database call, and the IO statistics of that query.

    Enabling the Developer Dashboard

    Option 1: Managed Code. The Developer Dashboard is a farm-wide setting, and code that sets it won't work if it is used within a web part hosted on any non-Central-Admin site (a managed-code sketch appears after Option 3). The SPDeveloperDashboardLevel enum has three possible values: On, Off, and OnDemand. Setting it to On will always display the Developer Dashboard at the bottom of the page. Setting it to Off will hide the Developer Dashboard. Setting it to OnDemand will add an icon at the top-right corner of the page where a Site Collection Admin can toggle the display of the Developer Dashboard for a particular site collection. In my opinion, OnDemand is the best setting when troubleshooting a page or during development, since a Site Collection Admin can turn it on or off, and for a particular site only. The first cool thing about this is that the Site Collection Admin who turned it on will be the only one to see the Developer Dashboard output; everyday users won't see it even if it was turned on. If you need more flexibility over who gets to see the Developer Dashboard output, you can set SPDeveloperDashboardSettings.RequiredPermissions to control which group of users has permission to see the output.

    Option 2: Using stsadm. You can run the following command to configure the Developer Dashboard:

        STSADM -o setproperty -pn developer-dashboard -pv OnDemand

    To successfully execute this command, be sure that you are running as a farm admin.

    Option 3: Using PowerShell. For all scripts in SharePoint 2010, I prefer writing them as PowerShell scripts. Though the stsadm command is less verbose, the PowerShell equivalent is pretty straightforward and uses the SharePoint object model (see the sketch below). You can of course parameterize the value that gets assigned to the DisplayLevel property so you can turn it On, Off or OnDemand depending on the parameter.
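    Minimal sketches of Options 1 and 3 (type and property names from the SharePoint 2010 object model; OnDemand is used as the example level):

        // C#: run under a farm administrator account, e.g. from a console app
        using Microsoft.SharePoint.Administration;

        class DashboardConfig
        {
            static void Main()
            {
                SPDeveloperDashboardSettings settings =
                    SPWebService.ContentService.DeveloperDashboardSettings;
                settings.DisplayLevel = SPDeveloperDashboardLevel.OnDemand;
                settings.Update();
            }
        }

    And the PowerShell equivalent:

        $svc = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
        $svc.DeveloperDashboardSettings.DisplayLevel = "OnDemand"
        $svc.DeveloperDashboardSettings.Update()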
    Events and the Developer Dashboard
    Now, don't assume that all the code inside your web part or page will show up in the Developer Dashboard complete with all the great troubleshooting information. Only a finite set of events is monitored by default (for a web part it will be events in the base web part class). Let's say you have a click event that could take some time, for example a web service call, and you want to include troubleshooting information for this event in the Developer Dashboard. Enter SPMonitoredScope, which is also a new feature in SharePoint 2010. In SharePoint 2010, everything is executed within a "monitored scope", and each scope has a set of "monitors" that measure and count calls and timings, which appear in the Developer Dashboard. The example below shows how to get your custom code included in the Developer Dashboard by wrapping it inside a new monitored scope; it would add the new scope "My long web service call" to the Developer Dashboard and log the time it took to complete processing. In my opinion, wrapping your custom code in an SPMonitoredScope is a SharePoint development best practice, since it gives you visibility and a better understanding of the performance of your components.
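    A sketch of that wrapper (SPMonitoredScope lives in Microsoft.SharePoint.Utilities; the web service call is a hypothetical stand-in):

        using (new SPMonitoredScope("My long web service call"))
        {
            // everything in this block is measured and shown as its own
            // scope in the Developer Dashboard
            CallMyLongRunningWebService();   // hypothetical slow call
        }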


  • Getting TF215097 error after modifying a build process template in TFS Team Build 2010

    - by Jakob Ehn
    When embracing Team Build 2010, you typically want to define several different build process templates for different scenarios. Common examples are CI builds, QA builds and release builds. For example, in a continuous build you often have no interest in publishing to the symbol store, and you might or might not want to associate changesets and work items. The build server is often heavily occupied as it is, so you don't want it doing more than necessary. Try to define a set of build process templates that are used across your company. In previous versions of TFS Team Build, there was no easy way to do this, but in TFS 2010 it is very easy, so there is no excuse not to! :-)

    I ran into a scenario today where I had an existing build definition that was based on our release build process template. In this template, we have defined several different build process parameters that control the release build. These are placed in their own section in the Build Process Parameters editor. This is done using the ProcessParameterMetadataCollection element; I will explain how this works in a future post.

    I won't go into details on these parameters; the issue for this blog post is what happens when you modify a build process template so that it is no longer compatible with the build definition, i.e. a breaking change. In this case, I removed a parameter that was no longer necessary. After merging the new build process template to one of the projects and queuing a new release build, I got this error:

        TF215097: An error occurred while initializing a build for build definition <Build Definition Name>:
        The values provided for the root activity's arguments did not satisfy the root activity's requirements:
        'DynamicActivity': The following keys from the input dictionary do not map to arguments and must be removed:
        <Parameter Name>. Please note that argument names are case sensitive.
        Parameter name: rootArgumentValues

    <Parameter Name> was the parameter that I removed, so it was pretty easy to understand why the error had occurred. However, it is not entirely obvious how to fix the problem. When I open the build definition, everything looks OK: the removed build process parameter is not there, and I can open the build process template without any validation warnings. The problem here is that all settings specific to a particular build definition are stored in the TFS database. In TFS 2005, everything related to a build was stored in TFS source control in files (TFSBuild.proj, WorkspaceMapping.xml, ...). In TFS 2008, many of these settings were moved into the database; still, lots of things were stored in TFSBuild.proj, such as the solution and configuration to build, and whether to execute tests or not. In TFS 2010, all settings for a build definition are stored in the database. If we look inside the database we can see what this looks like. The table tbl_BuildDefinition contains all information for a build definition. One of the columns is called ProcessParameters and contains a serialized representation of a Dictionary, which is the underlying object where these settings are stored.

    Here is an example:

        <Dictionary x:TypeArguments="x:String, x:Object"
                    xmlns="clr-namespace:System.Collections.Generic;assembly=mscorlib"
                    xmlns:mtbwa="clr-namespace:Microsoft.TeamFoundation.Build.Workflow.Activities;assembly=Microsoft.TeamFoundation.Build.Workflow"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <mtbwa:BuildSettings x:Key="BuildSettings" ProjectsToBuild="$/PathToProject.sln">
            <mtbwa:BuildSettings.PlatformConfigurations>
              <mtbwa:PlatformConfigurationList Capacity="4">
                <mtbwa:PlatformConfiguration Configuration="Release" Platform="Any CPU" />
              </mtbwa:PlatformConfigurationList>
            </mtbwa:BuildSettings.PlatformConfigurations>
          </mtbwa:BuildSettings>
          <mtbwa:AgentSettings x:Key="AgentSettings" Tags="Agent1" />
          <x:Boolean x:Key="DisableTests">True</x:Boolean>
          <x:String x:Key="ReleaseRepositorySolution">ERP</x:String>
          <x:Int32 x:Key="Major">2</x:Int32>
          <x:Int32 x:Key="Minor">3</x:Int32>
        </Dictionary>

    Here we can see that only the non-default values are persisted into the database. So, the problem in my case was that I removed one of the parameters from the build process template, but the parameter and its value still existed in the build definition in the database. The solution is to refresh the build definition and save it. In the Process tab, there is a Refresh button that will reload the build definition and the process template and synchronize them. After refreshing the build definition and saving it, the build ran successfully again.


  • How to Use Windows’ Advanced Search Features: Everything You Need to Know

    - by Chris Hoffman
    You should never have to hunt down a lost file on modern versions of Windows - just perform a quick search. You don't even have to wait for a cartoon dog to find your files, like on Windows XP. The Windows search indexer is constantly running in the background to make quick local searches possible. This enables the kind of powerful search features you'd use on Google or Bing, but for your local files.

    Controlling the Indexer
    By default, the Windows search indexer watches everything under your user folder - that's C:\Users\NAME. It reads all these files, creating an index of their names, contents, and other metadata. Whenever they change, it notices and updates its index. The index allows you to quickly find a file based on the data in the index. For example, if you want to find files that contain the word "beluga", you can perform a search for "beluga" and you'll get a very quick response as Windows looks up the word in its search index. If Windows didn't use an index, you'd have to sit and wait as Windows opened every file on your hard drive, looked to see if the file contained the word "beluga", and moved on. Most people shouldn't have to modify this indexing behavior. However, if you store your important files in other folders - maybe you store your important data on a separate partition or drive, such as at D:\Data - you may want to add these folders to your index. You can also choose which types of files you want to index, force Windows to rebuild the index entirely, pause the indexing process so it won't use any system resources, or move the index to another location to save space on your system drive. To open the Indexing Options window, tap the Windows key on your keyboard, type "index", and click the Indexing Options shortcut that appears. Use the Modify button to control the folders that Windows indexes or the Advanced button to control other options. To prevent Windows from indexing entirely, click the Modify button and uncheck all the included locations. You could also disable the search indexer entirely from the Programs and Features window.

    Searching for Files
    You can search for files right from your Start menu on Windows 7 or Start screen on Windows 8. Just tap the Windows key and perform a search. If you wanted to find files related to Windows, you could perform a search for "Windows"; Windows would show you files that are named Windows or contain the word Windows. From here, you can just click a file to open it. On Windows 7, files are mixed with other types of search results. On Windows 8 or 8.1, you can choose to search only for files. If you want to perform a search without leaving the desktop in Windows 8.1, press Windows Key + S to open a search sidebar. You can also initiate searches directly from Windows Explorer - that's File Explorer on Windows 8. Just use the search box at the top-right of the window. Windows will search the location you've browsed to. For example, if you're looking for a file related to Windows and know it's somewhere in your Documents library, open the Documents library and search for Windows.

    Using Advanced Search Operators
    On Windows 7, you'll notice that you can add "search filters" from the search box, allowing you to search by size, date modified, file type, authors, and other metadata. On Windows 8, these options are available from the Search Tools tab on the ribbon. These filters allow you to narrow your search results. If you're a geek, you can use Windows' Advanced Query Syntax to perform advanced searches from anywhere, including the Start menu or Start screen. Want to search for "windows", but only bring up documents that don't mention Microsoft? Search for "windows -microsoft". Want to search for all pictures of penguins on your computer, whether they're PNGs, JPEGs, or any other type of picture file? Search for "penguin kind:picture". We've looked at Windows' advanced search operators before, so check out our in-depth guide for more information. The Advanced Query Syntax gives you access to options that aren't available in the graphical interface.

    Creating Saved Searches
    Windows allows you to take searches you've made and save them as a file. You can then quickly perform the search later by double-clicking the file. The file functions almost like a virtual folder that contains the files you specify. For example, let's say you wanted to create a saved search that shows you all the new files created in your indexed folders within the last week. You could perform a search for "datecreated:this week", then click the Save search button on the toolbar or ribbon. You'd have a new virtual folder you could quickly check to see your recent files.

    One of the best things about Windows search is that it's available entirely from the keyboard. Just press the Windows key, start typing the name of the file or program you want to open, and press Enter to quickly open it. Windows 8 made this much more obnoxious with its non-unified search, but unified search is finally returning with Windows 8.1.


  • CodePlex Daily Summary for Monday, February 22, 2010

    New Projects
    - AVDB: System to keep track of orders and the inventory of televisions, DVDs, VCRs etc.
    - Booky: Booky is an online bookmark management tool.
    - Gear Up for Lord of the Rings Online (lotro): Windows utility for checking what your LOTRO character currently has equipped and figuring out gear you should get to improve your stats.
    - GotSharp Extensions: GotSharp Extensions is a set of helpful classes and extension methods that can make your coding experience easier and cleaner.
    - Halfwit: A minimalist WPF Twitter client.
    - HOA Starter Kit: A community subdivision website starter kit. First draft.
    - Lua For Irony: Project to define the Lua language using the Irony (http://irony.codeplex.com/) development kit. This work is based heavily on the work done for V...
    - MimeCloud: Scalable .NET Digital Asset & Media Management: MimeCloud is a scalable digital asset library & media management toolset. Founded by Alex Norcliffe and Peter Miller. Written by people who have b...
    - Parallel Mandelbrot Set solver: Solving the Mandelbrot set using the Parallel class in .NET 4.0, showing the resulting image in a WPF application. The solution file requires VS 2010.
    - Pomogad - Pomodoro Windows Gadget: Do you use the Pomodoro Technique? Don't know what it is? See http://www.pomodorotechnique.com. Now that you know, how about using this technique? ...
    - PostCrap - flyweight .NET AOP post compiler: PostCrap is a flyweight attribute-based aspect injection .NET post compiler. It is written in C# and uses Mono.Cecil to modify assemblies and injec...
    - Software + Service Reference Demo Kit: The MS China Developer and Platform Evangelism team created an end-to-end demo for Software + Service.
    - Yet Another SharePoint Tool: YEAST provides you with a simple-to-integrate approach to generating SharePoint solution packages as part of a Visual Studio project.
    - Zen Coding Visual Studio Plugin: Zen Coding for Visual Studio is a plugin for HTML and CSS high-speed coding.

    New Releases
    - .Net MSBuild Google Closure Compiler Task 1.1: Corrected issue with regular expression source file and renaming.
    - dotNails: dotNails_0.5.9: NOTE - the latest source code has been moved to Google Code to take advantage of Mercurial source control - http://code.google.com/p/dotnails/sourc...
    - EasyWFUnit: EasyWFUnit-2.2: Release 2.2 of EasyWFUnit, an extension library to support unit testing of Windows Workflow, includes a revised WinForm GUI Test Builder that utili...
    - Fluent Ribbon Control Suite BETA2 (for .NET 4.0 RC): Includes Fluent.dll (with .pdb and .xml) and a test application compiled with .NET 4.0 RC.
    - FolderSize: FolderSize.Win32.1.0.3.0: A simple utility intended to be used to scan hard drives for the folders that take the most space and display this to the user...
    - Fusion Charts Free for SharePoint 1.3: Fix release for issue #11833: "Feature Must Be Activated on Root of Web Application".
    - GotSharp Extensions 1.0: First release, containing only a few extension methods for the System.String and System.IO.Stream classes, and a Range utility class.
    - Jeremy's Experimental Repository: FluentValidation with IoC Sample: Sample code for the blog post "Using FluentValidation with an IoC container".
    - MiniTwitter 1.08: Fixed: auto-update did not work after the CodePlex changes; a crash when auto-update fails; the notification-area icon's right-click menu not closing. Changed: hashtag extraction rules; API endpoi...
    - MSTS Editors & Tools: Simis Editor v0.3: Enabled Edit > Undo and Edit > Redo. Undoing/redoing back to the last saved state is identified as saved (no prompt on exit, etc.)....
    - Parallel Mandelbrot Set solver: Alpha 1: First release.
    - ParallelTasks 2.0 beta1: ParallelTasks 2.0 is a total rewrite of the original version, featuring improved performance and stability and a more consistent API.
    - Personal Expense Tracker v0.1 beta: This is the first beta release. Please provide me with your feedback.
    - PostCrap - flyweight .NET AOP post compiler: PostCrap 1.0 AOP source and binaries (the unit test project contains sample interceptor attributes for exception handling & logging).
    - Protoforma | Tactica Adversa: Skilful 0.1.3.276: Alpha.
    - Rawr 2.3.10: More improvements to the default filters; further improvement on avoiding useless gem swaps from the Optimizer; Normal/Heroic ICC items shou...
    - Reusable Library v1.0.2: A collection of reusable abstractions for the enterprise application developer.
    - Sem.Sync: 2010-02-21 - Synchronization Manager - Beta: This release is not tested very well, so you should use this version only to evaluate new features. Changed the way of handling source-ids in order ...
    - Survey - web survey & form engine: Survey 1.1.0: Major changes: layout & graphics completely overhauled; several technical changes & repairs (e.g. matrix question iss...
    - Yet Another SharePoint Tool: Version 1.
    - Zeta Resource Editor: Release 2010-02-21: New source code release.

    Most Popular Projects
    WBFS Manager, Rawr, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), Image Resizer Powertoy Clone for Windows, ASP.NET, DotNetNuke® Community Edition, Microsoft SQL Server Community & Samples

    Most Active Projects
    DinnerNow.net, Rawr, BlogEngine.NET, NB_Store - Free DotNetNuke Ecommerce Catalog Module, Sharpy, jQuery Library for SharePoint Web Services, SharePoint Contrib, InfoService, patterns & practices – Enterprise Library, PHPExcel


  • Creating packages in code – Execute SQL Task

    The Execute SQL Task is for obvious reasons very well used, so I thought if you are building packages in code the chances are you will be using it. Using the basic features of the task is quite straightforward: add the task and set some properties, just like any other. When you start interacting with variables though it can be a little harder to grasp, so these samples should see you through. Some of these more advanced features are explained in much more detail in our ever popular post The Execute SQL Task; here I'll just be showing you how to implement them in code. The abbreviated code blocks below demonstrate the different features of the task. The complete code has been encapsulated into a sample class which you can download (ExecSqlPackage.cs). Each feature described has its own method in the sample class, which is mentioned after the code block. This first sample just shows adding the task, setting the basic properties for a connection and of course an SQL statement.

    Package package = new Package();
    // Add the SQL OLE-DB connection
    ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
    // Add the SQL Task
    package.Executables.Add("STOCK:SQLTask");
    // Get the task host wrapper
    TaskHost taskHost = package.Executables[0] as TaskHost;
    // Set required properties
    taskHost.Properties["Connection"].SetValue(taskHost, sqlConnection.ID);
    taskHost.Properties["SqlStatementSource"].SetValue(taskHost, "SELECT * FROM sysobjects");

For the full version of this code, see the CreatePackage method in the sample class. The AddSqlConnection method is a helper method that adds an OLE-DB connection to the package; it is of course in the sample class file too.

Returning a single value with a Result Set

The following sample takes a different approach, getting a reference to the ExecuteSQLTask object itself, rather than just using the non-specific TaskHost as above. Whilst it means we need to add an extra reference to our project (Microsoft.SqlServer.SQLTask), it makes coding much easier as we have compile time validation of any properties and types we use. For the more complex properties that is very valuable and saves a lot of time during development. The query has also been changed to return a single value, one row and one column. The sample shows how we can return that value into a variable, which we also add to our package in the code. To do this manually you would set the Result Set property on the General page to Single Row and map the variable on the Result Set page in the editor.

    Package package = new Package();
    // Add the SQL OLE-DB connection
    ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
    // Add the SQL Task
    package.Executables.Add("STOCK:SQLTask");
    // Get the task host wrapper
    TaskHost taskHost = package.Executables[0] as TaskHost;
    // Add variable to hold result value
    package.Variables.Add("Variable", false, "User", 0);
    // Get the task object
    ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
    // Set core properties
    task.Connection = sqlConnection.Name;
    task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = 'sysrowsets'";
    // Set single row result set
    task.ResultSetType = ResultSetType.ResultSetType_SingleRow;
    // Add result set binding, map the id column to the variable
    task.ResultSetBindings.Add();
    IDTSResultBinding resultBinding = task.ResultSetBindings.GetBinding(0);
    resultBinding.ResultName = "id";
    resultBinding.DtsVariableName = "User::Variable";

For the full version of this code, see the CreatePackageResultVariable method in the sample class. The other types of Result Set behaviour are just a variation on this theme: set the property and map the result binding as required.

Parameter Mapping for SQL Statements

This final example uses a parameterised SQL statement, with the parameter value coming from a variable. The syntax varies slightly between connection types, as explained in the Working with Parameters and Return Codes in the Execute SQL Task help topic, but OLE-DB is the most commonly used, for which a question mark is the parameter value placeholder.

    Package package = new Package();
    // Add the SQL OLE-DB connection
    ConnectionManager sqlConnection = AddSqlConnection(package, ".", "master");
    // Add the SQL Task
    package.Executables.Add("STOCK:SQLTask");
    // Get the task host wrapper
    TaskHost taskHost = package.Executables[0] as TaskHost;
    // Get the task object
    ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
    // Set core properties
    task.Connection = sqlConnection.Name;
    task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = ?";
    // Add variable to hold the parameter value
    package.Variables.Add("Variable", false, "User", "sysrowsets");
    // Add input parameter binding
    task.ParameterBindings.Add();
    IDTSParameterBinding parameterBinding = task.ParameterBindings.GetBinding(0);
    parameterBinding.DtsVariableName = "User::Variable";
    parameterBinding.ParameterDirection = ParameterDirections.Input;
    parameterBinding.DataType = (int)OleDBDataTypes.VARCHAR;
    parameterBinding.ParameterName = "0";
    parameterBinding.ParameterSize = 255;

For the full version of this code, see the CreatePackageParameterVariable method in the sample class. You'll notice the data type has to be specified for the parameter via the IDTSParameterBinding.DataType property, and these type codes are connection specific too. The enumeration I wrote several years ago, shown below, was probably put together by reverse engineering a package and the API header file, but I recently found a very handy post that covers more connection types for exactly this: Setting the DataType of IDTSParameterBinding objects (Execute SQL Task).

    /// <summary>
    /// Enumeration of OLE-DB types, used when mapping OLE-DB parameters.
    /// </summary>
    private enum OleDBDataTypes
    {
        BYTE = 0x11, CURRENCY = 6, DATE = 7, DB_VARNUMERIC = 0x8b,
        DBDATE = 0x85, DBTIME = 0x86, DBTIMESTAMP = 0x87, DECIMAL = 14,
        DOUBLE = 5, FILETIME = 0x40, FLOAT = 4, GUID = 0x48,
        LARGE_INTEGER = 20, LONG = 3, NULL = 1, NUMERIC = 0x83,
        NVARCHAR = 130, SHORT = 2, SIGNEDCHAR = 0x10, ULARGE_INTEGER = 0x15,
        ULONG = 0x13, USHORT = 0x12, VARCHAR = 0x81, VARIANT_BOOL = 11
    }

Download Sample code: ExecSqlPackage.cs (10KB)
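As a quick way to try these samples out, here is a hedged sketch of mine (not part of the original article): execute the finished package in-process and read back the mapped variable. It assumes a reference to Microsoft.SqlServer.ManagedDTS, and BuildResultVariablePackage is a hypothetical helper returning the package built by the CreatePackageResultVariable method.

    // Hedged sketch: run the generated package and inspect the Result Set variable.
    // BuildResultVariablePackage() is a hypothetical wrapper around the sample class.
    Package package = BuildResultVariablePackage();
    DTSExecResult result = package.Execute();
    if (result == DTSExecResult.Success)
    {
        // The Single Row result set binding wrote the id column value here
        Console.WriteLine(package.Variables["User::Variable"].Value);
    }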

    Read the article

  • C#: System.Lazy<T> and the Singleton Design Pattern

    - by James Michael Hare
    So we've all coded a Singleton at one time or another. It's a really simple pattern and can be a slightly more elegant alternative to global variables. Make no mistake, Singletons can be abused and are often over-used -- but occasionally you find a Singleton is the most elegant solution. For those of you not familiar with a Singleton, the basic Design Pattern is that a Singleton class is one where there is only ever one instance of the class created. This means that constructors must be private to avoid users creating their own instances, and a static property (or method in languages without properties) is defined that returns a single static instance.

    public class Singleton
    {
        // the single instance is defined in a static field
        private static readonly Singleton _instance = new Singleton();

        // constructor private so users can't instantiate on their own
        private Singleton()
        {
        }

        // read-only property that returns the static field
        public static Singleton Instance
        {
            get
            {
                return _instance;
            }
        }
    }

This is the most basic singleton; notice the key features: a static readonly field that contains the one and only instance, a constructor that is private so it can only be called by the class itself, and a static property that returns the single instance. Looks like it satisfies the pattern, right? There's just one (potential) problem. C# gives you no guarantee of when the static field _instance will be created. This is because the C# standard simply states that classes (which are marked in the IL as BeforeFieldInit) can have their static fields initialized any time before the field is accessed. This means that they may be initialized on first use, or they may be initialized at some other time before; you can't be sure when. So what if you want to guarantee your instance is truly lazy -- that is, that it is only created on first call to Instance? Well, there's a few ways to do this. First we'll show the old ways, and then talk about how .NET 4.0's new System.Lazy<T> type can help make the lazy Singleton cleaner. Obviously, we could take on the lazy construction ourselves, but since our Singleton may be accessed by many different threads, we'd need to lock it down.

    public class LazySingleton1
    {
        // lock for thread-safe laziness
        private static readonly object _mutex = new object();

        // static field to hold single instance
        private static LazySingleton1 _instance = null;

        // property that does some locking and then creates on first call
        public static LazySingleton1 Instance
        {
            get
            {
                if (_instance == null)
                {
                    lock (_mutex)
                    {
                        if (_instance == null)
                        {
                            _instance = new LazySingleton1();
                        }
                    }
                }

                return _instance;
            }
        }

        private LazySingleton1()
        {
        }
    }

This is a standard double-check algorithm so that you don't lock if the instance has already been created. However, because it's possible two threads can go through the first if at the same time on first use, you need to check again after the lock is acquired to avoid creating two instances. Pretty straightforward, but ugly as all heck. Well, you could also take advantage of the C# standard's BeforeFieldInit behaviour and define your class with a static constructor. It need not have a body; just the presence of the static constructor will remove the BeforeFieldInit attribute on the class and guarantee that no fields are initialized until the first static field, property, or method is called.

    public class LazySingleton2
    {
        // because of the static constructor, this won't get created until first use
        private static readonly LazySingleton2 _instance = new LazySingleton2();

        // Returns the singleton instance using lazy-instantiation
        public static LazySingleton2 Instance
        {
            get { return _instance; }
        }

        // private to prevent direct instantiation
        private LazySingleton2()
        {
        }

        // removes BeforeFieldInit on class so static fields are not
        // initialized before they are used
        static LazySingleton2()
        {
        }
    }

Now, while this works perfectly, I hate it. Why? Because it's relying on a non-obvious trick of the IL to guarantee laziness. Just looking at this code, you'd have no idea that it's doing what it's doing. Worse yet, you may decide that the empty static constructor serves no purpose and delete it (which removes your lazy guarantee). Worse-worse yet, they may alter the rules around BeforeFieldInit in the future, which could change this. So, what do I propose instead? .NET 4.0 adds the System.Lazy<T> type which guarantees thread-safe lazy construction. Using System.Lazy<T>, we get:

    public class LazySingleton3
    {
        // static holder for instance; need to use a lambda to construct since the constructor is private
        private static readonly Lazy<LazySingleton3> _instance
            = new Lazy<LazySingleton3>(() => new LazySingleton3());

        // private to prevent direct instantiation
        private LazySingleton3()
        {
        }

        // accessor for instance
        public static LazySingleton3 Instance
        {
            get
            {
                return _instance.Value;
            }
        }
    }

Note, you need your lambda to call the private constructor, as Lazy's default constructor can only call public constructors of the type passed in (which we can't have, by definition of a Singleton). But because the lambda is defined inside our type, it has access to the private members, so it's perfect. Note how the Lazy<T> makes it obvious what you're doing (lazy construction), instead of relying on an IL generation side-effect; this way, it's more maintainable. Lazy<T> has many other uses as well, obviously, but I really love how elegant and readable it makes the lazy Singleton.
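One small design note, an aside of mine rather than from the original post: Lazy<T>'s default constructor uses LazyThreadSafetyMode.ExecutionAndPublication, which gives exactly the double-checked locking guarantee hand-coded in LazySingleton1. A minimal sketch making that choice explicit (LazyThreadSafetyMode lives in System.Threading):

    // Hedged sketch: the same holder field with the thread-safety mode spelled out.
    // ExecutionAndPublication means only one thread ever runs the factory lambda.
    private static readonly Lazy<LazySingleton3> _instance =
        new Lazy<LazySingleton3>(() => new LazySingleton3(),
                                 LazyThreadSafetyMode.ExecutionAndPublication);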

    Read the article

  • Framework 4 Features: Summary of Security enhancements

    - by Anthony Shorten
    In the last log entry I mentioned one of the new security features in Oracle Utilities Application Framework 4.0.1. Security is one of the major "tent poles" (to borrow a phrase from Steve Jobs) in this release of the framework. There are a number of security related enhancements, requested by customers and arising from internal reviews, that we have introduced. Here is a summary of some of the security enhancements we have added in this release:

Security Cache Changes - Security authorization information is automatically cached on the server for performance reasons (security is checked for every single call the product makes, for all modes of access). Prior to this release the cache auto-refreshed every 30 minutes (or so). This has been made more nimble by supporting a cache refresh every minute (or so). This means authorization changes are reflected quicker than before.

Business Level Security - Business Services are configurable services that are based upon Application Services. Typically, the business service inherited its security profile from its parent service. Whilst this is sufficient for most needs, it is now possible to further specify security on the Business Service definition itself. This allows granular security and allows the same application service to be exposed as different Business Services, each with its own security. This is particularly useful when you base a Business Service on a query zone.

User Propagation - As with other client server applications, the database connections are pooled and shared as needed. This means that a common database user is used to access the database from the pool to allow sharing. Unfortunately, this means that traceability at the database level is that much harder. In Oracle Utilities Application Framework V4 the end userid is now propagated to the database using the CLIENT_IDENTIFIER as part of the Oracle JDBC connection API. This means that while the common database userid is still used, the end user is identifiable for the duration of the database call. This can be used for monitoring or to hook into Oracle's database security products. This enhancement is only available to Oracle Database customers.

Enhanced Security Definitions - Security Administrators use the product browser front end to control access rights of defined users. While this is sufficient for most sites, a new security portal has been introduced to speed up the maintenance of security information.

Oracle Identity Manager Integration - With the popularity of Oracle's Identity Management Suite, the Framework now provides an integration adapter and Identity Manager Generic Transport Connector (GTC) to allow users and group membership to be provisioned to any Oracle Utilities Application Framework based product from Oracle's Identity Manager. This is also available for Oracle Utilities Application Framework V2.2 customers. Refer to My Oracle Support KB Id 970785.1 - Oracle Identity Manager Integration Overview.

Audit On Inquiry - Typically the configurable audit facility in the Oracle Utilities Application Framework is used to audit changes to records. In Oracle Utilities Application Framework, Business Services and Service Scripts could be configured to audit inquiries as well. Now it is possible to attach auditing capabilities to zones on the product (including base package ones).

Time Zone Support - In some of the Oracle Utilities Application Framework based products, the timezone of the end user is a factor in the processing. The user object has been extended to allow the recording of time zone information for use in product functionality.

JAAS Support - Internally the Oracle Utilities Application Framework uses a number of techniques to validate and transmit security information across the architecture. These various methods have been reconciled into using Java Authentication and Authorization Services for standardized security. This is strictly an internal change with no direct impact on how security operates externally.

JMX Based Cache Management - In an earlier bullet point, I mentioned extra security applied to cache management from the browser. Alternatively, a JMX based interface is now provided to allow IT operations to control the cache without the browser interface. This JMX capability can be initiated from a JSR-160 compliant JMX console or JMX browser. I will be writing another, more detailed blog entry on the JMX enhancements, as it is quite a change and an exciting direction for the product line.

Data Patch Permissions - The database installer provided with the product required lower levels of security for some operations. Some sites wanted the ability for non-DBAs to execute the utilities in a controlled fashion. The framework now allows feature configuration to enable delegation for patch execution.

User Enable Support - At some sites, the use of temporary staff such as contractors is commonplace. In this scenario, temporary security setups were required and used. A potential issue arose when a contractor left the company. Typically the IT group would remove the contractor from the security repository to prevent login using that contractor's userid, but the userid could NOT be removed from the authorization model because of audit requirements (if any user in the product updates financials or key data, their userid is recorded for audit purposes). It is now possible to effectively disable the user in the security model to prevent any use of the userid whilst retaining audit information.

These are a subset of the security changes in Oracle Utilities Application Framework. More details about the security capabilities of the product are contained in My Oracle Support KB Id 773473.1 - Oracle Utilities Application Framework Security Overview.
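As an illustration of the User Propagation point above, here is a hedged C# sketch of mine (not from the original entry, and outside the framework itself) showing the same CLIENT_IDENTIFIER idea via ODP.NET, whose OracleConnection exposes a ClientId property; the connection string and userid are placeholders:

    // Hedged sketch: tag a pooled connection with the end user's id so that
    // V$SESSION.CLIENT_IDENTIFIER reflects the real user, not the pool account.
    using Oracle.DataAccess.Client;

    using (OracleConnection conn = new OracleConnection("User Id=pool_user;Password=...;Data Source=orcl"))
    {
        conn.Open();
        conn.ClientId = "jsmith";  // hypothetical end userid, e.g. from the web session
        // ... do work as the shared pool account, traceable to "jsmith" ...
        conn.ClientId = null;      // clear before the connection returns to the pool
    }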

    Read the article

  • SQL SERVER – Developer Training Resources and Summary Roundup

    - by pinaldave
    It is always a pleasure for an author when other renowned authors in the industry write about you. Earlier I wrote a five part blog series on Developer Training and I have received a phenomenal response to the series. I have received plenty of comments, questions and feedback. I thought it would be nice to sum up the whole series as well as answer a few of the questions received.

Quick Recap

Developer Training - Importance and Significance - Part 1: In this part we discussed the importance of training in the real world. The most important and valuable resource of any company is its employees. Employees who have been well-trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all the technical aspects. I have a very high opinion about training employees; it is the most important task.

Developer Training – Employee Morals and Ethics – Part 2: In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and employer. There are companies that pay for 100% of the expenses and there are employees who opt for training at their own expense during their personal time. Training is often looked at as a vacation by employees and employers alike, and we need to change this mind-set. One of the ways is to report back the learning to your manager and implement newly learned knowledge in day-to-day work.

Developer Training – Difficult Questions and Alternative Perspective - Part 3: This part was the most difficult to write as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not given a chance for training, think about leaving the organization. The manager often feels pressure to accommodate every single employee for training even though his training budget is limited. It is indeed the responsibility of the developer to get maximum advantage from the training. Training immediately helps organizations but stays as a part of an employee's knowledge forever.

Developer Training – Various Options for Developer Training – Part 4: In this part I tried to explore a few methods and options for training. The general feedback I received on this blog post was that it was short and I should have explored each training subject in detail. I believe there are two big buckets of training: 1) Instructor-Led Training and 2) Self-Led Training. The common element between both methods is the learning material. Learning material can be of any format – videos, books, paper notes or just a plain blackboard. Instructor-led training is a very effective mode but not possible every single time. During the course of the developer's career, one has to learn lots of new technology and it is almost impossible to have a quality trainer available on that subject at that time. Books are the most effective and proven method; however, it always helps if someone explains the concepts of the book with a demonstration. In recent times I have started to believe in online training, which leads to a hybrid experience. Online training takes the best part of books and the best part of instructor-led training and gives effective training in a matter of hours.

Developer Training – A Conclusive Summary - Part 5: In this part, I shared what I was continuously thinking about developer training. There is no better teacher than oneself. There is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. "Change is the only constant" and "adapt & overcome" are the essential lessons of life. One cannot stop learning or resist change. In the IT industry the "ego of knowing it all" and "resistance to change" are the most challenging issues. Once someone overcomes them, life is much easier. I believe that proper and appropriate high quality training can help to address these burning issues.

Opinion of Friends

I invited a few of my friends to express their opinion about developer training and here are their opinions. I am listing them here in the order of the blog post publishing date.

Nakul Vachhrajani - Developer Trainings: Importance, Benefits, Tips and Follow-up. Nakul sums up many of the concepts, which are complementary to my blog posts. Nakul addresses the burning question of developer training from different angles. I am personally very impressed by his following statement: "Being skilled does not mean having just a stack of certifications, but it also means having an understanding about the internals of the products that you are working on – and using that knowledge to improve the efficiency & productivity at the workplace in turn resulting in better products, better consulting abilities and a happier self." Nakul also suggests the online training options of Pluralsight.

Vinod Kumar - Training: a necessity or bonus? Vinod Kumar comes up with an excellent follow-up on developer training. Vinod is known for his inspirational writing about SQL Server. Vinod starts with a story of a student who is extremely eager to learn the wisdom of life from a monk, but the monk does not accept him as a disciple for a long time. The conversation between student and monk is indeed the essence of all learning. We all want to learn quickly and be successful, but the most important thing in life is to have the right attitude towards learning, and more so towards life. The blog post ends with a very important thought about how to avoid the famous excuse – "I don't have enough time."

Ritesh Shah - Training: useful or useless? Ritesh brings up a very important concept related to training. Ritesh in his meticulous style explains why training is an important and lifelong process. Training must not stop at any age but should continue forever. The moment training stops, progress stops along with it.

Paras Doshi - Professional Development Resource. Paras is known for his to-the-point writing, and has summarized the five part series very precisely. He read the five part series and created a digest summary of the blog posts. If you are in a rush and have no time to read my five part series, I suggest you read his blog post.

Training Resources

I am often asked what the best resources for learning new technology are. This is the most difficult question EVER. There are plenty of good training resources available. When it comes to training, our needs are different, our learning preferences are different, and we all have an opinion. Additionally, we are all located in different geographic locations worldwide and there is no way one solution will fit all. However, let me list a few of the training resources which I have built so far; you can consume them if you find them relevant to your needs.

SQL Server Books:
- SQL Server Interview Questions and Answers
- SQL Wait Stats
- SQL Programming Joes 2 Pros

SQL Server Video Tutorials:
- SQL Server Questions and Answers
- SQL Server Performance: Indexing Basics
- SQL Server Performance: Introduction to Query Tuning

SQL in Sixty Seconds:
- Series of Sixty Seconds Learning Videos on YouTube

Trust me, the worldwide web is very big and there are plenty of high quality learning materials available – trainer-led as well as online. I suggest you explore various options and make the best choice for yourself. Remember, training is your personal journey and it should never stop. Are you ready?

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Issue 15: The Benefits of Oracle Exastack

    - by rituchhibber
    SOLUTIONS FOCUS: The Benefits of Oracle Exastack
    Paul Thompson, Director, Alliances and Solutions Partner Programs, Oracle EMEA Alliances & Channels

Exastack is a revolutionary programme supporting Oracle independent software vendor partners across the entire Oracle technology stack. Oracle's core strategy is to engineer software and hardware together, and our ISV strategy is the same. At Oracle we design engineered systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimises performance at every layer of the stack to simplify business operations, drive down costs and accelerate business innovation. Our engineered systems are optimised to achieve enterprise performance levels that are unmatched in the industry. Faster time to production is achieved by implementing pre-engineered and pre-assembled hardware and software bundles. Our strategy of delivering a single-vendor stack simplifies and reduces costs associated with purchasing, deploying, and supporting IT environments for our customers and partners.

In parallel to this core engineered systems strategy, the Oracle Exastack Program enables our Oracle ISV partners to leverage a scalable, integrated infrastructure that delivers their applications tuned, tested and optimised for high performance. Specifically, the Oracle Exastack Program helps ISVs run their solutions on the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4 - integrated systems products in which the software and hardware are engineered to work together. These products provide OPN members with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments.

Ready and Optimized

Oracle Partners can now leverage our new Oracle Exastack Program to become Oracle Exastack Ready and Oracle Exastack Optimized. Partners can achieve Oracle Exastack Ready status through their support for Oracle Solaris, Oracle Linux, Oracle VM, Oracle Database, Oracle WebLogic Server, Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. By doing this, partners can demonstrate to their customers that their applications are available on the latest major releases of these products. The Oracle Exastack Ready programme helps customers readily differentiate Oracle partners from lesser software developers, and identify applications that support Oracle engineered systems.

Achieving Oracle Exastack Optimized status demonstrates that an OPN member has proven itself against goals for performance and scalability on Oracle integrated systems. This status enables end customers to readily identify Oracle partners that have tested and tuned their solutions for optimum performance on an Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. These ISVs can display the Oracle Exadata Optimized, Oracle Exalogic Optimized or Oracle SPARC SuperCluster Optimized logos on websites and on all their collateral to show that they have tested and tuned their application for optimum performance.
Deliver higher value to customers Oracle's investment in engineered systems enables ISV partners to deliver higher value to customer business processes. New innovations are enabled through extreme performance unachievable through traditional best-of-breed multi-vendor server/software approaches. Core product requirements can be launched faster, enabling ISVs to focus research and development investment on core competencies in order to bring value to market as quickly as possible. Through Exastack, partners no longer have to worry about the underlying product stack, which allows greater focus on the development of intellectual property above the stack. Partners are not burdened by platform issues and can concentrate simply on furthering their applications. The advantage to end customers is that partners can focus all efforts on business functionality, rather than bullet-proofing underlying technologies, and so will inevitably deliver application updates faster. Exastack provides ISVs with a number of flexible deployment options, such as on-premise or Cloud, while maintaining one single code base for applications regardless of customer deployment preference. Customers buying their solutions from Exastack ISVs can therefore be confident in deploying on their own networks, on private clouds or into a public cloud. The underlying platform will support all conceivable deployments, enabling a focus on the ISV's application itself that wouldn't be possible with other vendor partners. It stands to reason that Exastack accelerates time to value as well as lowering implementation costs all round. There is a big competitive advantage in partners being able to offer customers an optimised, pre-configured solution rather than an assortment of components and a suggested fit. Once a customer has decided to buy an Oracle Exastack Ready or Optimized partner solution, it will be up and running without any need for the customer to conduct testing of its own. Operational costs and complexity are also reduced, thanks to streamlined customer support through standardised configurations and pro-active monitoring. 'Engineered to Work Together' is a significant statement of Oracle strategy. It guarantees smoother deployment of a single vendor solution, clear ownership with no finger-pointing and the peace of mind of the Oracle Support Centre underpinning the entire product stack. Next steps Every OPN member with packaged applications must seriously consider taking steps to become Exastack Ready, or Exastack Optimized at the first opportunity. That first step down the track is to talk to an expert on the OPN Portal, at the Oracle Partner Business Center or to discuss the next steps with the closest Oracle account manager. Oracle Exastack lab environments and other technical enablement resources are available for OPN members wishing to further their knowledge of Oracle Exastack and qualify their applications for Oracle Exastack Optimized. New Boot Camps and Guided Learning Paths (GLPs), tailored specifically for ISVs, are available for Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle Linux, Oracle Solaris, Oracle Database, and Oracle WebLogic Server. More information about these GLPs and Boot Camps (including delivery dates and locations) are posted on the OPN Competency Center and corresponding OPN Knowledge Zones. Learn more about Oracle Exastack labs and ISV specific enablement resources. 
"Oracle Specialized partners are of course front-and-centre, with potential customers clearly directed to those partners and to Exadata Ready partners as a matter of priority." --More OpenWorld 2011 highlights for Oracle partners and customers Oracle Application Testing Suite 9.3 application testing solution for Web, SOA and Oracle Applications Oracle Application Express Release 4.1 improving the development of database-centric Web 2.0 applications and reports Oracle Unified Directory 11g helping customers manage the critical identity information that drives their business applications Oracle SOA Suite for healthcare integration Oracle Enterprise Pack for Eclipse 11g demonstrating continued commitment to the developer and open source communities Oracle Coherence 3.7.1, the latest release of the industry's leading distributed in-memory data grid Oracle Process Accelerators helping to simplify and accelerate time-to-value for customers' business process management initiatives Oracle's JD Edwards EnterpriseOne on the iPad meeting the increasingly mobile demands of today's workforces Oracle CRM On Demand Release 19 Innovation Pack introducing industry-leading hosted call centre and enterprise-marketing capabilities designed to drive further revenue and productivity while reducing costs and improving the customer experience Oracle's Primavera Portfolio Management 9 for businesses delivering on project portfolio goals with increased versatility, transparency and accuracy Oracle's PeopleSoft Human Capital Management (HCM) 9.1 On Demand Standard Edition helping customers manage their long-term investment in enterprise-wide business applications New versions of Oracle FLEXCUBE Universal Banking and Oracle FLEXCUBE Investor Servicing for Financial Institutions, as well as Oracle Financial Services Enterprise Case Management, Oracle Financial Services Pricing Management, Oracle Financial Management Analytics and Oracle Tax Analytics Oracle Utilities Network Management System 1.11 offering new modelling and analysis features to improve distribution-grid management for electric utilities Oracle Communications Network Charging and Control 4.4 helping communications service providers (CSPs) offer their customers more flexible charging options Plus many, many more technology announcements, enhancements, momentum news and community updates -- Oracle OpenWorld 2012 A date has already been set for Oracle OpenWorld 2012. Held once again in San Francisco, exhibitors, partners, customers and Oracle people will gather from 30 September until 4 November to meet, network and learn together with the rest of the global Oracle community. Register now for Oracle OpenWorld 2012 and save $$$! We'll reward your early planning for Oracle OpenWorld 2012 with reduced rates. Super Saver deals are now available! -- Back to the welcome page

    Read the article

  • MySQL 5.5 brings in new ways to authenticate users

    - by Georgi Kodinov
    Ever wanted to use your server's OS for authenticating MySQL users? Or the corporate LDAP repository? Options like these are plentiful nowadays, and providing hard-coded support for protocol X or service Y is not the best possible idea. MySQL 5.5 has taken a step in the right direction by providing an infrastructure that allows one to make the server understand different authentication protocols by creating a set of simple plugins (one for the client and one for the server). So now you can easily extend MySQL to search for and authenticate users in your favorite user directory. In fact the API supplied is so versatile that we took the opportunity to re-design the current "native" authentication mechanism into a built-in always-on plugin! OK, let me give you an example. Imagine we have a bunch of users defined in your OS, e.g. we have a user joro with his respective password. And we have a MySQL instance running on the same computer. It would not be unexpected to need to let joro access and/or modify MySQL data. The first step is to define him as a MySQL user, and there's a problem right there: MySQL's

    CREATE USER joro@localhost IDENTIFIED BY 'joros_password';

statement needs a password, and this is a password in no way related to the password that joro has set up in the OS. What's worse: if joro changes his OS password, this will in no way be reflected in MySQL, so he'll need to change his MySQL password in a separate step. Not very convenient, especially when you have a lot of users. This is a laborious setup for joro's DBA as well: he'll have to disable his access in both MySQL and the OS should he decide that joro's out of the "nice" list. Now MySQL 5.5 comes to the rescue. Imagine that the smart DBA has created a MySQL server plugin that will check if the name of the user logging in is a valid and enabled OS name and if the password supplied to the mysql client matches the OS, and has called this plugin 'auth_os'. Now all that's left to do is to define joro as a MySQL user that will be authenticated externally. This is done by the following command:

    CREATE USER 'joro'@'localhost' IDENTIFIED WITH 'auth_os';

Now joro can login to MySQL using his current OS password. Note: joro is still a valid MySQL user, so you can grant privileges to him just like you would for all other users. What's better: you can have users that authenticate using different mechanisms in the same server. So you can e.g. safely experiment with external authentication for selected users while keeping your current user base operational. What happens under the hood when joro logs in? The server will find out from the user definition that it needs to use a non-default authentication and will ask the client to "switch" to using the appropriate client-side plugin (if of course the client is not already using it). If the client can't do this (e.g. because it's an old client or doesn't have the necessary plugin available) the server will reject the login. Otherwise the server will let the server-side plugin decide (while possibly talking to the client side plugin and the OS user directory) if this is a valid login or not. If it is, the login process will continue as usual, while if it's not, the login will get rejected. There's a lot more that MySQL 5.5 can do for you than just the simple case above.
Stay tuned for more advanced use cases like mapping groups of external users to a single MySQL user (so you won't have to have a 1-to-1 mapping between your external user directory and your mysql user repository) or ways to control the process as a DBA. Or you can simply skip ahead and read the relevant topics from MySQL's excellent online documentation. Or take a look at the example plugins in plugin/auth. Or take a look at the test suite in mysql-test/t/plugin_auth.test. Changelog entry: http://dev.mysql.com/doc/refman/5.5/en/news-5-5-7.html

Primary new sections:
- Pluggable authentication
- Proxy users
- Client plugin C API functions

Revised sections:
- New PROXY privilege
- New proxies_priv grant table
- Passwords might be external
- New external_user and proxy_user system variables
- New --default-auth and --plugin-dir mysql options
- New MYSQL_DEFAULT_AUTH and MYSQL_PLUGIN_DIR options for mysql_options()
- CREATE USER has IDENTIFIED WITH clause to specify auth plugin
- GRANT has PROXY privilege, IDENTIFIED WITH clause to specify auth plugin
- The data structure for writing client plugins
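To make the group-mapping teaser above concrete, here is a hedged SQL sketch of mine of the proxy-user mechanism listed in the revised sections (the plugin name auth_ldap and the account names are hypothetical; GRANT PROXY is the actual new privilege):

    -- Hypothetical plugin authenticates any external user (the anonymous account catches them all)
    CREATE USER ''@'' IDENTIFIED WITH 'auth_ldap';
    -- One local account holds the actual privileges
    CREATE USER 'developer'@'localhost' IDENTIFIED BY 'dev_password';
    GRANT SELECT, INSERT ON app_db.* TO 'developer'@'localhost';
    -- Externally authenticated users may act as (proxy to) the local account
    GRANT PROXY ON 'developer'@'localhost' TO ''@'';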

    Read the article

  • Seriously, It’s Time to Get Your Content Act Together

    - by Mike Stiles
    Branded content, content marketing, social content, brand journalism, we’re seeing those terms more and more. Why? The technology tools are coming together. We should know. We can gather big data, crunch it, listen to the public, moderate, respond, get to know the customer intimately, know what they like, know what they want, we can target, distribute, amplify, measure engagement and reaction, modify strategy and even automate a great deal of all that. An amazing machine, a sleek, smooth-running engine has been built such that all the parts can interact and work together to deliver peak performance and maximum output. But that engine isn’t going anywhere without any gas. Content is the gas. Yes, we curate other people’s content. We can siphon their gas. There’s tech to help with that too. But as for the creation of original, worthwhile content made for a specific audience, our audience, machines can’t do that…at least not yet. Curated content is great. But somebody has to originate the content for it to be curated and shared. And since the need for good, curated content is obviously large and the desire to share is there, it’s a winning proposition for a brand to be a consistent producer of original content. And yet, it feels like content is an issue we’re avoiding. There’s a reluctance to build a massive pipeline if you have no idea what you’re going to run through it. The C-suite often doesn’t know what content is, that it’s different from ads, where to get it, who makes it, how long it should be, what the point of it is if there’s no hard sell of the product, what it costs, how to use it, how to measure it, how to make sure it’s good, or how to make sure it will keep flowing. It could be the reason many brands aren’t pulling the trigger on socially enabling the enterprise. And that’s a shame, because there are a lot of creative, daring, experimental, uniquely talented entertainers and journalists chomping at the bit to execute content for brands. But for many corporate executives, content is “weird,” and the people who make it are even weirder. The content side of the equation is human. It’s art, but art that can be informed by data. The natural inclination is for brands to turn to their agencies for such creative endeavors. But agencies are falling into one of two categories. They’re failing to transition from ads to content. In “Content Era, What’s the Role of Agencies?” Alexander Jutkowitz says agencies were made for one-hit campaigns, not ongoing content. Or, they’re ready and capable but can’t get clients to do the right things. Agencies have to make money, even if it means continuing to do the wrong things because that’s all the client will agree to. So what we wind up with in the pipeline is advertising, marketing-heavy content, content that was obviously created or spearheaded by non-creative executives, random & inconsistent content, copy written for SEO bots, and other completely uninteresting nightmares. Frank Rose, author of “The Art of Immersion,” writes, “Content without story and excitement is noise pollution.” In the old days, you made an ad and inserted it into shows made by people who knew what they were doing. You could bask in that show’s success and leverage their audience. Now, you are tasked with attracting, amassing and holding your own audience. You may just want to make, advertise and sell your widgets. But now there’s a war on for a precious commodity, attention. People are busy. They have filters to keep uninteresting and irrelevant things out. 
They value their time and expect value back when they give it up. Joe Pulizzi, founder of the Content Marketing Institute, says, "Your customers don't care about you, your products, your services…they care about themselves, their wants and their needs." Is it worth getting serious about content and doing it right? 61% of consumers feel better about a company that delivers custom content (Custom Content Council). Interesting content is one of the top 3 reasons people follow brands on social (Content+). 78% of consumers think organizations that provide custom content want to build good relationships with them (TMG Custom Media). On the B2B side, 80% of business decision makers prefer to get company info in a series of articles vs. an ad. So what's the hang-up? Cited barriers to content marketing are lack of human resources (42%) and lack of budget (35%). 54% of brands don't have a single on-site, dedicated content creator. And only 38% of brands have a content marketing strategy. Tech has built the biggest, most incredible stage for brands that's ever been built. Putting something on that stage is your responsibility. Do a bad show, or no show at all, and you'll be the beautiful, talented actress that never got discovered. @mikestiles Photo: Gabriella Fabbri, stock.xchng

    Read the article

  • Tweaking a few URL validation settings on ASP.NET v4.0

    - by Carlyle Dacosta
    ASP.NET has a few default settings for URLs out of the box. These can be configured quite easily in the web.config file within the <system.web>/<httpRuntime> configuration section. Some of these are:

<httpRuntime maxUrlLength="<number here>" />. This number should be an integer value (it defaults to 260 characters). The value must be greater than or equal to zero, though obviously very small values will lead to an unusable website. This attribute gates the length of the Url without the query string.

<httpRuntime maxQueryStringLength="<number here>" />. This number should be an integer value (it defaults to 2048 characters). The value must be greater than or equal to zero, though obviously very small values will lead to an unusable website.

<httpRuntime requestPathInvalidCharacters="list of characters you need included in ASP.NET's validation checks" />. By default the characters are "<,>,*,%,&,:,\,?". However, one can easily change this by modifying web.config. Remember, these characters can be specified in a variety of formats. For example, say I want the character '!' to be included in ASP.NET's URL validation logic, so I set the following:

    <httpRuntime requestPathInvalidCharacters="<,>,*,%,&,:,\,?,!" />

A character could also be specified in its xml encoded form ('&lt;' would mean the '<' sign). I could specify the '!' in its xml encoded unicode format, such as requestPathInvalidCharacters="<,>,*,%,&,:,\,?,&#x0021;", or in its unicode encoded form, requestPathInvalidCharacters="<,>,*,%,&,:,\,?,%u0021".

The following settings can be applied at root web.config level, app web.config level, folder level or within a location tag:

    <location path="some path here">
      <system.web>
        <httpRuntime maxUrlLength="" maxQueryStringLength="" requestPathInvalidCharacters="" />
      </system.web>
    </location>

If any of the above settings fail request validation, an Http 400 "Bad Request" HttpException is thrown. These can be easily handled in the Application_Error handler in Global.asax; a sketch of such a handler appears at the end of this post.

Also, a new attribute in <httpRuntime /> called "relaxedUrlToFileSystemMapping" has been added, with a default of false:

    <httpRuntime ... relaxedUrlToFileSystemMapping="true|false" />

When the relaxedUrlToFileSystemMapping attribute is set to false, inbound Urls still need to be valid NTFS file paths. For example, Urls (sans query string) need to be less than 260 characters; no path segment within a Url can use old-style DOS device names (LPT1, COM1, etc…); Urls must be valid Windows file paths. A url like "http://digg.com/http://cnn.com" should work with this attribute set to true (of course a few characters will need to be unblocked by removing them from requestPathInvalidCharacters above). Managed configuration for non-NTFS-compliant Urls is determined from the first valid configuration path found when walking up the path segments of the Url. For example, if the request Url is "/foo/bar/baz/<blah>data</blah>", and there is a web.config in the "/foo/bar" directory, then the managed configuration for the request comes from merging the configuration hierarchy to include the web.config from "/foo/bar". The value of the public property HttpRequest.PhysicalPath is set to [physical file path of the application root] + "REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH". For example, given a request Url like "/foo/bar/baz/<blah>data</blah>", where the application root is "/foo/bar" and the physical file path for that root is "c:\inetpub\wwwroot\foo\bar", then PhysicalPath would be "c:\inetpub\wwwroot\foo\bar\REQUEST_URL_IS_NOT_A_VALID_FILESYSTEM_PATH".

Carl Dacosta
ASP.NET QA Team
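Since the post mentions handling the Http 400 "Bad Request" HttpException in Application_Error, here is a hedged sketch of mine (not from the original post) of what such a Global.asax handler could look like; the response text is illustrative only:

    // Minimal sketch: trap request-validation failures and return a friendly 400.
    protected void Application_Error(object sender, EventArgs e)
    {
        HttpException ex = Server.GetLastError() as HttpException;
        if (ex != null && ex.GetHttpCode() == 400)
        {
            Server.ClearError();   // suppress the default error page
            Response.StatusCode = 400;
            Response.Write("Bad request: the URL or query string failed validation.");
        }
    }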

    Read the article

< Previous Page | 535 536 537 538 539 540 541 542 543 544 545 546  | Next Page >