Search Results

Search found 21343 results on 854 pages for 'pass by reference'.


  • All traffic is passed through OpenVPN although not requested

    - by BFH
    I have a bash script on an Ubuntu box which searches for the fastest OpenVPN server, connects, and binds one program to the tun0 interface. Unfortunately, all traffic is being passed through the VPN. Does anybody know what's going on? The relevant line follows:

        openvpn --daemon --config $cfile --auth-user-pass ipvanish.pass --status openvpn-status.log

    There don't seem to be any entries in iptables when I enter sudo iptables --list. The config files look like this:

        client
        dev tun
        proto tcp
        remote nyc-a04.ipvanish.com 443
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        persist-remote-ip
        ca ca.ipvanish.com.crt
        tls-remote nyc-a04.ipvanish.com
        auth-user-pass
        comp-lzo
        verb 3
        auth SHA256
        cipher AES-256-CBC
        keysize 256
        tls-cipher DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA

    There is nothing in there that would direct everything through tun0, so maybe it's a new vagary of Ubuntu? I don't remember this happening in the past.
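    Worth noting: even without a redirect-gateway directive in the local file, the server can push one to the client at connect time, which would explain the behavior. A hedged sketch of a client-side override (these are standard OpenVPN client options, but verify them against your installed version):

        # Ignore all routes and options pushed by the server;
        # keep only what the local config adds itself
        route-nopull
        # OpenVPN 2.4+ alternative: drop only the pushed default route
        # pull-filter ignore "redirect-gateway"

    With route-nopull in the client config the tunnel still comes up, but the default route is left untouched, so only traffic explicitly bound to tun0 uses the VPN.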


  • PHP Mail() to Gmail = Spam

    - by grantw
    Recently Gmail has started marking emails sent directly from my server (using php mail()) as spam and I'm having problems trying to find the issue. If I send an exact copy of the same email from my email client it goes to the Gmail inbox. The emails are plain text, around 7 lines long and contain a URL link in plain text. As the emails sent from my client are getting through fine I'm thinking that the content isn't the issue. It would be greatly appreciated if someone could take a look at the following headers and give me some advice why the email from the server is being marked as spam.

    Email from server:

        Delivered-To: [email protected]
        Received: by 10.49.98.228 with SMTP id el4csp101784qeb; Thu, 15 Nov 2012 14:58:52 -0800 (PST)
        Received: by 10.60.27.166 with SMTP id u6mr2296595oeg.86.1353020331940; Thu, 15 Nov 2012 14:58:51 -0800 (PST)
        Return-Path: [email protected]
        Received: from dom.domainbrokerage.co.uk (dom.domainbrokerage.co.uk. [174.120.246.138]) by mx.google.com with ESMTPS id df4si17005013obc.50.2012.11.15.14.58.51 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 15 Nov 2012 14:58:51 -0800 (PST)
        Received-SPF: pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) client-ip=174.120.246.138;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) [email protected]; dkim=pass [email protected]
        DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=domainbrokerage.co.uk; s=default; h=Date:Message-Id:Content-Type:Reply-to:From:Subject:To; bh=2RJ9jsEaGcdcgJ1HMJgQG8QNvWevySWXIFRDqdY7EAM=; b=mGebBVOkyUhv94ONL3EabXeTgVznsT1VAwPdVvpOGDdjBtN1FabnuFi8sWbf5KEg5BUJ/h8fQ+9/2nrj+jbtoVLvKXI6L53HOXPjl7atCX9e41GkrOTAPw5ZFp+1lDbZ;
        Received: from grantw by dom.domainbrokerage.co.uk with local (Exim 4.80) (envelope-from [email protected]) id 1TZ8OZ-0008qC-Gy for [email protected]; Thu, 15 Nov 2012 22:58:51 +0000
        To: [email protected]
        Subject: Offer Accepted
        X-PHP-Script: www.domainbrokerage.co.uk/admin.php for 95.172.231.27
        From: My Name [email protected]
        Reply-to: [email protected]
        Content-Type: text/plain; charset=Windows-1251
        Message-Id: [email protected]
        Date: Thu, 15 Nov 2012 22:58:51 +0000
        X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
        X-AntiAbuse: Primary Hostname - dom.domainbrokerage.co.uk
        X-AntiAbuse: Original Domain - gmail.com
        X-AntiAbuse: Originator/Caller UID/GID - [500 500] / [47 12]
        X-AntiAbuse: Sender Address Domain - domainbrokerage.co.uk
        X-Get-Message-Sender-Via: dom.domainbrokerage.co.uk: authenticated_id: grantw/from_h

    Email from client:

        Delivered-To: [email protected]
        Received: by 10.49.98.228 with SMTP id el4csp101495qeb; Thu, 15 Nov 2012 14:54:49 -0800 (PST)
        Received: by 10.182.197.8 with SMTP id iq8mr2351185obc.66.1353020089244; Thu, 15 Nov 2012 14:54:49 -0800 (PST)
        Return-Path: [email protected]
        Received: from dom.domainbrokerage.co.uk (dom.domainbrokerage.co.uk. [174.120.246.138]) by mx.google.com with ESMTPS id ab5si17000486obc.44.2012.11.15.14.54.48 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 15 Nov 2012 14:54:49 -0800 (PST)
        Received-SPF: pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) client-ip=174.120.246.138;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 174.120.246.138 as permitted sender) [email protected]; dkim=pass [email protected]
        DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=domainbrokerage.co.uk; s=default; h=Content-Transfer-Encoding:Content-Type:Subject:To:MIME-Version:From:Date:Message-ID; bh=bKNjm+yTFZQ7HUjO3lKPp9HosUBfFxv9+oqV+NuIkdU=; b=j0T2XNBuENSFG85QWeRdJ2MUgW2BvGROBNL3zvjwOLoFeyHRU3B4M+lt6m1X+OLHfJJqcoR0+GS9p/TWn4jylKCF13xozAOc6ewZ3/4Xj/YUDXuHkzmCMiNxVcGETD7l;
        Received: from w-27.cust-7941.ip.static.uno.uk.net ([95.172.231.27]:1450 helo=[127.0.0.1]) by dom.domainbrokerage.co.uk with esmtpa (Exim 4.80) (envelope-from [email protected]) id 1TZ8Ke-0001XH-7p for [email protected]; Thu, 15 Nov 2012 22:54:48 +0000
        Message-ID: [email protected]
        Date: Thu, 15 Nov 2012 22:54:50 +0000
        From: My Name [email protected]
        User-Agent: Postbox 3.0.6 (Windows/20121031)
        MIME-Version: 1.0
        To: [email protected]
        Subject: Offer Accepted
        Content-Type: text/plain; charset=ISO-8859-1; format=flowed
        Content-Transfer-Encoding: 7bit
        X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
        X-AntiAbuse: Primary Hostname - dom.domainbrokerage.co.uk
        X-AntiAbuse: Original Domain - gmail.com
        X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
        X-AntiAbuse: Sender Address Domain - domainbrokerage.co.uk
        X-Get-Message-Sender-Via: dom.domainbrokerage.co.uk: authenticated_id: [email protected]
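    One visible difference between the two messages is the character set: the server mail declares charset=Windows-1251 (Cyrillic) while the client mail uses ISO-8859-1, and an unexpected charset on English text can contribute to a spam score. A hedged sketch of setting explicit headers and an envelope sender from mail(); the addresses are placeholders, and the five-argument form passes extra flags to the local sendmail/Exim binary:

        <?php
        $to      = 'recipient@example.com';   // placeholder address
        $subject = 'Offer Accepted';
        $message = "Plain text body...\r\n";

        // Declare a sane charset and a matching From/Reply-To pair
        $headers = "From: My Name <sales@example.co.uk>\r\n"
                 . "Reply-To: sales@example.co.uk\r\n"
                 . "MIME-Version: 1.0\r\n"
                 . "Content-Type: text/plain; charset=ISO-8859-1\r\n";

        // Fifth argument sets the envelope sender (Return-Path)
        mail($to, $subject, $message, $headers, '-fsales@example.co.uk');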


  • Puppet nodes can't find master: EC2 public versus internal IP addresses and hosts files

    - by Blankman
    If I set up my hosts files such that they reference all other EC2 nodes using the internal IP addresses, will this work or do I have to use the external IP addresses? Do I need to specify anything in my security group to get internal IP addresses to work? e.g. in /etc/hosts:

        ip-10-11-12-13.internal some_node_name

    If I do this, can I reference some_node_name anywhere in my scripts where I would have used the IP address previously? On my puppet agent servers, I have a reference to my puppet master like:

        public-ip-here puppet

    When I reboot my puppet agents, syslog shows they couldn't find the master with the message:

        getaddrinfo: name or service not known

    I did get it to work by updating /etc/default/puppet and adding to the options: --server=public-ip-here. From what I read, puppet will by default try using 'puppet', and I set this in my hosts file, so why wouldn't it be picking this up?
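    Within EC2, internal IPs do work between instances in the same region as long as the security group opens the relevant port (8140 for Puppet) to the group itself. A hedged sketch of pinning the master name in both places (the IP and names below are placeholders); the agent reads `server` from puppet.conf, which tends to be more reliable than overriding flags in /etc/default/puppet:

        # /etc/hosts (placeholder internal IP)
        10.11.12.13    puppet puppet.internal

        # /etc/puppet/puppet.conf
        [agent]
        server = puppet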


  • How can I get GrowlTunes to Launch whenever iTunes Launches?

    - by Orion751
    I think I would be able to do this by modifying iTunes' launch services. Any idea how to go about that? Would editing its Info.plist file in a manner similar to below do what I'm looking for?

        <key>LSOpenApplication</key>
        <string>?</string>

    EDIT: Would http://developer.apple.com/library/mac/#documentation/Carbon/Reference/LaunchServicesReference/Reference/reference.html%23//apple_ref/c/func/LSOpenApplication provide any hints? EDIT2: Last.fm's official Mac Scrobbler (http://www.last.fm/download) is a perfect example of the functionality that I'm looking for.
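    Rather than patching iTunes' Info.plist (which code signing and app updates tend to undo), one workaround is a small stay-open AppleScript applet added to Login Items that launches GrowlTunes whenever it sees iTunes running. A hedged sketch; the process name "GrowlTunes" is an assumption and should be checked against Activity Monitor:

        -- Save as a stay-open application; the idle handler polls every 10 seconds
        on idle
            tell application "System Events"
                set iTunesRunning to (name of processes) contains "iTunes"
                set growlRunning to (name of processes) contains "GrowlTunes" -- assumed process name
            end tell
            if iTunesRunning and not growlRunning then
                tell application "GrowlTunes" to launch
            end if
            return 10
        end idle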


  • Keyboard shortcut / navigation references

    - by jerryjvl
    I use the mouse much too freely, and my wrists are not thanking me for it. I have been meaning to try to use the keyboard more as my sole means of navigating Windows, but I am having trouble sticking with it, because when I need to do something and I cannot find the right shortcut, I grab the mouse and forget to let go of it again. Personally, the main software that I need keyboard reference sheets for would be:

    • Firefox
    • Thunderbird
    • Visual Studio
    • Windows itself

    But I would encourage more general inclusion of shortcut references in the answers in case anyone else tries to make the same transition I am attempting ;) What I am looking for is reference material that is as comprehensive as possible, so that over time I can hopefully learn to do everything with the keyboard and spare my wrists. Bonus points for references that can be printed at a reasonable size so I can keep them next to my machine in hardcopy. I know there is an answer for Windows already: Is there a definitive reference for Windows shortcut keys?, but I am leaving it in my question in case anyone has a better printable alternative.


  • Can GnomeKeyring store passwords unencrypted?

    - by antimeme
    I have a Fedora 15 laptop with the root and home partitions encrypted using LUKS. When it boots I have to enter a pass phrase to unlock the master key, so I have it configured to automatically log me in to my account. However, GnomeKeyring remains locked, so I have to enter another pass phrase for that. This is unpleasant and completely pointless since the entire disk is encrypted. I've not been able to find a way to configure GnomeKeyring to store its pass phrases without encryption. For example, I was not able to find an answer here: http://library.gnome.org/users/seahorse-plugins/stable/index.html.en Is there a solution? If not, is there a mailing list where it would be appropriate to plead my case?


  • Complex SQL query help on aggregating values for nested subquery

    - by François Beausoleil
    Hi! I have people, companies, employees, events and event kinds. I'm making a report/followup sheet where people, companies and employees are the rows, and the columns are event kinds. Event kinds are simple values describing: "Promised Donation", "Received Donation", "Phoned", "Followed up" and such. Event kinds are ordered:

        CREATE TABLE event_kinds (id, name, position);

    Events hold the actual reference to the event:

        CREATE TABLE events (id, person_id, company_id, referrer_id, event_kind_id, created_at);

    referrer_id is another reference to people. It is the person who sent the information/tip along, and is an optional field, although I sometimes want to filter on an event_kind that has a specific referrer, while I don't for other event kinds. Notice I don't have an employee ID reference. The reference exists, but is implied. I have application code to validate that person_id and company_id really reference an employee record. The other tables are pretty basic:

        CREATE TABLE people (id, name);
        CREATE TABLE companies (id, name);
        CREATE TABLE employees (id, person_id, company_id);

    I'm trying to achieve the following report:

                             Referrer        Phoned      Promised    Donated
        Francois                             Feb 16th    Feb 20th    Mar 1st
        Apple (Steve Jobs)   Steve Ballmer               Mar 3rd
        IBM                  Bill Gates      Mar 7th

    The first row is a people record, the 2nd is an employee, and the 3rd is a company. If I asked for referrer Bill Gates for Phoned event kinds, I'd only see the 3rd row, while asking for Steve and Phoned would return no rows. Right now, I do 3 queries: one for companies, one for people and a last one for employees. I want the event kind columns to be ordered, but I do that in application code and show it properly there. Here's where I'm at so far:

        SELECT companies.id, companies.name,
          (SELECT events.id FROM events
            WHERE events.referrer_id = 1470 AND events.company_id = companies.id
              AND events.person_id IS NULL AND events.event_kind_id = 9
            ORDER BY created_at DESC LIMIT 1) event_kind_9,
          (SELECT events.id FROM events
            WHERE events.company_id = companies.id AND events.person_id IS NULL
              AND events.event_kind_id = 10
            ORDER BY created_at DESC LIMIT 1) event_kind_10,
          (SELECT events.created_at FROM events
            WHERE events.referrer_id = 1470 AND events.company_id = companies.id
              AND events.person_id IS NULL AND events.event_kind_id = 9
            ORDER BY created_at DESC LIMIT 1) event_kind_9_order
        FROM "companies"

        SELECT people.id, people.name,
          (SELECT events.id FROM events
            WHERE events.referrer_id = 1470 AND events.company_id IS NULL
              AND events.person_id = people.id AND events.event_kind_id = 9
            ORDER BY created_at DESC LIMIT 1) event_kind_9,
          (SELECT events.id FROM events
            WHERE events.company_id IS NULL AND events.person_id = people.id
              AND events.event_kind_id = 10
            ORDER BY created_at DESC LIMIT 1) event_kind_10,
          (SELECT events.created_at FROM events
            WHERE events.referrer_id = 1470 AND events.company_id IS NULL
              AND events.person_id = people.id AND events.event_kind_id = 9
            ORDER BY created_at DESC LIMIT 1) event_kind_9_order
        FROM "people"

        SELECT employees.id, employees.company_id, employees.person_id,
          (SELECT events.id FROM events
            WHERE events.referrer_id = 1470 AND events.company_id = employees.company_id
              AND events.person_id = employees.person_id AND events.event_kind_id = 9
            ORDER BY created_at DESC LIMIT 1) event_kind_9,
          (SELECT events.id FROM events
            WHERE events.company_id = employees.company_id
              AND events.person_id = employees.person_id AND events.event_kind_id = 10
            ORDER BY created_at DESC LIMIT 1) event_kind_10,
          (SELECT events.created_at FROM events
            WHERE events.referrer_id = 1470 AND events.company_id = employees.company_id
              AND events.person_id = employees.person_id AND events.event_kind_id = 9
            ORDER BY created_at DESC LIMIT 1) event_kind_9_order
        FROM "employees"

    I rather suspect I'm doing this wrong. There should be an "easier" way to do it. One other filter criterion would be to filter on people/company names: WHERE LOWER(companies.name) LIKE '%apple%'. Note that I'm ordering by the dates of event_kind_9 here, and a secondary sort is by person/company name. To summarize: I want to paginate the result set, find the latest event for each cell, order the result set by the date of the latest event and by company/person name, and filter by referrer in some event kinds, but not others. For reference, I'm using PostgreSQL, from Ruby, ActiveRecord/Rails. The solution is pure SQL though.
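    Since this is PostgreSQL, one way to collapse the per-cell correlated subqueries is a single pass with DISTINCT ON, fetching the latest event per (entity, event kind) and pivoting into columns in application code. A hedged sketch for the company case only; the per-kind referrer rule is expressed directly in the WHERE clause, and the kind ids mirror the question:

        -- Latest event per company per event kind, in one scan (sketch)
        SELECT DISTINCT ON (c.id, e.event_kind_id)
               c.id, c.name, e.event_kind_id, e.id AS event_id, e.created_at
        FROM companies c
        JOIN events e ON e.company_id = c.id AND e.person_id IS NULL
        WHERE e.event_kind_id IN (9, 10)
          AND (e.event_kind_id <> 9 OR e.referrer_id = 1470)  -- referrer filter only for kind 9
        ORDER BY c.id, e.event_kind_id, e.created_at DESC;

    DISTINCT ON keeps the first row per (c.id, e.event_kind_id) group, and the ORDER BY makes that first row the most recent event, so each report cell comes back as one row.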


  • Using Bundle and Intent with TabHost

    - by apesa
    I am using TabHost with 3 tabs. I need to pass the params selected from one screen using Bundle and/or Intent to the next, and then set the correct tab in TabHost and pass those params to the correct tab. I hope that makes sense. I have a config screen that has several radio buttons that are grouped, 1 checkbox and a button. In my onClick() I have the following code:

        public class Distribute extends Activity implements OnClickListener {
            DistributionMap gixnav;

            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                TextView textview = new TextView(this);
                textview.setText("Distribution");
                setContentView(R.drawable.configup);
                Button button = (Button) findViewById(R.id.btn_configup1);
                button.setOnClickListener(this);
            }

            public void onClick(View v) {
                Intent intent;
                Bundle extras = new Bundle();
                intent = new Intent().setClass(getApplicationContext(), Clevel.class);
                intent.putExtras(extras);
                startActivity(intent);
            }
        }

    I need to pass the selection params (which radio button is selected and whether the checkbox is clicked) to Clevel. In Clevel I have to parse the bundle and operate on those params. Basically I will be pulling data from a DB and using that data to call Google Maps' ItemizedOverlay. onClick calls Clevel.class using Intent. This works and I understand how Intent works. What I need to understand is how to grab or reference the selected radio button and whatever else may be clicked or checked, and pass it through TabHost to the correct tab. This is what I have in Clevel for TabHost. From TabHost the onClick will need to pass everything to Distribute.class:

        public class Clevel extends TabActivity {
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.gixout1);
                Resources res = getResources();
                TabHost tabHost = getTabHost();
                TabHost.TabSpec spec;
                Intent intent;
                String mData;
                Bundle extras = getIntent().getExtras();
                if (extras != null) {
                    mData = extras.getString("key");
                }
                intent = new Intent().setClass(this, ClevelMain.class);
                spec = tabHost.newTabSpec("Main")
                        .setIndicator("C-Level", res.getDrawable(R.drawable.gixmain))
                        .setContent(intent);
                tabHost.addTab(spec);
                intent = new Intent().setClass(this, Distribute.class);
                spec = tabHost.newTabSpec("Config")
                        .setIndicator("Distribute", res.getDrawable(R.drawable.gixconfig))
                        .setContent(intent);
                tabHost.addTab(spec);
                intent = new Intent().setClass(this, DistributionMap.class);
                spec = tabHost.newTabSpec("Nav")
                        .setIndicator("Map", res.getDrawable(R.drawable.gixnav))
                        .setContent(intent);
                tabHost.addTab(spec);
                tabHost.setCurrentTab(3);
                tabHost.getOnFocusChangeListener();
            }

    I am really looking for some pointers on how to pass and use params in Bundle, and whether I should use Bundle and Intent or can I just use Intent? Thanks in advance, Pat
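    On the Bundle-versus-Intent question: Intent.putExtra() writes into the Intent's own extras Bundle, so a separate Bundle object isn't needed. A hedged sketch follows; the view ids and extra keys are made up for illustration. Note also that with three tabs the valid indices are 0 to 2, so the setCurrentTab(3) call above is likely out of range.

        // In Distribute.onClick (sketch; R.id names are hypothetical)
        public void onClick(View v) {
            RadioGroup group = (RadioGroup) findViewById(R.id.rg_options);
            CheckBox box = (CheckBox) findViewById(R.id.cb_option);
            Intent intent = new Intent(this, Clevel.class);
            intent.putExtra("selectedRadioId", group.getCheckedRadioButtonId());
            intent.putExtra("boxChecked", box.isChecked());
            startActivity(intent);
        }

        // In Clevel.onCreate, after getIntent().getExtras()
        if (extras != null) {
            int selectedRadioId = extras.getInt("selectedRadioId");
            boolean boxChecked = extras.getBoolean("boxChecked");
            // use these to pick the tab, e.g. tabHost.setCurrentTab(boxChecked ? 2 : 1);
        }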


  • globals and locals in python exec()

    - by hawkettc
    Hi, I'm trying to run a piece of python code using exec.

        my_code = """
        class A(object):
            pass

        print 'locals: %s' % locals()
        print 'A: %s' % A

        class B(object):
            a_ref = A
        """

        global_env = {}
        local_env = {}
        my_code_AST = compile(my_code, "My Code", "exec")
        exec(my_code_AST, global_env, local_env)
        print local_env

    which results in the following output:

        locals: {'A': <class 'A'>}
        A: <class 'A'>
        Traceback (most recent call last):
          File "python_test.py", line 16, in <module>
            exec(my_code_AST, global_env, local_env)
          File "My Code", line 8, in <module>
          File "My Code", line 9, in B
        NameError: name 'A' is not defined

    However, if I change the code to this:

        my_code = """
        class A(object):
            pass

        print 'locals: %s' % locals()
        print 'A: %s' % A

        class B(A):
            pass
        """

        global_env = {}
        local_env = {}
        my_code_AST = compile(my_code, "My Code", "exec")
        exec(my_code_AST, global_env, local_env)
        print local_env

    then it works fine, giving the following output:

        locals: {'A': <class 'A'>}
        A: <class 'A'>
        {'A': <class 'A'>, 'B': <class 'B'>}

    Clearly A is present and accessible - what's going wrong in the first piece of code? I'm using 2.6.5, cheers, Colin

    UPDATE 1: If I check the locals() inside the class -

        my_code = """
        class A(object):
            pass

        print 'locals: %s' % locals()
        print 'A: %s' % A

        class B(object):
            print locals()
            a_ref = A
        """

        global_env = {}
        local_env = {}
        my_code_AST = compile(my_code, "My Code", "exec")
        exec(my_code_AST, global_env, local_env)
        print local_env

    Then it becomes clear that locals() is not the same in both places:

        locals: {'A': <class 'A'>}
        A: <class 'A'>
        {'__module__': '__builtin__'}
        Traceback (most recent call last):
          File "python_test.py", line 16, in <module>
            exec(my_code_AST, global_env, local_env)
          File "My Code", line 8, in <module>
          File "My Code", line 10, in B
        NameError: name 'A' is not defined

    However, if I do this, there is no problem:

        def f():
            class A(object):
                pass
            class B(object):
                a_ref = A

        f()
        print 'Finished OK'

    UPDATE 2: ok, so the docs here - http://docs.python.org/reference/executionmodel.html - say: 'A class definition is an executable statement that may use and define names. These references follow the normal rules for name resolution. The namespace of the class definition becomes the attribute dictionary of the class. Names defined at the class scope are not visible in methods.' It seems to me that 'A' should be made available as a free variable within the executable statement that is the definition of B, and this happens when we call f(), but not when we use exec(). This can be more easily shown with the following:

        my_code = """
        class A(object):
            pass

        print 'locals in body: %s' % locals()
        print 'A: %s' % A

        def f():
            print 'A in f: %s' % A

        f()

        class B(object):
            a_ref = A
        """

    which outputs

        locals in body: {'A': <class 'A'>}
        A: <class 'A'>
        Traceback (most recent call last):
          File "python_test.py", line 20, in <module>
            exec(my_code_AST, global_env, local_env)
          File "My Code", line 11, in <module>
          File "My Code", line 9, in f
        NameError: global name 'A' is not defined

    So I guess the new question is - why aren't those locals being exposed as free variables in functions and class definitions - it seems like a pretty standard closure scenario.
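    The likely explanation: class bodies and module-level functions are not closures over the exec's local mapping. The compiled top-level code binds A into local_env, but name lookups inside the class body of B (and inside f) skip that mapping and go to the globals dict, which here is a separate, empty global_env. A hedged sketch of the usual fix, passing one mapping for both globals and locals, so names land where later lookups will find them:

        my_code_AST = compile(my_code, "My Code", "exec")
        env = {}
        # One dict serves as both globals and locals, mimicking how a real
        # module's namespace works; 'A' is now visible from B's class body.
        exec(my_code_AST, env, env)
        print env['A'], env['B']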


  • Visual Studio 2010 Extension Manager (and the new VS 2010 PowerCommands Extension)

    - by ScottGu
    This is the twenty-third in a series of blog posts I’m doing on the VS 2010 and .NET 4 release. Today’s blog post covers some of the extensibility improvements made in VS 2010 – as well as a cool new "PowerCommands for Visual Studio 2010” extension that Microsoft just released (and which can be downloaded and used for free). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Extensibility in VS 2010

    VS 2010 provides a much richer extensibility model than previous releases. Anyone can build extensions that add, customize, and light-up the Visual Studio 2010 IDE, Code Editors, Project System and associated Designers. VS 2010 Extensions can be created using the new MEF (Managed Extensibility Framework) which is built into .NET 4. You can learn more about how to create VS 2010 extensions from this blog post from the Visual Studio Team Blog.

    VS 2010 Extension Manager

    Developers building extensions can distribute them on their own (via their own web-sites or by selling them). Visual Studio 2010 also now includes a built-in “Extension Manager” within the IDE that makes it much easier for developers to find, download, and enable extensions online. You can launch the “Extension Manager” by selecting the Tools->Extension Manager menu option. This loads an “Extension Manager” dialog which accesses an “online gallery” at Microsoft, and then populates a list of available extensions that you can optionally download and enable within your copy of Visual Studio. There are already hundreds of cool extensions populated within the online gallery. You can browse them by category (use the tree-view on the top-left to filter them). Clicking “download” on any of the extensions will download, install, and enable it.

    PowerCommands for Visual Studio 2010

    This weekend Microsoft released the free PowerCommands for Visual Studio 2010 extension to the online gallery. You can learn more about it here, and download and install it via the “Extension Manager” above (search for PowerCommands to find it). The PowerCommands download adds dozens of useful commands to Visual Studio 2010, several of which appear in the Solution Explorer context menus. Below is a list of all the commands included with this weekend’s PowerCommands for Visual Studio 2010 release:

    • Enable/Disable PowerCommands in Options dialog – This feature allows you to select which commands to enable in the Visual Studio IDE. Point to the Tools menu, then click Options. Expand the PowerCommands options, then click Commands. Check the commands you would like to enable. Note: all PowerCommands are initially enabled by default.
    • Format document on save / Remove and Sort Usings on save – The Format document on save option formats the tabs, spaces, and so on of the document being saved. It is equivalent to pointing to the Edit menu, clicking Advanced, and then clicking Format Document. The Remove and sort usings option removes unused using statements and sorts the remaining using statements in the document being saved. Note: the Remove and sort usings option is only available for C# documents. Format document on save and Remove and sort usings are both initially OFF by default.
    • Clear All Panes – This command clears all output panes. It can be executed from the button on the toolbar of the Output window.
    • Copy Path – This command copies the full path of the currently selected item to the clipboard. It can be executed by right-clicking one of these nodes in the Solution Explorer: the solution node, a project node, any project item node, or any folder.
    • Email CodeSnippet – To email the lines of text you select in the code editor, right-click anywhere in the editor and then click Email CodeSnippet.
    • Insert Guid Attribute – This command adds a Guid attribute to a selected class. From the code editor, right-click anywhere within the class definition, then click Insert Guid Attribute.
    • Show All Files – This command shows the hidden files in all projects displayed in the Solution Explorer when the solution node is selected. It enhances the Show All Files button, which normally shows only the hidden files in the selected project node.
    • Undo Close – This command reopens a closed document, returning the cursor to its last position. To reopen the most recently closed document, point to the Edit menu, then click Undo Close. Alternately, you can use the Ctrl+Shift+Z shortcut. To reopen any other recently closed document, point to the View menu, click Other Windows, and then click Undo Close Window. The Undo Close window appears, typically next to the Output window. Double-click any document in the list to reopen it.
    • Collapse Projects – This command collapses a project or projects in the Solution Explorer starting from the root selected node. Collapsing a project can increase the readability of the solution. This command can be executed from three different places: solution, solution folder and project nodes respectively.
    • Copy Class – This command copies a selected class's entire content to the clipboard, renaming the class. This command is normally followed by a Paste Class command, which renames the class to avoid a compilation error. It can be executed from a single project item or a project item with dependent sub items.
    • Paste Class – This command pastes a class's entire content from the clipboard, renaming the class to avoid a compilation error. This command is normally preceded by a Copy Class command. It can be executed from a project or folder node.
    • Copy References – This command copies a reference or set of references to the clipboard. It can be executed from the references node, a single reference node or a set of reference nodes.
    • Paste References – This command pastes a reference or set of references from the clipboard. It can be executed from different places depending on the type of project. For C# projects it can be executed from the references node. For Visual Basic and Website projects it can be executed from the project node.
    • Copy As Project Reference – This command copies a project as a project reference to the clipboard. It can be executed from a project node.
    • Edit Project File – This command opens the MSBuild project file for a selected project inside Visual Studio. It combines the existing Unload Project and Edit Project commands.
    • Open Containing Folder – This command opens a Windows Explorer window pointing to the physical path of a selected item. It can be executed from a project item node.
    • Open Command Prompt – This command opens a Visual Studio command prompt pointing to the physical path of a selected item. It can be executed from four different places: solution, project, folder and project item nodes respectively.
    • Unload Projects – This command unloads all projects in a solution. This can be useful in MSBuild scenarios when multiple projects are being edited. This command can be executed from the solution node.
    • Reload Projects – This command reloads all unloaded projects in a solution. It can be executed from the solution node.
    • Remove and Sort Usings – This command removes and sorts using statements for all classes in a given project. It is useful, for example, in removing or organizing the using statements generated by a wizard. This command can be executed from a solution node or a single project node.
    • Extract Constant – This command creates a constant definition statement for a selected text. Extracting a constant effectively names a literal value, which can improve readability. This command can be executed from the code editor by right-clicking selected text.
    • Clear Recent File List – This command clears the Visual Studio recent file list. It brings up a Clear File dialog which allows any or all recent files to be selected.
    • Clear Recent Project List – This command clears the Visual Studio recent project list. It brings up a Clear File dialog which allows any or all recent projects to be selected.
    • Transform Templates – This command executes a custom tool with associated text template items. It can be executed from a DSL project node or a DSL folder node.
    • Close All – This command closes all documents. It can be executed from a document tab.

    How to temporarily disable extensions

    Extensions provide a great way to make Visual Studio even more powerful, and can help improve your overall productivity. One thing to keep in mind, though, is that extensions run within the Visual Studio process (DevEnv.exe), and so a bug within an extension can impact both the stability and performance of Visual Studio. If you ever run into a situation where things seem slower than they should, or if you crash repeatedly, please temporarily disable any installed extensions and see if that fixes the problem. You can do this for extensions that were installed via the online gallery by re-running the Extension Manager (using the Tools->Extension Manager menu option), selecting the “Installed Extensions” node on the top-left of the dialog, and then clicking “Disable” on any of the extensions within your installed list. Hope this helps, Scott


  • Nice Generic Example that implements an interface.

    - by mbcrump
    I created this quick generic example after noticing that several people were asking questions about it. If you have any questions then let me know.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Globalization;

        namespace ConsoleApplication4
        {
            // New class where the type implements the IConvertible interface (interface = contract)
            class Calculate<T> where T : IConvertible
            {
                // Setup fields
                public T X;
                NumberFormatInfo fmt = NumberFormatInfo.CurrentInfo;

                // Constructor 1
                public Calculate()
                {
                    X = default(T);
                }

                // Constructor 2
                public Calculate(T x)
                {
                    X = x;
                }

                // Method that we know will return a double
                public double DistanceTo(Calculate<T> cal)
                {
                    // Remove the .ToDouble if you want to see the methods available for IConvertible
                    return (X.ToDouble(fmt) - cal.X.ToDouble(fmt));
                }
            }

            class Program
            {
                static void Main(string[] args)
                {
                    // Pass value type and call DistanceTo with an Int.
                    Calculate<int> cal = new Calculate<int>();
                    Calculate<int> cal2 = new Calculate<int>(10);
                    Console.WriteLine("Int : " + cal.DistanceTo(cal2));

                    // Pass value type and call DistanceTo with a Double.
                    Calculate<double> cal3 = new Calculate<double>();
                    Calculate<double> cal4 = new Calculate<double>(10.6);
                    Console.WriteLine("Double : " + cal3.DistanceTo(cal4));

                    // Pass reference type and call DistanceTo with a String.
                    Calculate<string> cal5 = new Calculate<string>("0");
                    Calculate<string> cal6 = new Calculate<string>("345");
                    Console.WriteLine("String : " + cal5.DistanceTo(cal6));
                }
            }
        }
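    For readers tracing the example: DistanceTo subtracts the argument's value from the receiver's, and the no-argument constructors leave X at 0 (or "0"), so the expected console output would be:

        Int : -10
        Double : -10.6
        String : -345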


  • Show Notes: Bob Hensle on IT Strategies from Oracle

    - by Bob Rhubart
    The latest ArchBeat Podcast (RSS) features a conversation with Oracle Enterprise Architecture director Bob Hensle (LinkedIn). Bob talks about IT Strategies from Oracle, an extensive library of reference architectures, best practices, and other documents now available (it’s a freebie!) to registered Oracle Technology Network members. Listen to Part 1: Bob offers some background on the IT Strategies from Oracle project and an overview of the included documents. Listen to Part 2 (Feb 16): a discussion of how SOA and other issues are reflected in the IT Strategies documents. Share your feedback on any of the documents in the IT Strategies from Oracle library: [email protected] For a nice complement to the IT Strategies from Oracle library, check out Oracle Experiences in Enterprise Architecture, an ongoing series of short essays from members of the Oracle Enterprise Architecture team based on their field experience. In the Pipeline: ArchBeat programs in the works include an interview with Dr. Frank Munz, the author of Middleware and Cloud Computing, excerpts from another architect virtual meet-up, and a conversation with Oracle ACE Director Debra Lilley about her insight into Fusion Applications. Stay tuned: RSS


  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to using comma-delimited strings and parsing out each value into a TABLE variable. In this post I’ll extend the “Customer Transaction Summary” report example to see how we might leverage this same stored procedure from within a SQL Server Reporting Services (SSRS) report. I’ve worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I’ve been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within an SSRS report would be simple, but unfortunately I was wrong.

    Customer Transaction Summary Example

    Let’s take the “Customer Transaction Summary” report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters:

    • Start Date – beginning of the date range for which the report will summarize customer transactions
    • End Date – end of the date range for which the report will summarize customer transactions
    • Customer Ids – one or more customer Ids representing the customers that will be included in the report

    The simplest way to get started with this report will be to create a new dataset and point it at our Customer Transaction Summary report stored procedure (note that I’m using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008). When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine what the parameters and output fields are for you automatically, and as part of this process a parameter-value dialog pops up. Obviously I can’t use this dialog to specify a value for the ‘@customerIds’ parameter since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting an error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there’s little clue given as to how to remedy the situation. I don’t know for sure, but I think that behind the scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter, which is causing the issue. When I first saw this error I figured that this might just be a limitation of the dataset designer and that I’d be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO.NET, so I figured that I might be able to use some custom code embedded in the report to create a SqlParameter instance with the needed properties and value to make this work, but the “Operand type clash" error message persisted.

    The Text Query Approach

    Just because we’re using a stored procedure to create the dataset for this report doesn’t mean that we can’t use the ‘Text’ Query Type option and construct an EXEC statement that will invoke the stored procedure.
    In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let’s take a look at what we’ll have to do to make this work.

    Manually Define Parameters

    First, we’ll need to manually define the parameters for the report by right-clicking on the ‘Parameters’ folder in the ‘Report Data’ window. We’ll need to define ‘@startDate’ and ‘@endDate’ as simple date parameters. We’ll also create a parameter called ‘@customerIds’ that will be a multi-valued Integer parameter. In the ‘Available Values’ tab we’ll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database, or manually define a handful of Customer Id values to make available when the report runs. Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the ‘rpt_CustomerTransactionSummary’ stored procedure. This time we’ll choose the ‘Text’ query type option and put the following into the ‘Query’ text area:

        1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts +
        2: ' EXEC rpt_CustomerTransactionSummary
        3: @startDate=''' + @startDate + ''',
        4: @endDate='''+ @endDate + ''',
        5: @customerIds=@customerIdList')

    By using the ‘Text’ query type we can enter any arbitrary SQL that we want to and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer’s point of view this query defines three parameters:

    • @customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won’t actually ever get passed into the stored procedure. I’ll go into how this will work in a bit.
    • @startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3.
    • @endDate – This is another simple date parameter that will get passed through into the @endDate parameter of the stored procedure on line 4.

    At this point the dataset designer will be able to correctly parse the query and should even be able to detect the fields that the stored procedure will return, without needing to specify any values for the query when prompted to. Once the dataset has been correctly defined we’ll have a @customerIdInserts parameter listed in the ‘Parameters’ tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the ‘@customerIds’ parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we’ll need to add some custom code to our report using the ‘Report Properties’ dialog. Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET.
    Note that you can also add references to custom .NET assemblies (which could be written in any language), but that’s outside the scope of this post so we’ll stick with the “quick and dirty” VB .NET approach for now. Here’s the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want):

        1: Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String
        2:     Dim insertStatements As New System.Text.StringBuilder()
        3:     For Each paramValue As Object In paramValues
        4:         insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue))
        5:     Next
        6:     Return insertStatements.ToString()
        7: End Function

    This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report we can go back into the dataset designer’s Parameters tab and update the expression for the ‘@customerIdInserts’ parameter by clicking on the button with the “function” symbol that appears to the right of the parameter value. We’ll set the expression to:

        1: =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)

    In order to invoke our custom code method we simply need to invoke “Code.<method name>” and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the ‘@customerIds’ parameter (this evaluates to an object array at run time). Finally, we’ll need to edit the properties of the ‘@customerIdInserts’ parameter on the report to mark it as a nullable internal parameter so that users aren’t prompted to provide a value for it when running the report.

    Limitations And Final Thoughts

    When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter that has a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven’t found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there’s a really compelling reason or requirement to use table-valued parameters from SSRS reports I would probably stick with the tried and true “join-multi-valued-parameter-to-CSV-and-split-in-the-query” approach for using multi-valued parameters in a stored procedure.
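    For contrast, the "special steps" for passing a table-valued parameter from ADO.NET directly (the piece SSRS makes awkward) are straightforward. A hedged sketch against the same stored procedure and type names used above; the column name "Value" is an assumption that must match the table type's definition:

        // Sketch: calling rpt_CustomerTransactionSummary from ADO.NET with a TVP
        var customerIds = new DataTable();
        customerIds.Columns.Add("Value", typeof(int));   // column name assumed from the type definition
        customerIds.Rows.Add(42);
        customerIds.Rows.Add(99);

        using (var cmd = new SqlCommand("rpt_CustomerTransactionSummary", connection))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@startDate", startDate);
            cmd.Parameters.AddWithValue("@endDate", endDate);
            var tvp = cmd.Parameters.AddWithValue("@customerIds", customerIds);
            tvp.SqlDbType = SqlDbType.Structured;        // marks the parameter as table-valued
            tvp.TypeName = "dbo.IntegerListTableType";
            // ... execute the command and read the results
        }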


  • Free Certification Exams for Visual Studio 2010

    - by budugu
    Get promotional codes from here: http://blogs.msdn.com/gerryo/archive/2010/03/17/register-for-visual-studio-2010-beta-exams.aspx You don’t have to pay anything to take these exams. These are 100% free. If you pass the exam, you earn the certification just the same as if you took it in a non-beta environment. From Gerry O'Brien’s blog: 2) Is this a real exam? – Yes it is. Even though the questions are not scored at the time you take the exam, they are real questions and the exam is real. If you pass the exam, you earn the certification just the same as if you took it in a non-beta environment. This means you don’t get a pass/fail or score immediately following the exam, but you do get notified 8 to 10 weeks later because we move slow in getting the final scoring in place. 4) What is the main difference between a beta and non-beta exam, besides cost? – The beta exam will show you questions that have not been through a final QA check. You are that final QA check. Non-beta exams expose you to 40 or 45 questions and you have a total of two hours to complete it. The beta exam could expose you to as many as 125 to 150 questions and take up to four hours. The following exams are for ASP.NET developers: Exam 71-515, TS: Web Applications Development with Microsoft .NET Framework 4; Exam 71-519, Pro: Designing and Developing Web Applications Using Microsoft .NET Framework 4


  • Reading and conditionally updating N rows, where N > 100,000 for DNA Sequence processing

    - by makerofthings7
    I have a proof of concept application that uses Azure tables to associate DNA sequences to "something". Table 1 is the master table. It uniquely lists every DNA sequence. The PK is a load balanced hash of the RK. The RK is the unique encoded value of the DNA sequence. Additional tables are created per subject. Each subject has a list of N DNA sequences that have one reference in the Master table, where N is 100,000. It is possible for many tables to reference the same DNA sequence, but in this case only one entry will be present in the Master table.

    My Azure dilemma: I need to lock the reference in the Master table as I work with the data. I need to handle timeouts, and prevent other threads from overwriting my data as one C# thread is working with the information. Other threads need to realise that this is locked, and move onto other unlocked records and do the work. Ideally I'd like to get some progress report of how my computation is going, and have the option to cancel the process (and unwind the locks).

    Question: What is the best approach for this? I'm looking at these code snippets for inspiration:
    http://blogs.msdn.com/b/jimoneil/archive/2010/10/05/azure-home-part-7-asynchronous-table-storage-pagination.aspx
    http://stackoverflow.com/q/4535740/328397
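    Azure Table Storage has no server-side locks, but every entity carries an ETag, so a common pattern is an optimistic "lease": stamp the row with an owner and expiry via a conditional update, and treat HTTP 412 (Precondition Failed) as "another worker got there first". A hedged sketch; SequenceEntity and its LockOwner/LockExpiry properties are invented for illustration, and the client API shown is the Microsoft.WindowsAzure.Storage table library (older SDKs used TableServiceContext with the same ETag semantics):

        // Try to claim a master-table row; returns false if another worker holds it.
        // 'row' is a hypothetical TableEntity subclass previously retrieved from 'table',
        // so its ETag reflects the version we read.
        static bool TryClaim(CloudTable table, SequenceEntity row, string workerId)
        {
            if (row.LockOwner != null && row.LockExpiry > DateTime.UtcNow)
                return false;                        // lease still held by someone else

            row.LockOwner = workerId;                // hypothetical lease properties
            row.LockExpiry = DateTime.UtcNow.AddMinutes(5);
            try
            {
                // Replace sends If-Match with the entity's ETag; the update fails if
                // the row changed since we read it, so only one claimant can win.
                table.Execute(TableOperation.Replace(row));
                return true;
            }
            catch (StorageException ex)
            {
                if (ex.RequestInformation.HttpStatusCode == 412)
                    return false;                    // lost the race; move on to the next row
                throw;
            }
        }

    The expiry timestamp doubles as the timeout handling: a crashed worker's lease simply lapses, and cancelling the run can be implemented by clearing the lease columns the same conditional way.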


  • Visual Studio 2010 Winform Application – Unable to resolve custom assemblies?

    - by Harish Ranganathan
    Recently I surfaced a problem where one of my friends had a tough time getting rid of an assembly reference error. Despite adding a reference to the assembly, referencing it in code kept spitting out the “The type or namespace name ‘ASSEMBLYNAME’ could not be found” error. This was a migration project, and owing to the above error it was throwing another 100 errors. We tried adding a reference to the assembly in other projects and it was not even resolving the namespace while typing it out in the using section. Upon digging further into the error warnings, it indicated something to do with the .NET Framework being targeted, i.e. 4.0. My suspicion grew, since the target framework was 4.0 and the assembly should have been recognized. Then, when we checked “Project – <APPNAME> Properties…”, the issue was with the default target framework, which is “.NET Framework 4 Client Profile”. By default, Visual Studio 2010 creates Windows Forms/WPF apps with the Target Framework set to .NET Framework 4 Client Profile. This is to minimize the framework size required to be bundled along with the app. Client Profile is a feature introduced with .NET 3.5 SP1 that allows users to package a minified version of the .NET Framework that doesn’t include pieces such as ASP.NET, server programming assemblies and a few other assemblies which are typically never used in desktop applications. Since the .NET Framework Client Profile is a minified version, it doesn’t contain all the assemblies related to web services and other deprecated assemblies. However, this application is a migration app and needed some of the references from Services, and hence couldn’t run. Once we changed the Target Framework to .NET Framework 4 instead of the default Client Profile, the application compiled. Here is a link to a very nice article that explains the features of the .NET Framework 4 Client Profile, the assemblies supported by default, etc.: http://blogs.msdn.com/b/jgoldb/archive/2010/04/12/what-s-new-in-net-framework-4-client-profile-rtm.aspx Cheers !!
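    In project-file terms, the fix corresponds to removing the Client profile marker (the same thing the Project Properties UI does); a hedged sketch of the relevant MSBuild properties in the .csproj:

        <!-- Before: targets the Client Profile subset -->
        <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
        <TargetFrameworkProfile>Client</TargetFrameworkProfile>

        <!-- After: targets the full .NET Framework 4 (profile element removed or left empty) -->
        <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
        <TargetFrameworkProfile></TargetFrameworkProfile>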


  • SQLAuthority News – 2 Whitepapers Announced – AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution

    - by pinaldave
    Understanding AlwaysOn architecture is extremely important when building a solution with failover clusters and availability groups. Microsoft has just released two very important white papers related to this subject. Both white papers are written by top experts in the industry and have been reviewed by an excellent panel of experts. Every time I talk with organizations who are adopting SQL Server 2012, they are always excited by the new AlwaysOn feature. One of the requests I often hear is for detailed documentation that can help enterprises build a robust high availability and disaster recovery solution. I believe the following two white papers now satisfy that request.

    AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using AlwaysOn Availability Groups – SQL Server 2012 AlwaysOn Availability Groups provides a unified high availability and disaster recovery (HADR) solution. This paper details the key topology requirements of this specific design pattern on important concepts like quorum configuration considerations, steps required to build the environment, and a workflow that shows how to handle a disaster recovery.

    AlwaysOn Architecture Guide: Building a High Availability and Disaster Recovery Solution by Using Failover Cluster Instances and Availability Groups – SQL Server 2012 AlwaysOn Failover Cluster Instances (FCI) and AlwaysOn Availability Groups provide a comprehensive high availability and disaster recovery solution. This paper details the key topology requirements of this specific design pattern on important concepts like asymmetric storage considerations, quorum model selection, quorum votes, steps required to build the environment, and a workflow.

    Even if you are not going to implement the AlwaysOn feature, these two whitepapers are still great reference material to review, as they will give you a complete idea of what it takes to implement an AlwaysOn architecture and what kind of effort is needed. One should at least bookmark the above two white papers for future reference. Reference: Pinal Dave (http://blog.sqlauthority.com)


  • TDWI World Conference Features Oracle and Big Data

    - by Mandy Ho
    Oracle is a Gold Sponsor at this year's TDWI World Conference Series, held at the Manchester Grand Hyatt in San Diego, California, July 31 to Aug 1. The theme of this event is Big Data Tipping Point: BI Strategies in the Era of Big Data. The conference features an educational look at how data is now being generated so quickly that organizations across all industries need new technologies to stay ahead – to understand customer behavior, detect fraud, improve processes and accelerate performance. Attendees will hear how the internet, social media and streaming data are fundamentally changing business intelligence and data warehousing. Big data is reaching critical mass – the tipping point. Oracle will be conducting the following evening workshop. To reserve your space, call 1.800.820.5592 ext 10775.

    Title: Integrating Big Data into Your Data Center (or A Big Data Reference Architecture)
    Date: Wed., August 1, 2012, at 7:00 p.m.
    Venue: Manchester Grand Hyatt, San Diego, Room

    Weblogs, social media, smart meters, sensors and other devices generate high volumes of low density information that isn't readily accessible in enterprise data warehouses and business intelligence applications today. But this data can have relevant business value, especially when analyzed alongside traditional information sources. In this session, we will outline a reference architecture for big data that will help you maximize the value of your big data implementation. You will learn:

    • The key technologies in a big data architecture, and their specific use cases
    • The integration points of the various technologies and how they fit into your existing IT environment
    • How to effectively leverage analytical sandboxes for data discovery and agile development of data-driven solutions

    At the end of this session you will understand the reference architecture and have the tools to implement this architecture at your company. Presenter: Jean-Pierre Dijcks, Senior Principal Product Manager. Don't miss our booth and the chance to meet with our big data experts on the exhibition floor at booth #306.


  • Where do I place XNA content pipeline references?

    - by Zabby Wabby
    I am relatively new to XNA, and have started to delve into the use of the content pipeline. I have already figured out that tricky issue of adding a game library containing classes for any type of .xml file I want to read. Here's the issue. I am trying to handle the reading of all XML content through use of an XMLHandler object that uses the intermediate deserializer. Any time reading of such data is required, the appropriate method within this object would be called. So, as a simple example, something like this would occur when a character levels: public Spell LevelUp(int levelAchived) { XMLHandler.FindSkillsForLevel(levelAchived); } This method would then read the proper .xml file, sending back the spell for the character to learn. However, the XMLHandler is having issues even being created. I cannot get it to use the using namespace of Microsoft.Xna.Framework.Content.Pipeline. I get an error on my using statement in the XMLHandler class: using Microsoft.Xna.Framework.Content.Pipeline.Serialization.Intermediate; The error is a typical reference error: Type or namespace name "'Pipeline' does not exist in the namespace 'Microsoft.Xna.Framework.Content' (are you missing an assembly reference?)" I THINK this is because this namespace is already referenced in my game's content. I would really have no issue placing this object within my game's content (since that is ALL it deals with anyways), but the Content project does not seem capable of holding anything but content files. In summary, I need to use the Intermediate Deserializer in my main project's logic, but, as far as I can make out, I can't safely reference the associated namespace for it outside of the game's content. I'm not a terribly well-versed programmer, so I may be just missing some big detail I've never learned here. How can I make this object accessible for all projects within the solution? I will gladly post more information if needed!
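    For what it's worth, the Content Pipeline assemblies are build-time only: they target the full desktop .NET Framework and are not part of the XNA game profile, which is why the game project cannot resolve Microsoft.Xna.Framework.Content.Pipeline. The usual split is to let a content-processor project read the XML via the IntermediateSerializer at build time, and have the game load the already-built .xnb through the ContentManager at runtime. A hedged sketch of the runtime side; SpellData and the asset path are hypothetical names:

        // Shared game library: plain data class the pipeline serializes at build time
        public class SpellData
        {
            public string Name;
            public int MinimumLevel;
        }

        // In the game, at runtime: no pipeline assembly reference is needed here
        SpellData[] spells = Content.Load<SpellData[]>("Data/Spells");

    Under that split, an XMLHandler living in the main project would call Content.Load rather than the IntermediateSerializer, and the pipeline namespace only needs to be referenced from the content project side.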


  • MCSE and MCSA make a return to the world of certification... but not as you know it.

    - by Testas
    Quick announcement: Microsoft Learning today announced the certification tracks for the upcoming SQL Server 2012 exams. You begin by achieving the MCSA – Microsoft Certified Solutions Associate (not to be confused with the old Microsoft Certified Systems Administrator). If you are starting out, this includes taking the following three exams:

    • Exam 70-461: Querying Microsoft SQL Server 2012
    • Exam 70-462: Administering Microsoft SQL Server 2012 Databases
    • Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012

    If you have an MCTS in SQL Server 2008 already, you can take the following path:

    • A pass in a SQL Server 2008 (MCTS) Microsoft Certified Technology Specialist exam
    • Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    • Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2

    Once you have achieved your MCSA status, you can then start on your MCSE – Microsoft Certified Solutions Expert certification. You have a choice: the MCSE: SQL Server 2012 Data Platform, the MCSE: SQL Server 2012 Business Intelligence, or you could do both.

    MCSE: SQL Server 2012 Data Platform involves:

    • Obtain your SQL Server 2012 MCSA
    • Exam 70-464: Developing Microsoft SQL Server 2012 Databases
    • Exam 70-465: Designing Database Solutions for Microsoft SQL Server 2012

    There is also an upgrade path:

    • A pass in a SQL Server 2008 (MCITP) Microsoft Certified IT Professional Database Administrator or Database Developer certification
    • Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    • Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    • Exam 70-459: Transitioning your MCITP on SQL Server 2008 Database Administrator or Database Developer to MCSE: Data Platform

    MCSE: SQL Server 2012 Business Intelligence involves:

    • Obtain your SQL Server 2012 MCSA
    • Exam 70-466: Implementing Data Models and Reports with Microsoft SQL Server 2012
    • Exam 70-467: Designing Business Intelligence Solutions with Microsoft SQL Server 2012

    The upgrade path involves:

    • A pass in a SQL Server 2008 (MCITP) Microsoft Certified IT Professional Business Intelligence certification
    • Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    • Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    • Exam 70-460: Transitioning your MCITP on SQL Server 2008 Business Intelligence Developer to MCSE: Business Intelligence

    As a result, if you want to achieve the MCSE in either Data Platform or Business Intelligence and you are starting from scratch, there will be 5 exams to take. If you are able to upgrade your certification because you have an MCITP already, then it will be three exams. Full details and questions can be found at http://www.microsoft.com/learning/en/us/certification/cert-sql-server.aspx Thanks, Chris

    Read the article

  • The UIManager Pattern

    - by Duncan Mills
    One of the most common mistakes I see when reviewing ADF application code is the sin of storing UI component references - most commonly things like table or tree components - in Session or PageFlow scope. The reasons this is bad are simple: firstly, these UI object references are not serializable, so they would not survive a session migration between servers; secondly, there is no guarantee that the framework will re-use the same component tree from request to request, although in practice it generally does. So the danger here is that, at best, you end up with an NPE after your session has migrated, and at worst, you end up pinning old generations of the component tree, happily eating up your precious memory. So that's clear: we should never, ever, store references to components anywhere other than request scope (or maybe backing bean scope). So double-check the scope of those binding attributes that map component references into a managed bean in your applications.

    Why is it Such a Common Mistake?

    At this point I want to examine why there is this urge to hold onto these references anyway. After all, JSF will obligingly populate your backing beans with the fresh and correct reference when needed. In most cases, it seems that the rationale comes down to a lack of distinction within the application between what is data and what is presentation. I think perhaps a cause of this is the logical separation between business data behind the ADF data binding (#{bindings}) façade and the UI components themselves. Developers tend to think: OK, this is my data layer behind the bindings object, and everything else is just UI. Of course, that's not the case. The UI layer itself will have state which is intrinsically linked to the UI presentation rather than the business model, but which at the same time should not be tightly bound to a specific instance of any single UI component.

    So here's the problem. I think developers try to use the UI components as state-holders for this kind of data, rather than using them to represent that state. An example might be the selection state of a tab set (panelTabbed): you might be interested in knowing which tab is currently disclosed. The temptation that leads to the component-reference sin is to go and ask the tab set what the selection is. That, of course, is fine in context - e.g. in a handler within the same request-scoped bean that holds the binding to the tab set. However, it leads to problems when you subsequently want the same information outside of that immediate scope. The simple solution seems to be to chuck the component reference into session scope so you can re-check it in the same way - leading, of course, to this mistake.

    Turn it on its Head

    So the correct solution is to turn the problem on its head. If you are going to be interested in the value or state of some component outside of the immediate request context, then it becomes persistent state (persistent in the sense that it extends beyond the lifespan of a single request). So you need to externalize that state outside of the component and have the component reference and manipulate that state as needed, rather than owning it. This is what I call the UIManager pattern.

    Defining the Pattern

    The UIManager pattern really is very simple. The premise is that every application should define a session-scoped managed bean, appropriately named UIManager, which is specifically responsible for holding this persistent UI-component-related state.
The actual makeup of the UIManager class varies depending on the needs of the application and the amount of state that needs to be stored. Generally I'll start off with a Map in which individual flags can be created as required, although you could opt for a more formal set of typed member variables with getters and setters, or indeed a mix. This UIManager class is defined as a session-scoped managed bean (#{uiManager}) in faces-config.xml. The pattern is then to inject this instance of the class into any other managed bean (usually request-scoped) that needs it, using a managed property. So typically you'll have something like this:

  <managed-bean>
    <managed-bean-name>uiManager</managed-bean-name>
    <managed-bean-class>oracle.demo.view.state.UIManager</managed-bean-class>
    <managed-bean-scope>session</managed-bean-scope>
  </managed-bean>

Which is then injected into any backing bean that needs it:

  <managed-bean>
    <managed-bean-name>mainPageBB</managed-bean-name>
    <managed-bean-class>oracle.demo.view.MainBacking</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
    <managed-property>
      <property-name>uiManager</property-name>
      <property-class>oracle.demo.view.state.UIManager</property-class>
      <value>#{uiManager}</value>
    </managed-property>
  </managed-bean>

In this case the backing bean in question needs a member variable to hold and reference the UIManager:

  private UIManager _uiManager;

This should be exposed via a getter and setter pair with names that match the managed property name (e.g. setUiManager(UIManager _uiManager), getUiManager()). This will give your code within the backing bean full access to the UI state. UI components in the page can, of course, directly reference the uiManager bean in their properties. For example, going back to the tab-set example, you might have something like this:

  <af:panelTabbed>
    <af:showDetailItem text="First"
        disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['FIRST']}">
      ...
    </af:showDetailItem>
    <af:showDetailItem text="Second"
        disclosed="#{uiManager.settings['MAIN_TABSET_STATE']['SECOND']}">
      ...
    </af:showDetailItem>
    ...
  </af:panelTabbed>

Here the settings member within the UIManager is a Map which contains, under the MAIN_TABSET_STATE key, a Map of Booleans, one per tab. (This is just an example - you could choose to store just an identifier for the selected tab, or whatever; how you store the state within the UIManager is up to you.)

Get into the Habit

So we can see that the UIManager pattern is no great strain to implement for an application, and can even be retrofitted to an existing application with ease. The point is, however, that you should always take this approach rather than committing the sin of persistent component references, which will bite you in the future, or scattering UI flags across the session, which are hard to maintain. If you take the approach of always accessing all UI state via the uiManager (or perhaps a pageScope-focused variant of it), you'll find your applications much easier to understand and maintain. Do it today!

    Read the article

  • Advantages of Singleton Class over Static Class?

    Point 1) Singleton: We can get the object of a singleton and then pass it to other methods. Static class: We cannot pass a static class to other methods the way we pass objects.

    Point 2) Singleton: In the future, it is easy to change the object-creation logic to some pooling mechanism. Static class: It is very difficult to retrofit pooling logic onto a static class - you would need to make the class non-static and all its methods non-static, so all calling code would have to change.

    Point 3) Singleton: Can a singleton class be inherited by a subclass? The singleton pattern places no restriction on inheritance, so we should be able to do this as long as the subclass preserves the single-instance guarantee. There's nothing fundamentally wrong with subclassing a class that is intended to be a singleton; there are many reasons you might want to do it, and many ways to accomplish it, depending on the language you use. Static class: We cannot inherit a static class from another static class in C#. Think about it this way: you access static members via the type name, like this:

        MyStaticType.MyStaticMember();

    Were you to inherit from that class, you would have to access the member via the new type name:

        MyNewType.MyStaticMember();

    Thus the new type bears no relationship to the original when used in code; there is no way to take advantage of an inheritance relationship for things like polymorphism.
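    A short C# sketch of points 1-3 (the ConnectionPool name and its methods are illustrative, not from any particular library):

        using System;

        public class ConnectionPool
        {
            private static readonly Lazy<ConnectionPool> instance =
                new Lazy<ConnectionPool>(() => new ConnectionPool());

            // Point 2: creation logic is centralized here, so it can later
            // be swapped for a pooling mechanism without touching callers.
            public static ConnectionPool Instance
            {
                get { return instance.Value; }
            }

            // Point 3: a protected constructor and virtual members leave
            // the door open for subclassing.
            protected ConnectionPool() { }

            public virtual void Execute(string query)
            {
                Console.WriteLine("Executing: " + query);
            }
        }

        public static class Reports
        {
            // Point 1: the singleton instance can be passed around like any
            // other object; a static class cannot be used as an argument.
            public static void Run(ConnectionPool pool)
            {
                pool.Execute("SELECT 1");
            }
        }

    Reports.Run(ConnectionPool.Instance) works; there is no equivalent call if ConnectionPool were a static class, because a type name is not a value.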

    Read the article

  • Downloading specific video renditions in WebCenter Content

    - by Kyle Hatlestad
    I recently had a question come up on one of my previous blog articles about downloading a specific video rendition. When accessing image renditions, you simply pass the 'Rendition=<rendition name>' parameter on the GET_FILE service and the rendition is returned. But when you try that with videos, you get the error message, "Unable to download '<Content ID>'. The rendition or attachment '<Rendition Name>' could not be found in the list manifest of the revision with internal revision ID '<dID>'." The user interface does expose a download option, but it uses the Content Basket to bundle one or more videos and download them as a zip. I had never tried this with videos and assumed they worked the same way as images. It turns out you need to pass an extra parameter in the case of videos: adding 'AuxRenditionType=media' allows the GET_FILE service to download the video (e.g. http://server/cs/idcplg?IdcService=GET_FILE&dID=11012&dDocName=WCCBASE9010812&allowInterrupt=1&Rendition=QuickTime&AuxRenditionType=media). And if you haven't seen the David After Dentist video, I'd highly recommend it!
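    If you need to fetch such a rendition programmatically, a minimal sketch might look like the following. It reuses the dID and dDocName values from the example URL above; the host name, any authentication, and the output file name are assumptions for illustration:

        using System.IO;
        using System.Net.Http;
        using System.Threading.Tasks;

        class VideoRenditionDownload
        {
            static async Task Main()
            {
                // Same GET_FILE call as the example URL, including the extra
                // AuxRenditionType=media parameter required for videos.
                string url = "http://server/cs/idcplg?IdcService=GET_FILE" +
                             "&dID=11012&dDocName=WCCBASE9010812&allowInterrupt=1" +
                             "&Rendition=QuickTime&AuxRenditionType=media";

                using (var client = new HttpClient())
                {
                    byte[] video = await client.GetByteArrayAsync(url);
                    File.WriteAllBytes("rendition.mov", video);
                }
            }
        }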

    Read the article

  • How do you prefer to handle image spriting in your web projects?

    - by Macy Abbey
    It seems that these days it is pretty much mandatory for web applications to sprite their images if they want many images on a page AND a fast load time. (Spriting is the process of combining all images referenced from a style sheet into one or a few images, with each reference using a different background position.) I was wondering what method of implementing sprites you all prefer in your web applications, given that we are referring to non-dynamic images which are included/designed by the programming team, not images dynamically uploaded by a third party.

    1. Add new images to an existing sprite by hand, and create the new CSS reference by hand.
    2. Once per build, generate a sprite server-side from the single images referenced in your CSS files (each set as the background image of an HTML element sized to match), and update all CSS references programmatically.
    3. Use a sprite-generating program to produce a sprite image once per release, and hand-insert the new CSS class/image into your project.
    4. Other methods?

    I prefer option two, as it requires very little hand-coding and image editing; a rough sketch of that approach follows below.
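    A minimal sketch of the build-step half of option two, using System.Drawing. The images directory, PNG-only inputs, and vertical stacking are assumptions; rewriting the CSS is left as a comment, since each image's running y offset becomes the negative background-position in the generated rule:

        using System.Drawing;
        using System.IO;
        using System.Linq;

        class SpriteBuilder
        {
            static void Main()
            {
                string[] files = Directory.GetFiles("images", "*.png");
                var images = files.Select(Image.FromFile).ToList();

                // The sprite is as wide as the widest image and as tall as
                // all images stacked vertically.
                int width = images.Max(i => i.Width);
                int height = images.Sum(i => i.Height);

                using (var sprite = new Bitmap(width, height))
                using (var g = Graphics.FromImage(sprite))
                {
                    int y = 0;
                    foreach (var image in images)
                    {
                        g.DrawImage(image, 0, y);
                        // A real build step would also emit the CSS here, e.g.
                        // background-position: 0 -{y}px for this image's class.
                        y += image.Height;
                    }
                    sprite.Save("sprite.png");
                }
            }
        }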

    Read the article

  • Opportunities for Partners with Oracle in the Public Sector - Live Webcast, January 18th

    - by Paulo Folgado
    LIVE WEBCAST - OPPORTUNITIES FOR PARTNERS WITH ORACLE IN THE PUBLIC SECTOR
    TUESDAY, JANUARY 18th, 2011

    Learn about Oracle's industry strategy for the Public Sector and resources available to partners. Each webcast will include information and answers to questions from Oracle's public sector leadership for that region.

    AGENDA
    - Oracle's Public Sector Industry Strategy
    - Oracle Partner Network (OPN) Overview
    - Public Sector Knowledge Zone
    - Specialization
    - Oracle Validated Integration
    - Solution Catalog
    - How to engage with Oracle
    - Questions and Answers

    WEBCAST SCHEDULE AND LOGISTICS
    Please attend the webcast for your region on Tuesday, January 18th, and join both the web and audio conferences. All regions share the web conference site https://enablement20.webex.com/ (password: Oracle123) and the International Toll Free Dial-in (Conference Code 2739403, Security Passcode 12345).

    - Asia / Pacific: 10:00 AM Singapore, 1:00 PM Sydney (2:00 AM GMT) - Session Number 592 054 744
    - Europe / Middle East / Africa: 1:00 PM GMT/London - Session Number 596 548 609
    - Americas: 1:00 PM Eastern, 10:00 AM Pacific (6:00 PM GMT) - Session Number 597 728 102

    VISIT THE PUBLIC SECTOR KNOWLEDGE ZONE
    Click Here to access the Knowledge Zone.

    Read the article

< Previous Page | 134 135 136 137 138 139 140 141 142 143 144 145  | Next Page >