Search Results

Search found 25414 results on 1017 pages for 'default arguments'.


  • Case class copy() method abstraction.

    - by Joa Ebert
    I would like to know if it is possible to abstract the copy method of case classes. Basically I have something like sealed trait Op, and then something like case class Push(value: Int) extends Op and case class Pop() extends Op.

    The first problem: a case class without arguments/members does not define a copy method. You can try this in the REPL:

    ```scala
    scala> case class Foo()
    defined class Foo

    scala> Foo().copy()
    <console>:8: error: value copy is not a member of Foo
           Foo().copy()
                 ^

    scala> case class Foo(x: Int)
    defined class Foo

    scala> Foo(0).copy()
    res1: Foo = Foo(0)
    ```

    Is there a reason why the compiler makes this exception? I find it rather unintuitive and would expect every case class to define a copy method.

    The second problem: I have a method def ops: List[Op] and I would like to copy all ops like ops map { _.copy() }. How would I define the copy method in the Op trait? I get a "too many arguments" error if I say def copy(): this.type. However, since all copy() methods have only optional arguments, why is this incorrect? And how do I do it correctly? By making another method named def clone(): this.type and writing def clone() = copy() for all the case classes? I hope not.
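    One common workaround (a sketch, not from the original question; the method name duplicate is hypothetical) is to declare an abstract duplication method on the trait and let each case class implement it with its generated copy():

    ```scala
    sealed trait Op {
      def duplicate: Op // each concrete case class supplies its own copy
    }

    case class Push(value: Int) extends Op {
      def duplicate: Push = copy()
    }

    case class Pop() extends Op {
      def duplicate: Pop = Pop() // zero-arg case classes get no copy(), so rebuild
    }

    // usage: duplicating a whole list of ops
    val ops: List[Op] = List(Push(1), Pop())
    val copies: List[Op] = ops.map(_.duplicate)
    ```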

  • How am I able to create a List<T> containing a generic interface?

    - by Conrad Clark
    I have a List which must contain IInteract objects. But IInteract is a generic interface which requires 2 type arguments. My main idea is to iterate through a list of objects and "interact" one with another if they didn't interact yet. So I have this object:

    ```csharp
    List<IObject> WorldObjects = new List<IObject>();
    ```

    and this one:

    ```csharp
    private List<IInteract> = new List<IInteract>();
    ```

    Except I can't compile the last line, because IInteract requires 2 type arguments. But I don't know what the arguments are until I add them. I could add interactions between objects of type A and A... or objects of type B and C. I want to create "Interaction" classes which do something with the "acting" object and the "target" object, but I want them to be independent from the objects... so I could add an interaction between, for instance, a "SuperUltraClass" and... an "integer". Am I using the wrong approach?
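    A common pattern for this (a sketch, not from the original post; Ball, Wall, and Collision are hypothetical names) is to split IInteract into a non-generic base interface and a generic interface deriving from it, so differently-typed interactions can share one list:

    ```csharp
    using System;
    using System.Collections.Generic;

    public interface IInteract
    {
        void Interact(object actor, object target);
    }

    public interface IInteract<TActor, TTarget> : IInteract
    {
        void Interact(TActor actor, TTarget target);
    }

    public class Ball { }   // hypothetical world objects
    public class Wall { }

    public class Collision : IInteract<Ball, Wall>
    {
        public void Interact(Ball actor, Wall target)
            => Console.WriteLine("ball hits wall");

        // Explicit implementation bridges the untyped call to the typed one.
        void IInteract.Interact(object actor, object target)
            => Interact((Ball)actor, (Wall)target);
    }

    public static class Demo
    {
        public static void Main()
        {
            var interactions = new List<IInteract> { new Collision() };
            interactions[0].Interact(new Ball(), new Wall());
        }
    }
    ```

    The explicit interface implementation means a mismatched actor/target pair fails loudly at the cast rather than silently compiling into the wrong call.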

  • In JavaScript, is it true that function aliasing works as long as the function being aliased doesn't touch 'this'?

    - by Jian Lin
    In JavaScript, if we are aliasing a function, such as in:

    ```javascript
    f = g;
    f = obj.display;
    obj.f = foo;
    ```

    will all 3 lines above work as long as the function / method on the right-hand side doesn't touch this? Since we are passing in all the arguments, the only way it can mess up is when the function / method on the right uses this?

    Actually, line 1 is probably OK if g is also a property of window. If g is referencing obj.display, then the same problem is there. In line 2, when obj.display touches this, it is meant to be obj, but when f() is invoked, this is window, so they are different. In line 3 it is the same: when f() is invoked inside obj's code, this is obj, while foo might be using this to refer to window if it were a property of window (a global function).

    So line 2 can be written as:

    ```javascript
    f = function() { obj.display.apply(obj, arguments); };
    ```

    and line 3:

    ```javascript
    obj.f = function() { foo.apply(window, arguments); };
    ```

    Is this the correct method, and are there other methods besides this?
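    For illustration, a minimal sketch of the failure mode plus the Function.prototype.bind alternative (ES5 and later); obj and its display method are hypothetical:

    ```javascript
    var obj = {
      name: "obj",
      display: function () { return this.name; }
    };

    var f = obj.display;               // alias loses the receiver
    f();                               // undefined: `this` is window (or throws in strict mode)

    var bound = obj.display.bind(obj); // bind fixes `this` without a hand-written wrapper
    bound();                           // "obj"
    ```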

  • How do I display a view as if it's the front page via a module?

    - by Justin
    I have a simple view that feeds a home page. I have a custom module that registers some specific URLs in hook_menu, which I pass into my module so I can pass them as arguments into the view. I can get the module to display the view all right, but it doesn't use the teaser/is_front output that appears when I access the home page. I looked through the APIs but I can't figure out how to output the view via my module as if it's the front page, meaning $is_front is true and the teasers would appear.

    The reasons I'm not passing the arguments via the URL bar into the view itself are:

    - My argument list is known and finite.
    - The argument order is mixed: I will sometimes have /argument1, /argument1/argument2, or just /argument2.
    - I only want to capture the first-level URL as an argument for specific, known strings (e.g. I don't want to pass /admin into my view, but I do want to pass in /los-angeles, which I register in the menu system via hook_menu in my module).

    Here are some examples to make this more clear:

    - /admin - loads the admin page
    - /user - loads the login page
    - /boston - passes into the first argument of the view; shows in front/teaser mode
    - / - shows the view with no arguments
    - /bread - passes into argument 2 of the view; shows in front/teaser mode
    - /boston/bread - passes into arguments 1 and 2 of the view; shows in front/teaser mode

    Maybe I'm going about this the wrong way? Or perhaps there is a way to have a module load a view and somehow set front/teaser mode?

    Details: Drupal 6, PHP 5, MySQL 5, Views, CCK
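    As a sketch of one way to wire this up in Drupal 6 (the module name mymodule and view name frontpage are hypothetical), hook_menu can register just the known strings and hand them to the view:

    ```php
    <?php
    // Sketch (Drupal 6 / Views 2): register only the known first-level paths
    // and pass the matched path component into the view as an argument.
    function mymodule_menu() {
      $items = array();
      foreach (array('boston', 'los-angeles') as $place) {
        $items[$place] = array(
          'page callback' => 'mymodule_front_view',
          'page arguments' => array(0),
          'access arguments' => array('access content'),
          'type' => MENU_CALLBACK,
        );
      }
      return $items;
    }

    function mymodule_front_view($place = NULL) {
      // views_embed_view() renders a view display with the given arguments.
      return views_embed_view('frontpage', 'default', $place);
    }
    ```

    This does not force $is_front by itself; drupal_is_front_page() compares $_GET['q'] against the configured front page, which is why the teaser behavior differs on these paths.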

  • Python - Things one MUST avoid

    - by Anurag Uniyal
    Today I was bitten again by "mutable default arguments" after many years. I usually don't use mutable default arguments unless needed, but I think with time I forgot about that, and today in the application I added tocElements=[] to a PDF generation function's argument list, and now the 'Table of Contents' gets longer and longer after each invocation of "generate pdf". :) My question is: what other things should I add to my list of things one MUST avoid?

    1. Mutable default arguments (see the sketch just below this list).

    2. Always import modules the same way, e.g. 'from y import x' and 'import x' are totally different things; they are actually treated as different modules. See http://stackoverflow.com/questions/1459236/module-reimported-if-imported-from-different-path

    3. Do not use range in place of lists, because range() no longer returns a list in Python 3, so things like this will fail; wrap it with list():

    ```python
    myIndexList = [0, 1, 3]
    isListSorted = myIndexList == range(3)        # will fail in 3.0
    isListSorted = myIndexList == list(range(3))  # will not
    ```

    The same mistake can be made with xrange, e.g. myIndexList == xrange(3).

    4. Catching multiple exceptions:

    ```python
    try:
        raise KeyError("hmm bug")
    except KeyError, TypeError:
        print TypeError
    ```

    This prints "hmm bug", though it is not a bug. It looks like we are catching exceptions of type KeyError and TypeError, but instead we are catching KeyError only, bound to the variable TypeError. Instead, use:

    ```python
    try:
        raise KeyError("hmm bug")
    except (KeyError, TypeError):
        print TypeError
    ```
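    For the first pitfall, a minimal sketch of the bug and its usual fix, the None sentinel (function names are hypothetical):

    ```python
    def generate_pdf_bad(toc_elements=[]):      # one list shared across ALL calls
        toc_elements.append("chapter")
        return toc_elements

    def generate_pdf_good(toc_elements=None):   # sentinel: fresh list per call
        if toc_elements is None:
            toc_elements = []
        toc_elements.append("chapter")
        return toc_elements

    print(generate_pdf_bad())    # ['chapter']
    print(generate_pdf_bad())    # ['chapter', 'chapter']  <- grows every call
    print(generate_pdf_good())   # ['chapter']
    print(generate_pdf_good())   # ['chapter']
    ```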

  • Am I mocking this helper function right in my Django test?

    - by CppLearner
    lib.py:

    ```python
    from django.core.urlresolvers import reverse

    def render_reverse(f, kwargs):
        """kwargs is a dictionary, usually of the form {'args': [cbid]}"""
        return reverse(f, **kwargs)
    ```

    tests.py:

    ```python
    from lib import render_reverse, print_ls

    class LibTest(unittest.TestCase):
        def test_render_reverse_is_correct(self):
            #with patch('webclient.apps.codebundles.lib.reverse') as mock_reverse:
            with patch('django.core.urlresolvers.reverse') as mock_reverse:
                from lib import render_reverse
                mock_f = MagicMock(name='f', return_value='dummy_views')
                mock_kwargs = MagicMock(name='kwargs', return_value={'args': ['123']})
                mock_reverse.return_value = '/natrium/cb/details/123'
                response = render_reverse(mock_f(), mock_kwargs())
            self.assertTrue('/natrium/cb/details/' in response)
    ```

    But instead, I get:

    ```
    File "/var/lib/graphyte-webclient/graphyte-webenv/lib/python2.6/site-packages/django/core/urlresolvers.py", line 296, in reverse
        "arguments '%s' not found." % (lookup_view_s, args, kwargs))
    NoReverseMatch: Reverse for 'dummy_readfile' with arguments '('123',)' and keyword arguments '{}' not found.
    ```

    Why is it calling reverse instead of my mock_reverse (it is looking up my urls.py!)? The author of the Mock library, Michael Foord, did a video cast here (around 9:17), and in the example he passed the mock object request to the view function index. Furthermore, he patched Poll and assigned an expected return value. Isn't that what I am doing here? I patched reverse? Thanks.
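    The usual explanation (not in the excerpt, but a well-known mock gotcha) is that you must patch a name where it is looked up, not where it is defined. Because lib.py does `from django.core.urlresolvers import reverse`, the function is bound into lib's own namespace at import time, so patching django.core.urlresolvers.reverse never touches the reference lib already holds. A sketch:

    ```python
    from mock import patch  # the standalone `mock` package on Python 2.6

    # Patch the copy of reverse that lib.py actually calls.
    with patch('lib.reverse') as mock_reverse:
        from lib import render_reverse
        mock_reverse.return_value = '/natrium/cb/details/123'
        response = render_reverse('dummy_views', {'args': ['123']})
        assert '/natrium/cb/details/' in response
    ```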

  • Exception error in Erlang

    - by Jim
    So I've been using Erlang for the last eight hours, and I've spent two of those banging my head against the keyboard trying to figure out the exception error my console keeps returning. I'm writing a dice program to learn Erlang. I want to be able to call it from the console through the Erlang interpreter. The program accepts a number of dice and is supposed to generate a list of values, each between one and six. I won't bore you with the dozens of individual micro-changes I made to try and fix the problem (random engineering), but I'll post my code and the error.

    The source:

    ```erlang
    -module(dice2).
    -export([d6/1]).

    d6(1) ->
        random:uniform(6);
    d6(Numdice) ->
        Result = [],
        d6(Numdice, [Result]).

    d6(0, [Finalresult]) ->
        {ok, [Finalresult]};
    d6(Numdice, [Result]) ->
        d6(Numdice - 1, [random:uniform(6) | Result]).
    ```

    When I run the program from my console like so:

    ```erlang
    dice2:d6(1).
    ```

    ...I get a random number between one and six, as expected. However, when I run the same function with any number higher than one as an argument, I get the following exception:

    ```
    ** exception error: no function clause matching dice2:d6(1, [4|3])
    ```

    I know I don't have a function clause with matching arguments, but I don't know how to write a function with variable arguments and a variable number of arguments. I tried modifying the function in question like so:

    ```erlang
    d6(Numdice, [Result]) ->
        Newresult = [random:uniform(6) | Result],
        d6(Numdice - 1, Newresult).
    ```

    ...but I got essentially the same error. Anyone know what is going on here?
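    For what it's worth, a sketch of a fix: the [Result] patterns wrap the accumulator in an extra list on every call, which is what eventually produces the improper list [4|3]; threading a plain accumulator avoids that. (rand:uniform/1 replaces random:uniform/1 in modern Erlang.)

    ```erlang
    -module(dice2).
    -export([d6/1]).

    %% Public entry point: roll Numdice dice.
    d6(Numdice) ->
        d6(Numdice, []).

    %% Prepend one roll per step until the counter reaches zero.
    d6(0, Acc) ->
        {ok, Acc};
    d6(Numdice, Acc) ->
        d6(Numdice - 1, [random:uniform(6) | Acc]).
    ```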

  • Transform only one axis to log10 scale with ggplot2

    - by daroczig
    I have the following problem: I would like to visualize a discrete and a continuous variable on a boxplot, where the continuous variable has a few extremely high values. This makes the boxplot meaningless (the points and even the "body" of the chart are too small), which is why I would like to show it on a log10 scale. I am aware that I could leave the extreme values out of the visualization, but I do not intend to.

    Let's see a simple example with the diamonds data:

    ```r
    m <- ggplot(diamonds, aes(y = price, x = color))
    ```

    The problem is not serious here, but I hope you can imagine why I would like to see the values on a log10 scale. Let's try it:

    ```r
    m + geom_boxplot() + coord_trans(y = "log10")
    ```

    As you can see, the y axis is log10 scaled and looks fine, but there is a problem with the x axis, which makes the plot very strange. The problem does not occur with scale_y_log10, but that is not an option for me, as I cannot use a custom formatter that way. E.g.:

    ```r
    m + geom_boxplot() + scale_y_log10()
    ```

    My question: does anyone know a solution to plot the boxplot with a log10 scale on the y axis whose labels can be freely formatted with a formatter function, like in this thread?

    Editing the question to help answerers, based on answers and comments: what I am really after is one log10-transformed axis (y) with non-scientific labels. I would like to label it like dollars (formatter=dollar) or any custom format. If I try @hadley's suggestion I get the following warnings:

    ```r
    > m + geom_boxplot() + scale_y_log10(formatter=dollar)
    Warning messages:
    1: In max(x) : no non-missing arguments to max; returning -Inf
    2: In max(x) : no non-missing arguments to max; returning -Inf
    3: In max(x) : no non-missing arguments to max; returning -Inf
    ```

    ...with the y axis labels unchanged.
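    For readers on current ggplot2, a sketch: the formatter= argument from the question's era was replaced by labels= in ggplot2 0.9, and the scales package now supplies dollar and friends, so the combination asked for works directly:

    ```r
    library(ggplot2)
    library(scales)  # provides dollar and other label formatters

    m <- ggplot(diamonds, aes(y = price, x = color))

    # log10 y axis with dollar-formatted, non-scientific labels
    m + geom_boxplot() + scale_y_log10(labels = dollar)

    # any custom labelling function works the same way
    m + geom_boxplot() + scale_y_log10(labels = function(y) sprintf("$%.0f", y))
    ```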

  • How to use R's ellipsis feature when writing your own function?

    - by Ryan Thompson
    The R language has a nifty feature for defining functions that can take a variable number of arguments. For example, the function data.frame takes any number of arguments, and each argument becomes the data for a column in the resulting data table. Example usage:

    ```r
    > data.frame(letters=c("a", "b", "c"), numbers=c(1,2,3), notes=c("do", "re", "mi"))
      letters numbers notes
    1       a       1    do
    2       b       2    re
    3       c       3    mi
    ```

    The function's signature includes an ellipsis, like this:

    ```r
    function (..., row.names = NULL, check.rows = FALSE, check.names = TRUE,
              stringsAsFactors = default.stringsAsFactors())
    {
        [FUNCTION DEFINITION HERE]
    }
    ```

    I would like to write a function that does something similar, taking multiple values and consolidating them into a single return value (as well as doing some other processing). In order to do this, I need to figure out how to "unpack" the ... from the function's arguments within the function. I don't know how to do this. The relevant line in the function definition of data.frame is object <- as.list(substitute(list(...)))[-1L], which I can't make any sense of.

    So how can I convert the ellipsis from the function's signature into, for example, a list? To be more specific, how can I write get_list_from_ellipsis in the code below?

    ```r
    my_ellipsis_function <- function(...) {
        input_list <- get_list_from_ellipsis(...)
        output_list <- lapply(X=input_list, FUN=do_something_interesting)
        return(output_list)
    }
    my_ellipsis_function(a=1:10, b=11:20, c=21:30)
    ```
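    For context, a sketch of the standard answer: list(...) evaluates the arguments and packs them into a named list (length stands in for do_something_interesting, purely for illustration):

    ```r
    get_list_from_ellipsis <- function(...) {
        list(...)  # evaluates the arguments and preserves their names
    }

    my_ellipsis_function <- function(...) {
        input_list <- get_list_from_ellipsis(...)
        lapply(X = input_list, FUN = length)
    }

    my_ellipsis_function(a = 1:10, b = 11:20, c = 21:30)
    # $a: 10, $b: 10, $c: 10
    ```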

  • Mercurial CLI is slow in C#?

    - by pATCheS
    I'm writing a utility in C# that will make managing multiple Mercurial repositories easier for the way my team is using it. However, there always seems to be a 300 to 400 millisecond delay before I get anything back from hg.exe. I'm using the code below to run hg.exe and hgtk.exe (TortoiseHg's GUI). The code currently includes a Stopwatch and some variables for timing purposes. The delay is roughly the same on multiple runs within the same session. I have also tried specifying the exact path of hg.exe and got the same result.

    ```csharp
    static string RunCommand(string executable, string path, string arguments)
    {
        var psi = new ProcessStartInfo()
        {
            FileName = executable,
            Arguments = arguments,
            WorkingDirectory = path,
            UseShellExecute = false,
            RedirectStandardError = true,
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            WindowStyle = ProcessWindowStyle.Maximized,
            CreateNoWindow = true
        };

        var sbOut = new StringBuilder();
        var sbErr = new StringBuilder();

        var sw = new Stopwatch();
        sw.Start();

        var process = Process.Start(psi);

        TimeSpan firstRead = TimeSpan.Zero;
        process.OutputDataReceived += (s, e) =>
        {
            if (firstRead == TimeSpan.Zero)
            {
                firstRead = sw.Elapsed;
            }
            sbOut.Append(e.Data);
        };
        process.ErrorDataReceived += (s, e) => sbErr.Append(e.Data);

        process.BeginOutputReadLine();
        process.BeginErrorReadLine();

        var eventsStarted = sw.Elapsed;

        process.WaitForExit();
        var processExited = sw.Elapsed;
        sw.Reset();

        if (process.ExitCode != 0 || sbErr.Length > 0)
        {
            Error.Mercurial(process.ExitCode, sbOut.ToString(), sbErr.ToString());
        }
        return sbOut.ToString();
    }
    ```

    Any ideas on how I can speed things up? As it is, I'm going to have to do a lot of caching in addition to threading to keep the UI snappy.
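    One widely used mitigation (not from the original post) is Mercurial's command server, introduced in Mercurial 1.9: a single hg process stays alive, so the per-invocation interpreter startup cost is paid once. A sketch of launching it; repositoryPath is a hypothetical variable:

    ```csharp
    using System.Diagnostics;

    // Sketch: start a persistent Mercurial command server (requires hg >= 1.9).
    static Process StartHgCommandServer(string repositoryPath)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "hg",
            Arguments = "serve --cmdserver pipe",
            WorkingDirectory = repositoryPath,
            UseShellExecute = false,
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            CreateNoWindow = true
        };
        // Commands are then exchanged over stdin/stdout using the command-server
        // wire protocol (length-prefixed channels); see the CommandServer page
        // on the Mercurial wiki for the framing details.
        return Process.Start(psi);
    }
    ```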

  • Constructor Overloading

    - by Mark Baker
    Normally when I want to create a class constructor that accepts different types of parameters, I'll use a kludgy overloading principle of not defining any args in the constructor definition. E.g., for an ECEF coordinate class constructor, I want it to accept either $x, $y and $z arguments, or a single array argument containing x, y and z values, or a single LatLong object, so I'd create a constructor looking something like:

    ```php
    function __construct()
    {
        // Identify if any arguments have been passed to the constructor
        if (func_num_args() > 0) {
            $args = func_get_args();
            // Identify the overload constructor required, based on the datatype of the first argument
            $argType = gettype($args[0]);
            switch ($argType) {
                case 'array':
                    // Array of Cartesian co-ordinate values
                    $overloadConstructor = 'setCoordinatesFromArray';
                    break;
                case 'object':
                    // A LatLong object that needs converting to Cartesian co-ordinate values
                    $overloadConstructor = 'setCoordinatesFromLatLong';
                    break;
                default:
                    // Individual Cartesian co-ordinate values
                    $overloadConstructor = 'setCoordinatesFromXYZ';
                    break;
            }
            // Call the appropriate overload constructor
            call_user_func_array(array($this, $overloadConstructor), $args);
        }
    } // function __construct()
    ```

    I'm looking at an alternative: to provide a straight constructor with $x, $y and $z as defined arguments, and to provide static methods createECEFfromArray() and createECEFfromLatLong() that handle all the necessary extraction of x, y and z, then create a new ECEF object using the standard constructor and return it. Which option is cleaner from an OO purist's perspective?
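    For comparison, a sketch of the named-constructor (static factory) alternative described above; the LatLong conversion method is hypothetical:

    ```php
    <?php
    class ECEF
    {
        private $x, $y, $z;

        public function __construct($x, $y, $z)
        {
            $this->x = $x;
            $this->y = $y;
            $this->z = $z;
        }

        public static function createECEFfromArray(array $coords)
        {
            return new self($coords['x'], $coords['y'], $coords['z']);
        }

        public static function createECEFfromLatLong($latLong)
        {
            // Hypothetical conversion; a real one would project lat/long/height.
            list($x, $y, $z) = $latLong->toCartesian();
            return new self($x, $y, $z);
        }
    }

    $a = new ECEF(1.0, 2.0, 3.0);
    $b = ECEF::createECEFfromArray(array('x' => 1.0, 'y' => 2.0, 'z' => 3.0));
    ```

    The factory names also document intent at the call site, which the argument-sniffing constructor cannot do.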

  • Extend DOMElement object

    - by Comma
    How could I extend objects provided by the Document Object Model? It seems there is no way, according to this issue.

    ```php
    class Application_Model_XmlSchema extends DOMElement
    {
        const ELEMENT_NAME = 'schema';

        /** @var DOMElement */
        private $_schema;

        /**
         * @param DOMDocument $document
         * @return void
         */
        public function __construct(DOMDocument $document)
        {
            $this->setSchema($document->getElementsByTagName(self::ELEMENT_NAME)->item(0));
        }

        /**
         * @param DOMElement $schema
         * @return void
         */
        public function setSchema(DOMElement $schema)
        {
            $this->_schema = $schema;
        }

        /** @return DOMElement */
        public function getSchema()
        {
            return $this->_schema;
        }

        /**
         * @param string $name
         * @param array $arguments
         * @return mixed
         */
        public function __call($name, $arguments)
        {
            if (method_exists($this->_schema, $name)) {
                return call_user_func_array(
                    array($this->_schema, $name),
                    $arguments
                );
            }
        }
    }

    $version  = $this->getRequest()->getParam('version', null);
    $encoding = $this->getRequest()->getParam('encoding', null);
    $source   = 'http://www.w3.org/2001/XMLSchema.xsd';

    $document = new DOMDocument($version, $encoding);
    $document->load($source);

    $xmlSchema = new Application_Model_XmlSchema($document);
    $xmlSchema->getAttribute('version');
    ```

    I get an error:

    ```
    Warning: DOMElement::getAttribute(): Couldn't fetch Application_Model_XmlSchema in C:\Nevermind.php on line ...
    ```
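    The usual fix (a sketch, not from the original post; MyElement is a hypothetical name) is DOMDocument::registerNodeClass(), which tells the DOM extension to instantiate your subclass for every element it creates, so the node is properly initialized and getAttribute() works:

    ```php
    <?php
    class MyElement extends DOMElement
    {
        public function schemaVersion()
        {
            return $this->getAttribute('version');
        }
    }

    $document = new DOMDocument();
    // Must be registered before loading, so parsed nodes use the subclass.
    $document->registerNodeClass('DOMElement', 'MyElement');
    $document->load('http://www.w3.org/2001/XMLSchema.xsd');

    $schema = $document->getElementsByTagName('schema')->item(0);
    echo $schema->schemaVersion();  // $schema is a MyElement instance
    ```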

  • Java reflection appropriateness

    - by jsn
    This may be a fairly subjective question, but maybe not. My application contains a bunch of forms that are displayed to the user at different times. Each form is a class of its own. Typically the user clicks a button, which launches a new form. I have a convenience function that builds these buttons; you call it like this:

    ```java
    buildButton( "button text", new SelectionAdapter() {
        @Override
        public void widgetSelected( SelectionEvent e ) {
            showForm( new TasksForm( args... ) );
        }
    } );
    ```

    I do this dozens of times, and it's really cumbersome having to make a SelectionAdapter every time. Really, all the button needs to know is what class to instantiate when it's clicked and what arguments to give the constructor, so I built a function that I call like this instead:

    ```java
    buildButton( "button text", TasksForm.class, args... );
    ```

    Here args is an arbitrary list of objects that you could use to instantiate TasksForm normally. It uses reflection to get a constructor from the class, match the argument list, and build an instance when it needs to. Most of the time I don't have to pass any arguments to the constructor at all. The downside is obviously that if I'm passing a bad set of arguments, it can't detect that at compile time, so if it fails, a dialog is displayed at runtime. But it won't normally fail, and it'll be easy to debug if it does. I think this is much cleaner, because I come from languages where the use of function and class literals is pretty common. But if you're a normal Java programmer, would seeing this freak you out, or would you appreciate not having to scan a zillion SelectionAdapters?
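    For reference, a sketch of the reflective helper being described (SWT assumed; showForm and showErrorDialog are hypothetical application hooks, and constructor lookup is simplified to an exact match on the runtime argument classes):

    ```java
    import org.eclipse.swt.SWT;
    import org.eclipse.swt.events.SelectionAdapter;
    import org.eclipse.swt.events.SelectionEvent;
    import org.eclipse.swt.widgets.Button;
    import org.eclipse.swt.widgets.Composite;

    final class ButtonFactory {
        static Button buildButton(Composite parent, String text,
                                  Class<?> formClass, Object... args) {
            Button button = new Button(parent, SWT.PUSH);
            button.setText(text);
            button.addSelectionListener(new SelectionAdapter() {
                @Override
                public void widgetSelected(SelectionEvent e) {
                    try {
                        Class<?>[] types = new Class<?>[args.length];
                        for (int i = 0; i < args.length; i++) {
                            types[i] = args[i].getClass();
                        }
                        showForm(formClass.getConstructor(types).newInstance(args));
                    } catch (ReflectiveOperationException ex) {
                        showErrorDialog(ex); // runtime failure replaces compile-time check
                    }
                }
            });
            return button;
        }

        // Hypothetical application hooks:
        static void showForm(Object form) { /* display the form */ }
        static void showErrorDialog(Exception ex) { ex.printStackTrace(); }
    }
    ```

    On Java 8 and later, the same terseness is available without reflection: declare buildButton(String, Runnable) and call buildButton("button text", () -> showForm(new TasksForm(args))), which keeps compile-time checking of the constructor arguments.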

  • How can I theme a custom form? (Drupal 6.x)

    - by Andrew
    Drupal 6.x. I have a custom form constructor inside my custom module which is invoked through an AJAX request. I'm attempting to theme this form with a template file that resides in my theme directory. To that end, I've registered my theme hook inside the template.php file in my theme folder. Here's how that file looks:

    ```php
    function my_theme() {
      return array(
        'searchdb' => array(
          'arguments' => array('form' => NULL),
          'template' => 'searchform',
        )
      );
    }
    ```

    And the following is an excerpt of the module code:

    ```php
    function test_menu() {
      $my_form['searchdb'] = array(
        'title' => 'Search db',
        'page callback' => 'get_form',
        'page arguments' => array(0),
        'access arguments' => array('access content'),
        'type' => MENU_CALLBACK,
      );
      return $my_form;
    }

    function get_form($formtype) {
      switch ($formtype) {
        case 'searchdb':
          echo drupal_get_form('searchdb');
          break;
      }
    }

    function searchdb() {
      $form['customer_name'] = array(
        '#type' => 'textfield',
        '#title' => t('Customer Name'),
        '#size' => 50,
        '#attributes' => array('class' => 'name-textbox'),
      );
      return $form;
    }
    ```

    As you can imagine, this is not working at all. Just to test whether my theme hook is even registered, I've also tried a theme function, but it's not called. I've checked the template file name and form ID (through the outputted HTML source) and everything seems OK. I would be glad if anyone could point me in the right direction.
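    As a sketch of the pieces that usually need adjusting in this situation (Drupal 6 conventions; mymodule and all names are hypothetical): the form builder should receive $form_state, the page callback should return markup rather than echo it, and a hook_theme() implementation in template.php must be named after the theme itself, i.e. <themename>_theme():

    ```php
    <?php
    // Form builder: in Drupal 6 the form ID also serves as the theme hook name.
    function mymodule_searchdb_form(&$form_state) {
      $form['customer_name'] = array(
        '#type' => 'textfield',
        '#title' => t('Customer Name'),
        '#size' => 50,
        '#attributes' => array('class' => 'name-textbox'),
      );
      return $form;
    }

    // Page callback: return the rendered form instead of echoing it.
    function mymodule_get_form($formtype) {
      if ($formtype == 'searchdb') {
        return drupal_get_form('mymodule_searchdb_form');
      }
      return drupal_not_found();
    }
    ```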

  • Vim: How do I tell where a function is defined?

    - by sixtyfootersdude
    I just installed MacVim yesterday, and I installed vim-latex today. One of the menu items is calling a broken function (TeX-Suite -> View). When I click on the menu item it makes this call:

    ```vim
    :silent! call Tex_ViewLatex()
    ```

    Question: where can I find that function? Is there some way to figure out where it is defined? Just out of curiosity, I removed the silent part and ran this:

    ```vim
    :call Tex_ViewLatex()
    ```

    which produces:

    ```
    Error detected while processing function Tex_ViewLaTeX:
    line   34:
    E121: Undefined variable: s:viewer
    E116: Invalid arguments for function strlen(s:viewer)
    E15: Invalid expression: strlen(s:viewer)
    line   39:
    E121: Undefined variable: appOpt
    E15: Invalid expression: 'open '.appOpt.s:viewer.' $*.'.s:target
    line   79:
    E121: Undefined variable: execString
    E116: Invalid arguments for function substitute(execString, '\V$*', mainfname, 'g')
    E15: Invalid expression: substitute(execString, '\V$*', mainfname, 'g')
    line   80:
    E121: Undefined variable: execString
    E116: Invalid arguments for function Tex_Debug
    line   82:
    E121: Undefined variable: execString
    E15: Invalid expression: 'silent! !'.execString
    Press ENTER or type command to continue
    ```

    I suspect that if I could see the function's source I could figure out which inputs are bad or what it is looking for. Thanks.
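    A sketch of the standard ways to locate a function definition in Vim:

    ```vim
    " Show the function body; with :verbose, Vim also prints
    " 'Last set from <file>', telling you which script defined it.
    :verbose function Tex_ViewLatex

    " List every script sourced in this session, with script numbers.
    :scriptnames

    " Or grep the runtime files for the definition.
    :vimgrep /function!\?\s*Tex_ViewLatex/j ~/.vim/**/*.vim
    :copen
    ```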

  • Address bar showing long URL

    - by Abel
    I recently upgraded my hosting account to Deluxe, where I can host multiple websites. I added a domain name, created a folder in the root directory with the same name as the domain, and uploaded my files. Now when I navigate the site, the address bar shows:

    http://mywebsite/mywebsite/default.aspx

    I want it to display:

    http://mywebsite/default.aspx

    My thinking in creating folders that match the domain names was to keep them somewhat organized; I never intended to have my domain name listed twice in the address bar.

  • Integrate SharePoint 2010 with Team Foundation Server 2010

    - by Martin Hinshelwood
    Our client is using a brand new, shiny installation of SharePoint 2010, so we need to integrate our upgraded Team Foundation Server 2010 instance into it. In order to do that you need to run the Team Foundation Server 2010 install on the SharePoint 2010 server and choose to install only the "Extensions for SharePoint Products and Technologies". We want our upgraded Team Project Collection to create any new portals in this SharePoint 2010 server farm.

    There are a number of goodies above and beyond a solution file that require the install, the main one being the TFS 2010 client API. These goodies allow proper integration with the creation and viewing of Work Items from SharePoint, a new feature in TFS 2010. This works in both SharePoint 2007 and SharePoint 2010, with the level of integration dependent on the version of SharePoint that you are running. There are three levels of integration, with "SharePoint Services 3.0" or "SharePoint Foundation 2010" being the lowest. This level only offers Reporting Services framed integration for reporting, along with Work Item integration and document management. The highest is Microsoft Office SharePoint Server (MOSS) Enterprise, with Excel Services integration providing some lovely dashboards.

    Figure: Dashboards take the guessing out of project planning and estimation. Plus, writing these reports would be boring!

    The Extensions that you need are on the same installation media as the main TFS install; the only difference is the options you pick during the install.

    Figure: Installing the TFS 2010 Extensions for SharePoint Products and Technologies onto SharePoint 2010

    Annoyingly you may need to reboot a couple of times, but on this server the process was MUCH smoother than on our internal server. I think this was mostly due to this being a clean install. Once it is installed you need to run the configuration. This will add all of the solutions and templates that are needed for SharePoint to work properly with TFS.

    Figure: This is where all the TFS 2010 goodies are added to your SharePoint 2010 server and the TFS 2010 object model is installed.

    Figure: All done. You have everything installed, but you still need to configure it.

    Now that we have the TFS 2010 SharePoint Extensions installed on our SharePoint 2010 server, we need to configure both sides so that they will talk happily to each other.

    Configuring the SharePoint 2010 managed path for Team Foundation Server 2010

    In order for TFS to automatically create your project portals you need a wildcard managed path set up. This is where TFS will create the portal during the creation of a new Team Project. To find the managed paths page for any application, first select the "Manage web applications" link from the SharePoint 2010 Central Administration screen.

    Figure: Find the "Manage web applications" link under the "Application Management" section.

    Once you are there you will see that the "Managed Paths" button is greyed out; selecting one of the applications will enable it to be clicked.

    Figure: You need to select an application for the SharePoint 2010 ribbon to activate.

    Figure: You need to select an application before you can get to the Managed Paths for that application.

    Now we need to add a managed path for TFS 2010 to create its portals under. I have gone for the obvious option of just calling the managed path "TFS02", as the TFS 2010 server is the second TFS server that the client has installed, TFS 2008 being the first.
    This links the location to the server name, and as you can't have two projects of the same name in two separate project collections, there are unlikely to be any conflicts.

    Figure: Add a "tfs02" wildcard inclusion path to your SharePoint site.

    Configure the Team Foundation Server 2010 connection to SharePoint 2010

    In order to have your new TFS 2010 server talk to and create sites in SharePoint 2010, you need to tell the TFS server where to put them. As this TFS 2010 server was installed in out-of-the-box mode, it has a SharePoint Services 3.0 (the free one) server running on the same box. But we want to change that so we can use the external SharePoint 2010 instance. Just open the "Team Foundation Server Administration Console" and navigate to the "SharePoint Web Applications" section. Here you click "Add" and enter the details for the managed path we just created.

    Figure: If you have special permissions on your SharePoint you may need to add accounts to the "Service Accounts" section.

    Before we can set this new SharePoint 2010 instance as the default for our upgraded Team Project Collection, we need to configure SharePoint to take instructions from our TFS server.

    Configure SharePoint 2010 to connect to Team Foundation Server 2010

    On your SharePoint 2010 server, open the Team Foundation Server Administration Console and select the "Extensions for SharePoint Products and Technologies" node. Here we need to "grant access" for our TFS 2010 server to create sites. Click the "Grant access" link and fill out the full URL to the TFS server, for example http://servername.domain.com:8080/tfs, and if need be restrict the path that TFS sites can be created on. Remember that when users create a new Team Project they can change the default and point it anywhere they like, as long as it is an authorised SharePoint location.

    Figure: Grant access for your TFS 2010 server to create sites in SharePoint 2010.

    Now that we have an authorised location for our Team Project portals to be created, we need to tell our Team Project Collection that this is where it should put sites by default for any new Team Projects created.

    Configure the Team Foundation Server 2010 Team Project Collection to create new sites in SharePoint 2010

    Back on our TFS 2010 server, we need to point the defaults for our upgraded Team Project Collection at the new SharePoint 2010 integration we have just set up. On the TFS 2010 server, open up the "Team Foundation Server Administration Console" again and navigate to the "Team Project Collections" node. Once you are there you will see a list of all of your TPCs; in our case we have a DefaultCollection as well as our named and upgraded collection for TFS 2008. If you select the "SharePoint Site" tab you can see that it is not currently configured.

    Figure: Our newly upgraded TFS 2008 Team Project Collection does not have SharePoint configured.

    Select "Edit Default Site Location" and select the new integration point that we just set up for SharePoint 2010. Once you have selected the "SharePoint Web Application" (the thing we just configured), it will give you an example based on that configuration point and the name of the Team Project Collection that we are configuring.

    Figure: Set the default location for new Team Project portals to be created for this Team Project Collection.

    This is where the reason for configuring the Extensions on the SharePoint 2010 server before doing this last bit becomes apparent.
    TFS 2010 is going to create a site at our http://sharepointserver/tfs02/ location called http://sharepointserver/tfs02/[TeamProjectCollection], or whatever we had specified, and it would have had difficulty doing this if we had not given it permission first.

    Figure: If there is no Team Project Collection site at this location, the TFS 2010 server is going to create one.

    This will create a nice Team Project Collection parent site to contain the portals for any new Team Projects that are created. It is worth noting that it will not create portals for existing Team Projects, as this process is run during the Team Project creation wizard.

    Figure: Just a basic parent site to host all of your new Team Project portals as sub-sites.

    You will need to add all of the users that will be creating Team Projects as administrators of this site so that they will not get an error during the Project Creation Wizard. You may also want to customise this as a proper portal to your projects if you are going to have lots of them, but it is really just a default placeholder so you have a top-level site that you can back up and point at.

    You have now integrated SharePoint 2010 and Team Foundation Server 2010! You can now go forth and multiply your Team Projects for this Team Project Collection, or you can continue to add portals to your other collections.

  • TFS 2010 Basic Concepts

    - by jehan
    Here, I'm going to discuss some key architectural changes and concepts in TFS 2010 compared to TFS 2008. In a TFS 2010 installation, you first run the installation and then configure an installation feature from the available features. This is a bit similar to a SharePoint installation, where you first install and then configure the SharePoint farms.

    1) Installation features available in TFS 2010:

    a) Basic: The most compact TFS installation possible. It installs and configures Source Control, Work Item Tracking and Build Services only (SharePoint and Reporting integration are not possible).

    b) Standard Single Server: Suitable for a single-server deployment of TFS. It installs and configures Windows SharePoint Services for you and uses the default instance of SQL Server.

    c) Advanced: Suitable if you want to use remote servers for the SQL Server databases, SharePoint Products and Technologies, and SQL Server Reporting Services.

    d) Application Tier Only: For configuring high availability for Team Foundation Server in a load-balanced (NLB) environment, for moving Team Foundation Server from one server to another, or for restoring TFS.

    e) Upgrade: For upgrading from a prior version of TFS.

    Note: One more important thing to know about TFS 2010 Basic is that it can be installed on client operating systems (Windows 7 and Windows Vista SP3), whereas previous versions of TFS (2005 and 2008) could not be installed on a client OS.

    2) Team Project Collections:

    Connect to TFS dialog box in TFS 2008: In TFS 2008, the TFS server contains a set of Team Projects. Each project may or may not be independent of other projects; every check-in gets an ever-increasing changeset ID irrespective of the Team Project in which it is checked in, and the same applies to work items, which also get unique work item IDs. The main problem with this approach was that certain things required by the application development process were impossible to do:

    a) If something has gone wrong in one Team Project and you want to restore it to an earlier state where it was working properly, you have to restore the Team Foundation Server database from the backup you have taken as per your maintenance plans, and because of this the other Team Projects may lose any work that was not backed up.

    b) Your company had a merger with some other company, and now you have two TFS servers.
    You have the TFS server you are working on and the other TFS server the other company was working on, and after the merger you want to integrate the Team Projects from the two TFS servers into one, which is almost impossible to achieve in TFS 2008. You can manually create the Team Projects you want to integrate on one server (in Source Control), but you will lose the history of changesets, work items and more, which is very important. There were a few more issues of this sort which were difficult to resolve in TFS 2008.

    To resolve issues in these kinds of scenarios, which were mainly related to TFS maintenance, integration, migration and security, Microsoft came up with the Team Project Collections concept in TFS 2010. This concept is similar to SharePoint site collections, and if you are familiar with SharePoint architecture it will help you understand the TFS 2010 architecture easily.

    Connect to TFS dialog box in TFS 2010: In the dialog box you can see there are two Team Project Collections; each Team Project Collection can contain any number of Team Projects, and on the right side it shows the two Team Projects in the Team Project Collection (Default Collection) which I have chosen.

    Note: You can connect to only one Team Project Collection at a time using an instance of TFS Team Explorer.

    How does it work? To introduce Team Project Collections, the TFS databases were reorganized. TFS 2008 was composed of 5-7 databases partitioned by subsystem (one each for Version Control, Work Item Tracking, Build, Integration, Project Management, ...).

    The new TFS 2010 database architecture:

    TFS_Config: The root database. It contains centralized TFS configuration data, including the list of all Team Project Collections that exist on the TFS server.

    TFS_Warehouse: The data warehouse. It contains all the reporting data served by this server (farm).

    TFS_*: Individual Team Project Collection data. Each such database contains all the operational data of one Team Project Collection, regardless of subsystem. In addition to these, you will have databases for SharePoint and Report Server.

    3) TFS Farms: As TFS 2010 is flexible enough to be configured with multiple application tiers and multiple database tiers, it is more appropriate to speak of a TFS farm if you are going for a multi-server installation of TFS.

    NLB support for TFS application tiers: With TFS 2010 you can configure multiple TFS application-tier machines to serve the same set of Team Project Collections. The primary purpose of NLB support is to enable cleaner and more complete high availability than in TFS 2008. Even if an application tier in the farm fails, the farm will automatically continue to work with hardly any indication to end users that there is a problem.

    SQL data tiers: With 2010 you can configure many SQL Servers. Each database can be configured to be on any SQL Server, because each Team Project Collection is an independent database. This feature can also be used to load-balance databases across SQL Servers.

    These new capabilities will significantly change the way enterprises manage their TFS installations in the future. With Team Project Collections and TFS farms, you can create a single, arbitrarily large TFS installation, and you can grow it incrementally by adding application tiers and SQL Servers as needed.

  • Install Ubuntu Netbook Edition with Wubi Installer

    - by Matthew Guay
    Ubuntu is one of the most popular versions of Linux, and their Netbook Remix edition is especially attractive for netbook owners. Here we'll look at how you can easily try out Ubuntu on your netbook without a CD/DVD drive.

    Netbooks, along with the growing number of thin, full-powered laptops, lack a CD/DVD drive. Installing software isn't much of a problem, since most programs, whether free or for-pay, are available for download. Operating systems, however, are usually installed from a disk. You can easily install Windows 7 from a flash drive with our tutorial, but installing Ubuntu from a USB flash drive is more complicated. However, using Wubi, a Windows installer for Ubuntu, you can easily install it directly on your netbook and even uninstall it with only a few clicks.

    Getting Started

    Download and run the Wubi installer for Ubuntu (link below). In the installer, select the drive where you wish to install Ubuntu, the size of the installation (this is the amount dedicated to Ubuntu; under 20 GB should be fine), language, username, and desired password. Also, from the Desktop environment menu, select Ubuntu Netbook to install the netbook edition. Click Install when your settings are correct.

    Wubi will automatically download the selected version of Ubuntu and install it on your computer. Windows Firewall may ask if you want to unblock Wubi; select your network and click Allow access.

    The download will take around an hour on broadband, depending on your internet connection speed. Once the download is completed, it will automatically install to your computer. If you'd prefer to have everything downloaded before you start the install, download the ISO of Ubuntu Netbook edition (link below) and save it in the same folder as Wubi. Then, when you run Wubi, select the netbook edition as before and click Install. Wubi will verify that your download is valid and will then proceed to install from the downloaded ISO. This install will only take about 10 minutes.

    Once the install is finished, you will be asked to reboot your computer. Save anything else you're working on, and then reboot to finish setting up Ubuntu on your netbook. When your computer reboots, select Ubuntu at the boot screen. Wubi leaves the default OS as Windows 7, so if you don't select anything it will boot into Windows 7 after a few seconds.

    Ubuntu will automatically finish the install when you boot into it the first time. This took about 12 minutes in our test. When the setup is finished, your netbook will reboot one more time. Remember again to select Ubuntu at the boot screen. You'll then see a second boot screen; press your Enter key to select the default.

    Ubuntu took less than a minute to boot in our test. When you see the login screen, select your name and enter the password you set up in Wubi. Now you're ready to start exploring Ubuntu Netbook Remix.

    Using Ubuntu Netbook Remix

    Ubuntu Netbook Remix offers a simple, full-screen interface to take the best advantage of netbooks' small screens. Pre-installed applications are displayed in the application launcher and are organized by category. Click once to open an application.

    The first screen on the application launcher shows your favorite programs. If you'd like to add another application to the favorites pane, click the plus sign beside its icon.

    Your files from Windows are still accessible from Ubuntu Netbook Remix.
    From the home screen, select Files & Folders on the left menu, and then click the icon that says something like 100GB Filesystem under the Volumes section. Now you'll be able to see all of your files from Windows. Your user files such as documents, music, and pictures should be located in Documents and Settings, in a folder with your user name.

    You can also easily install a variety of free applications via the Software Installer.

    Connecting to the internet is also easy, as Ubuntu Netbook Remix automatically recognized the WiFi adaptor on our test netbook, a Samsung N150. To connect to a wireless network, click the wireless icon on the top right of the screen and select the network's name from the list.

    And if you'd like to customize your screen, right-click on the application launcher and select Change desktop background. Choose a background picture you'd like, and you'll now see it through your application launcher. Nice!

    Most applications are opened full-screen. You can close them by clicking the x to the right of the program's name. You can also switch to other applications from their icons on the top left. Open the home screen by clicking the Ubuntu logo on the far left.

    Changing Boot Options

    By default, Wubi will leave Windows as the default operating system and will give you 10 seconds at boot to choose to boot into Ubuntu. To change this, boot into Windows and enter "Advanced system settings" in your Start menu search. In this dialog, click Settings under Startup and Recovery. From this dialog, you can select the default operating system and the time to display the list of operating systems. You can enter a lower number to make the boot screen appear for less time. And if you'd rather make Ubuntu the default operating system, select it from the drop-down list.

    Uninstalling Ubuntu Netbook Remix

    If you decide you don't want to keep Ubuntu Netbook Remix on your computer, you can uninstall it just like any normal application. Boot your computer into Windows, open Control Panel, click Uninstall a Program, and enter ubuntu in the search box. Select it, and click Uninstall. Click Uninstall at the prompt. Ubuntu uninstalls very quickly and removes the entry from the bootloader as well, so your computer is just like it was before you installed it.

    Conclusion

    Ubuntu Netbook Remix offers an attractive Linux interface for netbooks. We enjoyed trying it out, and found it much more user-friendly than most Linux distros. And with the Wubi installer, you can install it risk-free and try it out on your netbook. Or, if you'd like to try out another alternate netbook operating system, check out our article on Jolicloud, another new OS for netbooks.
    Links

    - Download Wubi Installer for Windows
    - Download Ubuntu Netbook Edition

  • I have Oracle SQL Developer Installed, Now What?

    - by thatjeffsmith
    If you’re here because you downloaded a copy of Oracle SQL Developer and now you need help connecting to a database, then you’re in the right place. I’ll show you what you need to get up and going so you can finish your homework, teach yourself Oracle database, or get ready for that job interview. You’ll need about 30 minutes to set everything up…and about 5 years to become proficient with Oracle Oracle Database come with SQL Developer but SQL Developer doesn’t include a database If you install Oracle database, it includes a copy of SQL Developer. If you’re running that copy of SQL Developer, please take a second to upgrade now, as it is WAY out of date. But I’m here to talk to the folks that have downloaded SQL Developer and want to know what to do next. You’ve got it running. You see this ‘Connection’ dialog, and… Where am I connecting to, and who as? You NEED a database Installing SQL Developer does not give you a database. So you’re going to need to install Oracle and create a database, or connect to a database that is already up and running somewhere. Basically you need to know the following: where is this database, what’s it called, and what port is the listener running on? The Default Connection properties in SQL Developer These default settings CAN work, but ONLY if you have installed Oracle Database Express Edition (XE). Localhost is a network alias for 127.0.0.1 which is an IP address that maps to the ‘local’ machine, or the machine you are reading this blog post on. The listener is a service that runs on the server and handles connections for the databases on that machine. You can run a database without a listener and you can run a listener without a database, but you can’t connect to a database on a different server unless both that database and listener are up and running. Each listener ‘listens’ on one or more ports, you need to know the port number for each connection. The default port is 1521, but 1522 is often pretty common. I know all of this sounds very complicated Oracle is a very sophisticated piece of software. It’s not analogous to downloading a mobile phone app and and using it 10 seconds later. It’s not like installing Office/Access either – it requires services, environment setup, kernel tweaks, etc. However. Normally an administrator will setup and install Oracle, create the database, and configure the listener for everyone else to use. They’ll often also setup the connection details for everyone via a ‘TNSNAMES.ORA’ file. This file contains a list of database connection details for folks to browse – kind of like an Oracle database phoneboook. If someone has given you a TNSNAMES.ORA file, or setup your machine to have access to a TNSNAMES file, then you can just switch to the ‘TNS’ connection type, and use the dropdown to select the database you want to connect to. Then you don’t have to worry about the server names, database names, and the port numbers. ORCL – that sounds promising! ORCL is the default SID when creating a new database with the Database Creation Assistant (DBCA). It’s just me, and I need help! No administrator, no database, no nothing. What do you do? You have a few options: Buy a copy of Oracle and download, install, and create a database Download and install XE (FREE!) Download, import, and run our Developer Days Hands-on-Lab (FREE!) If you’re a student (or anyone else) with little to no experience with Oracle, then I recommend the third option. 
    Oracle Technology Network Developer Day: Hands-on Database Application Development Lab

    The OTN lab runs on a VirtualBox image which contains:

    - an 11gR2 Enterprise Edition copy of Oracle
    - a database and listener running for you to connect to
    - lots of demo data for you to play with
    - SQL Developer installed and ready to connect
    - some browser-based labs you can step through to learn Oracle

    You download the image, you download and install VirtualBox (also FREE!), then you IMPORT the image you previously downloaded. You then 'Start' the image. It will boot a copy of Oracle Enterprise Linux (OEL), start your database, and all that jazz. You can then start up and run SQL Developer inside the image, OR you can connect to the database running on the image using the copy of SQL Developer you installed on your host machine.

    Set up port forwarding to make it easy to connect from your host

    When you start the image, it will be assigned an IP address. Depending on what network adapter you select in the image preferences, you may get something that can get out to the internet from your image, something your host machine can see and connect to, or something that kind of just lives out there in a vacuum. You want to avoid the 'vacuum' option – unless you're OK with running SQL Developer inside the Linux image. Open the VirtualBox image properties and go to the Networking options. We're going to set up port forwarding. This will tell your machine that anything that happens on port 1521 (the default Oracle listener port) should just go to the image's port 1521. So I can connect to 'localhost' and it will magically get transferred to the image that is running.

    Figure: Oracle VirtualBox port forwarding for the 1521 listener port.

    Now you just need a username and password

    The default passwords on this image are all 'oracle', so you can connect as SYS, HR, or whatever – just use 'oracle' as the password. The Linux passwords are all 'oracle' too, so you can log in as 'root' or as 'oracle' on the Linux desktop.

    Connect!

    Figure: Connect as HR to your Oracle database running on the OTN Developer Days VirtualBox image.

    If you're connecting to someone else's database, you need to ask the person that manages that environment to create an account for you. Don't try to 'guess' or 'figure out' what the username and password is. Introduce yourself, explain your situation, and ask kindly for access. This is your first test: can you connect?

    I know it's hard to get started with Oracle. There are, however, many things we offer to make this easier. You'll need to do a bit of RTM first, though. Once you know what's required, you will be much more likely to succeed. Of course, if you need help, you know where to find me.
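    For concreteness, a sketch of connecting from the host once the image is running and port 1521 is forwarded, assuming the lab defaults described above (service name orcl, user hr, password oracle); adjust if yours differ:

    ```
    # SQL*Plus, using EZConnect syntax
    sqlplus hr/oracle@//localhost:1521/orcl

    # The equivalent JDBC thin-driver URL
    jdbc:oracle:thin:@//localhost:1521/orcl
    ```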

  • MySQL – How to Find mysqld.exe with Command Prompt – Fix: ‘mysql’ is not recognized as an internal or external command, operable program or batch file

    - by Pinal Dave
    One of the most popular questions I get after my MySQL courses on Pluralsight is from beginning users who are not able to find where they have installed MySQL Server. The error they receive when they type the mysqld command on their default command line is as follows:

    'mysql' is not recognized as an internal or external command, operable program or batch file.

    This error comes up if the user tries to execute the mysqld command on the default command prompt; the command should be executed where the mysqld.exe file exists. If you are using Windows Explorer you can simply search your drive for mysqld.exe, find the location of the file, and execute the command there. However, if you want to find the location of mysqld.exe with the command prompt, you can follow the directions here.

    Step 1: Open a command prompt. Open a command prompt from Start >> Run >> cmd >> Enter.

    Step 2: Change directory. Change the default directory to the root directory by typing cd\ on the prompt. Here we are assuming that you have installed MySQL on your C: drive; if you have installed it on another drive, change to that drive letter.

    Step 3: Search the drive. Type dir mysqld.exe /s /p on the command prompt. It will search your directories and list the directory where mysqld.exe is located.

    Step 4: Change directory. Now change your command prompt location to the folder where mysqld.exe is located. In my case it is C:\Program Files\MySQL\MySQL Server 5.6\bin, hence I run: cd C:\Program Files\MySQL\MySQL Server 5.6\bin

    Step 5: Execute mysqld.exe. Now you can once again run mysqld.exe from your command prompt. You can use this method to search for pretty much any file with the help of the command prompt.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
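    As an aside (not from the original post), newer Windows versions ship a built-in search command that does the same job in one line, and a running server can report its own install location:

    ```
    :: Windows Vista and later: recursive file search
    where /R "C:\Program Files" mysqld.exe

    :: Or, from an open mysql client session:
    ::   SELECT @@basedir;
    ```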

  • RequestValidation Changes in ASP.NET 4.0

    - by Rick Strahl
    There’s been a change in the way the ValidateRequest attribute on WebForms works in ASP.NET 4.0. I noticed this today while updating a post on my WebLog all of which contain raw HTML and so all pretty much trigger request validation. I recently upgraded this app from ASP.NET 2.0 to 4.0 and it’s now failing to update posts. At first this was difficult to track down because of custom error handling in my app – the custom error handler traps the exception and logs it with only basic error information so the full detail of the error was initially hidden. After some more experimentation in development mode the error that occurs is the typical ASP.NET validate request error (‘A potentially dangerous Request.Form value was detetected…’) which looks like this in ASP.NET 4.0: At first when I got this I was real perplexed as I didn’t read the entire error message and because my page does have: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="NewEntry.aspx.cs" Inherits="Westwind.WebLog.NewEntry" MasterPageFile="~/App_Templates/Standard/AdminMaster.master" ValidateRequest="false" EnableEventValidation="false" EnableViewState="false" %> WTF? ValidateRequest would seem like it should be enough, but alas in ASP.NET 4.0 apparently that setting alone is no longer enough. Reading the fine print in the error explains that you need to explicitly set the requestValidationMode for the application back to V2.0 in web.config: <httpRuntime executionTimeout="300" requestValidationMode="2.0" /> Kudos for the ASP.NET team for putting up a nice error message that tells me how to fix this problem, but excuse me why the heck would you change this behavior to require an explicit override to an optional and by default disabled page level switch? You’ve just made a relatively simple fix to a solution a nasty morass of hard to discover configuration settings??? The original way this worked was perfectly discoverable via attributes in the page. Now you can set this setting in the page and get completely unexpected behavior and you are required to set what effectively amounts to a backwards compatibility flag in the configuration file. It turns out the real reason for the .config flag is that the request validation behavior has moved from WebForms pipeline down into the entire ASP.NET/IIS request pipeline and is now applied against all requests. Here’s what the breaking changes page from Microsoft says about it: The request validation feature in ASP.NET provides a certain level of default protection against cross-site scripting (XSS) attacks. In previous versions of ASP.NET, request validation was enabled by default. However, it applied only to ASP.NET pages (.aspx files and their class files) and only when those pages were executing. In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request. As a result, request validation errors might now occur for requests that previously did not trigger errors. 
To revert to the behavior of the ASP.NET 2.0 request validation feature, add the following setting in the Web.config file:

<httpRuntime requestValidationMode="2.0" />

However, we recommend that you analyze any request validation errors to determine whether existing handlers, modules, or other custom code accesses potentially unsafe HTTP inputs that could be XSS attack vectors.

Ok, so ValidateRequest on the form still works as it always has, but it’s actually the ASP.NET event pipeline, not WebForms, that’s throwing the above exception, since request validation is applied to every request that hits the pipeline. Creating the runtime override removes the HttpRuntime checking and restores the WebForms-only behavior. That fixes my immediate problem but still leaves me wondering, especially given the vague wording of the above explanation. One important detail is missing from the description above: the request validation is applied only to application/x-www-form-urlencoded POST content, not to all inbound POST data. When I first read this, it freaked me out, because it sounds like literally ANY request hitting the pipeline is affected. To make sure this is not really so, I created a quick handler:

public class Handler1 : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World <hr>" + context.Request.Form.ToString());
    }

    public bool IsReusable
    {
        get { return false; }
    }
}

and called it with Fiddler by posting some XML to the handler using the default form-urlencoded POST content type – and sure enough, hitting the handler also causes the request validation error and a 500 server response. Changing the content type to text/xml effectively fixes the problem, bypassing the request validation filter, so Web Service/AJAX handlers and custom modules/handlers that implement custom protocols aren’t affected as long as they work with special input content types. It also looks like multipart encoding does not trigger request validation in the runtime either, so this request also works fine:

POST http://rasnote/weblog/handler1.ashx HTTP/1.1
Content-Type: multipart/form-data; boundary=------7cf2a327f01ae
User-Agent: West Wind Internet Protocols 5.53
Host: rasnote
Content-Length: 40
Pragma: no-cache

<xml>asdasd</xml>
--------7cf2a327f01ae

*That* probably should trigger request validation – since it is a potential HTML form submission – but it doesn’t.

New Runtime Feature, Global Scope Only?
Ok, so request validation is now a runtime feature, but sadly it’s a feature that’s scoped to the ASP.NET runtime – effectively scoped to the entire running application/app domain. You can still manually force validation using Request.ValidateInput(), which gives you the option to do this in code, but realistically that will only work with requestValidationMode set to V2.0 as well, since the 4.0 mode auto-fires before code ever gets a chance to intercept the call. Given all that, the new setting in ASP.NET 4.0 seems to limit options and makes things more difficult and less flexible. Of course Microsoft gets to say ASP.NET is more secure by default because of it, but what good is that if you have to turn off this flag the very first time you need to allow one single request that bypasses request validation??? This is really shortsighted design… <sigh>

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET
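A footnote on the Request.ValidateInput() option mentioned above: since the 4.0 mode fires before user code runs, per-request opt-in only makes sense once requestValidationMode is back at 2.0. Here is a minimal sketch of a handler that opts in explicitly under that assumption (the handler name is illustrative, not from the article):

public class ValidatingHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // With requestValidationMode="2.0" the runtime no longer validates
        // handler requests automatically, so opt in per request here.
        // Throws HttpRequestValidationException on suspect input.
        context.Request.ValidateInput();

        context.Response.ContentType = "text/plain";
        context.Response.Write("Validated: " + context.Request.Form.ToString());
    }

    public bool IsReusable
    {
        get { return false; }
    }
}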

    Read the article

  • How to set conditional activation on taskflows?

    - by shantala.sankeshwar(at)oracle.com
    This article describes implementing conditional activation for taskflows.

Use Case Description
Suppose we have a taskflow dropped as a region on a page, and this region is enclosed in a popup. By default, when the page is loaded the respective region also gets loaded. Hence a region model needs to provide a viewId whenever one is requested. A consequence of this is that the TaskFlowRegionModel always has to initialize its task flow and execute the task flow's default activity in order to determine a viewId, even if the region is not visible on the page. This can lead to unnecessary performance overhead from executing task flows to generate viewIds for regions that are never visible. To improve performance, we need to set the taskflow binding's activation property to 'conditional'. Described below is a simple use case that shows exactly how to set conditional activation on taskflow bindings.

Steps:
1. Create an ADF Fusion web application.
2. Create business components for the Emp table.
3. Create a view criteria where deptno=:some_bind_variable.
4. Generate the EmpViewImpl.java file, write the code below, and expose the method to the client interface.

public void filterEmpRecords(Number deptNo) {
    // Code to filter the deptnos
    ensureVariableManager().setVariableValue("some_bind_variable", deptNo);
    this.applyViewCriteria(this.getViewCriteria("EmpViewCriteria"));
    this.executeQuery();
}

5. Create an ADF taskflow with page fragments and drop the above method on the taskflow.
6. Also drop the view activity (showEmp.jsff). Define a control flow case from the above method activity to the view activity, and set the method activity as the default activity.
7. Create the main.jspx page and drop the above taskflow as a region on this page.
8. Surround the region with a dialog and surround the dialog with a popup (its id is Popup1).
9. Drop a commandButton on the above page and insert af:showPopupBehavior inside the commandButton:
<af:commandButton text="show popup" id="cb1">
  <af:showPopupBehavior popupId="::Popup1"/>
</af:commandButton>
10. Now if we execute this main page, we will notice that the method action gets called even before the popup is launched. We can avoid this by setting the activation property of the taskflow to conditional.
11. Go to the bindings of the above main page, select the taskflow binding, set its activation property to 'conditional', and set its active property to the Boolean value #{Somebean.popupVisible}. By default its value should be false.
12. We need to set the above Boolean value to true only when the popup is launched. This can be achieved by inserting a setPropertyListener inside the popup:
<af:setPropertyListener from="true" to="#{Somebean.popupVisible}" type="popupFetch"/>
13. Now if we run the page, we will notice that the method action is not called; only when we click the 'show popup' button does the method action get called.
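For reference, after step 11 the taskflow executable binding in the page definition ends up looking roughly like the sketch below – the id and taskFlowId values here are illustrative, not taken from the article; the active expression is the one set in step 11:

<taskFlow id="emptaskflow1"
          taskFlowId="/WEB-INF/emp-taskflow.xml#emp-taskflow"
          activation="conditional"
          active="#{Somebean.popupVisible}"
          xmlns="http://xmlns.oracle.com/adf/controller/binding"/>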

    Read the article

  • The blogs I turn up to…

    - by DAXShekhar
    I turn the pages of the following blogs:

Satya Srikant (who is also my brother ;)) – http://geekswithblogs.net/ssmantha/Default.aspx
Kamal – http://kamalblogs.wordpress.com
Prabhat Samuel – http://geekswithblogs.net/Prabhats/Default.aspx
Vanya Kashperuk – http://kashperuk.blogspot.com/
Mfp’s two cents – http://blogs.msdn.com/mfp/

… more to add to the list … ;)

    Read the article

  • Updating a SharePoint master page via a solution (WSP)

    - by Kelly Jones
    In my last blog post, I wrote how to deploy a SharePoint theme using Features and a solution package.  As promised in that post, here is how to update an already deployed master page.

There are several ways to update a master page in SharePoint.  You could upload a new version to the master page gallery, or you could upload a new master page to the gallery and then set the site to use this new page.  Manually uploading your master page to the master page gallery might be the best option, depending on your environment.  For my client, I did these steps in code, which is what they preferred.

(Image courtesy of: http://www.joiningdots.net/blog/2007/08/sharepoint-and-quick-launch.html )

Before you decide which method you need to use, take a look at your existing pages.  Are they using the SharePoint dynamic token or the static token for the master page reference?  The wha, huh? So, there are four ways to tell an .aspx page hosted in SharePoint which master page it should use:

“~masterurl/default.master” – tells the page to use the default.master property of the site
“~masterurl/custom.master” – tells the page to use the custom.master property of the site
“~site/default.master” – tells the page to use the file named “default.master” in the site’s master page gallery
“~sitecollection/default.master” – tells the page to use the file named “default.master” in the site collection’s master page gallery

For more information about these tokens, take a look at this article on MSDN.

Once you determine which token your existing pages point to, you know which file you need to update.  If the ~masterurl tokens are used, you upload a new master page, either replacing the existing one or adding another one to the gallery.  If you’ve uploaded a new file with a new name, you’ll just need to set it as the master page, either through the UI (MOSS only) or through code (MOSS or WSS Feature receiver code – or using SharePoint Designer).

If the ~site or ~sitecollection tokens were used, then you’re limited to either replacing the existing master page or editing all of your existing pages to point to another master page.  In most cases, it probably makes sense to just replace the master page.

For my project, I’m working with WSS and the existing pages are set to the ~sitecollection token.  Based on this, I decided to just upload a new version of the existing master page (and not modify the dozens of existing pages).  Also, since my client prefers Features and solutions, I created a master page Feature and a corresponding Feature receiver.  For information on creating the elements and feature files, check out this post: http://sharepointmagazine.net/technical/development/deploying-the-master-page .

This works fine, unless you are overwriting an existing master page, which was my case.  You’ll run into errors because the master page file needs to be checked out, replaced, and then checked in.  I wrote code in my Feature Activated event handler to accomplish these steps.
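For reference, the elements file for such a Feature declares the master page as a Module. Here is a sketch of its shape, reconstructed from the comment embedded in the code below (the file names and paths are illustrative):

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Module Name="DefaultMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE">
    <File Url="myMaster.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE"
          Path="MasterPages/myMaster.master" />
  </Module>
</Elements>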
Here are the steps necessary in code:
1. Get the file name from the elements file of the Feature
2. Check out the file from the master page gallery
3. Upload the file to the master page gallery
4. Check in the file to the master page gallery

Here’s the code in my Feature Receiver:

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    try
    {
        SPElementDefinitionCollection col = properties.Definition.GetElementDefinitions(System.Globalization.CultureInfo.CurrentCulture);

        using (SPWeb curweb = GetCurWeb(properties))
        {
            foreach (SPElementDefinition ele in col)
            {
                if (string.Compare(ele.ElementType, "Module", true) == 0)
                {
                    // <Module Name="DefaultMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE">
                    //   <File Url="myMaster.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE"
                    //         Path="MasterPages/myMaster.master" />
                    // </Module>
                    string Url = ele.XmlDefinition.Attributes["Url"].Value;
                    foreach (System.Xml.XmlNode file in ele.XmlDefinition.ChildNodes)
                    {
                        string Url2 = file.Attributes["Url"].Value;
                        string Path = file.Attributes["Path"].Value;
                        string fileType = file.Attributes["Type"].Value;

                        if (string.Compare(fileType, "GhostableInLibrary", true) == 0)
                        {
                            // Check out the file in the master page gallery
                            SPFile existingFile = curweb.GetFile(Url + "/" + Url2);

                            if (existingFile != null)
                            {
                                if (existingFile.CheckOutStatus != SPFile.SPCheckOutStatus.None)
                                {
                                    throw new Exception("The master page file is already checked out. Please make sure the master page file is checked in, before activating this feature.");
                                }
                                else
                                {
                                    existingFile.CheckOut();
                                    existingFile.Update();
                                }
                            }

                            // Upload the file to the master page gallery
                            string filePath = System.IO.Path.Combine(properties.Definition.RootDirectory, Path);
                            string fileName = System.IO.Path.GetFileName(filePath);
                            char slash = Convert.ToChar("/");
                            string[] folders = existingFile.ParentFolder.Url.Split(slash);

                            if (folders.Length > 2)
                            {
                                Logger.logMessage("More than two folders were detected in the library path for the master page. Only two are supported.",
                                    Logger.LogEntryType.Information); // custom logging component
                            }

                            SPFolder myLibrary = curweb.Folders[folders[0]].SubFolders[folders[1]];

                            // using ensures the file stream is closed after the upload
                            using (FileStream fs = File.OpenRead(filePath))
                            {
                                SPFile newFile = myLibrary.Files.Add(fileName, fs, true);

                                myLibrary.Update();
                                newFile.CheckIn("Updated by Feature", SPCheckinType.MajorCheckIn);
                                newFile.Update();
                            }
                        }
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
        string msg = "Error occurred during feature activation";
        Logger.logException(ex, msg, ""); // custom logging component
    }
}

/// <summary>
/// Using a Feature's properties, get a reference to the current web
/// </summary>
/// <param name="properties"></param>
public SPWeb GetCurWeb(SPFeatureReceiverProperties properties)
{
    SPWeb curweb;

    // Check if the parent of the Feature is a web or a site collection
    if (properties != null && properties.Feature.Parent.GetType().ToString() == "Microsoft.SharePoint.SPWeb")
    {
        // Get web from parent
        curweb = (SPWeb)properties.Feature.Parent;
    }
    else
    {
        // Get web from site
        using (SPSite cursite = (SPSite)properties.Feature.Parent)
        {
            curweb = (SPWeb)cursite.OpenWeb();
        }
    }

    return curweb;
}

This did the trick.
It allowed me to update my existing master page through an easily repeatable process (which is great when you are working with more than one environment and want to do things like TEST it!).  I did run into what I would classify as a strange issue with one of my subsites, but that’s the topic for another blog post.
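As a side note, redeploying the updated WSP and reactivating the Feature can be scripted. A sketch using the stsadm commands available in WSS 3.0/MOSS 2007 (the solution, feature, and site names here are illustrative):

stsadm -o upgradesolution -name MasterPageSolution.wsp -filename MasterPageSolution.wsp -immediate -allowgacdeployment
stsadm -o execadmsvcjobs
stsadm -o activatefeature -name MasterPageFeature -url http://server/sites/mysite -force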

    Read the article

< Previous Page | 151 152 153 154 155 156 157 158 159 160 161 162  | Next Page >