Search Results

Search found 7165 results on 287 pages for 'template specialization'.


  • Using mod_rewrite for a Virtual Filesystem vs. Real Filesystem

    - by philtune
    I started working in a department that uses a CMS in which the entire "filesystem" is like this: create a named file or folder - this file is given a unique node (ex. 2345) as well as a default "filename" (ex. /WelcomeToOurProductsPage) and apply a template assign one or more aliases to the file for a URL redirect (ex. /home-page-products - can also be accessed by /home-page-products.aspx) A new Rewrite command is written on the .htaccess file for each and every alias Server accesses either /WelcomeToOurProductsPage or /home-page-products and redirects to something like /template.aspx?tmp=2&node=2345 (here I'm guessing what it does - I only have front-end access for now - but I have enough clues to strongly assume) Node 2345 grabs content stored in a SQL Db and applies it to the template. Note: There are no actual files being created on the filesystem. It's entirely virtual. This is probably a very common thing, but since I have never run across this kind of system before two months ago, I wanted to explain it in case it isn't common. I'm not a fan at all of ASP or closed-sourced systems, so it may be that this is common practice for ASP developers. My question, that has taken far too long to ask, is: what are the benefits of this kind of system, as opposed to creating an actual file hierarchy? Are there any drawbacks to having every single file server call redirected? To having the .htaccess file hold rewrite rules for every single alias?
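    (For comparison -- a sketch only, with a guessed handler path and parameter name: instead of adding a RewriteRule per alias, a single front-controller rule can hand every request that doesn't match a real file to the template handler and let the CMS resolve the alias against its SQL table.)
      RewriteEngine On
      # Only rewrite requests that don't correspond to a real file or directory
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      # Hypothetical: the application looks up "alias" in the database and picks the node/template
      RewriteRule ^(.*)$ /template.aspx?alias=$1 [L,QSA]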

    Read the article

  • Positioned element adding to total div height [migrated]

    - by Max
    I have an 800 x 600 rotatable image with forward and back buttons repositioned to the sides. The height of the div is supposed to be 600px, but the actual height of the div was pushed to 690px, including the height of the button image, and the div was blocking a row of clickable menu items on top. So I set the div height to 518px and moved the main image up with top: -75px to get the layout I want. But I feel dirty doing this... Is there a correct way to do this, or is this workaround more or less correct? Below is the code. Thanks!
      <div class="Content Wide" id="LayoutColumn1">
        <div style="width: 980px; height: 518px; display: block; position: relative; float: left;">
          <a href="#" onClick="prev();"><img src="/template/img/next_button.png" style="position: relative; top: 200px; left: 5px; z-index: 2;"></a>
          <a href="/chef-special/" id="mainLink"><img src="/template/img/chef_special_large.png" id="main" style="margin: 0 0 0 50px; position: relative; float: left; top: -75px; z-index: 1;"></a>
          <a href="#" onClick="next();"><img src="/template/img/next_button.png" style="position: relative; top: 200px; left: 787px; z-index: 2;"></a>
        </div>
      </div>
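    (A sketch of the usual fix, not tested against the real layout: make the two buttons position: absolute inside the relatively positioned wrapper. Absolutely positioned elements are taken out of normal flow, so they no longer add to the container's height, and the wrapper can keep its natural 600px without the negative top offset.)
      <div style="width: 980px; height: 600px; position: relative; float: left;">
        <!-- out of flow: the buttons no longer stretch the wrapper -->
        <a href="#" onClick="prev();"><img src="/template/img/next_button.png" style="position: absolute; top: 200px; left: 5px; z-index: 2;"></a>
        <a href="/chef-special/" id="mainLink"><img src="/template/img/chef_special_large.png" id="main" style="margin: 0 0 0 50px;"></a>
        <a href="#" onClick="next();"><img src="/template/img/next_button.png" style="position: absolute; top: 200px; right: 5px; z-index: 2;"></a>
      </div>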

    Read the article

  • Making an XSL stylesheet work with paged XML

    - by fudgey
    First off, here is the situation. I'm using a guild hosting site that allows you to input the URL to an XSL file and another input for the XML. All well and good when all of the XML you want is contained in one file. My problem is this Game Roster XML which is paginated... look near the bottom of the file and you will find a <page_links> section that contains a pager written in HTML with links to /xml?page=2 etc. Since the guild hosting site is set up to only process one XML page I can't get to the other XML pages. So, I can only think of two solutions, but I have no idea how to get started Set up a php page that combines all XML pages into one, then output that file. I can then use this URL in the guild hosting site XSL processor. Somehow combine all the XML files within the XSL stylesheet. I found this question on SO (I don't really understand it, because I don't know what the document($pXml1) is doing), but I don't think it will work since the number of pages will be variable. I think this might be possible by loading the next page until the <members_to> value equals the <members_total>. Any other ideas? I don't know XSL or php that well so any help with code examples would be greatly appreciated. Update: I'm trying method 2 above and here is a snippet of XSLT with which I am having trouble. The first page of the code displays without problems, but the I am having trouble with this xsl:if, or maybe it's the document() statement. Update #2: changed the document to use the string & concat functions, but it's still not working. <xsl:template name="morepages"> <xsl:param name="page">1</xsl:param> <xsl:param name="url"> <xsl:value-of select="concat(SuperGroup/profule_url,'/xml?page=')"/> </xsl:param> <xsl:if test="document(string(concat($url,$page)))/SuperGroup/members_to &lt; document(string(concat($url,$page)))/SuperGroup/members_total"> <xsl:for-each select="document(string(concat($url,$page + 1)))/SuperGroup/members/members_node"> <xsl:call-template name="addrow" /> </xsl:for-each> <!-- Increment page index--> <xsl:call-template name="morepages"> <xsl:with-param name="page" select="$page + 1"/> </xsl:call-template> </xsl:if> </xsl:template>
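    (A sketch of option 1 in PHP: the roster URL below is a placeholder, the <members_to>/<members_total> element names come from the feed described above, and DOMDocument::load() fetching a URL assumes allow_url_fopen is enabled.)
      <?php
      // Merge every page of the paginated roster into one XML document and serve
      // that combined file to the guild host's XSL processor.
      $base = 'http://example.com/roster/xml?page=';   // placeholder URL
      $out  = new DOMDocument('1.0', 'UTF-8');
      $root    = $out->appendChild($out->createElement('SuperGroup'));
      $members = $root->appendChild($out->createElement('members'));

      $page = 1;
      do {
          $src = new DOMDocument();
          $src->load($base . $page);                                   // fetch one page
          foreach ($src->getElementsByTagName('members_node') as $node) {
              $members->appendChild($out->importNode($node, true));    // copy the member rows
          }
          $to    = (int) $src->getElementsByTagName('members_to')->item(0)->nodeValue;
          $total = (int) $src->getElementsByTagName('members_total')->item(0)->nodeValue;
          $page++;
      } while ($to < $total);

      header('Content-Type: text/xml');
      echo $out->saveXML();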

    Read the article

  • How do I construct a Django reverse/url using query args?

    - by Andrew Dalke
    I have URLs like http://example.com/depict?smiles=CO&width=200&height=200 (and with several other optional arguments) My urls.py contains: urlpatterns = patterns('', (r'^$', 'cansmi.index'), (r'^cansmi$', 'cansmi.cansmi'), url(r'^depict$', cyclops.django.depict, name="cyclops-depict"), I can go to that URL and get the 200x200 PNG that was constructed, so I know that part works. In my template from the "cansmi.cansmi" response I want to construct a URL for the named template "cyclops-depict" given some query parameters. I thought I could do {% url cyclops-depict smiles=input_smiles width=200 height=200 %} where "input_smiles" is an input to the template via a form submission. In this case it's the string "CO" and I thought it would create a URL like the one at top. This template fails with a TemplateSyntaxError: Caught an exception while rendering: Reverse for 'cyclops-depict' with arguments '()' and keyword arguments '{'smiles': u'CO', 'height': 200, 'width': 200}' not found. This is a rather common error message both here on StackOverflow and elsewhere. In every case I found, people were using them with parameters in the URL path regexp, which is not the case I have where the parameters go into the query. That means I'm doing it wrong. How do I do it right? That is, I want to construct the full URL, including path and query parameters, using something in the template. For reference, % python manage.py shell Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> from django.core.urlresolvers import reverse >>> reverse("cyclops-depict", kwargs=dict()) '/depict' >>> reverse("cyclops-depict", kwargs=dict(smiles="CO")) Traceback (most recent call last): File "<console>", line 1, in <module> File "/Library/Python/2.6/site-packages/django/core/urlresolvers.py", line 356, in reverse *args, **kwargs))) File "/Library/Python/2.6/site-packages/django/core/urlresolvers.py", line 302, in reverse "arguments '%s' not found." % (lookup_view_s, args, kwargs)) NoReverseMatch: Reverse for 'cyclops-depict' with arguments '()' and keyword arguments '{'smiles': 'CO'}' not found.
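    (A sketch of the usual workaround: reverse() and {% url %} only ever resolve the path portion of a URL, so the query string has to be appended separately. depict_url is a made-up helper name; urlencode comes from the Python 2 standard library used in the shell session above.)
      from urllib import urlencode
      from django.core.urlresolvers import reverse

      def depict_url(smiles, width=200, height=200):
          path = reverse("cyclops-depict")          # -> '/depict'
          query = urlencode({"smiles": smiles, "width": width, "height": height})
          return "%s?%s" % (path, query)

      # depict_url("CO") -> '/depict?smiles=CO&width=200&height=200' (parameter order may vary)
      # Pass the result to the template via the view context or a custom template tag
      # instead of handing query arguments to {% url %}.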

    Read the article

  • Problem with jQuery: manipulating the val() property of an element

    - by P4ul
    Hi, Please help! I have some form elements in a div on a page: <div id="box"> <div id="template"> <div> <label for="username">Username</label> <input type="text" class="username" name="username[]" value="" / > <label for="hostname">hostname</label> <input type="text" name="hostname[]" value=""> </div> </div> </div> using jquery I would like to take a copy of #template, manipulate the values of the inputs and insert it after #template so the result would look something like: <div id="box"> <div id="template"> <div> <label for="username">Username</label> <input type="text" class="username" name="username[]" value="" / > <label for="hostname">hostname</label> <input type="text" name="hostname[]" value=""> </div> </div> <div> <label for="username">Username</label> <input type="text" class="username" name="username[]" value="paul" / > <label for="hostname">hostname</label> <input type="text" name="hostname[]" value="paul"> </div> </div> I am probably going about this the wrong way but the following test bit of javascript code run in firebug on the page does not seem to change the values of the inputs. var cp = $('#template').clone(); cp.children().children().each( function(i,d){ if( d.localName == 'INPUT' ){ $(d).val('paul'); //.css('background-color', 'red'); } }); $("#box").append(cp.html()); although if I uncomment "//.css('background-color', 'red');" the inputs will turn red.
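    (A sketch of one likely fix: .val() sets the value property of the input, and .html() serializes attributes, not properties -- which is also why .css() shows up, since it writes the style attribute. Appending the cloned nodes themselves, or writing the value attribute, avoids the problem.)
      // Clone the inner div (not the wrapper that carries the id), set the inputs,
      // and append the cloned DOM nodes directly instead of re-serializing with .html().
      var cp = $('#template > div').clone();
      cp.find('input').val('paul');
      $('#box').append(cp);

      // Alternative, if the .html() round trip is really needed:
      // cp.find('input').attr('value', 'paul');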

    Read the article

  • Speeding up templates in GAE-Py by aggregating RPC calls

    - by Sudhir Jonathan
    Here's my problem: class City(Model): name = StringProperty() class Author(Model): name = StringProperty() city = ReferenceProperty(City) class Post(Model): author = ReferenceProperty(Author) content = StringProperty() The code isn't important... its this django template: {% for post in posts %} <div>{{post.content}}</div> <div>by {{post.author.name}} from {{post.author.city.name}}</div> {% endfor %} Now lets say I get the first 100 posts using Post.all().fetch(limit=100), and pass this list to the template - what happens? It makes 200 more datastore gets - 100 to get each author, 100 to get each author's city. This is perfectly understandable, actually, since the post only has a reference to the author, and the author only has a reference to the city. The __get__ accessor on the post.author and author.city objects transparently do a get and pull the data back (See this question). Some ways around this are Use Post.author.get_value_for_datastore(post) to collect the author keys (see the link above), and then do a batch get to get them all - the trouble here is that we need to re-construct a template data object... something which needs extra code and maintenance for each model and handler. Write an accessor, say cached_author, that checks memcache for the author first and returns that - the problem here is that post.cached_author is going to be called 100 times, which could probably mean 100 memcache calls. Hold a static key to object map (and refresh it maybe once in five minutes) if the data doesn't have to be very up to date. The cached_author accessor can then just refer to this map. All these ideas need extra code and maintenance, and they're not very transparent. What if we could do @prefetch def render_template(path, data) template.render(path, data) Turns out we can... hooks and Guido's instrumentation module both prove it. If the @prefetch method wraps a template render by capturing which keys are requested we can (atleast to one level of depth) capture which keys are being requested, return mock objects, and do a batch get on them. This could be repeated for all depth levels, till no new keys are being requested. The final render could intercept the gets and return the objects from a map. This would change a total of 200 gets into 3, transparently and without any extra code. Not to mention greatly cut down the need for memcache and help in situations where memcache can't be used. Trouble is I don't know how to do it (yet). Before I start trying, has anyone else done this? Or does anyone want to help? Or do you see a massive flaw in the plan?
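    (A sketch of option 1 written as a reusable helper -- the pattern often published as prefetch_refprops. It relies only on get_value_for_datastore() and a batch db.get(), both mentioned above, so it is not the fully transparent @prefetch decorator being asked about.)
      from google.appengine.ext import db

      def prefetch_refprops(entities, *props):
          # Collect the reference keys without dereferencing them...
          fields = [(entity, prop) for entity in entities for prop in props]
          ref_keys = [prop.get_value_for_datastore(x) for x, prop in fields]
          # ...fetch them all in one batch get...
          ref_entities = dict((x.key(), x) for x in db.get(list(set(ref_keys))))
          # ...and patch the resolved entities back onto the models.
          for (entity, prop), ref_key in zip(fields, ref_keys):
              prop.__set__(entity, ref_entities[ref_key])
          return entities

      posts = Post.all().fetch(limit=100)
      prefetch_refprops(posts, Post.author)                       # one get for all authors
      prefetch_refprops([p.author for p in posts], Author.city)   # one get for all cities
      # The template then renders from already-resolved objects: 3 round trips instead of ~201.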

    Read the article

  • Is it OK to dynamic_cast "this" as a return value?

    - by Panayiotis Karabassis
    This is more of a design question. I have a template class, and I want to add extra methods to it depending on the template type. To practice the DRY principle, I have come up with this pattern (definitions intentionally omitted):
      template <class T>
      class BaseVector : public boost::array<T, 3>
      {
      protected:
          BaseVector<T>(const T x, const T y, const T z);
      public:
          bool operator == (const Vector<T> &other) const;
          Vector<T> operator + (const Vector<T> &other) const;
          Vector<T> operator - (const Vector<T> &other) const;
          Vector<T> &operator += (const Vector<T> &other)
          {
              (*this)[0] += other[0];
              (*this)[1] += other[1];
              (*this)[2] += other[2];
              return *dynamic_cast<Vector<T> * const>(this);
          }
      };

      template <class T>
      class Vector : public BaseVector<T>
      {
      public:
          Vector<T>(const T x, const T y, const T z) : BaseVector<T>(x, y, z) { }
      };

      template <>
      class Vector<double> : public BaseVector<double>
      {
      public:
          Vector<double>(const double x, const double y, const double z);
          Vector<double>(const Vector<int> &other);
          double norm() const;
      };
    I intend BaseVector to be nothing more than an implementation detail. This works, but I am concerned about operator+=. My question is: is the dynamic cast of the this pointer a code smell? Is there a better way to achieve what I am trying to do (avoid code duplication and unnecessary casts in the user code)? Or am I safe, since the BaseVector constructor is protected?
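    (A sketch of a commonly suggested alternative, not tested against the full class: pass the derived type to the base as a template parameter -- the curiously recurring template pattern -- so the base can return the derived type through a static_cast and no dynamic_cast or RTTI is needed.)
      #include <boost/array.hpp>
      #include <cmath>

      template <class T, class Derived>
      class BaseVector : public boost::array<T, 3>
      {
      protected:
          BaseVector(const T x, const T y, const T z)
          {
              (*this)[0] = x; (*this)[1] = y; (*this)[2] = z;
          }
      public:
          Derived &operator += (const Derived &other)
          {
              (*this)[0] += other[0];
              (*this)[1] += other[1];
              (*this)[2] += other[2];
              return *static_cast<Derived *>(this);   // safe: Derived inherits from this base
          }
      };

      template <class T>
      class Vector : public BaseVector<T, Vector<T> >
      {
      public:
          Vector(const T x, const T y, const T z) : BaseVector<T, Vector<T> >(x, y, z) { }
      };

      template <>
      class Vector<double> : public BaseVector<double, Vector<double> >
      {
      public:
          Vector(const double x, const double y, const double z)
              : BaseVector<double, Vector<double> >(x, y, z) { }
          // Extra method that only the double specialization gets
          double norm() const
          {
              return std::sqrt((*this)[0] * (*this)[0] + (*this)[1] * (*this)[1] + (*this)[2] * (*this)[2]);
          }
      };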

    Read the article

  • Can this code be broken?

    - by user105165
    Consider the below html string <p>This is a paragraph tag</p> <font>This is a font tag</font> <div>This is a div tag</div> <span>This is a span tag</span> This string is processed to tokanize the text found in it and we get 2 results as below 1) Token Array : $tokenArray == array( 'This is a paragraph tag', 'This is a div tag', '<font>This is a font tag</font>', '<span>This is a span tag</span>' ); 2) Tokenized template : $templateString == "<p>{0}</p>{2}<div>{1}</div>{3}"; If you observe, the sequence of the text strings segments from the original HTML strings is different from the tokenized template The PHP code below is used to order the tokenized template and accordingly the token array to match the original html string class CreateTemplates { public static $tokenArray = array(); public static $tokenArrayNew = array(); function foo($templateString,$tokenArray) { CreateTemplates::$tokenArray = $tokenArray; $ptn = "/{[0-9]*}*/"; // Search Pattern from the template string $templateString = preg_replace_callback($ptn,array(&$this, 'callbackhandler') ,$templateString); // function call return $templateString; } // Function defination private static function callbackhandler($matches) { static $newArr = array(); static $cnt; $tokenArray = CreateTemplates::$tokenArray; array_push($newArr, $matches[0]); CreateTemplates::$tokenArrayNew[count($newArr)] = $tokenArray[substr($matches[0],1,(strlen($matches[0])-2))]; $cnt = count($newArr)-1; return '{'.$cnt.'}'; } // function ends } // class ends Final output is (ordered template and token array) $tokenArray == array('This is a paragraph tag', '<font>This is a font tag</font>', 'This is a div tag', '<span>This is a span tag</span>' ); $templateString == "<p>{0}</p>{1}<div>{2}</div>{3}"; Which is the expected result. Now, I am not confident whether this is the right way to achieve this. I want to see how this code can be broken or not. Under what conditions will this code break? (important) Is there any other way to achieve this? (less important)
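    (One concrete way it breaks, plus a sketch of an alternative: callbackhandler keeps its state in static variables, so tokenizing a second template during the same request continues counting from the first one's index and produces wrong mappings. The version below -- assuming PHP 5.3+ closures -- does the same renumbering without any static state.)
      // Renumber placeholders in order of appearance and reorder the tokens to match.
      function reorderTemplate($templateString, array $tokenArray)
      {
          $orderedTokens = array();
          $template = preg_replace_callback(
              '/\{([0-9]+)\}/',
              function ($m) use ($tokenArray, &$orderedTokens) {
                  $orderedTokens[] = $tokenArray[(int) $m[1]];      // token for this slot
                  return '{' . (count($orderedTokens) - 1) . '}';   // new sequential index
              },
              $templateString
          );
          return array($template, $orderedTokens);
      }

      list($templateString, $tokenArray) = reorderTemplate($templateString, $tokenArray);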

    Read the article

  • Where are the function literals in C++?

    - by academicRobot
    First of all, maybe literals is not the right term for this concept, but its the closest I could think of (not literals in the sense of functions as first class citizens). The idea is that when you make a conventional function call, it compiles to something like this: callq <immediate address> But if you make a function call using a function pointer, it compiles to something like this: mov <memory location>,%rax callq *%rax Which is all well and good. However, what if I'm writing a template library that requires a callback of some sort with a specified argument list and the user of the library is expected to know what function they want to call at compile time? Then I would like to write my template to accept a function literal as a template parameter. So, similar to template <int int_literal> struct my_template {...};` I'd like to write template <func_literal_t func_literal> struct my_template {...}; and have calls to func_literal within my_template compile to callq <immediate address>. Is there a facility in C++ for this, or a work around to achieve the same effect? If not, why not (e.g. some cataclysmic side effects)? How about C++0x or another language? Solutions that are not portable are fine. Solutions that include the use of member function pointers would be ideal. I'm not particularly interested in being told "You are a <socially unacceptable term for a person of low IQ>, just use function pointers/functors." This is a curiosity based question, and it seems that it might be useful in some (albeit limited) applications. It seems like this should be possible since function names are just placeholders for a (relative) memory address, so why not allow more liberal use (e.g. aliasing) of this placeholder. p.s. I use function pointers and functions objects all the the time and they are great. But this post got me thinking about the don't pay for what you don't use principle in relation to function calls, and it seems like forcing the use of function pointers or similar facility when the function is known at compile time is a violation of this principle, though a small one.
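    (The closest existing facility, shown as a sketch: a function pointer can be a non-type template parameter, which makes it a compile-time constant -- the compiler knows the call target when it instantiates the template, so the call can be emitted as a direct callq, subject to the optimizer.)
      #include <iostream>

      void callback() { std::cout << "called\n"; }

      // The "function literal" is the template argument itself: F is a constant
      // expression, not a pointer loaded from memory at run time.
      template <void (*F)()>
      struct my_template
      {
          void run() const { F(); }   // target known at compile time
      };

      int main()
      {
          my_template<&callback> t;
          t.run();
          return 0;
      }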

    Read the article

  • C++: Templates for static functions?

    - by Rosarch
    I have a static Utils class. I want certain methods to be templated, but not the entire class. How do I do this? This fails:
      #pragma once
      #include <string>
      using std::string;

      class Utils
      {
      private:
          template<class InputIterator, class Predicate>
          static set<char> findAll_if_rec(InputIterator begin, InputIterator end, Predicate pred, set<char> result);

      public:
          static void PrintLine(const string& line, int tabLevel = 0);
          static string getTabs(int tabLevel);

          template<class InputIterator, class Predicate>
          static set<char> Utils::findAll_if(InputIterator begin, InputIterator end, Predicate pred);
      };
    Error:
      utils.h(10): error C2143: syntax error : missing ';' before '<'
      utils.h(10): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      utils.h(10): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
      utils.h(10): error C2238: unexpected token(s) preceding ';'
      utils.h(10): error C2988: unrecognizable template declaration/definition
      utils.h(10): error C2059: syntax error : '<'
    What am I doing wrong? What is the correct syntax for this? Incidentally, I'd like to templatize the return value, too. So instead of:
      template<class InputIterator, class Predicate>
      static set<char> findAll_if_rec(InputIterator begin, InputIterator end, Predicate pred, set<char> result);
    I'd have:
      template<class return_t, class InputIterator, class Predicate>
      static return_t findAll_if_rec(InputIterator begin, InputIterator end, Predicate pred, set<char> result);
    How would I specify that:
      1) return_t must be a set of some sort
      2) InputIterator must be an iterator
      3) InputIterator's type must work with return_t's type.
    Thanks.
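    (A sketch of the two fixes the compiler is actually asking for, plus the templated return type: set is never declared, because <set> isn't included and the std:: qualifier is missing, and the in-class declaration of findAll_if must not repeat the Utils:: qualifier. ReturnT can't be deduced from the arguments, so callers name it explicitly.)
      #pragma once
      #include <set>
      #include <string>

      class Utils
      {
      private:
          template<class ReturnT, class InputIterator, class Predicate>
          static ReturnT findAll_if_rec(InputIterator begin, InputIterator end,
                                        Predicate pred, ReturnT result)
          {
              if (begin == end)
                  return result;
              if (pred(*begin))
                  result.insert(*begin);
              return findAll_if_rec(++begin, end, pred, result);
          }

      public:
          static void PrintLine(const std::string &line, int tabLevel = 0);
          static std::string getTabs(int tabLevel);

          template<class ReturnT, class InputIterator, class Predicate>
          static ReturnT findAll_if(InputIterator begin, InputIterator end, Predicate pred)
          {
              return findAll_if_rec(begin, end, pred, ReturnT());
          }
      };

      // usage: std::set<char> hits = Utils::findAll_if<std::set<char> >(s.begin(), s.end(), pred);
    The three constraints at the end can't be expressed in C++03 itself; they are normally documented, checked with type traits and static assertions, or, in C++20, written as concepts.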

    Read the article

  • Go programming: POST FormValue can't be printed

    - by poor_programmer
    Before I being a bit of background, I am very new to go programming language. I am running go on Win 7, latest go package installer for windows. I'm not good at coding but I do like some challenge of learning a new language. I wanted to start learn Erlang but found go very interesting based on the GO I/O videos in youtube. I'm having problem with capturing POST form values in GO. I spend three hours yesterday to get go to print a POST form value in the browser and failed miserably. I don't know what I'm doing wrong, can anyone point me to the right direction? I can easily do this in another language like C#, PHP, VB, ASP, Rails etc. I have search the entire interweb and haven't found a working sample. Below is my sample code. Here is Index.html page {{ define "title" }}Homepage{{ end }} {{ define "content" }} <h1>My Homepage</h1> <p>Hello, and welcome to my homepage!</p> <form method="POST" action="/"> <p> Enter your name : <input type="text" name="username"> </P> <p> <button>Go</button> </form> <br /><br /> {{ end }} Here is the base page <!DOCTYPE html> <html lang="en"> <head> <title>{{ template "title" . }}</title> </head> <body> <section id="contents"> {{ template "content" . }} </section> <footer id="footer"> My homepage 2012 copy </footer> </body> </html> now some go code package main import ( "fmt" "http" "strings" "html/template" ) var index = template.Must(template.ParseFiles( "templates/_base.html", "templates/index.html", )) func GeneralHandler(w http.ResponseWriter, r *http.Request) { index.Execute(w, nil) if r.Method == "POST" { a := r.FormValue("username") fmt.Fprintf(w, "hi %s!",a); //<-- this variable does not rendered in the browser!!! } } func helloHandler(w http.ResponseWriter, r *http.Request) { remPartOfURL := r.URL.Path[len("/hello/"):] fmt.Fprintf(w, "Hello %s!", remPartOfURL) } func main() { http.HandleFunc("/", GeneralHandler) http.HandleFunc("/hello/", helloHandler) http.ListenAndServe("localhost:81", nil) } Thanks! PS: Very tedious to add four space before every line of code in stackoverflow especially when you are copy pasting. Didn't find it very user friendly or is there an easier way?
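    (A sketch of one way to restructure the handler, assuming current Go 1 import paths -- the snippet above still uses the pre-Go 1 "http" package -- and with the /hello/ handler omitted: pass the submitted value into the template as data, and let index.html render it with {{ .Username }}, instead of printing after the already-rendered page.)
      package main

      import (
          "html/template"
          "net/http"
      )

      var index = template.Must(template.ParseFiles(
          "templates/_base.html",
          "templates/index.html",
      ))

      // Template data: index.html can render {{ .Username }} inside its "content" block.
      type page struct {
          Username string
      }

      func generalHandler(w http.ResponseWriter, r *http.Request) {
          data := page{}
          if r.Method == "POST" {
              data.Username = r.FormValue("username") // FormValue parses the form on first use
          }
          index.Execute(w, data)
      }

      func main() {
          http.HandleFunc("/", generalHandler)
          http.ListenAndServe("localhost:81", nil)
      }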

    Read the article

  • Are there any CMS editors out there with which users can populate locked-down HTML templates with content?

    - by Deep
    Hi there, We work in email marketing, creating HTML/TEXT emails for clients. In essence we design HTML email templates for our clients. Clients then post us content (via a form) to populate these templates before we send them out. Right now we do this manually, basically cutting and pasting the content from their submitted form into the relevant parts of the template, which is time consuming and particularly mind-numbing. What we're looking for (and have so far been unable to find) is a simple system which will allow us to capture this client content in a sort of WYSIWYG HTML format. Basically they populate a locked down version of the template, entering text where necessary, before submitting to us. This is our most basic requirement, and a friend of mine kindly demo'd a proof of concept here: http://advantageone.co.uk/mbe/ Note: If you click on a text area in the body of the template, an editor pop ups. Now what we are looking for a CMS editor out there which can be easily adapted to do the above and the following for our end clients? User login View previously submitted campaigns that they have created and edit these Create new - selecting from template (assigned to their user/client id), perhaps being able to add new rows to the template. And have these HTML templates locked down so they can only edit what they're allowed too (like in the demo above), and perhaps make some areas required. Perhaps have a simple workflow or approval built in Allow us to lock submitted campaigns after a point so they can't be further edited, and as administrators view all campaigns from all users Be so incredibly simple, with any extraneous functionality switched off Essentially an extremley simple stripped down CMS, but we use the outputted HTML for sending out as an email, rather than publishing onto the web. Now to the actual dilemma: we're looking for something really simple, and the above sounds like a CMS. But we haven't been able to find anything that already does, or can be easily adapted to do this. Everything is either too complex, or simple and inflexible. We're sure there must be something off the shelf available, rather than us coding something ourselves. But we've kind of got stuck. Does anyone know of a system, or could recommend a system that can do the above out of the box, or with a few days tweaking? Forgive me if this is a little disjointed, if I'm being incredibly dopey and there is something out there please let me know! Kind regards, Dp.

    Read the article

  • What are the Rails best practices for JavaScript templates in RESTful/resourceful controllers?

    - by numbers1311407
    First, 2 common (basic) approaches: # returning from some FoosController method respond_to do |format| # 1. render the javascript directly format.js { render :json => @foo.to_json } # 2. render the default template, say update.js.erb format.js { render } end # in update.js.erb $('#foo').html("<%= escape_javascript(render(@foo)) %>") These are obviously simple cases but I wanted to illustrate what I'm talking about. I believe that these are also the cases expected by the default responder in rails 3 (either the action-named default template or calling to_#{format} on the resource.) The Issues With 1, you have total flexibility on the view side with no worries about the template, but you have to manipulate the DOM directly via javascript. You lose access to helpers, partials, etc. With 2, you have partials and helpers at your disposal, but you're tied to the one template (by default at least). All your views that make JS calls to FoosController use the same template, which isn't exactly flexible. Three Other Approaches (none really satisfactory) 1.) Escape partials/helpers I need into javascript beforehand, then inserting them into the page after, using string replacement to tailor them to the results returned (subbing in name, id, etc). 2.) Put view logic in the templates. For example, looking for a particular DOM element and doing one thing if it exists, another if it does not. 3.) Put logic in the controller to render different templates. For example, in a polymorphic belongs to where update might be called for either comments/foo or posts/foo, rendering commnts/foos/update.js.erb versus posts/foos/update.js.erb. I've used all of these (and probably others I'm not thinking of). Often in the same app, which leads to confusing code. Are there best practices for this sort of thing? It seems like a common enough use-case that you'd want to call controllers via Ajax actions from different views and expect different things to happen (without having to do tedious things like escaping and string-replacing partials and helpers client side). Any thoughts?

    Read the article

  • Can someone help me install MySQL server, please? This is bugging me...

    - by Alex
    $ sudo aptitude install mysql-server Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done The following NEW packages will be installed: libhtml-template-perl{a} mysql-server mysql-server-5.0{a} mysql-server-core-5.0{a} 0 packages upgraded, 4 newly installed, 0 to remove and 12 not upgraded. Need to get 0B/27.7MB of archives. After unpacking 91.1MB will be used. Do you want to continue? [Y/n/?] y Writing extended state information... Done Preconfiguring packages ... Selecting previously deselected package mysql-server-core-5.0. (Reading database ... 17022 files and directories currently installed.) Unpacking mysql-server-core-5.0 (from .../mysql-server-core-5.0_5.1.30really5.0.75-0ubuntu10.3_amd64.deb) ... Selecting previously deselected package mysql-server-5.0. Unpacking mysql-server-5.0 (from .../mysql-server-5.0_5.1.30really5.0.75-0ubuntu10.3_amd64.deb) ... Selecting previously deselected package libhtml-template-perl. Unpacking libhtml-template-perl (from .../libhtml-template-perl_2.9-1_all.deb) ... Selecting previously deselected package mysql-server. Unpacking mysql-server (from .../mysql-server_5.1.30really5.0.75-0ubuntu10.3_all.deb) ... Setting up mysql-server-core-5.0 (5.1.30really5.0.75-0ubuntu10.3) ... Setting up mysql-server-5.0 (5.1.30really5.0.75-0ubuntu10.3) ... * Stopping MySQL database server mysqld [ OK ] /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1 Setting up libhtml-template-perl (2.9-1) ... dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: mysql-server-5.0 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) A package failed to install. Trying to recover: Setting up mysql-server-5.0 (5.1.30really5.0.75-0ubuntu10.3) ... * Stopping MySQL database server mysqld [ OK ] /var/lib/dpkg/info/mysql-server-5.0.postinst: line 144: /etc/mysql/conf.d/old_passwords.cnf: No such file or directory dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: mysql-server-5.0 mysql-server Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done Writing extended state information... Done Before I installed it, I ran this: sudo aptitude purge mysql-server mysql-server-5.0 This has happened before. I remember before, I did something with dpkg with fixed it.

    Read the article

  • Nagios notification definitions

    - by Colin
    I am trying to monitor a web server in such a way that I want to search for a particular string on a page via http. The command is defined in command.cfg as follows # 'check_http-mysite command definition' define command { command_name check_http-mysite command_line /usr/lib/nagios/plugins/check_http -H mysite.example.com -s "Some text" } # 'notify-host-by-sms' command definition define command { command_name notify-host-by-sms command_line /usr/bin/send_sms $CONTACTPAGER$ "Nagios - $NOTIFICATIONTYPE$ :Host$HOSTALIAS$ is $HOSTSTATE$ ($OUTPUT$)" } # 'notify-service-by-sms' command definition define command { command_name notify-service-by-sms command_line /usr/bin/send_sms $CONTACTPAGER$ "Nagios - $NOTIFICATIONTYPE$: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ ($OUTPUT$)" } Now if nagios doesn't find "Some text" on the home page mysite.example.com, nagios should notify a contact via sms through the Clickatell http API which I have a script for that that I have tested and found that it works fine. Whenever I change the command definition to search for a string which is not on the page, and restart nagios, I can see on the web interface that the string was not found. What I don't understand is why isn't the notification sent though I have defined the host, hostgroup, contact, contactgroup and service and so forth. What I'm I missing, these are my definitions, In my web access through the cgi I can see that I have notifications have been defined and enabled though I don't get both email and sms notifications during hard status changes. host.cfg define host { use generic-host host_name HAL alias IBM-1 address xxx.xxx.xxx.xxx check_command check_http-mysite } *hostgroups_nagios2.cfg* # my website define hostgroup{ hostgroup_name my-servers alias All My Servers members HAL } *contacts_nagios2.cfg* define contact { contact_name colin alias Colin Y service_notification_period 24x7 host_notification_period 24x7 service_notification_options w,u,c,r,f,s host_notification_options d,u,r,f,s service_notification_commands notify-service-by-email,notify-service-by-sms host_notification_commands notify-host-by-email,notify-host-by-sms email [email protected] pager +254xxxxxxxxx } define contactgroup{ contactgroup_name site_admin alias Site Administrator members colin } *services_nagios2.cfg* # check for particular string in page via http define service { hostgroup_name my-servers service_description STRING CHECK check_command check_http-mysite use generic-service notification_interval 0 ; set > 0 if you want to be renotified contacts colin contact_groups site_admin } Could someone please tell me where I'm going wrong. 
Here are the generic-host and generic-service definitions *generic-service_nagios2.cfg* # generic service template definition define service{ name generic-service ; The 'name' of this service template active_checks_enabled 1 ; Active service checks are enabled passive_checks_enabled 1 ; Passive service checks are enabled/accepted parallelize_check 1 ; Active service checks should be parallelized (disabling this can lead to major performance problems) obsess_over_service 1 ; We should obsess over this service (if necessary) check_freshness 0 ; Default is to NOT check service 'freshness' notifications_enabled 1 ; Service notifications are enabled event_handler_enabled 1 ; Service event handler is enabled flap_detection_enabled 1 ; Flap detection is enabled failure_prediction_enabled 1 ; Failure prediction is enabled process_perf_data 1 ; Process performance data retain_status_information 1 ; Retain status information across program restarts retain_nonstatus_information 1 ; Retain non-status information across program restarts notification_interval 0 ; Only send notifications on status change by default. is_volatile 0 check_period 24x7 normal_check_interval 5 retry_check_interval 1 max_check_attempts 4 notification_period 24x7 notification_options w,u,c,r contact_groups site_admin register 0 ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL SERVICE, JUST A TEMPLATE! } *generic-host_nagios2.cfg* define host{ name generic-host ; The name of this host template notifications_enabled 1 ; Host notifications are enabled event_handler_enabled 1 ; Host event handler is enabled flap_detection_enabled 1 ; Flap detection is enabled failure_prediction_enabled 1 ; Failure prediction is enabled process_perf_data 1 ; Process performance data retain_status_information 1 ; Retain status information across program restarts retain_nonstatus_information 1 ; Retain non-status information across program restarts max_check_attempts 10 notification_interval 0 notification_period 24x7 notification_options d,u,r contact_groups site_admin register 1 ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL HOST, JUST A TEMPLATE! }

    Read the article

  • Creating a Build Definition using the TFS 2010 API

    - by Jakob Ehn
    In this post I will show how to create a new build definition in TFS 2010 using the TFS API. When creating a build definition manually, using Team Explorer, the necessary steps are lined out in the New Build Definition Wizard:     So, lets see how the code looks like, using the same order. To start off, we need to connect to TFS and get a reference to the IBuildServer object: TfsTeamProjectCollection server = newTfsTeamProjectCollection(newUri("http://<tfs>:<port>/tfs")); server.EnsureAuthenticated(); IBuildServer buildServer = (IBuildServer) server.GetService(typeof (IBuildServer)); General First we create a IBuildDefinition object for the team project and set a name and description for it: var buildDefinition = buildServer.CreateBuildDefinition(teamProject); buildDefinition.Name = "TestBuild"; buildDefinition.Description = "description here..."; Trigger Next up, we set the trigger type. For this one, we set it to individual which corresponds to the Continuous Integration - Build each check-in trigger option buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual; Workspace For the workspace mappings, we create two mappings here, where one is a cloak. Note the user of $(SourceDir) variable, which is expanded by Team Build into the sources directory when running the build. buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map); buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak); Build Defaults In the build defaults, we set the build controller and the drop location. To get a build controller, we can (for example) use the GetBuildController method to get an existing build controller by name: buildDefinition.BuildController = buildServer.GetBuildController(buildController); buildDefinition.DefaultDropLocation = @\\SERVER\Drop\TestBuild; Process So far, this wasy easy. Now we get to the tricky part. TFS 2010 Build is based on Windows Workflow 4.0. The build process is defined in a separate .XAML file called a Build Process Template. By default, every new team team project containtwo build process templates called DefaultTemplate and UpgradeTemplate. In this sample, we want to create a build definition using the default template. We use te QueryProcessTemplates method to get a reference to the default for the current team project   //Get default template var defaultTemplate = buildServer.QueryProcessTemplates(teamProject).Where(p => p.TemplateType == ProcessTemplateType.Default).First(); buildDefinition.Process = defaultTemplate;   There are several build process templates that can be set for the default build process template. Only one of these are required, the ProjectsToBuild parameters which contains the solution(s) and configuration(s) that should be built. To set this info, we use the ProcessParameters property of thhe IBuildDefinition interface. The format of this property is actually just a serialized dictionary (IDictionary<string, object>) that maps a key (parameter name) to a value which can be any kind of object. This is rather messy, but fortunately, there is a helper class called WorkflowHelpers inthe Microsoft.TeamFoundation.Build.Workflow namespace, that simplifies working with this persistence format a bit. 
The following code shows how to set the BuildSettings information for a build definition: //Set process parameters varprocess = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters); //Set BuildSettings properties BuildSettings settings = newBuildSettings(); settings.ProjectsToBuild = newStringList("$/pathToProject/project.sln"); settings.PlatformConfigurations = newPlatformConfigurationList(); settings.PlatformConfigurations.Add(newPlatformConfiguration("Any CPU", "Debug")); process.Add("BuildSettings", settings); buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process); The other build process parameters of a build definition can be set using the same approach   Retention  Policy This one is easy, we just clear the default settings and set our own: buildDefinition.RetentionPolicyList.Clear(); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All); Save It! And we’re done, lets save the build definition: buildDefinition.Save(); That’s it!

    Read the article

  • Dependency Replication with TFS 2010 Build

    - by Jakob Ehn
    Some time ago, I wrote a post about how to implement dependency replication using TFS 2008 Build. We use this for Library builds, where we set up a build definition for a common library, and have the build check the resulting assemblies back into source control. The folder is then branched to the applications that need to reference the common library. See the above post for more details. Of course, we have reimplemented this feature in TFS 2010 Build, which results in a much nicer experience for the developer who wants to setup a new library build. Here is how it looks: There is a separate build process template for library builds registered in all team projects The following properties are used to configure the library build: Deploy Folder in Source Control is the server path where the assemblies should be checked in DeploymentFiles is a list of files and/or extensions to what files to check in. Default here is *.dll;*.pdb which means that all assemblies and debug symbols will be checked in. We can also type for example CommonLibrary.*;SomeOtherAssembly.dll in order to exclude other assemblies You can also see that we are versioning the assemblies as part of the build. This is important, since the resulting assemblies will be deployed together with the referencing application.   When the build executes, it will see of the matching assemblies exist in source control, if not, it will add the files automatically:   After the build has finished, we can see in the history of the TestDeploy folder that the build service account has in fact checked in a new version: Nice!   The implementation of the library build process template is not very complicated, it is a combination of customization of the build process template and some custom activities. We use the generic TFActivity (http://geekswithblogs.net/jakob/archive/2010/11/03/performing-checkins-in-tfs-2010-build.aspx) to check in and out files, but for the part that checks if a file exists and adds it to source control, it was easier to do this in a custom activity:   public sealed class AddFilesToSourceControl : BaseCodeActivity { // Files to add to source control [RequiredArgument] public InArgument<IEnumerable<string>> Files { get; set; } [RequiredArgument] public InArgument<Workspace> Workspace { get; set; } // If your activity returns a value, derive from CodeActivity<TResult> // and return the value from the Execute method. protected override void Execute(CodeActivityContext context) { foreach (var file in Files.Get(context)) { if (!File.Exists(file)) { throw new ApplicationException("Could not locate " + file); } var ws = this.Workspace.Get(context); string serverPath = ws.TryGetServerItemForLocalItem(file); if( !String.IsNullOrEmpty(serverPath)) { if (!ws.VersionControlServer.ServerItemExists(serverPath, ItemType.File)) { TrackMessage(context, "Adding file " + file); ws.PendAdd(file); } else { TrackMessage(context, "File " + file + " already exists in source control"); } } else { TrackMessage(context, "No server path for " + file); } } } } This build template is a very nice tool that makes it easy to do dependency replication with TFS 2010. Next, I will add funtionality for automatically merging the assemblies (using ILMerge) as part of the build, we do this to keep the number of references to a minimum.

    Read the article

  • jQuery Templates on Microsoft Ajax CDN

    - by Stephen Walther
    The beta version of the jQuery Templates plugin is now hosted on the Microsoft Ajax CDN. You can start using the jQuery Templates plugin in your application by referencing both jQuery 1.4.2 and jQuery Templates from the CDN. Here are the two script tags that you will want to use when developing an application: <script type="text/javascript" src=”http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js”></script> <script type="text/javascript" src=”http://ajax.microsoft.com/ajax/jquery.templates/beta1/jquery.tmpl.js”></script> In addition, minified versions of both files are available from the CDN: <script type="text/javascript" src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.min.js"></script> <script type="text/javascript" src="http://ajax.microsoft.com/ajax/jquery.templates/beta1/jquery.tmpl.min.js"></script> Here’s a full code sample of using jQuery Templates from the CDN to display pictures of cats from Flickr: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Cats</title> <style type="text/css"> html { background-color:Orange; } #catBox div { width:250px; height:250px; border:solid 1px black; background-color:White; margin:5px; padding:5px; float:left; } #catBox img { width:200px; height: 200px; } </style> </head> <body> <h1>Cat Photos!</h1> <div id="catBox"></div> <script type="text/javascript" src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.min.js"></script> <script type="text/javascript" src="http://ajax.microsoft.com/ajax/jquery.templates/beta1/jquery.tmpl.min.js"></script> <script id="catTemplate" type="text/x-jquery-tmpl"> <div> <b>${title}</b> <br /> <img src="${media.m}" /> </div> </script> <script type="text/javascript"> var url = "http://api.flickr.com/services/feeds/groups_pool.gne?id=44124373027@N01&lang=en-us&format=json&jsoncallback=?"; // Grab some flickr images of cats $.getJSON(url, function (data) { // Format the data using the catTemplate template $("#catTemplate").tmpl(data.items).appendTo("#catBox"); }); </script> </body> </html> This page displays a list of cats retrieved from Flickr: Notice that the cat pictures are retrieved and rendered with just a few lines of code: var url = "http://api.flickr.com/services/feeds/groups_pool.gne?id=44124373027@N01&lang=en-us&format=json&jsoncallback=?"; // Grab some flickr images of cats $.getJSON(url, function (data) { // Format the data using the catTemplate template $("#catTemplate").tmpl(data.items).appendTo("#catBox"); }); The final line of code, the one that calls the tmpl() method, uses the Templates plugin to render the cat photos in a template named catTemplate. The catTemplate template is contained within a SCRIPT element with type="text/x-jquery-tmpl". The jQuery Templates plugin is an “official” jQuery plugin which will be included in jQuery 1.5 (the next major release of jQuery). You can read the full documentation for the plugin at the jQuery website: http://api.jquery.com/category/plugins/templates/ The jQuery Templates plugin is still beta so we would really appreciate your feedback on the plugin. Let us know if you use the Templates plugin in your website.

    Read the article

  • Announcing the ASP.NET and Web Tools 2012.2 Release Candidate

    - by ScottGu
    This week the ASP.NET and Visual Web Developer teams delivered the Release Candidate of the ASP.NET and Web Tools 2012.2 update (formerly ASP.NET Fall 2012 Update BUILD Prerelease). This update extends the existing ASP.NET runtime and adds new web tooling to Visual Studio 2012. Whether you use Web Forms, MVC, Web API, or any other ASP.NET technology, there is something cool in this update for you. You can download and install the RC today: http://www.asp.net/vnext. Great ASP.NET Enhancements This update adds new ASP.NET templates and features, including: New ASP.NET MVC templates. Creating Facebook applications just became easier using the new Facebook Application template. In just a few easy steps you can create a Facebook application that gets data from the logged in user as well as integrates with their friends. A new Single Page Application template allows developers to build interactive client-side web apps using Knockout, jQuery, and ASP.NET Web API. Real-time communication support with ASP.NET SignalR.  This enables you to easily take advantage of the new WebSocket support in .NET 4.5, while also automatically degrading to long-polling and other protocols for older clients.  If you haven’t tried SignalR yet you should – it is awesome. New ASP.NET Web API functionality, including support for OData, integrated tracing, and automatically generating help page documentation for your API. New ASP.NET Friendly URL functionality. This new feature makes it very easy for Web Forms developers to generate cleaner looking URLs (without the .aspx extension). The Friendly URLs feature also makes it easier for developers to add mobile support to their applications with support for mobile .ASPX pages and  supporting switching between desktop and mobile views. It can be used with existing ASP.NET v4.0 applications. Visual Studio 2012 Web publishing enhancements. Web site projects now have the same publish experience as web application projects (including to Windows Azure Web Sites), and you can selectively publish files, see the differences between local and remote files, and update local to remote files or vice versa. Visual Studio 2012 Page Inspector enhancements. JavaScript selection mapping is now supported, and you can CSS updates in real-time. Visual Studio 2012 editor support for Knockout IntelliSense and pasting JSON as a .NET class (which makes it even easier to consume Web APIs from others). Visual Studio 2012 Project Template updates, including the latest versions of jQuery, jQuery UI, jQuery Validation, Modernirz, Knockout and more… How it is delivered You can download and install an integrated setup that contains the above enhancements today from http://www.asp.net/vnext. The new runtime functionality is delivered to ASP.NET via additional NuGet packages. This means that installing this update does not make any changes to the existing ASP.NET binaries, and thus does not cause any compatibility issues with existing projects. New projects will contain the new functionality and existing projects can be updated with the new NuGet packages. Summary Web development is changing, and ASP.NET is rapidly delivering new capabilities to developers that help them take full advantage of new capabilities.  The ASP.NET and Web Tools 2012.2 update installs in minutes without altering the current ASP.NET run time components. For a complete description see the Release Notes. Next week I plan to publish a tutorial showing how to build a cool Facebook application using the new Facebook template. 
Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Combining Scrum, TFS2010 and Email to keep everyone in the loop

    - by Martin Hinshelwood
    Often you will receive rich information from your Product Owner (Customer) about tasks. That information can be in the form of Word documents, HTML Emails and Pictures, but you generally receive them in the context of an Email. You need to keep these so your Team can refer to it later, and so you can send a “done” when the task has been completed. This preserves the “history” of the task and allows you to keep relevant partied included in any future conversation. At SSW we keep the original email so that we can reply Done and delete the email. But keeping it in your email does not help other members of the team if they complete the task and need to send the “done”. Worse yet, the description field in Team Foundation Server 2010 (TFS 2010) does not support HTML and images, nor does the default task template support an “interested parties” or CC field. You can attach this content manually, but it can be time consuming. Figure: Description only supports plain text, and History supports HTML with no images   What should we do? At SSW we always follow the rules, and it just so happened that we have rules to both achieve this, and to make it easier. You should follow the existing Rules to Better Project Management  and attach the email to your task so you can refer to and reply to it later when you close the task: Do you know what Outlook add-ins you need? Describe the work item request in an email Use Outlook Add-in to move the email to a TFS Work Item When replying to an email with “done” you should follow: Do you update Team Companion template, so the email "subject" doesn't change? Do you update Team Companion template, so you can generate a proper "done" mail? Following these simple rules will help your Product Owner understand you better, and allow your team to more effectively collaborate with each other. An added bonus is that as we are keeping the email history in sync with TFS. When you “reply all” to the email all of the interested partied to the Task are also included. This notified those that may have been blocked by your task to keep up to date with its status. This has been published as Do you know to ensure that relevant emails are attached to tasks in our Rules to Better Scrum using TFS. What could we do better? I would like to see this process automated so that we capture the information correctly in the task without the need to use email. This would require a change to the process template in Team Foundation Server to add an “Interested Parties” field. Each reply to the email would need to be automatically processed into a Work Item. This could be done by adding a task identifier as the first item in the “Relates to” email header, and copying in an email address that you watch. This would then parse out the relevant information and add the new message to the history, update the “Interested parties” field and attach the Images. Upon reflection, it may even be possible, but more difficult to do this using ONLY the History field and including some of the header information in there to the build a done email with history. This would not currently deal with email “forks” well, but I think it would be adequate for our needs. It would be nice if we could find time to implement this, but currently it is but a pipe dream. Maybe Microsoft could implement something in the next version of Team Foundation Server, and in the mean time we have a process that works well. Technorati Tags: Scrum,SSW Rules,TFS 2010,TFS 2008

    Read the article

  • Creating shapes on the fly

    - by Bertrand Le Roy
    Most Orchard shapes get created from part drivers, but they are a lot more versatile than that. They can actually be created from pretty much anywhere, including from templates. One example can be found in the Layout.cshtml file of the ThemeMachine theme: WorkContext.Layout.Footer .Add(New.BadgeOfHonor(), "5"); .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } What this is really doing is create a new shape called BadgeOfHonor and injecting it into the Footer global zone (that has not yet been defined, which in itself is quite awesome) with an ordering rank of "5". We can actually come up with something simpler, if we want to render the shape inline instead of sending it into a zone: @Display(New.BadgeOfHonor()) Now let's try something a little more elaborate and create a new shape for displaying a date and time: @Display(New.DateTime(date: DateTime.Now, format: "d/M/yyyy")) For the moment, this throws a "Shape type DateTime not found" exception because the system has no clue how to render a shape called "DateTime" yet. The BadgeOfHonor shape above was rendering something because there is a template for it in the theme: Themes/ThethemeMachine/Views/BadgeOfHonor.cshtml. We need to provide a template for our new shape to get rendered. Let's add a DateTime.cshtml file into our theme's Views folder in order to make the exception go away: Hi, I'm a date time shape. Now we're just missing one thing. Instead of displaying some static text, which is not very interesting, we can display the actual time that got passed into the shape's dynamic constructor. Those parameters will get added to the template's Model, so they are easy to retrieve: @(((DateTime)Model.date).ToString(Model.format)) Now that may remind you a little of WebForm's user controls. That's a fair comparison, except that these shapes are much more flexible (you can add properties on the fly as necessary), and that the actual rendering is decoupled from the "control". For example, any theme can override the template for a shape, you can use alternates, wrappers, etc. Most importantly, there is no lifecycle and protocol abstraction like there was in WebForms. I think this is a real improvement over previous attempts at similar things.

    Read the article

  • Oracle Unified Method (OUM) 6.1

    - by user714714
    ORACLE® UNIFIED METHOD RELEASE 6.1: Oracle’s Full Lifecycle Method for Deploying Oracle-Based Business Solutions. About: Oracle is evolving the Oracle® Unified Method (OUM) to achieve the vision of supporting the entire Enterprise IT Lifecycle, including support for the successful implementation of every Oracle product. OUM replaces Legacy Methods, such as AIM Advantage, AIM for Business Flows, EMM Advantage, PeopleSoft's Compass, and Siebel's Results Roadmap. OUM provides an implementation approach that is rapid, broadly adaptive, and business-focused. OUM includes a comprehensive project and program management framework and materials to support Oracle's growing focus on enterprise-level IT strategy, architecture, and governance. Release: OUM release 6.1 provides support for Application Implementation, Cloud Application Services Implementation, and Software Upgrade projects, as well as the complete range of technology projects including Business Intelligence (BI), Enterprise Security, WebCenter, Service-Oriented Architecture (SOA), Application Integration Architecture (AIA), Business Process Management (BPM), Enterprise Integration, and Custom Software. Detailed techniques and tool guidance are provided, including a supplemental guide related to Oracle Tutor and UPK. This release features:
    - Project Manager and Consultant views that provide quick access to material relevant to each role
    - OUM Cloud Application Services Implementation Approach, Solution Delivery Guide 3.0, and Project Workplan Template
    - OUM Microsoft Project Workplan Template and User's Guide updated to facilitate review and removal of out-of-scope Activities and Tasks
    - MC.050 Application Setup Template available in Microsoft Excel format in addition to Microsoft Word format
    - BT.070 Abbreviated Project Management Framework Presentation Template
    - Envision examples for Enterprise Organization Structures (BA.020), Enterprise Business Context Diagram (BA.045), and High-Level Use Cases (BA.060)
    - Implement examples for System Context Diagram (RD.005), Business Use Case Model (RA.015), Use Case Model (RA.023), MoSCoW List (RD.045), and Analysis Specification (AN.100)
    - A Home Page drop-down menu that allows access to the method by Role, Supplemental Guidance, Method Repository, or View
    For a comprehensive list of features and enhancements, refer to the "What's New" page of the Method Pack. Upcoming releases will provide expanded support for Oracle's Enterprise Application suites, including product-suite-specific materials and guidance for tailoring OUM to support various engagement types. Access for Oracle Customers: Oracle customers may obtain copies of the method for their internal use – including guidelines, templates, and tailored work breakdown structure – by contracting with Oracle for a consulting engagement of two weeks or longer and meeting some additional minimum criteria. Customers who have a signed consulting contract with Oracle and meet the engagement qualification criteria are permitted to download the current release of OUM for their perpetual use. They may also obtain subsequent releases published during a renewable, three-year access period. Training courses are also available to these customers. Contact your local Oracle Sales Representative about enrolling in the OUM Customer Program.
Access for Oracle PartnerNetwork (OPN) Diamond, Platinum, and Gold Partners: OPN Diamond, Platinum, and Gold Partners are able to access the OUM method pack, training courses, and collateral from the OPN Portal at no additional cost:
    1. Go to the OPN Portal at partner.oracle.com.
    2. Select "Sign In / Register for Account".
    3. Sign In.
    4. From the Product Resources section, select "Applications".
    5. From the Applications page, locate and select the "Oracle Unified Method" link.
    6. From the Oracle Unified Method Knowledge Zone, locate the "I want to:" section.
    7. From the "I want to:" section, locate and select "Implement Solutions".
    8. From the Implement Solutions page, locate the "Best Practices" section.
    9. Locate and select the "Download Oracle Unified Method (OUM)" link.
    Previous Announcements: Oracle Unified Method (OUM) Release 6.1; Oracle Unified Method (OUM) Release 6.0; Oracle Unified Method (OUM) Release 5.6; Oracle Unified Method (OUM) Release 5.5; Oracle Unified Method (OUM) Release 5.4; Oracle EMM Advantage Retired; Retirement of Oracle EMM Advantage Planned for December 01, 2011

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that changes often get reverted and then re-added in the space of a few dozen to a few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is, what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as small as moving a single line of data one row down or splitting what used to be one data field into two data fields. A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with the series of variants that fits that set of files. This is because, more often than not, most of the file will remain consistent over a long period, only marred by one or two errant sections; within any given period, though, which section is the inconsistent one varies. For example, say a file has four sections with three data fields each, represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, sections 1-3 are consistent, section 4 is one row down, but a section 5 appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section 5 disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three data fields, but section 3 is removed entirely. Files 7800-8000 have everything in order. The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version, a section_two_variants might have three- and five-field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database. Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
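    To make the proposed matching function concrete, a minimal Python sketch of the variant registry might look like the following. All section names, field names, and cell coordinates here are invented for illustration; the real values would come from the actual templates, and the extracted dict can go straight into MongoDB with pymongo:

        # Minimal sketch of the variant-registry idea; section/field names and
        # cell coordinates are invented for illustration.
        from bisect import bisect_right

        # Each variant maps a logical field name to a (row, column) coordinate
        # in the parsed spreadsheet.
        SECTION_4_NORMAL = {"voltage": (20, 2), "current": (21, 2), "temp": (22, 2)}
        SECTION_4_SHIFTED = {k: (row + 1, col) for k, (row, col) in SECTION_4_NORMAL.items()}

        # Per section: (first file number, variant) pairs, sorted by file number.
        # A variant stays in force until the next entry takes over.
        VARIANTS = {
            "section_4": [
                (0, SECTION_4_NORMAL),
                (7000, SECTION_4_SHIFTED),
                (7800, SECTION_4_NORMAL),
            ],
        }

        def variant_for(section: str, file_number: int) -> dict:
            """Return the cell mapping in force for this section at this file number."""
            starts = [start for start, _ in VARIANTS[section]]
            return VARIANTS[section][bisect_right(starts, file_number) - 1][1]

        def extract(section: str, file_number: int, cells: dict) -> dict:
            """Pull one section's fields out of a {(row, col): value} dict of parsed cells."""
            return {field: cells.get(coord)
                    for field, coord in variant_for(section, file_number).items()}

        if __name__ == "__main__":
            # Fake parsed spreadsheet for file 7100, where section 4 sits one row lower.
            cells = {(21, 2): 12.1, (22, 2): 0.45, (23, 2): 71.0}
            print(extract("section_4", 7100, cells))
            # -> {'voltage': 12.1, 'current': 0.45, 'temp': 71.0}
            # The dict can be inserted as-is with pymongo, e.g.
            #   db.test_results.insert_one({"file": 7100, **extract("section_4", 7100, cells)})

    Reading the raw cells with openpyxl (or xlrd for older .xls files) and keeping all layout knowledge in one registry means a newly discovered variant is a one-entry change rather than a new schema class.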

    Read the article

  • Overriding the Pager rendering in Orchard

    - by Bertrand Le Roy
    The Pager shape that is used in Orchard to render pagination is one of those shapes that are built in code rather than in a Razor template. This can make it a little more confusing to override, but nothing is impossible. If we look at the Pager method in CoreShapes, here is what we see: [Shape] public IHtmlString Pager(dynamic Shape, dynamic Display) { Shape.Metadata.Alternates.Clear(); Shape.Metadata.Type = "Pager_Links"; return Display(Shape); } The Shape attribute signals a shape method. All it does is remove any alternates that may exist and replace the type of the shape with "Pager_Links". The Pager_Links shape method, in turn, is rather large and complicated, but it renders as a set of smaller shapes: a List with a "pager" class, and under that Pager_First, Pager_Previous, Pager_Gap, for each page a Pager_Link or a Pager_Current, then Pager_Gap, Pager_Next and Pager_Last. Each of these shapes can be displayed or not depending on the properties of the pager. Each can also be overridden with a Razor template. This can be done by dropping a file into the Views folder of your theme. For example, if you want the current page to appear between square brackets, you could drop this Pager.CurrentPage.cshtml into your Views folder: <span>[@Model.Value]</span> This overrides the original shape method, which was this: [Shape] public IHtmlString Pager_CurrentPage(HtmlHelper Html, dynamic Display, object Value) { var tagBuilder = new TagBuilder("span"); tagBuilder.InnerHtml = Html.Encode(Value is string ? (string)Value : Display(Value)); return MvcHtmlString.Create(tagBuilder.ToString()); } The current page number then renders between square brackets. Now what if we want to completely hide the pager if there is only one page? Well, the easiest way to do that is to override the Pager shape by dropping the following into the Views folder of your theme: @{ if (Model.TotalItemCount > Model.PageSize) { Model.Metadata.Alternates.Clear(); Model.Metadata.Type = "Pager_Links"; @Display(Model) } } And that's it. The code in this template just adds a check on the number of items to display (in a template, Model is the shape) and only displays the Pager_Links shape if it knows that there is going to be more than one page.

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >