Search Results

Search found 2819 results on 113 pages for 'conditional expressions'.


  • Drop down table and jquery

    - by Marcelo
    Hi, I'm trying to make a drop down table using jQuery, with code similar to this (from the topic "Conditional simple drop down list?"):

        <body>
        <div id="myQuestions">
          <select id="QuestionOptions">
            <option value="A">Question A</option>
            <option value="B">Question B</option>
          </select>
        </div>
        <div id="myAnswers">
          <div id="A" style="display: none;">
            <div id="QuestionC">
              <p>Here is an example question C.</p>
            </div>
            <div id="QuestionD">
              <select id="QuestionOptionsD">
                <option value="G">Question G</option>
                <option value="H">Question H</option>
              </select>
            </div>
          </div>
          <div id="B" style="display: none;">
            <div id="QuestionE">
              <p>Here is an example question E.</p>
            </div>
            <div id="QuestionF">
              <select id="QuestionOptionsF">
                <option value="I">Question I</option>
                <option value="J">Question J</option>
              </select>
            </div>
          </div>
        </div>

    And the jQuery part:

        $(function () {
          $('#QuestionOptions').change(function () {
            $('#myAnswers > div').hide();
            $('#myAnswers').find('#' + $(this).val()).show();
          });
        });

    My problem is that once I put the "myQuestions" part into a table and start laying out the "myAnswers" part the same way, the dynamic part of the table stops working: the myAnswers part is no longer hidden, it is shown from the beginning. I tried putting everything in one table, then I tried creating one table for myQuestions and another table for myAnswers, and neither worked. Does anyone know where I'm going wrong? Sorry for any mistakes in English, I'm not a native speaker. Thanks in advance.
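    One common cause worth checking (an assumption based on the description, not a confirmed answer): once the answer divs are wrapped in table cells, they are no longer direct children of #myAnswers, so the '>' child selector in the hide() call stops matching anything. A minimal sketch, assuming the answer blocks are given a hypothetical "answer" class:

        // Sketch: select by class at any depth instead of relying on
        // '#myAnswers > div', which breaks when a <table> wraps the divs.
        $(function () {
          $('#QuestionOptions').change(function () {
            $('#myAnswers .answer').hide();   // matches even inside <td> cells
            $('#myAnswers').find('#' + $(this).val()).show();
          });
        });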

  • Do You Really Know Your Programming Languages?

    - by Kristopher Johnson
    I am often amazed at how little some of my colleagues know or care about their craft. Something that constantly frustrates me is that people don't want to learn any more than they need to about the programming languages they use every day. Many programmers seem content to learn some pidgin sub-dialect and stick with that. If they see a keyword or construct that they aren't familiar with, they'll complain that the code is "tricky." What would you think of a civil engineer who shied away from calculus because it had "all those tricky math symbols"?

    I'm not suggesting that we all need to become "language lawyers." But if you make your living as a programmer, and claim to be a competent user of language X, then I think at a minimum you should know the following:
    - the keywords of the language and what they do;
    - the valid syntactic forms;
    - how memory, files, and other operating system resources are managed;
    - where the official language specification and library reference for the language can be found.

    The last one is the one that really gets me. Many programmers seem to have no idea that there is a "specification" or "standard" for any particular language. I still talk to people who think that Microsoft invented C++, and that if a program doesn't compile under VC6, it's not a valid C++ program. Programmers these days have it easy when it comes to obtaining specs. Newer languages like C#, Java, Python, Ruby, etc. all have their documentation available for free from the vendors' web sites. Older languages and platforms often have standards controlled by standards bodies that demand payment for specs, but even that shouldn't be a deterrent: the C++ standard is available from ISO for $30 (and why am I the only person I know who has a copy?). Programming is hard enough even when you do know the language. If you don't, I don't see how you have a chance.

    What do the rest of you think? Am I right, or should we all be content with the typical level of programming language expertise?

    Update: Several great comments here, thanks. A couple of people hit on something that I didn't think about: what really irks me is not the lack of knowledge, but the lack of curiosity and willingness to learn. It seems some people don't have any time to hone their craft, but they have plenty of time to write lots of bad code. And I don't expect people to be able to recite a list of keywords or EBNF expressions, but I do expect that when they see some code, they should have some inkling of what it does. Few people have complete knowledge of every dark corner of their language or platform, but everyone should at least know enough that, when they see something unfamiliar, they know how to get whatever additional information they need to understand it.

  • How to use Facebook graph API to retrieve fan photos uploaded to wall of fan page?

    - by Joe
    I am creating an external photo gallery using PHP and the Facebook Graph API. It pulls thumbnails as well as the large images from albums on our Facebook fan page. Everything works perfectly, except I'm only able to retrieve photos that an ADMIN posts to our page (graph.facebook.com/myalbumid/photos). Is there a way to use the Graph API to load publicly uploaded photos from fans? I want to retrieve the pictures from the "Photos from" album, but getting the ID for the graph query is not like other albums... it looks like this: http://www.facebook.com/media/set/?set=o.116860675007039 Another note: the only way I've come close to retrieving this data is by using the "feed" option, i.e. graph.facebook.com/pageid/feed

    EDIT: This is about as far as I could get. It works, but has the issues stated below. Maybe someone could expand on this, or provide a better solution. (Using the FB PHP SDK)

        <?php
        require_once('config.php');

        // get all tagged items for the page
        $photos = $facebook->api("/YourID/tagged");
        $maxitem = 10;
        $count = 0;
        foreach ($photos['data'] as $photo) {
            if ($photo['type'] == "photo"):
                echo "<img src='{$photo['picture']}' />", "<br />";
            endif;
            $count += 1;
            if ($count >= $maxitem) break;
        }
        ?>

    Issues with this:
    1) Since I don't know of a way to query for specific "types" of tags, I had to use a conditional statement to display only photos.
    2) You cannot effectively use "?limit=#" with this, because as I said the "tagged" query contains all types (photo, video, and status). So if you are building a photo gallery and want to avoid iterating an entire query result by using ?limit, you will lose images.
    3) The only content that shows up in the "tagged" query is from people who are not Admins of the page. This isn't the end of the world, but I don't understand why Facebook wouldn't allow yourself to be shown in this data as long as you posted it "as yourself" and not as the page.
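    A possible workaround for issue 2 (a sketch only, not a tested answer: YOUR_PAGE_ID is a placeholder, and the 'paging'/'next' field layout follows the Graph API responses of the time): keep paging through /tagged until enough photos have been collected, instead of relying on a single ?limit:

        <?php
        // Sketch: page through /tagged until we have $wanted photos,
        // skipping the video/status items mixed into the result.
        $wanted = 10;
        $photos = array();
        $params = array('limit' => 25);
        do {
            $batch = $facebook->api('/YOUR_PAGE_ID/tagged', 'GET', $params);
            foreach ($batch['data'] as $item) {
                if ($item['type'] == 'photo') {
                    $photos[] = $item['picture'];   // thumbnail URL
                    if (count($photos) >= $wanted) break 2;
                }
            }
            // follow the 'next' cursor if Facebook returned one
            $hasNext = isset($batch['paging']['next']);
            if ($hasNext) {
                parse_str(parse_url($batch['paging']['next'], PHP_URL_QUERY), $params);
            }
        } while ($hasNext);
        ?>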

  • gcc optimization? bug? and its practical implications for a project

    - by kumar_m_kiran
    Hi all, my question is divided into three parts.

    Question 1

    Consider the code below:

        #include <iostream>
        using namespace std;

        int main(int argc, char *argv[])
        {
            const int v = 50;
            int i = 0x7FFFFFFF;

            cout << (i + v) << endl;

            if (i + v < i) {
                cout << "Number is negative" << endl;
            } else {
                cout << "Number is positive" << endl;
            }
            return 0;
        }

    No specific compiler optimisation options (no -O flags) are used; the executable is built with the basic command g++ -o test main.cpp. This seemingly very simple code behaves oddly on SUSE 64-bit, gcc version 4.1.2. The expected output is "Number is negative"; only on SUSE 64-bit is the output "Number is positive". After some analysis and a 'disass' of the code, I find that the compiler optimises as follows: since i appears on both sides of the comparison and cannot change within the same expression, remove i from the equation. The comparison then reduces to if (v < 0), where v is a positive constant, so during compilation the address of the else-branch cout is placed in the register directly; no cmp/jmp instructions can be found. I see this behaviour only with gcc 4.1.2 on SUSE 10; on AIX 5.1/5.3 and HP IA64 the result is as expected. Is the above optimisation valid? Or is relying on int overflow this way not a valid use case?

    Question 2

    When I change the conditional statement from if (i + v < i) to if ((i + v) < i), the behaviour is the same. With this, at least, I would personally disagree: since additional parentheses are provided, I expect the compiler to create a temporary built-in-type variable and then compare, thus nullifying the optimisation.

    Question 3

    Suppose I have a huge code base and I migrate my compiler version; such a bug/optimisation can cause havoc in my system's behaviour. Of course, from a business perspective it is very ineffective to test all lines of code again just because of a compiler upgrade. I think for all practical purposes these kinds of errors are very difficult to catch (during an upgrade) and will invariably leak to the production site. Can anyone suggest a possible way to ensure that this kind of bug/optimisation does not have any impact on my existing system/code base?

    PS: When the const on v is removed from the code, the optimisation is not done by the compiler. I believe it is perfectly fine to use the overflow mechanism to find out whether the variable is within MAX - 50 (in my case).
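    For reference (background knowledge, not from the original post): signed integer overflow is undefined behaviour in C++, which is exactly what licenses the compiler to fold the branch away, so the optimisation is valid. A check that stays within defined behaviour compares against the remaining headroom before adding:

        #include <climits>
        #include <iostream>

        int main()
        {
            const int v = 50;
            int i = INT_MAX;

            // Well-defined overflow test: compare against the headroom left
            // below INT_MAX instead of performing the overflowing addition.
            if (i > INT_MAX - v) {
                std::cout << "i + v would overflow\n";
            } else {
                std::cout << "i + v fits in an int\n";
            }
            return 0;
        }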

  • PHP Regex: How to match anything except a pattern between two tags

    - by Ryan
    Hello, I am attempting to match a string composed of HTML. Basically it is an image gallery, so there is a lot of similarity in the string. There are a lot of <dl> tags in the string, but I am looking to match the last <dl>(.?)+</dl> combo that comes before a </div>. The way I've devised to do this is to make sure there aren't any <dl's inside the <dl></dl> combo I'm matching. I don't care what else is there, including other tags and line breaks. I decided I had to do it with regular expressions because I can't predict how long this substring will be or what's inside it.

    Here is my current regex, which only returns me an array with two NULL indices:

        preg_match_all('/<dl((?!<dl).)+<\/dl>(?=<\/div>)/', $foo, $bar)

    As you can see, I use negative lookahead to try to see whether there is another <dl> within this one. I've also tried negative lookbehind here with the same results, and I've tried using +? instead of just + to no avail. Keep in mind that there's no nested pattern like <dl><dl></dl> or anything, but my regex either matches from the first <dl> to the last </dl> or nothing at all. Now I realize . won't match line breaks, but I've tried everything I could imagine there and it still either gives me the NULL indices or nearly the whole string (from the very first occurrence of <dl to </dl></div>, which includes several other occurrences of <dl>, exactly what I didn't want). I honestly don't know what I'm doing incorrectly. Thanks for your help! I've spent over an hour just trying to straighten out this one problem and it's about driven me to pulling my hair out.
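    One likely fix (an assumption from the description, not from the thread): without the s modifier, . stops at line breaks, so a multi-line <dl> block can never match, and the lookahead also needs to tolerate whitespace before </div>. A minimal sketch:

        <?php
        // Sketch: same tempered-dot idea, with two likely fixes --
        //   's' modifier so '.' also matches line breaks, and
        //   optional whitespace allowed between </dl> and </div>.
        preg_match_all(
            '/<dl\b(?:(?!<dl).)*<\/dl>(?=\s*<\/div>)/s',
            $foo,
            $bar
        );
        print_r($bar[0]);
        ?>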

  • div "top" bug IE and everything else. Big problem

    - by Victor
    Hi everyone. I am new to CSS, so please help me with this problem; I hope to describe it right. I am making a div named "content" where my site content goes. I made it with z-index: -1 so that an image can sit over this div. But in Chrome, FF and Safari, the content became inactive: I can't select text, click on links or type in the forms. So I tried positive values for z-index, but IE doesn't know what those mean. Damn. So I decided to make a conditional div. Here is the CSS:

        .content {
            background: #FFF;
            width: 990px;
            position: relative;
            float: left;
            top: 50px;
        }
        .content_IE {
            background: #FFF;
            width: 990px;
            position: relative;
            float: left;
            top: 50px;
            z-index: -1;
        }

    and here is the HTML:

        <!--[if IE 7]>
        <div class="content_IE" style="height:750px;">
        <![endif]-->
        <div class="content" style="height:550px;">

    Everything is fine with the z-index, but the problem is: if there is no top in the .content class, everything looks fine in IE but there is no space in the other browsers. If I put the top: 50px back, there's another 50px, like padding, in the .content_IE class. I mean the page looks as if I'd set both top: 50px and padding-top: 50px. I've tried everything like margin-top: -50px, padding-top: -50px and stuff like that, but I am still going in circles. It looks fine only if there is no top option in the .content class. Please help.
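    One common way out (an assumption about the intent, since the image markup isn't shown, and .overlay-image is a hypothetical class): avoid negative z-index entirely. It is the negative value that makes the text unclickable in Chrome/FF/Safari, and old IE handles it inconsistently; keeping both layers at non-negative indices sidesteps both problems and removes the need for a conditional div:

        /* Sketch: stack the layers with non-negative z-index values. */
        .content {
            position: relative;
            z-index: 0;          /* stays clickable in all browsers */
            float: left;
            width: 990px;
            top: 50px;
            background: #FFF;
        }
        .overlay-image {         /* hypothetical class for the image */
            position: absolute;
            z-index: 1;          /* painted above .content */
        }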

  • What is the fastest way to convert bool to byte?

    - by Amir Rezaei
    What is the fastest way to convert bool to byte? I want this mapping: False = 0, True = 1. Note: I don't want to use any if statement.

    Update: I don't want to use a conditional statement. I don't want the CPU to stall or mispredict the next instruction. I want to optimize this code:

        private static string ByteArrayToHex(byte[] barray)
        {
            char[] c = new char[barray.Length * 2];
            byte k;
            for (int i = 0; i < barray.Length; ++i)
            {
                k = ((byte)(barray[i] >> 4));
                c[i * 2] = (char)(k > 9 ? k + 0x37 : k + 0x30);
                k = ((byte)(barray[i] & 0xF));
                c[i * 2 + 1] = (char)(k > 9 ? k + 0x37 : k + 0x30);
            }
            return new string(c);
        }

    Update: The length of the array is very large - it's on the order of terabytes! Therefore I need to optimize where possible. I shouldn't need to explain myself; the question is still valid.

    Update: I'm working on a project and looking at others' code. That's why I didn't provide the function in the first place. I didn't want to spend time explaining things to people when they have opinions about the code. I shouldn't need to provide the background of my work in my question, or a function that was not written by me. I have started to optimize it part by part. If I needed help with the whole function I would ask that in another question. That is why I asked this very simply at the beginning. Unfortunately people couldn't keep to the question, so please, if you want to help, answer the question.

    Update: For those who want to see the point of this question, this example shows how two if statements could be removed from the code:

        byte A = k > 9;  // if this were possible: (k > 9) == 0 || 1
        c[i * 2] = A * (k + 0x37) - (A - 1) * (k + 0x30);
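    A branch-free rewrite of the hot loop (a sketch, not from the thread): precompute a 512-entry lookup table once, so the per-byte work contains no comparisons at all and the bool-to-byte question disappears entirely:

        // Sketch: branch-free hex conversion via a precomputed table.
        // The two '?:' operators run once per table entry at startup,
        // not once per input byte.
        private static readonly char[] HexPairs = BuildHexPairs();

        private static char[] BuildHexPairs()
        {
            var t = new char[256 * 2];
            for (int b = 0; b < 256; ++b)
            {
                int hi = b >> 4, lo = b & 0xF;
                t[b * 2]     = (char)(hi > 9 ? hi + 0x37 : hi + 0x30);
                t[b * 2 + 1] = (char)(lo > 9 ? lo + 0x37 : lo + 0x30);
            }
            return t;
        }

        private static string ByteArrayToHex(byte[] barray)
        {
            var c = new char[barray.Length * 2];
            for (int i = 0; i < barray.Length; ++i)
            {
                c[i * 2]     = HexPairs[barray[i] * 2];
                c[i * 2 + 1] = HexPairs[barray[i] * 2 + 1];
            }
            return new string(c);
        }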

  • XML serialization options in .NET

    - by Borek
    I'm building a service that returns XML (no SOAP, no ATOM, just plain old XML). Say that I have my domain objects already filled with data and just need to transform them to the XML format. What options do I have in .NET?

    Requirements:
    - The transformation is not 1:1. Say that I have an Address property of type Address with nested properties like Line1, City, Postcode etc. This may need to result in XML like <xaddr city="...">Line1, Postcode</xaddr>, i.e. quite different.
    - Some XML elements/attributes are conditional; for example, if a Customer is under 18, the XML needs to contain some additional information.
    - I only need to serialize the objects to XML; the other direction (XML to objects) is not important.
    - Some technologies, e.g. Data Contracts, use .NET attributes. Other means of configuration (external XML config, buddy classes etc.) would be a plus.

    Here are the options as I see them at the moment. Corrections/additions will be very welcome.
    1. String concatenation - forget it, it was a joke :)
    2. LINQ to XML - complete control, but quite a lot of hand-written code; would need a good suite of unit tests.
    3. View engines in ASP.NET MVC (or even Web Forms, theoretically), with the logic in controllers. It's a question how to structure it: I can have a simple rules engine in my controller(s) and one view template per possible output, or have the decision logic directly in the template. Both have upsides and downsides.
    4. XML Serialization - I'm not sure about the flexibility here.
    5. Data Contracts from WCF - not sure about the flexibility either; plus, would they work in a simple ASP.NET MVC app (a non-WCF service)? Are they a super-set of standard XML serialization now?
    6. If it exists, some XML-to-object mapper. The more I think about it, the more I think I'm looking for something like this, but I couldn't find anything appropriate.

    Any comments / other options?
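    For option 2, conditional elements fall out naturally from LINQ to XML's functional construction, because null content is simply skipped. A minimal sketch (Customer, Address and Age come from the question's example; GuardianName is a hypothetical property added for illustration):

        // Sketch: the conditional operator yields null for adults,
        // and XElement silently drops null content.
        static XElement ToXml(Customer c)
        {
            return new XElement("customer",
                new XElement("xaddr",
                    new XAttribute("city", c.Address.City),
                    c.Address.Line1 + ", " + c.Address.Postcode),
                c.Age < 18
                    ? new XElement("guardian", c.GuardianName)  // hypothetical
                    : null);
        }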

  • Issues with subnavigation menus changing positions when using CSS in IE7

    - by Jacinda Littlefield
    The subnavigation menus (located just below the blue tabbed navigation) show up in a different position in IE7; they display correctly in Firefox and IE8: https://www.diservio.com/newsite/vehicle/auto-insurance.html

    I created a separate IE7 CSS file and added conditional comments in the HTML. Here are the properties I modified:

        #topnav { padding-bottom: 10px; }
        #subnavbg h3 { margin: -663px 0 0 -340px; }
        #subnavmenu ul { margin: -663px 0 0 -340px; }
        #leftsubnav { margin: -601px 0 0 -550px; }

    Here's a portion of what the HTML looks like; all of the divs are nested inside the main div #container (not displayed):

        <div id="subplacement">
          <div id="maincontent">
            ...
            <h1>Auto Insurance</h1>
            <p>At Tony DiServio Insurance we know how important it is for you to protect yourself and your loved ones when you get behind the wheel.</p>
          </div>
        </div>
        ...
        <div id="subnavmenu">
          <ul>
            <li><a href="auto-insurance.html" id="autolink" title="Auto Insurance Link"><span>Auto</span></a></li>
            <li><a href="motorcycle-insurance.html" id="cyclelink" title="Motorcycle Insurance Link"><span>Motorcycle</span></a></li>
            <li><a href="boat-insurance.html" id="boatlink" title="Boat Insurance Link"><span>Boat</span></a></li>
          </ul>
        </div>
        <div id="leftsubnav">
          <ul>
            <li><a href="auto-coverage.html" id="coverlink" title="Coverage Link"><span>Coverage</span></a></li>
          </ul>
        </div>

    The submenus are also bounced into different positions on the Home > Vehicle page and the Vehicle > Auto Insurance > Auto Coverage page, and I can't figure out why. Any suggestions on what I need to fix for IE7?

  • How do I create a loop based off this array?

    - by dmanexe
    I'm trying to process this array: first testing for the presence of a check, then extrapolating the data from quantity to return a valid price.

    Here's the input for fixed amounts of items, with no variable quantity:

        <input type="checkbox" name="measure[<?=$item->id?>][checked]" value="<?=$item->id?>">
        <input type="hidden" name="measure[<?=$item->id?>][quantity]" value="1" />

    Here are the inputs for variable amounts of items:

        <input type="checkbox" name="measure[<?=$item->id?>][checked]" value="<?=$item->id?>">
        <input class="item_mult" value="0" type="text" name="measure[<?=$item->id?>][quantity]" />

    So the resulting array is multidimensional. Here's an output:

        Array
        (
            [1] => Array ( [quantity] => 1 )
            [2] => Array ( [quantity] => 1 )
            [3] => Array ( [quantity] => 1 )
            ...
            [14] => Array ( [checked] => 14 [quantity] => 999 )
        )

    Here's the loop I'm using to take this array and process the items checked off the form in the first place. I guess the question essentially boils down to: how do I structure my conditional statement to incorporate the multidimensional array?

        <? foreach ($field as $value): ?>
        <?php
            if ($value['checked'] == TRUE) {
                $query = $this->db->get_where('items', array('id' => $value['checked']))->row();
                // Test to see if a quantity input is present
                if ($value['quantity'] == TRUE) {
                    $newprice = $value['quantity'] * $query->price;
                    $totals[] = $newprice;
                }
                // Just return the base value if not
                else {
                    $newprice = $query->price;
                    $totals[] = $newprice;
                }
            } else {
            }
        ?>
        <p><?=$query->name?> - <?=money_format('%(#10n', $newprice)?></p>
        <? endforeach; ?>
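    One thing the dump suggests (an observation, not from an accepted answer): unchecked rows never contain a 'checked' key at all, so reading $value['checked'] raises a notice; guarding with isset() and skipping early keeps the loop flat. A sketch, keeping the CodeIgniter get_where() call from the original:

        <?php
        // Sketch: skip rows without a 'checked' key; treat a quantity
        // greater than 1 as the multiplier, otherwise use the base price.
        foreach ($field as $id => $value) {
            if (!isset($value['checked'])) {
                continue; // box was never ticked
            }
            $query    = $this->db->get_where('items', array('id' => $value['checked']))->row();
            $quantity = isset($value['quantity']) ? (int)$value['quantity'] : 1;
            $newprice = max($quantity, 1) * $query->price;
            $totals[] = $newprice;
            echo '<p>' . $query->name . ' - ' . money_format('%(#10n', $newprice) . '</p>';
        }
        ?>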

  • Regular expression from font to span (size and colour) and back (VB.NET)

    - by chapmanio
    Hi, I am looking for a regular expression that can convert my font tags (only with size and color attributes) into span tags with the relevant inline CSS. This will be done in VB.NET, if that helps at all. I also need a regular expression to go the other way. To elaborate, below is an example of the conversion I am looking for:

        <font size="10">some text</font>

    should become:

        <span style="font-size:10px;">some text</span>

    So: converting the tag and putting "px" at the end of whatever the font size is (I don't need to change/convert the font size, just stick px at the end). The regular expression needs to cope with a font tag that has only a size attribute, only a color attribute, or both:

        <font size="10">some text</font>
        <font color="#000000">some text</font>
        <font size="10" color="#000000">some text</font>
        <font color="#000000" size="10">some text</font>

    I also need another regular expression to do the opposite conversion. So for example:

        <span style="font-size:10px;">some text</span>

    will become:

        <font size="10">some text</font>

    As before, converting the tag, but this time removing the "px"; I don't need to worry about changing the font size. Again, this will need to cope with the size styling, the color styling, and a combination of both:

        <span style="font-size:10px;">some text</span>
        <span style="color:#000000;">some text</span>
        <span style="font-size:10px; color:#000000;">some text</span>
        <span style="color:#000000; font-size:10px;">some text</span>

    I appreciate this is a lot to ask; I am hopeless with regular expressions and need to find a way of making these conversions in my code. Thanks so much to anyone who can/is willing to help me!
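    One possible shape for the font-to-span direction (a sketch only, assuming attributes are always double-quoted as in the examples above): a MatchEvaluator rebuilds the style string from whichever attributes are present, which handles all four attribute combinations with one pattern:

        ' Sketch: font -> span via Regex.Replace with a MatchEvaluator.
        Imports System.Text.RegularExpressions

        Module FontToSpan
            Function Convert(html As String) As String
                Return Regex.Replace(html, "<font([^>]*)>(.*?)</font>",
                    Function(m As Match) As String
                        Dim attrs = m.Groups(1).Value
                        Dim style As String = ""
                        Dim size = Regex.Match(attrs, "size=""(\d+)""")
                        If size.Success Then style &= "font-size:" & size.Groups(1).Value & "px;"
                        Dim color = Regex.Match(attrs, "color=""([^""]+)""")
                        If color.Success Then style &= "color:" & color.Groups(1).Value & ";"
                        Return "<span style=""" & style & """>" & m.Groups(2).Value & "</span>"
                    End Function,
                    RegexOptions.IgnoreCase Or RegexOptions.Singleline)
            End Function
        End Module

    The reverse direction would follow the same pattern, matching style="..." and extracting font-size/color with inner Regex.Match calls.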

  • php regex guitar tab (tabs or tablature, a type of music notation)

    - by John
    I am in the process of creating a guitar-tab-to-RTTTL (Ring Tone Text Transfer Language) converter in PHP. To prepare a guitar tab for RTTTL conversion I first strip out all comments (comments are opened with #- and closed with -#). I then have a few lines that set the tempo, note the tuning and define multiple instruments (Tempo 120\nDefine Guitar 1\nDefine Bass 1, etc.), which are stripped out of the tab and set aside for later use. Now I essentially have nothing left except the guitar tabs. Each tab is prefixed with its instrument name, matching the instrument names noted earlier. Sometimes tabs for two separate instruments are linked because they are to be played together, e.g. a guitar and a bass guitar playing together.

    Example 1, standard guitar tab:

        |Guitar 1
        e|--------------3-------------------3------------|
        B|------------3---3---------------3---3----------|
        G|----------0-------0-----------0-------0--------|
        D|--------0-----------0-------0-----------0------|
        A|------2---------------2---2---------------2----|
        E|----3-------------------3-------------------3--|

    Example 2, conjunction tab:

        |Guitar 1
        e|--------------3-------------------3------------|
        B|------------3---3---------------3---3----------|
        G|----------0-------0-----------0-------0--------|
        D|--------0-----------0-------0-----------0------|
        A|------2---------------2---2---------------2----|
        E|----3-------------------3-------------------3--|
        |
        |
        |Bass 1
        G|----------0-------0-----------0-------0--------|
        D|--------2-----------2-------2-----------2------|
        A|------3---------------3---3---------------3----|
        E|----3-------------------3-------------------3--|

    I have considered other methods of identifying the tabs, with no solid results. I am hoping that someone who does regular expressions could help me find a way to identify a single guitar tab and, if possible, also be able to match a tab with multiple instruments linked together. Once the tabs are in an array I will go through them one line at a time and convert them into RTTTL lines (exploded at each new line "\n"). I do not want to separate the guitar tabs in the document via explode("\n\n") or something similar, because that identifies the space between the tabs rather than the tabs themselves. I have been messing with this for about a week now and this is the only major hold-up I have; everything else is fairly simple. As of now, I have tried many variations of the regex pattern. Here is one of the most recent test samples:

        <?php
        $t = "
        |Guitar 1
        e|--------------3-------------------3------------|
        B|------------3---3---------------3---3----------|
        G|----------0-------0-----------0-------0--------|
        D|--------0-----------0-------0-----------0------|
        A|------2---------------2---2---------------2----|
        E|----3-------------------3-------------------3--|

        |Guitar 1
        e|--------------3-------------------3------------|
        B|------------3---3---------------3---3----------|
        G|----------0-------0-----------0-------0--------|
        D|--------0-----------0-------0-----------0------|
        A|------2---------------2---2---------------2----|
        E|----3-------------------3-------------------3--|
        |
        |
        |Bass 1
        G|----------0-------0-----------0-------0--------|
        D|--------2-----------2-------2-----------2------|
        A|------3---------------3---3---------------3----|
        E|----3-------------------3-------------------3--|
        ";
        preg_match_all("/^.*?(\\|).*?(\\|)/is", $t, $p);
        print_r($p);
        ?>

    It is also worth noting that inside the tabs, where the dashes and numbers are, you may also have any variation of letters, numbers and punctuation. The beginning of each line marks the tuning of each string with one of the following, case-insensitive: a, a#, b, c, c#, d, d#, e, f, f#, g# or g. Thanks in advance for help with this most difficult problem.
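    A possible starting point (a sketch, not a tested solution from the thread): treat a tab block as a maximal run of consecutive lines that begin with an optional tuning note followed by a pipe. Header lines ("|Guitar 1"), string lines ("e|---3---|") and the bare "|" lines linking two instruments all fit that shape, so linked instruments stay in one match, while the blank line between tabs ends the run:

        <?php
        // Sketch: one match = one tab block (including any instruments
        // joined by bare '|' lines); blank lines separate the blocks.
        $pattern = '/(?:^[a-g]?#?\|[^\r\n]*\r?\n?)+/mi';
        preg_match_all($pattern, $t, $tabs);
        print_r($tabs[0]);
        ?>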

  • JavaScript (via Greasemonkey) failing to set "title" attributes on <a> tags

    - by rjray
    I have the following (fairly) simple JavaScript snippet wired into Greasemonkey. It goes through a page, looks for <a> tags whose href points to tinyurl.com, and adds a "title" attribute that identifies the true destination of the link. Much of the important code comes from an older (unsupported) Greasemonkey script that quit working when the inner component that held the XPath implementation changed. My script:

        (function() {
            var providers = new Array();

            providers['tinyurl.com'] = function(link, fragment) {
                // This is mostly taken from the (broken due to XPath component
                // issues) tinyurl_popup_preview script.
                link.title = "Loading...";
                GM_xmlhttpRequest({
                    method: 'GET',
                    url: 'http://preview.tinyurl.com/' + fragment,
                    onload: function(res) {
                        var re = res.responseText.match("<blockquote><b>(.+)</b></blockquote>");
                        if (re) {
                            link.title = re[1].replace(/\<br \/\>/g, "").replace(/&amp;/g, "&");
                        } else {
                            link.title = "Parsing failed...";
                        }
                    },
                    onerror: function() {
                        link.title = "Connection failed...";
                    }
                });
            };

            var uriPattern = /(tinyurl\.com)\/([a-zA-Z0-9]+)/;
            var aTags = document.getElementsByTagName("a");

            for (i = 0; i < aTags.length; i++) {
                var data = aTags[i].href.match(uriPattern);
                if (data != null && data.length > 1 && data[2] != "preview") {
                    var source = data[1];
                    var fragment = data[2];
                    var link = aTags[i];
                    aTags[i].addEventListener("mouseover", function() {
                        if (link.title == "") {
                            (providers[source])(link, fragment);
                        }
                    }, false);
                }
            }
        })();

    (The reason the "providers" associative array is set up the way it is, is so that I can expand this to cover other URL-shortening services as well.) I have verified that all the various branches of code are being reached correctly, in cases where the link being examined does and does not match the pattern. What isn't happening is any change to the "title" attribute of the anchor tags. I've watched this via Firebug, thrown alert() calls in left and right, and it just never changes. In a previous iteration, all expressions of the form link.title = "..." had originally been link.setAttribute("title", "...") and that didn't work either. I'm no newbie to JavaScript OR Greasemonkey, but this one has me stumped!
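    One classic gotcha that fits these symptoms (my observation, not from the original post): link, source and fragment are var-scoped to the whole outer function, so every mouseover handler closes over the values from the *last* loop iteration. A sketch of the usual fix, wrapping the registration in an immediately-invoked function so each handler gets its own copies:

        // Sketch: give each iteration its own scope, so the handler sees
        // the link it was registered on, not the last one in the loop.
        for (var i = 0; i < aTags.length; i++) {
            var data = aTags[i].href.match(uriPattern);
            if (data != null && data.length > 1 && data[2] != "preview") {
                (function(link, source, fragment) {
                    link.addEventListener("mouseover", function() {
                        if (link.title == "") {
                            (providers[source])(link, fragment);
                        }
                    }, false);
                })(aTags[i], data[1], data[2]);
            }
        }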

  • Suggest an alternative way to organize/build a database solution.

    - by Hamish Grubijan
    We are using Visual Studio 2010, but this was first conceived with VS2003. I will forward the best suggestions to my team. The current setup almost makes me vomit. It is a C# solution with most projects containing .sql files. Because we support Microsoft, Oracle, and Sybase, we home-brewed a pre-processor, much like the C preprocessor, except that substitutions are performed by a home-brewed C# program without using yacc and tools like that. #ifdefs are used for conditional macro definitions, and yeah - macros are the way this is done. A macro can expand to another macro or two, but this eventually terminates. Only macros have #ifdef in them - the rest of the SQL-like code just uses these macros.

    Now, the various configurations: Debug, MNDebug, MNRelease, Release, SQL_APPLY_ALL, SQL_APPLY_MSFT, SQL_APPLY_ORACLE, SQL_APPLY_SYBASE, SQL_BUILD_OUTPUT_ALL, SQL_COMPILE, as well as 2 more; also: Any CPU, Mixed Platforms, Win32. What drives me nuts is having to configure this correctly, choosing the right one out of 12 x 3 = 36 configurations, as well as having to substitute the database name depending on the type of database: config, main, or gateway.

    I am thinking that the configurations should be reduced to just Debug, Release, and SQL_APPLY. Also, using 0, 1, and 2 seems so 80s... Finally, I think my intention to build or not build 3 types of databases for 3 types of vendors should be configured with just a tic-tac-toe board like:

        XOX
        OOX
        XXX

    In this case it would mean: build MSFT+CONFIG, all SYBASE, and all GATEWAY. Still, the overall approach, which uses a text file and a pre-processor and many configurations, seems incredibly clunky. It is the year 2010 now and someone out there is bound to have a very clean and/or creative tool/solution. The only pro would be that the existing collection of macros has been well tested. Have you ever had to write SQL that would work for several vendors? How did you do it?

    SqlVars.txt (every one of 30 users makes a copy of a template and modifies it to suit their needs):

        // This is the default parameters file and should not be changed.
        // You can overwrite any of these parameters by copying the appropriate
        // section to override into SqlVars.txt and providing your own information.

        // Build types are 0-Config, 1-Main, 2-Gateway
        BUILD_TYPE=1
        REMOVE_COMMENTS=1

        // Login information used when applying to a Microsoft SQL server database
        SQL_APPLY_MSFT_version=SQL2005
        SQL_APPLY_MSFT_database=msftdb
        SQL_APPLY_MSFT_server=ABC
        SQL_APPLY_MSFT_user=msftusr
        SQL_APPLY_MSFT_password=msftpwd

        // Login information used when applying to an Oracle database
        SQL_APPLY_ORACLE_version=ORACLE10g
        SQL_APPLY_ORACLE_server=oradb
        SQL_APPLY_ORACLE_user=orausr
        SQL_APPLY_ORACLE_password=orapwd

        // Login information used when applying to a Sybase database
        SQL_APPLY_SYBASE_version=SYBASE125
        SQL_APPLY_SYBASE_database=sybdb
        SQL_APPLY_SYBASE_server=sybdb
        SQL_APPLY_SYBASE_user=sybusr
        SQL_APPLY_SYBASE_password=sybpwd

        ... (THIS GOES ON)

  • HtmlHelperExtensions are not visible in view mvc3 asp.net

    - by user1299372
    I've added a class for the HTML custom extensions:

        using System;
        using System.Linq.Expressions;
        using System.Text;
        using System.Web.Mvc;
        using System.Web.Mvc.Html;

        namespace App.MvcHtmlHelpers
        {
            public static class HtmlHelperExtensions
            {
                public static MvcHtmlString ComboBox(HtmlHelper html, string name, SelectList items, string selectedValue)
                {
                    var sb = new StringBuilder();
                    sb.Append(html.DropDownList(name + "_hidden", items, new
                    {
                        @style = "width: 200px;",
                        @onchange = "$('input#" + name + "').val($(this).val());"
                    }));
                    sb.Append(html.TextBox(name, selectedValue, new
                    {
                        @style = "margin-left: -199px; width: 179px; height: 1.2em; border: 0;"
                    }));
                    return MvcHtmlString.Create(sb.ToString());
                }

                public static MvcHtmlString ComboBoxFor<TModel, TProperty>(HtmlHelper<TModel> html, Expression<Func<TModel, TProperty>> expression, SelectList items)
                {
                    var me = (MemberExpression)expression.Body;
                    var name = me.Member.Name;
                    var sb = new StringBuilder();
                    sb.Append(html.DropDownList(name + "_hidden", items, new
                    {
                        @style = "width: 200px;",
                        @onchange = "$('input#" + name + "').val($(this).val());"
                    }));
                    sb.Append(html.TextBoxFor(expression, new
                    {
                        @style = "margin-left: -199px; width: 179px; height: 1.2em; border: 0;"
                    }));
                    return MvcHtmlString.Create(sb.ToString());
                }
            }
        }

    I've also registered it in my site web.config:

        <namespaces>
          <add namespace="System.Web.Helpers" />
          <add namespace="System.Web.Mvc" />
          <add namespace="System.Web.Mvc.Ajax" />
          <add namespace="System.Web.Mvc.Html" />
          <add namespace="System.Web.Routing" />
          <add namespace="System.Web.WebPages" />
          <add namespace="App.MvcHtmlHelpers"/>
        </namespaces>

    In my view, I import the namespace:

        <%@ Import Namespace="RSPWebApp.MvcHtmlHelpers" %>

    But when I go to call it in the view, it doesn't recognize the custom extension. Can someone help me by telling me what I might have missed? Thanks so much in advance!

        <%: Html.ComboBoxFor(a => a.Street2, streetAddressListItems) %>
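    Two details stand out in the snippet (my reading, not a confirmed answer from the thread): the methods lack the this modifier on their first parameter, so they are plain static methods rather than extension methods, and the view imports RSPWebApp.MvcHtmlHelpers while the class is declared in App.MvcHtmlHelpers. A sketch of the signature as a real extension method:

        // Sketch: the 'this' modifier is what lets the compiler resolve
        // Html.ComboBoxFor(...) as an extension on HtmlHelper<TModel>.
        public static MvcHtmlString ComboBoxFor<TModel, TProperty>(
            this HtmlHelper<TModel> html,
            Expression<Func<TModel, TProperty>> expression,
            SelectList items)
        {
            // ... body as in the original helper ...
        }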

  • How to reflect over T to build an expression tree for a query?

    - by Alex
    Hi all, I'm trying to build a generic class to work with entities from EF. This class talks to repositories, but it's this class that creates the expressions sent to the repositories. Anyway, I'm just trying to implement one virtual method that will act as a base for common querying. Specifically, it will accept an int and only needs to perform a query over the primary key of the entity in question.

    I've been screwing around with it and I've built a reflection-based version which may or may not work. I say that because I get a NotSupportedException with the message: LINQ to Entities does not recognize the method 'System.Object GetValue(System.Object, System.Object[])' method, and this method cannot be translated into a store expression. So then I tried another approach and it produced the same exception, but with the error: The LINQ expression node type 'ArrayIndex' is not supported in LINQ to Entities. I know it's because EF will not parse the expression the way L2S will. Anyway, I'm hoping someone with a bit more experience can point me in the right direction on this. I'm posting the entire class with both attempts I've made.

        public class Provider<T> where T : class
        {
            protected readonly Repository<T> Repository = null;
            private readonly string TEntityName = typeof(T).Name;

            [Inject]
            public Provider(Repository<T> Repository)
            {
                this.Repository = Repository;
            }

            public virtual void Add(T TEntity)
            {
                this.Repository.Insert(TEntity);
            }

            public virtual T Get(int PrimaryKey)
            {
                // Attempt 1: The LINQ expression node type 'ArrayIndex' is not
                // supported in LINQ to Entities.
                return this.Repository.Select(
                    t => (((int)(t as EntityObject).EntityKey.EntityKeyValues[0].Value) == PrimaryKey)).Single();

                // Attempt 2: LINQ to Entities does not recognize the method
                // 'System.Object GetValue(System.Object, System.Object[])' method,
                // and this method cannot be translated into a store expression.
                return this.Repository.Select(
                    t => (((int)t.GetType().GetProperties().Single(
                        p => (p.Name == (this.TEntityName + "Id"))).GetValue(t, null)) == PrimaryKey)).Single();
            }

            public virtual IList<T> GetAll()
            {
                return this.Repository.Select().ToList();
            }

            protected virtual void Save()
            {
                this.Repository.Update();
            }
        }
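    The usual way around both exceptions (a sketch, under the assumption implied by the reflection attempt that the key property is named <EntityName>Id) is to do any reflection once, outside the lambda, and build the predicate with the Expression API, so EF only ever sees a plain property access:

        // Sketch: build t => t.<EntityName>Id == primaryKey by hand.
        // The resulting tree contains only Parameter/MemberAccess/Constant/
        // Equal nodes, all of which LINQ to Entities can translate.
        using System;
        using System.Linq.Expressions;

        public static Expression<Func<T, bool>> ByKey<T>(int primaryKey)
        {
            var parameter = Expression.Parameter(typeof(T), "t");
            var property  = Expression.Property(parameter, typeof(T).Name + "Id"); // assumed naming convention
            var body      = Expression.Equal(property, Expression.Constant(primaryKey, typeof(int)));
            return Expression.Lambda<Func<T, bool>>(body, parameter);
        }

        // usage: this.Repository.Select(ByKey<T>(PrimaryKey)).Single();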

  • Code to generate random numbers in C++

    - by user1678927
    Basically I have to write a program to generate random numbers to simulate the rolling of a pair of dice. This program should be constructed in multiple files: the main function in one file, the other functions in a second source file, and their prototypes in a header file.

    First I write a short function that returns a random value between 1 and 6 to simulate the rolling of a single 6-sided die. Second, I write a function that pretends to roll a pair of dice by calling this function twice. My program starts by asking the user how many rolls should be made. Then I write a function to simulate rolling the dice this many times, keeping a count in an array of exactly how many times the values 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 (each number is the sum of a pair of dice) occur. Later I write a function to display a small bar chart using these counts, which for a sample of 144 rolls would ideally look something like the chart below, where the number of asterisks printed corresponds to the count:

         2  3  4  5  6  7  8  9 10 11 12
         *  *  *  *  *  *  *  *  *  *  *
            *  *  *  *  *  *  *  *  *
        (... one column of asterisks per sum, tallest under 7 ...)

    Next, to see how well the random number generator is doing, I write a function to compute the average value rolled and compare it to the ideal average of 7. I also print a small table showing, in separate columns, the counts of each roll made by the program, the ideal count based on the frequencies above given the total number of rolls, and the difference between these values.

    This is my incomplete code so far (compiler: Visual Studio 2010):

        int rolling() {
            // Returns a random value between 1 and 6.
            // Note: the seed should be set once with srand() in main, not here;
            // the original line rand(unsigned(time(NULL))) does not compile.
            int dice = 1 + (rand() % 6);
            return dice;
        }

        // Calls rolling() twice. The two results are returned through
        // reference parameters, because "return result1, result2;" only
        // returns result2 (the comma operator discards the first value).
        void roll_dice(int &num1, int &num2) {
            num1 = rolling();
            num2 = rolling();
        }

        int main(void) {
            int times, i, sum, n1, n2;
            int c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11; // counters for each sum
            srand(unsigned(time(NULL))); // seed once
            printf("Please enter how many times you want to roll the dice.\n");
            scanf_s("%i", &times);

    I intend to use counters to count each sum and store the counts in an array. I know I need a loop (for) and some conditional statements (if), but my main problem is getting the values from roll_dice, storing them in n1 and n2, summing them up and storing the sum in 'sum'.
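    A sketch of the missing pieces (an illustration, not the assignment's official solution; the chart is printed row-wise here for simplicity rather than in columns): an array indexed by the sum replaces the eleven named counters, and one loop tallies while a second prints:

        #include <cstdio>
        #include <cstdlib>
        #include <ctime>

        // Sketch: tally the sums 2..12 into an array and print a bar chart.
        int main() {
            int times;
            std::srand(static_cast<unsigned>(std::time(NULL)));   // seed once
            std::printf("How many rolls? ");
            std::scanf("%d", &times);

            int counts[13] = {0};                  // counts[2]..counts[12] used
            for (int r = 0; r < times; ++r) {
                int n1 = 1 + std::rand() % 6;      // first die
                int n2 = 1 + std::rand() % 6;      // second die
                ++counts[n1 + n2];                 // tally the sum
            }
            for (int s = 2; s <= 12; ++s) {        // one row per sum
                std::printf("%2d ", s);
                for (int k = 0; k < counts[s]; ++k) std::printf("*");
                std::printf("\n");
            }
            return 0;
        }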

  • How to implement a grails server-side-triggered dialog, or how to break out of an update region after AJAX

    - by werner5471
    In grails, I use the mechanism below to implement what I'd call a conditional server-side-triggered dialog. When a form is submitted, the data must first be processed by a controller. Based on the outcome, there must either be (a) a modal Yes/No confirmation in front of the "old" screen, or (b) a redirect to a new controller/view replacing the "old" screen (no confirmation required).

    So here's my current approach:
    - In the originating view, I have a <g:formRemote name="requestForm" url="[controller:'test', action:'testRequest']" update:"dummyRegion"> and a <span id="dummyRegion"> which is hidden by CSS.
    - When the form is submitted, the test controller checks whether a confirmation is necessary and, if so, renders a template with a YUI-based dialog including Yes/No buttons in front of the old screen (which works fine because the dialog "comes from" the dummyRegion, not overwriting the page).
    - When Yes is pressed, the right other controller & action is called and the old screen is replaced; if No is pressed, the dialog is cancelled and the "old" screen is shown again without the dialog. Works well up to here.
    - When the form is submitted and the test controller sees that NO confirmation is necessary, I would usually redirect directly to the right other controller & action. But the problem is that the corresponding view of that controller does not appear, because it is rendered in the invisible dummyRegion as well. So instead I currently render a GSP template containing a JavaScript redirect. However, a JavaScript redirect is often not allowed by the browser, and I don't think it's a clean solution.

    So (finally ;-) my question is: how do I get a controller redirect to cause the corresponding view to "break out" of my AJAX dummyRegion, replacing the whole screen again? Or do you have a better approach for what I have in mind? Please note that I cannot check on the client side whether the confirmation is necessary; there has to be a server call! I'd also like to avoid refreshing the whole page just for the confirmation dialog to pop up (which would also be possible without AJAX). Thanks for any hints!

  • Friendly way to parse XDocument

    - by Oli
    I have a class from which various different XML schemes are created. I create the various dynamic XDocuments via one (very long) statement, using conditional operators for optional elements and attributes. I now need to convert the XDocuments back into the class, but as they come from different schemes, many elements and sub-elements may be optional. The only way I know of doing this is to use a lot of if statements. This approach doesn't seem very LINQ-like and uses a great deal more code than creating the XDocument, so I wondered if there is a better way. An example would be to get:

        <?xml version="1.0"?>
        <root xmlns="somenamespace">
          <object attribute1="This is Optional" attribute2="This is required">
            <element1>Required</element1>
            <element1>Optional</element1>
            <List1>
              Optional List Of Elements
            </List1>
            <List2>
              Required List Of Elements
            </List2>
          </object>
        </root>

    into:

        public class Object
        {
            public string Attribute1;
            public string Attribute2;
            public string Element1;
            public string Element2;
            public List<ListItem1> List1;
            public List<ListItem2> List2;
        }

    in a more LINQ-friendly way than this:

        public bool ParseXDocument(string xml)
        {
            XNamespace xn = "somenamespace";
            XDocument document = XDocument.Parse(xml);
            XElement elementRoot = document.Element(xn + "root");
            if (elementRoot != null)
            {
                // Get the object element
                XElement elementObject = elementRoot.Element(xn + "object");
                if (elementObject != null)
                {
                    if (elementObject.Attribute("attribute1") != null)
                    {
                        Attribute1 = elementObject.Attribute("attribute1").Value;
                    }
                    if (elementObject.Attribute("attribute2") != null)
                    {
                        Attribute2 = elementObject.Attribute("attribute2").Value;
                    }
                    else
                    {
                        // attribute2 is required, so return false
                        return false;
                    }
                    // The ifs/elses get deeper and deeper for the next elements, lists, etc.
                }
                else
                {
                    // object is a required element, so return false
                    return false;
                }
            }
            else
            {
                // root is a required element, so return false
                return false;
            }
            return true;
        }

    Update: Just to clarify, the ParseXDocument method is inside the "Object" class. Every time an XML document is received, the Object class instance has some or all of its values updated.
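    One LINQ-friendlier pattern (a sketch; names match the example above, and the ListItem1 constructor is assumed for illustration): XLinq's explicit conversion operators return null instead of throwing when an element or attribute is missing, which collapses most of the null checks into plain assignments:

        // Sketch: casting a missing XAttribute/XElement to string yields
        // null rather than an exception, so optional values need no 'if'
        // and required values can be validated in one place.
        XNamespace xn = "somenamespace";
        XElement obj = XDocument.Parse(xml).Root.Element(xn + "object");
        if (obj == null) return false;                        // object (and root) required

        Attribute1 = (string)obj.Attribute("attribute1");     // null if absent - optional
        Attribute2 = (string)obj.Attribute("attribute2");
        if (Attribute2 == null) return false;                 // required

        Element1 = (string)obj.Element(xn + "element1");

        List1 = obj.Elements(xn + "List1").Elements()
                   .Select(e => new ListItem1(e))             // assumed constructor
                   .ToList();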

  • Twitter traffic might not be what it seems

    - by Piet
    Are you using bit.ly stats to measure interest in the links you post on twitter?

    I've been hearing for a while about people claiming to get the majority of their traffic from twitter these days. Now, I've been playing with the twitter ruby gem recently, doing various experiments which I'll not go into detail about here because they could be regarded as spamming... if I'd conduct them on a large scale, that is. It's scary to see people actually engaging with @replies crafted with some regular expressions and eliza-like trickery on status updates found using the twitter api. I'm wondering how Twitter is going to contain the coming spam-flood.

    When posting links I used bit.ly as url shortener, since this one seems to be the de-facto standard on twitter. A nice thing about bit.ly is that it shows some basic stats about the redirects it performs for your shortened links. To my surprise, most links posted almost immediately resulted in several visitors. Now, seeing that I was posting the links together with some information concerning what the link is about, I concluded that the people who were actually clicking the links should be very targeted visitors. This felt a bit like free adwords, and I suddenly started to understand why everyone was raving about getting traffic from twitter. How wrong I was! (and I think several 1000 online marketers with me)

    On the destination site I used a traffic logging solution that works by including a little javascript snippet in your pages. It seemed that somehow all visitors disappeared after the bit.ly redirect and before getting to the site, because I was hardly seeing any visitors there. So I started investigating what was happening: by looking at the logfiles of the destination site, and by making my own 'shortened' urls by doing redirects using a very short domain name I own. This way, I could check the apache access_log before the redirects. Most user agents turned out to be bots without a doubt. Here's an excerpt of user-agents awk'ed from apache's access_log for a time period of about one hour, right after posting some links:

        AideRSS 2.0 (postrank.com)
        Java/1.6.0_13
        Java/1.6.0_14
        libwww-perl/5.816
        MLBot (www.metadatalabs.com/mlbot)
        Mozilla/4.0 (compatible;MSIE 5.01; Windows -NT 5.0 - real-url.org)
        Mozilla/5.0 (compatible; Twitturls; +http://twitturls.com)
        Mozilla/5.0 (compatible; Viralheat Bot/1.0; +http://www.viralheat.com/)
        Mozilla/5.0 (Danger hiptop 4.6; U; rv:1.7.12) Gecko/20050920
        Mozilla/5.0 (X11; U; Linux i686; en-us; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.04 (jaunty) Firefox/3.5
        OpenCalaisSemanticProxy
        PycURL/7.18.2
        PycURL/7.19.3
        Python-urllib/1.17
        Twingly Recon
        twitmatic
        Twitturly / v0.6
        Wget/1.10.2 (Red Hat modified)
        Wget/1.11.1 (Red Hat modified)

    Of the few user-agents that seem 'real' at first, half originate from an ip-address used by Amazon EC2, and I doubt people are setting up proxies there. Oh yeah, Googlebot (the real deal, from a legit google-owned address) is sucking up posted links like fresh oysters. I guess google is trying to make sure in advance never to be beaten by twitter in the 'realtime search' department. Actually, I think it'd be almost stupid NOT to post any new pages/posts/websites on Twitter; it must be one of the fastest ways to get a Googlebot visit.

    Same experiment with a real, established twitter account

    Now, because I was posting the urls either as 'status' messages or directed @people, on a test account with hardly any (human) followers, I checked again using the twitter accounts of a commercial site I'm involved with. These accounts all have between 500 and 1000 targeted (I think) followers. I checked the destination access_logs and also added 'my' redirect after the bit.ly redirect: same results, although with a seemingly somewhat higher ratio of real visitors to bots.

    Btw: one of these accounts was 'punished' with a one-week lock recently, because the same (1, one!) status update was sent that had been sent right before from another account. They got an email explaining the lock, saying the account didn't act according to their TOS. I can't find anything in their TOS about it, can you? I don't think Twitter is on the right track punishing a legit account, knowing the trickery I had been doing with its api went totally unpunished. I might be wrong though, I often am.

    On the other hand: this commercial site reported targeted traffic and actual signups from visitors coming from Twitter. The ones that are really real visitors are also very targeted. I'm just not sure if the amount of work involved could hold up against an adwords campaign.

    Reposting the same link over and over again helps

    One thing I noticed: it helps to keep on reposting the same links at regular intervals. I guess most people only look at their first page when checking out recent posts of the ones they're following, or don't look too far back when performing a search. Now, this probably isn't according to the twitter TOS. Actually, it might be spamming, but no-one is obligated to follow anyone else, of course. This way, I was getting more real visitors and fewer bots. To my surprise (when my programmer's hat is on), there were still repeated visits from the same bots coming from the same ip-addresses. Did they expect to find something else when visiting for a 2nd or 3rd time? (Actually, this gave me an idea: you can't change a link once it's posted, but you can change where it redirects to.) Most bots were smart enough not to follow the same link again, though.

    Are you successful in getting real visitors from Twitter? Are you only relying on bit.ly to provide traffic stats?

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting

    A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and it is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious, ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips

    Here is a simple tick list you can follow in order to tune your lookups:
    - Use a SQL statement to charge your cache; don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
    - Only pick the columns that you need; ignore everything else.
    - Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column - a big waste if the actual values are significantly less than 20 characters in length.
    - Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    - Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause! (A small illustrative query follows at the end of this post.)

    Thinking outside the box

    It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:
    - There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource-intensive lookups then consider putting that lookup into a dataflow all of its own, and use raw files to pass the pipeline data in and out of that dataflow.
    - Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data, then consider chaining multiple lookup components together: the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    - Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId], then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement, you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
    - Taking the previous data-partitioning idea further: a dataflow can contain more than one data path, so why not split your data using a Conditional Split component and, again, charge your lookup caches with only the data that they need for that partition.
    - Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set, you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    - Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.

    Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts

    If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps!

    @Jamiet
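    To make the WHERE-clause tip concrete, here is a minimal sketch (the table and column names are made up for illustration) of a cache-charging query that pulls only the columns and rows the lookup will actually use, rather than selecting the whole table:

        -- Sketch: charge the lookup cache with the narrowest possible set --
        -- only the join key and the one attribute the pipeline needs,
        -- restricted to rows the lookup can actually match.
        SELECT  CustomerKey,            -- join column
                CustomerName            -- attribute returned to the pipeline
        FROM    dbo.DimCustomer
        WHERE   IsCurrent = 1;          -- skip historical rows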

  • T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 project

    - by CodeSniper
    I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template which was originally built for Visual Studio 2008.   The Problem Error Compiling transformation: Metadata file 'dotless.Core' could not be found In “T4 speak”, this simply means that you have an Assembly directive in your T4 template but the T4 engine was not able to locate or load the referenced assembly. In the case of the T4CSS Template, this was a showstopper for making it work in Visual Studio 2010. On a side note: The T4CSS template is a sweet little wrapper to allow you to use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool.    If you haven't tried DotLessCSS yet, go check it out now!  In short, it is a tool that allows you to templatize and program your CSS files so that you can use variables, expressions, and mixins within your CSS which enables rapid changes and a lot of developer-flexibility as you evolve your CSS and UI. Back to our regularly scheduled program… Anyhow, this post isn't about DotLessCss, its about the T4 Templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010. In VS2010, there were quite a few changes to the T4 Template Engine; most were excellent changes, but this one bit me with T4CSS: “Project assemblies are no longer used to resolve template assembly directives.” In VS2008, if you wanted to reference a custom assembly in your T4 Template (.tt file) you would simply right click on your project, choose Add Reference and select that assembly.  Afterwards you were allowed to use the following syntax in your T4 template to tell it to look at the local references: <#@ assembly name="dotless.Core.dll" #> This told the engine to look in the “usual place” for the assembly, which is your project references. However, this is exactly what they changed in VS2010.  They now basically sandbox the T4 Engine to keep your T4 assemblies separate from your project assemblies.  This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project. Who broke the build?  Oh, Microsoft Did! In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS 2010 – thus its a breaking change.  So, how do we make this work in VS 2010? Luckily, Microsoft now offers several options for referencing assemblies from T4 Templates: GAC your assemblies and use Namespace Reference or Fully Qualified Type Name Use a hard-coded Fully Qualified UNC path Copy assembly to Visual Studio "Public Assemblies Folder" and use Namespace Reference or Fully Qualified Type Name.  Use or Define a Windows Environment Variable to build a Fully Qualified UNC path. Use a Visual Studio Macro to build a Fully Qualified UNC path. Option #1 & 2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, then you would have to adopt one of these approaches. Yakkety Yak, use the GAC! Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain.  But, if you go that route, then after you GAC, all you need is a simple type name or namespace reference such as: <#@ assembly name="dotless.Core" #> Hard Coding aint that hard! 
The other option, the hard-coded path of Option #2, is pretty impractical in most situations since each developer would have to use the same local project folder paths, or modify this setting each time for their local machines as well as for production deployment. However, if you want to go that route, simply use the following assembly directive style:

<#@ assembly name="C:\Code\Lib\dotless.Core.dll" #>

Let's go public!

Option #3, the Visual Studio Public Assemblies folder, is the recommended place to put commonly used tools and libraries that are only needed for Visual Studio. Think of it like a VS-only GAC. This is likely the best place for something like dotLessCSS and is my preferred solution. However, you will need either an installer or a pre-build action to copy the assembly to the right folder location. Normally this is located at:

C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies

Once you have copied your assembly there, you use the type name or namespace syntax again:

<#@ assembly name="dotless.Core" #>

Save the environment!

Option #4, using a Windows environment variable, is interesting for enterprise use where you may have standard locations for files, but less useful for demo code, frameworks, and products where you don't have control over the local system. The syntax for including an environment variable in your assembly directive looks like the following, just as you would expect:

<#@ assembly name="%mypath%\dotless.Core.dll" #>

Here "mypath" is a Windows environment variable you set up that points to some fully qualified UNC path on your system. In the right situation this can be a great solution, such as one where you use an MSI installer for deployment, or where you have a pre-existing environment variable you can re-use.

OMG macros!

Finally, Option #5 is a very nice option if you want to keep your T4 template's assembly reference local and relative to the project or solution without muddying up your dev environment or GAC with extra deployments. An example looks like this:

<#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>

In this example, I'm using the "SolutionDir" VS macro so I can reference an assembly in a "/lib" folder at the root of the solution. This is just one of the many macros you can use. If you are familiar with creating pre/post-build event scripts, you can use that dialog to look at all of the different VS macros available.

This option gives the best solution for local assemblies without the hassle of extra installers or other setup before the build. However, it's still not compatible with Visual Studio 2008, so if you have a T4 template you want to use with both, you may have to create multiple .tt files, one for each IDE version, or require the developer to set a value in the .tt file manually. I'm not sure whether T4 templates support any form of compiler switch such as an "#if (VS2010)" statement, but it would definitely be nice in this case to switch between this option and one of the ones more compatible with VS 2008.

Conclusion

As you can see, we went from 3 options with Visual Studio 2008 to 5 options (plus one problem) with Visual Studio 2010. As a whole, I think the changes are great, but the short-term growing pains during the migration may be annoying until we get used to our new-found power. Hopefully this all made sense and was helpful to you. If nothing else, I'll just use it as a reference the next time I need to port a T4 template to Visual Studio 2010.
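For completeness, here is a minimal sketch of what a full .tt header using the Option #5 macro style might look like. The output directive, the import namespace, the placeholder file name "Site.less", and the Less.Parse call are my assumptions about the dotless API, so verify them against the version you are using:

    <#@ template debug="false" hostspecific="true" language="C#" #>
    <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>
    <#@ import namespace="dotless.Core" #>
    <#@ output extension=".css" #>
    <#
        // hostspecific="true" gives us Host.ResolvePath for template-relative paths
        string lessPath = Host.ResolvePath("Site.less");            // hypothetical input file
        string lessSource = System.IO.File.ReadAllText(lessPath);   // raw .less source
    #>
    <#= Less.Parse(lessSource) #>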
Happy T4 templating, and “May the fourth be with you!”

    Read the article

  • Preserving Permalinks

    - by Daniel Moth
    One of the things that gets me on a rant is websites that break permalinks. If you have posted something somewhere and there is a public URL pointing to it, that URL should never, ever return a 404. You are breaking all websites that ever linked to you, and you are breaking all the search engine links to your content (which others will try to follow). It is a pet peeve of mine. So when I had to move my blog, obviously I would preserve the root URL (www.danielmoth.com/Blog/), but I also wanted to preserve every URL my blog had generated over the years. To be clear, our focus here is on the URL formatting, not the content migration, which I'll talk about in my next post. In this post I'll describe my solution first and then what it solves.

1. The IIS7 Rewrite Module and web.config

There are a few ways you can map an old URL to a new one (so when requests to the old URL come in, they get redirected to the new one). The new blog engine I use (dasBlog) has built-in functionality to do that (Scott refers to it here). Instead, the way I chose to address the issue was the IIS7 rewrite module, which allows redirecting URLs based on pattern matching, regular expressions and, of course, hard-coded full URLs for things that don't fall into any pattern. You can configure it visually from IIS Manager using a handy dialog that allows testing patterns against input URLs (the original post includes a screenshot of my configured rules). To learn more about this technology check out this video, the reference page, and this overview blog post; all 3 pages have a collection of related resources at the bottom worth checking out too.

All the visual configuration ends up in a web.config file at the root folder of your website. If you are on a shared hosting service, probably the only way you can use the rewrite module is by directly editing the web.config file. Next, I'll describe the URLs I had to map and how that manifested itself in the web.config file. What I did was create the rules locally using the GUI, then take the generated web.config file and upload it to my live site. You can view my web.config here.

2. Monthly Archives

Observe the difference between the way the two blog engines generate this type of URL:

Blogger: /Blog/2004_07_01_mothblog_archive.html
dasBlog: /Blog/default,month,2004-07.aspx

In my web.config file, the rule that deals with this is the one named "monthlyarchive_redirect".

3. Categories

Observe the difference between the way the two blog engines generate this type of URL:

Blogger: /Blog/labels/Personal.html
dasBlog: /Blog/CategoryView,category,Personal.aspx

In my web.config file, the rule that deals with this is the one named "category_redirect".

4. Posts

Observe the difference between the way the two blog engines generate this type of URL:

Blogger: /Blog/2004/07/hello-world.html
dasBlog: /Blog/Hello-World.aspx

In my web.config file, the rule that deals with this is the one named "post_redirect".

Note: I decided to use dasBlog URLs that do not include the date info (see the description of my Appearance settings). If we included the date info, it would have to include the day part, which Blogger did not generate; that would make it impossible to redirect correctly and to have a single permalink for blog posts moving forward. An implication of this decision is that no two blog posts can have the same title. The tool I will describe in my next post (inelegantly) deals with duplicates, but not with triplicates or higher.
5. Unhandled by a generic rule

Unfortunately, the two blog engines use different rules for generating URLs for blog posts. Most of the time the conversion is as simple as the example of the previous section, where a post titled "Hello World" generates a URL with the words separated by a hyphen. Sometimes that is not the case, for example:

/Blog/2006/05/medc-wrap-up.html
/Blog/MEDC-Wrapup.aspx

or

/Blog/2005/01/best-of-moth-2004.html
/Blog/Best-Of-The-Moth-2004.aspx

or

/Blog/2004/11/more-windows-mobile-2005-details.html
/Blog/More-Windows-Mobile-2005-Details-Emerge.aspx

In short, Blogger does not add words to the URL beyond ~39 characters of the title, it drops some words from the title generation (e.g. a, an, on, the), and it preserves hyphens that appear in the title. For this reason, we need to detect these URLs and explicitly list them for redirects (no regular expression can help here because the full set of rules is not documented anywhere). In my web.config file, the rule that deals with this is the one named "Redirect rule1 for FullRedirects", combined with the rewriteMap named "StaticRedirects".

Note: The tool I describe in my next post will detect all the URLs that need to be explicitly redirected and will list them in a file, ready for you to copy to your web.config rewriteMap.

6. C# code doing the same as the web.config

I wrote some naive code that does the same thing as the web.config: given a string, it returns a new string converted according to the 3 rules above. It does not take into account the 4th case, where an explicit hard-coded conversion is needed (the tool I present in the next post does take that into account). A small test harness for this code appears after the listing.

using System.Text.RegularExpressions;

// Patterns for the three URL shapes Blogger generated (see sections 2-4 above).
static string REGEX_post_redirect = "[0-9]{4}/[0-9]{2}/([0-9a-z-]+).html";
static string REGEX_category_redirect = "labels/([_0-9a-z-% ]+).html";
static string REGEX_monthlyarchive_redirect = "([0-9]{4})_([0-9]{2})_[0-9]{2}_mothblog_archive.html";

static string Redirect(string oldUrl)
{
    GroupCollection g;
    if (RunRegExOnIt(oldUrl, REGEX_post_redirect, 2, out g))
        return string.Concat(g[1].Value, ".aspx");
    if (RunRegExOnIt(oldUrl, REGEX_category_redirect, 2, out g))
        return string.Concat("CategoryView,category,", g[1].Value, ".aspx");
    if (RunRegExOnIt(oldUrl, REGEX_monthlyarchive_redirect, 3, out g))
        return string.Concat("default,month,", g[1].Value, "-", g[2].Value, ".aspx");
    return string.Empty;
}

// Returns true when the pattern matches and yields the expected number of groups.
static bool RunRegExOnIt(string toRegEx, string pattern, int groupCount, out GroupCollection g)
{
    if (pattern.Length == 0) { g = null; return false; }
    g = new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Compiled).Match(toRegEx).Groups;
    return (g.Count == groupCount);
}

Comments about this post are welcome at the original blog.
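As a quick sanity check, here is a sketch of a console harness around the code above; the class name is mine, and the sample inputs are the URL pairs quoted in the post, so treat it as illustrative rather than part of the original:

    using System;
    using System.Text.RegularExpressions;

    // Hypothetical harness; Redirect and RunRegExOnIt are copied unchanged from the listing above.
    class RedirectDemo
    {
        static string REGEX_post_redirect = "[0-9]{4}/[0-9]{2}/([0-9a-z-]+).html";
        static string REGEX_category_redirect = "labels/([_0-9a-z-% ]+).html";
        static string REGEX_monthlyarchive_redirect = "([0-9]{4})_([0-9]{2})_[0-9]{2}_mothblog_archive.html";

        static string Redirect(string oldUrl)
        {
            GroupCollection g;
            if (RunRegExOnIt(oldUrl, REGEX_post_redirect, 2, out g))
                return string.Concat(g[1].Value, ".aspx");
            if (RunRegExOnIt(oldUrl, REGEX_category_redirect, 2, out g))
                return string.Concat("CategoryView,category,", g[1].Value, ".aspx");
            if (RunRegExOnIt(oldUrl, REGEX_monthlyarchive_redirect, 3, out g))
                return string.Concat("default,month,", g[1].Value, "-", g[2].Value, ".aspx");
            return string.Empty;
        }

        static bool RunRegExOnIt(string toRegEx, string pattern, int groupCount, out GroupCollection g)
        {
            if (pattern.Length == 0) { g = null; return false; }
            g = new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Compiled).Match(toRegEx).Groups;
            return (g.Count == groupCount);
        }

        static void Main()
        {
            Console.WriteLine(Redirect("/Blog/2004/07/hello-world.html"));         // hello-world.aspx
            Console.WriteLine(Redirect("/Blog/labels/Personal.html"));             // CategoryView,category,Personal.aspx
            Console.WriteLine(Redirect("/Blog/2004_07_01_mothblog_archive.html")); // default,month,2004-07.aspx
        }
    }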

    Read the article

  • CodePlex Daily Summary for Thursday, April 15, 2010

    CodePlex Daily Summary for Thursday, April 15, 2010

New Projects

Application Logging Repository (ALR): The ALR is a light-weight logging framework that allows applications to log events and exceptions to a central repository.
Arkane.FileProperties.DSS: Arkane.FileProperties.Dss is a library for parsing the file header of a .DSS file (as used by Olympus digital dictaphone systems) to obtain time, v...
B in conTrol project: This project enables controlling log-in and locking your workstation automatically, identifying you by Bluetooth.
DarkBook: DarkBook is a personal library project.
Direct2D for Microsoft .Net: Direct2D, DirectWrite and Windows Imaging wrappers for .Net. This library allows access to the Direct2D, DirectWrite and Windows Imaging Windows APIs f...
DJ Ware: DJ Ware is an extensible music player with plugin support and innovative features to organize and explore music files. It is developed with C#, WPF...
gpsMe: gpsMe is a Windows Mobile 6.x mapping solution allowing to place the user on a personalized map. The screen requirements are VGA or WVGA but, you ...
jErrorLog: jErrorLog is an error logging component for use in DotNet 2.0 or later applications. It can log error messages to any of the following: database, e...
KEMET_API: Java library (open-source). This library is a help for studying Egyptian hieroglyphs.
Meadow: A web site project for a Swedish floorball team called Slackers. Home page built with ASP.NET 2.0, ASP.NET AJAX and SQL Server 2005.
Mustang Math: Mustang Math makes it easier for young children to practice basic math facts on the computer. No keyboard or mouse required - just say the answer!...
Net.Formats.oEmbed: oEmbed format implementation in C#. oEmbed is a format for allowing an embedded representation of a URL on third party sites. The simple API allows...
Normlize O/R Mapper: Open source O/RM tool that participates with traditional inheritance object models as well as Hibernate/nHibernate style class shells. As I have t...
N-Twill Twitter Client for VB.NET: A Twitter client project built with the TwitterVB2 library in VB.NET 2008.
SIQM: Spatial Information Quality Management Toolset.
TIMETABLEASY Web: Under development.
TweetSharp: TweetSharp is a complete .NET library for micro-blogging platforms that allows you to write short and sweet expressions that fly to Twitter, Yammer...
UISandbox: UISandbox is sample C# source code showing how to deal with plugins requiring a sandbox, when those plugins must interact with a WPF application inte...
WinForm SharePoint Web Part Manager: The SharePoint Web Part Manager is a WinForm tool using the SharePoint object model that enables developers and power users to add, update, delete,...
WoW Character Viewer: View your World of Warcraft character (or anyone else's character), using this application. Written using Visual Basic Express 2008, then ported t...
Xrns2XMod: Xrns2XMod converts from Renoise format (xrns) to mod or xm, which are more compatible formats playable from xmplay or vlc.

New Releases

Arkane.FileProperties.DSS: 1.0 stable release: Executables and merge module for 1.0. (See documentation.)
Bluetooth Radar: Version 2.0: Adds an IrDA reference for Bluetooth sending using Obex, a project icon, and a Bluetooth detection mode (auto-close the application if there is no blueto...
BUtil: BUtil 5.0 Alpha: Backup task adding... in progress.
Chronos WPF: Chronos v1.0 RC 1: Chronos v1.0 RC 1. Development will be feature frozen after this release; only bug fixes will be allowed. Updated nRoute assembly to v0.4 (http:...
clipShow: Version 2.5: Release that addresses the canonical syntax issues in search discovered by Tschachim (thanks again!). Also, the play list and play all menu items s...
DarkBook: DarkBook alpha: Hi, here comes the alpha version of DarkBook. It has all the functions already but is still in development. I hope it's helpful for you, at least it...
DirectQ: Release 1.8.3a: Improvements to 1.8.2, which will shortly be removed. This replaces the original 1.8.3 release from earlier today, which had some late-breaking ...
Effect Custom Tool for Visual Studio: Effect Custom Tool v1.1: Effect Custom Tool for Visual Studio is a Visual Studio 2008 extension that helps you generate C# classes from effect (*.fx) files for use with Xna...
Folder Bookmarks: Folder Bookmarks 1.4.3: This is the latest version of Folder Bookmarks (1.4.3), with general improvements. It has an installer - it will create a directory 'CPascoe' in My...
gpsMe: gpsMe v0.3: Required hardware: Windows Mobile 6, .Net Compact Framework 3.5, an integrated GPS device, and a VGA or WVGA screen (normally works on others).
IST435: Lab 4 - Enterprise Level CMS with DotNetNuke: This is the "starter kit" that you must base your Lab 4 on. This lab must be completed in-class.
Mouse Jiggler: MouseJiggle-1.1: 1.1 release of Mouse Jiggler, now with x64 compatibility and the ability to start jiggling on run with the --jiggle or -j command-line switch.
Mustang Math: MustangMath.exe: This is a quick and dirty "0.1" prototype to demonstrate the speech recognition idea. It starts asking you questions automatically on launch and k...
MvcContrib: a Codeplex Foundation project: 2.0.36.0 for MVC2 (RTW): Please see the Change Log for a complete list of changes. MVC BootCamp. Description of the releases: MvcContrib.Release.zip, MvcContrib.dll, MvcC...
Nito.LINQ: Beta (v0.3): New features for this release: several new supported platforms (see below); PDBs that are source-indexed to the appropriate CodePlex changeset; ...
OpenIdPortableArea: 0.1.0.2 OpenIdPortableArea: OpenIdPortableArea.Release: DotNetOpenAuth.dll, DotNetOpenAuth.xml, MvcContrib.dll, MvcContrib.xml, OpenIdPortableArea.dll, OpenIdPortableAre...
PokeIn Comet Ajax Library: PokeIn Sample with Library v0.2: New version of the PokeIn library with sample (v0.2). There are new features in this release and no bugs detected yet.
Project Tru Tiên: Elements-test V1-fix (v2): The EL test build that follows the V1 fix; informally, this is the V2 fix of ELtest. This fix additionally covers Quests, adjusting them correctly t...
Rule 18 - Love your clipboard: Rule 18: This is the third public beta for the first version of Rule 18. This version has been updated to support Visual Studio 2010 RTM and .NET 4.0 RTM. ...
SevenZipSharp: SevenZipSharp 0.62: Added: extraction from SFX archives. Now it is possible to unrar RAR self-extractors, unzip ZIP self-extractors, etc. Extraction from DOC, XLS, (...
SharePoint Labs: SPLab3001A-FRA-Level200: This SharePoint lab will teach the persistence object layer that SharePoint uses to centrally store configuration data and o...
TTXPathNavigator: TTXPathNavigator for VS2010: Version for Visual Studio 2010.
turing machine simulator: SDS: SDS document.
VecDraw: VecDraw_0.2.25: Alpha release for test purposes.
WinForm SharePoint Web Part Manager: Beta 1: First release of the WinForm SharePoint web part manager tool.
Xrns2XMod: Xrns2XMod 0.5.1: Mod and XM conversion format - no sample data conversion at the moment.
Zip Solution: ZipSolution 5.3: Features: 1. Added WaitMsec for Visual Studio support with getting access to files in the post-build event; 2. Added ShowTextInToolbars to app.config ...

Most Popular Projects

Rawr
WBFS Manager
AJAX Control Toolkit
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
ASP.NET
Microsoft SQL Server Community & Samples
PHPExcel
patterns & practices – Enterprise Library

Most Active Projects

Rawr
patterns & practices – Enterprise Library
GMap.NET - Great Maps for Windows Forms & Presentation
Farseer Physics Engine
Ionics Isapi Rewrite Filter
NB_Store - Free DotNetNuke Ecommerce Catalog Module
BlogEngine.NET
jQuery Library for SharePoint Web Services
DotRas
Facebook Developer Toolkit

    Read the article

  • SQL SERVER – SSMS Automatically Generates TOP (100) PERCENT in Query Designer

    - by pinaldave
    Earlier this week, I was surfing various SQL forums to see what kind of help developers need in the SQL Server world. One of the questions indeed caught my attention. I am reproducing the complete question and scenario here to illustrate the point in a precise manner. Additionally, I have added the second part of the question for completeness.

Question: I am trying to create a view in Query Designer (not in the New Query window). Every time I try to create a view, it adds TOP (100) PERCENT automatically to the T-SQL script. No matter what I do, it always automatically adds the TOP (100) PERCENT to the script. I have attempted to copy-paste from Notepad, build a query, and a few other things, but with no success. I am really not sure what I am doing wrong with Query Designer. Here is my query script (I use AdventureWorks as a sample database):

SELECT Person.Address.AddressID
FROM Person.Address
INNER JOIN Person.AddressType
ON Person.Address.AddressID = Person.AddressType.AddressTypeID
ORDER BY Person.Address.AddressID

This script is automatically replaced by the following query:

SELECT TOP (100) PERCENT Person.Address.AddressID
FROM Person.Address
INNER JOIN Person.AddressType
ON Person.Address.AddressID = Person.AddressType.AddressTypeID
ORDER BY Person.Address.AddressID

However, when I try to do the same from the New Query window it works totally fine. And when I attempt to create a view of the same query, it gives the following error:

Msg 1033, Level 15, State 1, Procedure myView, Line 6
The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified.

It is pretty clear to me now that the script I have written needs TOP (100) PERCENT, so Query Designer adds it. Why do I need it? Is there any workaround to this issue?

I find this question particularly interesting, as it really touches the fundamentals of T-SQL query writing. Please note that the query which is automatically changed is not in the New Query editor but is opened from SSMS the following way: Database >> Views >> Right Click >> New View (the original post shows this in a screenshot).

Answer: The answer to the above question can be very long, but I will keep it simple and to the point. There are three things to discuss: 1) the reason for the error, 2) the reason SSMS auto-generates TOP (100) PERCENT, and 3) potential solutions to the error. Let us quickly see them in detail.

1) Reason for the Error

The reason is already given in the error message: ORDER BY is invalid in views and a few other objects; one has to use TOP (or OFFSET or FOR XML) along with it. By the semantics of the language, the optimizer only honors an ORDER BY in the same scope, that is, in the same SELECT/UPDATE/DELETE statement. A view is always consumed by an outer query that can apply its own ordering, so any effort spent ordering rows inside the view could be wasted; the final result set always follows the outermost query's ORDER BY. For exactly that reason the optimizer honors the final order of the query, not the view's, and ORDER BY is not allowed in a view. For further accuracy and clear guidance, I suggest you read the blog post by the Query Optimizer Team; they explain the subject very clearly.
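To make the scoping rule concrete, here is a minimal T-SQL sketch against AdventureWorks (the view name dbo.myAddressView is a placeholder of mine); the first batch fails with the Msg 1033 error quoted above, while the second shows the supported pattern with ORDER BY in the outermost query:

-- Fails: Msg 1033, ORDER BY is invalid in a view without TOP, OFFSET or FOR XML
CREATE VIEW dbo.myAddressView
AS
SELECT Person.Address.AddressID
FROM Person.Address
ORDER BY Person.Address.AddressID;
GO

-- Supported pattern: keep the view unordered...
CREATE VIEW dbo.myAddressView
AS
SELECT Person.Address.AddressID
FROM Person.Address;
GO

-- ...and order in the outermost query that consumes it
SELECT AddressID
FROM dbo.myAddressView
ORDER BY AddressID;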
2) Reason for the Auto-Generated TOP (100) PERCENT

One of the most popular workarounds to the above error is to use TOP (100) PERCENT in the view. TOP (100) PERCENT lets the user include ORDER BY in the query and thereby get past the error we just discussed. This gives users the impression that they have resolved the error and can successfully use ORDER BY in the view. Well, this is incorrect as well. When TOP (100) PERCENT is used, the ordering is not guaranteed, and it is ignored in the query where the view is used. Here is a blog post on this subject: Interesting Observation – TOP 100 PERCENT and ORDER BY. So when you create a new view in SSMS and build a query with ORDER BY, SSMS automatically adds TOP (100) PERCENT to avoid the error. Here is the Connect item for this issue; I am sure there are more Connect items as well, but I could not find them.

3) Potential Solutions

If you have been reading this post from the beginning, it should be clear by now that ORDER BY should not be used in a view, as it does not serve any purpose unless there is a specific need for it. If you are going to use TOP (100) PERCENT with ORDER BY, there is absolutely no need for the ORDER BY; rather, avoid using it altogether. Here is another blog post of mine which describes the same subject: ORDER BY Does Not Work – Limitation of the Views Part 1.

It is valid to use ORDER BY in a view if there is a clear business need for TOP with a percentage lower than 100 (for example TOP 10 PERCENT or TOP 50 PERCENT, etc.). In most cases, ORDER BY is not needed in the view at all; it should appear in the outermost query to present the result in the desired order, as in the sketch above. You can remove TOP (100) PERCENT and ORDER BY from the view before using the view in any query or procedure, and put ORDER BY, as the business need dictates, in the outermost query.

I think this sums up the concept in a few words. This is a very long topic and not easy to illustrate in a single blog post. I welcome your comments and suggestions.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, SQL View, T SQL, Technology

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >