Search Results

Search found 11403 results on 457 pages for 'conditional move'.

Page 17/457 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • How to find "\r" in a conditional?

    - by Werner
    Hi, in a C++ program a string reads info from a file, and in some part it contains a "\r" character. I need to remove it after the read, in order to avoid problems. I thought about comparing the string character by character; I assumed "\r" would take two chars, but no, it is just one. How would I use a conditional, something like if char[4] == '\r'? Thanks. P.S. How would the problem be solved in C?
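
    A minimal sketch of one common approach, assuming the text lives in a std::string (the erase/remove idiom strips every '\r'; checking a single position works the same way in C and C++):

        #include <algorithm>
        #include <string>

        // Remove every '\r' the string picked up from a Windows-style file.
        void strip_cr(std::string &s)
        {
            s.erase(std::remove(s.begin(), s.end(), '\r'), s.end());
        }

        // Checking one position: '\r' is a single char, so a plain comparison works:
        //     if (s[4] == '\r') { ... }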

    Read the article

  • Conditional installation with Wix

    - by Luca
    Is it possible to have a conditional installation configuration tied to the Visual Studio configuration environment? For example, when the DEBUG or RELEASE configuration is selected, WiX would pick different executables for the built installer.
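
    A rough sketch of one way this is commonly wired up, assuming the Visual Studio configuration is passed through to the WiX preprocessor (for example via DefineConstants in the .wixproj, so that $(var.Configuration) is available); the MyApp paths are placeholders, not part of the question:

        <?if $(var.Configuration) = "Debug" ?>
            <File Id="MainExe" Source="..\MyApp\bin\Debug\MyApp.exe" />
        <?else?>
            <File Id="MainExe" Source="..\MyApp\bin\Release\MyApp.exe" />
        <?endif?>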

    Read the article

  • xslt param conditional check

    - by LB
    I have a:

        <xsl:param name="SomeFlag" />

    In my XSLT template, I want to do a conditional check on SomeFlag. Currently I'm doing it as:

        <xsl:if test="$SomeFlag = true"> SomeFlag is true! </xsl:if>

    Is this how we evaluate the flag? I'm setting the param in C# as:

        xslarg.AddParam("SomeFlag", String.Empty, true);

    Any ideas?
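
    For reference, a small sketch: in XPath 1.0 a bare true inside the test is an element name, not a boolean literal, so $SomeFlag = true compares the parameter against (non-existent) <true> child elements. When the value is passed from C# as a real boolean, testing the parameter directly is usually enough:

        <xsl:if test="$SomeFlag">
            SomeFlag is true!
        </xsl:if>

        <!-- If the value were passed as a string instead, compare against the literal: -->
        <xsl:if test="$SomeFlag = 'true'">
            SomeFlag is true!
        </xsl:if>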

    Read the article

  • What would be the fastest way of storing or calculating legal move sets for chess pieces?

    - by ioSamurai
    For example, if a move is attempted I could just loop through a list of legal moves and compare the x,y, but I have to write logic to calculate those at least every time the piece is moved. Or, I can store them in an array [,] and then check that x and y are not 0 to know it is a legal move, saving the data like [0][1][0][0] etc. for each row where 1 is a legal move, but I still have to populate it. I wonder what the fastest way is to store and read a legal move on a piece object for calculation. I could use matrix math as well I suppose, but I don't know. Basically I want to persist the rules for a given piece so I can assign a piece object a little template of all its possible moves considering it is starting from its current location, which should be just one table of data. But is it faster to loop, or write LINQ queries, or store arrays and matrices?

        public class Move
        {
            public int x;
            public int y;
        }

        public class ChessPiece : Move
        {
            private List<Move> possibleMoves { get; set; }

            public bool LegalMove(int x, int y)
            {
                foreach (var p in possibleMoves)
                {
                    if (p.x == x && p.y == y)
                    {
                        return true;
                    }
                }
                return false; // the original snippet was missing this fall-through
            }
        }

    Anyone know?
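
    As a point of comparison, a hedged sketch (not the poster's code): precompute the reachable squares once per move generation and answer LegalMove with an O(1) array lookup instead of scanning a List:

        using System.Collections.Generic;

        public class ChessPiece
        {
            // true means "x,y is a legal target square from the current position"
            private readonly bool[,] legal = new bool[8, 8];

            public void AddMove(int x, int y)
            {
                legal[x, y] = true;
            }

            public bool LegalMove(int x, int y)
            {
                return x >= 0 && x < 8 && y >= 0 && y < 8 && legal[x, y];
            }
        }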

    Read the article

  • Outlook 2007 - Right Click Email > Move To {Folder Name}

    - by HK1
    I know it seems like an elementary question. What's the simplest and fastest way to move a read/completed email to a different folder in Outlook 2007 (connected to Exchange 2007)? I have a particular user that is challenged by technology. Using keyboard shortcuts is not an option. Dragging and dropping things - forget it. And too many clicks is frustrating to him. He keeps his email inbox completely clean (OCD=True) but he does that by deleting every single email as quickly as he's done with it. If an email can't be resolved in a day or two it almost drives him to insanity. As far as he's concerned, there's only one right thing to do with an email - reply to it and then delete it. He's being asked to save emails unless they are clearly trash. I'm trying to figure out what the simplest method is to move an email to a "Saved Emails" or "Archived" folder (don't confuse "folder" with .PST file, that's irrelevant for this discussion). I envisioned that I could possibly hi-jack every delete and put the email in his Saved folder. But I don't like this option because some emails are truly trash and I don't want him saving those. What I'd really like to do is something like this: Right Click Email in List > Move To {Folder Name} Is there a simple way to do this? Maybe someone has another suggestion on how to handle this situation.

    Read the article

  • How to move an mdadm RAID drive (EBS based) to a different AWS instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several web servers and we have an NFS server. On the NFS server (a Linux server) we have several EBS volumes that are mounted, and we've used mdadm to implement the different mounted volumes as a single RAID volume. The web servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on here, it would render our site unusable for the hours while Amazon is working on it. We want to try and prevent this downtime. I was thinking that we could prevent server downtime by setting up a new server temporarily, attaching the EBS drives (RAID volume) to that server, and having our web servers point there during maintenance. This is a very high risk operation since it involves several terabytes of our production data. What would be the safe way to move our logical RAID drive (md0) over to a new Amazon instance? I was hoping that I could start by building the new server, mounting the EBS volumes, and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works. That would mean having it mounted on two instances at the same time, though, which I don't believe is possible with the way that filesystems work. "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so this is not a practical solution. Any ideas?
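
    For what it's worth, a hedged outline of the usual sequence (device names, mount points and volume IDs are placeholders; EBS volumes can only be attached to one instance at a time, so the array cannot be mounted on both instances simultaneously for testing):

        # On the old instance:
        umount /export/media          # stop NFS clients / exports first
        mdadm --stop /dev/md0         # cleanly stop the array
        # Detach each EBS volume (console or API), then attach them all to the new instance.

        # On the new instance:
        mdadm --assemble --scan       # re-assembles md0 from the members' RAID metadata
        mount /dev/md0 /export/media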

    Read the article

  • Move the uploads folder in Wordpress

    - by Victor Hurdugaci
    Currently, my WordPress uploads folder is located in \wp-content\uploads. Initially there was no structure, so all files were put there. After a while it was changed to upload the files into \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (those starting with + are folders) like:

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | | | ..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into a structured hierarchy (based on date, move each one to a folder \wp-content\uploads\YEAR\MONTH). Now, my questions are:

    1. Where do I write and execute a script to do the movement (I don't have full access to the server, just to a cPanel and to the WordPress admin page)?
    2. What must be updated so that the links in posts that reference the unstructured files point to the new location of those files?
    3. Not fully related to the previous description: is it alright to move the whole uploads folder to another location, like \uploads?

    PS: Moving the files/updating the database manually is not an option :)
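
    One possible shape for such a script, as a rough sketch only (the paths, credentials and default wp_posts table name are assumptions; it could be uploaded via cPanel, run once, and should be tried on a copy first):

        <?php
        // Move each loose upload into uploads/YEAR/MONTH based on its file date,
        // then rewrite references to it inside post content.
        $uploads = '/home/account/public_html/wp-content/uploads';
        $db = new PDO('mysql:host=localhost;dbname=wordpress', 'user', 'pass');

        foreach (glob($uploads . '/*') as $path) {
            if (is_dir($path)) {
                continue;                         // skip the existing YEAR folders
            }
            $year  = date('Y', filemtime($path));
            $month = date('m', filemtime($path));
            $dest  = "$uploads/$year/$month";
            if (!is_dir($dest)) {
                mkdir($dest, 0755, true);
            }
            $file = basename($path);
            rename($path, "$dest/$file");

            // point existing posts at the new location
            $stmt = $db->prepare("UPDATE wp_posts SET post_content = REPLACE(post_content, :old, :new)");
            $stmt->execute(array(
                ':old' => "wp-content/uploads/$file",
                ':new' => "wp-content/uploads/$year/$month/$file",
            ));
        }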

    Read the article

  • win 7: copy all files from a complex folder structure to one folder

    - by Jason
    I have a complex folder structure like:

        h:\folder1\folder2\folder3
        h:\folder1\folder2\a
        h:\folder1\b
        h:\folder1\folder3
        h:\folder1\folder4\d

    only with probably hundreds of folders and a depth of around 4 (I'm guessing). Anyway, I want to run a command that will move all the files from every sub folder into the top level folder, so something like h:\folder1\*\*.* to h:\folder1. Is there a tool I can use to do this? Does Win 7 have a command that will do this? Thanks, Jason
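
    A sketch of the one-liner that is usually suggested for this (run from a command prompt; double the % signs inside a .bat file, and note that files with the same name in different subfolders will collide):

        for /r "h:\folder1" %f in (*) do @move "%f" "h:\folder1\"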

    Read the article

  • Xen guest migration to a host with missing features

    - by deploymonkey
    If I want to move a Xen guest (domU) from one host (dom0) to another host on a different hardware platform which is missing some capabilities, say virtualisation features, especially directio or the like, will my image be able to run despite the missing capabilities of the new host? This is important because I need to know whether I can prepare Xen images on my workstation with full virtualisation features and deploy the same images on less capable servers later on. Thanks for the help and input.

    Read the article

  • Moving a window that can't be dragged by the title bar?

    - by ccat
    I have Fallout Collection and I am trying to run it windowed because of the low resolution. When I change the mode to windowed via the .ini file, however, it starts the game with the window shoved into the upper left corner of my screen. Clicking the title bar of the window just clicks back into the game program. So how can I move it to the center of my screen? I am using Windows 7 professional 64-bit.

    Read the article

  • conditional selects with jQuery and the Validation plugin

    - by dbonomo
    Hi, I've got a form that I am validating with the jQuery validation plugin. I would like to add a conditional select box (a selection box that is populated/shown depending on the selection of another) and have it validate as well. Here is what I have so far:

        $(document).ready(function(){
            $("#customer_information").validate({
                //disable the submit button after it is clicked to prevent multiple submissions
                submitHandler: function(form){
                    if(!this.wasSent){
                        this.wasSent = true;
                        $(':submit', form).val('Please wait...')
                            .attr('disabled', 'disabled')
                            .addClass('disabled');
                        form.submit();
                    } else {
                        return false;
                    }
                },
                //Customizes error placement
                errorPlacement: function(error, element) {
                    error.insertAfter(element)
                    error.wrap("<div class=\"form_error\">")
                }
            });

            $(".courses").hide();

            $("#course_select").change(function() {
                switch($(this).val()){
                    case "Certificates":
                        $(".courses").hide().parent().find("#Certificates").show();
                        $(".filler").hide();
                        break;
                    case "Associates":
                        $(".courses").hide().parent().find("#Associates").show();
                        $(".filler").hide();
                        break;
                    case "":
                        $(".filler").show();
                        $(".courses").hide();
                }
            });
        });

    And the HTML:

        <select id="course_select">
            <option value="">Please Select</option>
            <option value="Certificates">Certificates</option>
            <option value="Associates">Associates</option>
        </select>

        <div id="Form0" class="filler"><select name="filler_select"><option value="">Please Select Course Type</option></select></div>

        <div id="Associates" class="courses">
            <select name="lead_source_id" id="Requested Program" class="required">
                <option value="">Please Select</option>
                <option value="01">Health Information Technology</option>
                <option value="02">Human Resources</option>
                <option value="03">Marketing</option>
            </select>
        </div>

        <div id="Certificates" class="courses">
            <select name="lead_source_id" id="Requested Program" class="required">
                <option value="">Please Select</option>
                <option value="04">Accounting Services</option>
                <option value="05">Bookkeeping</option>
                <option value="06">Child Day Care</option>
            </select>
        </div>

    So far, the select is working for me, but validation thinks that the field is empty even when a value is selected. It looks like there are a ton of ways to do conditional selects in jQuery. This was the best way I managed to work out (I'm new to jQuery), but I'd love to hear what you folks feel is the "best" way, especially if it works well with the validation plugin. Thanks!
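
    One hedged idea worth trying (a sketch, not a confirmed fix): both groups contain a select with the same name and the same class="required", so the plugin can end up validating the hidden one. Disabling the selects inside hidden groups makes the Validation plugin, and the eventual POST, see only the visible field:

        $("#course_select").change(function () {
            $(".courses").hide().find("select").attr("disabled", "disabled");
            var val = $(this).val();
            if (val) {
                $("#" + val).show().find("select").removeAttr("disabled");
                $(".filler").hide();
            } else {
                $(".filler").show();
            }
        });

    (The duplicate id "Requested Program" would also need to be made unique per select for the markup to be valid.)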

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction

    The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising; it's an incredibly complicated beast and we're abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them. (Image: example dataflow.)

    In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components.

    I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum. That is, that people assume adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this (it is intuitive to think that more components means more work), however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that, I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I'll happily call that one out as a myth even without any investigation!

    The Setup

    I have a 2GB datafile which is a list of 4731904 (~4.7 million) customer records with various attributes against them, and it contains 2 columns that I am going to use for categorisation:

        [YearlyIncome]
        [BirthDate]

    The data file is a SSIS raw format file, which I chose to use because it is the quickest way of getting data into a dataflow and, given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible.

    In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS' Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource intensive means of terminating a datapath.

    The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive. It's running SQL Server 2008 R2 on Windows 7.

    The Variables

    Here are the three combinations of components that I am going to test:

    1. One Conditional Split - A single Conditional Split component, CSPL Split by Month of Birth and Income Category, that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. (Screenshot: the expression logic in use.)

    2. Derived Column & Conditional Split - A Derived Column component, DER Income Category, that adds a new column [IncomeCategory] which will contain one of two possible text values {"LessThan50000","GreaterThan50000"} and uses [YearlyIncome] to determine which value each row should get. A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. (Screenshots: the expression logic in use for DER Income Category and CSPL Split by Month of Birth and Income Category.)

    3. Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each Income Category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case then I am separating the single Conditional Split of #1 into three Conditional Split components. (Screenshots: the expression logic in use for CSPL Split by Income Category and CSPL Split by Month of Birth 1 & 2.)

    Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. (Screenshot: the dataflow containing three Conditional Split components.)

    As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes:

    1. The dataflow containing just one Conditional Split (i.e. #1) will be quicker
    2. There is no significant difference between any of them
    3. One of the two dataflows containing multiple transformation components will be quicker

    Regardless of which of those outcomes comes to pass, we will have learnt something and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.

    The Results and Analysis

    The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table. (Screenshot: the results table.)

    It is plain to see from the average that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that performance for this dataflow can be predicted with much greater confidence too.

    The Explanation

    I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of Month of Birth and Income Category, 24 in total. These expressions get evaluated in the order that they appear, and hence if we assume that Month of Birth and Income Category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is (1 + 24) / 2 = 12.5, i.e. the minimum plus the maximum, divided by 2.

    Now take a look at the screenshots for the second dataflow. We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case then we have 1 + 12.5 = 13.5 expected evaluations for each row; that would account for the slightly longer average execution time for this dataflow.

    Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5.

    14.5 is still more than 12.5 & 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate, one for Income Category and one for Month of Birth; the expressions in the Conditional Split in the third dataflow, however, only have one predicate, thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between:

        MONTH(BirthDate) == 1 && YearlyIncome <= 50000

    and

        MONTH(BirthDate) == 1

    In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant amount of extra CPU cycles; that's where our duration difference comes from.

    The Wrap-up

    The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary.

    I encourage you to download the package and see if you get similar results; let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS.

    If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out:

    - Inequality joins, Asynchronous transformations and Lookups
    - Destination Adapter Comparison
    - Don't turn the dataflow into a cursor
    - SSIS Dataflow - Designing for performance (webinar)

    Any comments? Let me know! @Jamiet

    Read the article

  • C# Conditional Compilation and framework targets

    - by McKAMEY
    There are a few minor places where code for my project may be able to be drastically improved if the target framework were a newer version. I'd like to be able to better leverage conditional compilation in C# to switch these as needed. Something like:

        #if NET_40
        using FooXX = Foo40;
        #elif NET_35
        using FooXX = Foo35;
        #else
        using FooXX = Foo20;
        #endif

    Do these symbols come for free? Do I need to inject these symbols as part of the project configuration? Seems easy enough to do since I'll know which framework is being targeted from msbuild. I think I've seen that NET_40 symbol isn't defined? If so I think I could do this?

        #if !NET_35 && !NET_20
        #define NET_40
        #endif

    Or do I need to define it in the msbuild command: /p:DefineConstants="NET_40"
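
    As far as I know these symbols are not predefined in the old-style .csproj; one way they are typically injected, sketched here (TargetFrameworkVersion is standard MSBuild, the NET_xx names are your own):

        <PropertyGroup Condition=" '$(TargetFrameworkVersion)' == 'v4.0' ">
            <DefineConstants>$(DefineConstants);NET_40</DefineConstants>
        </PropertyGroup>
        <PropertyGroup Condition=" '$(TargetFrameworkVersion)' == 'v3.5' ">
            <DefineConstants>$(DefineConstants);NET_35</DefineConstants>
        </PropertyGroup>

    Passing /p:DefineConstants="NET_40" on the msbuild command line works as well, though as a global property it overrides whatever DefineConstants the project itself sets.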

    Read the article

  • Pass a value from an included function into a conditional on the including page

    - by Brad
    I have a page that includes/embeds a file that contains a number of functions. One of the functions has a variable I want to pass back onto the page that the file is embedded on.

        <?php
        include('functions.php');
        userInGroup();
        if($user_in_group) {
            print 'user is in group';
        } else {
            print 'user is not in group';
        }
        ?>

    The function within functions.php:

        <?php
        function userInGroup() {
            foreach($group_access as $i => $group) {
                if($group_session == $group) {
                    $user_in_group = TRUE;
                    break;
                } else {
                    $user_in_group == FALSE;
                }
            }
        }
        ?>

    I am unsure as to how I can pass the value from the function userInGroup back to the page it runs the conditional if($user_in_group) on. Any help is appreciated.
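
    A small sketch of the usual fix: return the flag from the function and use the return value, with the group variables passed in explicitly since they are not visible inside the function's scope as written (the parameter names here are assumptions):

        <?php
        function userInGroup($group_access, $group_session)
        {
            foreach ($group_access as $group) {
                if ($group_session == $group) {
                    return true;
                }
            }
            return false;
        }

        // on the including page:
        if (userInGroup($group_access, $group_session)) {
            print 'user is in group';
        } else {
            print 'user is not in group';
        }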

    Read the article

  • Missing Java error on conditional expression?

    - by Federico Cristina
    With methods test1() and test2(), I get a Type Mismatch error: "Cannot convert from null to int", which is correct; but why am I not getting the same in method test3()? How does Java evaluate the conditional expression differently in that case? (Obviously, a NullPointerException will arise at runtime.) Is it a missing error?

        public class Test {
            public int test1(int param) {
                return null;
            }

            public int test2(int param) {
                if (param > 0)
                    return param;
                return null;
            }

            public int test3(int param) {
                return (param > 0 ? param : null);
            }
        }

    Thanks in advance!
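
    For context, a short sketch of why the third case behaves differently: with an int and a null as the two branches, the conditional expression is typed as Integer, so the method compiles and the failure only appears when the result is unboxed:

        public int test3(int param) {
            // param > 0 ? param : null  has type Integer (autoboxing),
            // so this compiles; returning it unboxes, and the null branch
            // throws a NullPointerException at run time.
            return param > 0 ? param : null;
        }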

    Read the article

  • Moving inserted container element if possible

    - by doublep
    I'm trying to achieve the following optimization in my container library: when inserting an lvalue-referenced element, copy it to internal storage; but when inserting an rvalue-referenced element, move it if supported. The optimization is supposed to be useful e.g. if the contained element type is something like std::vector, where moving if possible would give a substantial speedup. However, so far I have been unable to devise any working scheme for this. My container is quite complicated, so I can't just duplicate insert() code several times: it is large. I want to keep all "real" code in some inner helper, say do_insert() (may be templated), and various insert()-like functions would just call that with different arguments. My best bet code for this (a prototype, of course, without doing anything real):

        #include <iostream>
        #include <utility>

        struct element
        {
            element () { };
            element (element&&) { std::cerr << "moving\n"; }
        };

        struct container
        {
            void insert (const element& value) { do_insert (value); }
            void insert (element&& value) { do_insert (std::move (value)); }

        private:
            template <typename Arg>
            void do_insert (Arg arg)
            {
                element x (arg);
            }
        };

        int main ()
        {
            {
                // Shouldn't move.
                container c;
                element x;
                c.insert (x);
            }
            {
                // Should move.
                container c;
                c.insert (element ());
            }
        }

    However, this doesn't work, at least with GCC 4.4 and 4.5: it never prints "moving" on stderr. Or is what I want impossible to achieve, and that's why emplace()-like functions exist in the first place?
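
    For reference, a sketch of the usual fix: do_insert takes Arg by value, so inside the function arg is always an lvalue and element x (arg) always copies. Taking a forwarding reference and std::forward-ing it preserves the value category:

        template <typename Arg>
        void do_insert (Arg&& arg)
        {
            element x (std::forward<Arg> (arg));
        }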

    Read the article

  • PDO update query with conditional?

    - by dmontain
    I have a PDO MySQL query that updates 3 fields:

        $update = $mypdo->prepare("UPDATE tablename SET field1=:field1, field2=:field2, field3=:field3 WHERE key=:key");

    But I want field3 to be updated only when $update3 = true; (meaning that the update of field3 is controlled by a conditional statement). Is this possible to accomplish with a single query? I could do it with 2 queries, where I update field1 and field2, then check the boolean and update field3 if needed in a separate query:

        //run this query to update only fields 1 and 2
        $update_part1 = $mypdo->prepare("UPDATE tablename SET field1=:field1, field2=:field2 WHERE key=:key");

        //if field3 should be updated, run a separate query to update it separately
        if ($update3){
            $update_part2 = $mypdo->prepare("UPDATE tablename SET field3=:field3 WHERE key=:key");
        }

    But hopefully there is a way to accomplish this in 1 query?
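
    One way to keep it to a single query, sketched here (the $field* variables are placeholders for whatever values are being bound):

        <?php
        $sql = "UPDATE tablename SET field1 = :field1, field2 = :field2";
        if ($update3) {
            $sql .= ", field3 = :field3";
        }
        $sql .= " WHERE key = :key";

        $update = $mypdo->prepare($sql);
        $params = array(':field1' => $field1, ':field2' => $field2, ':key' => $key);
        if ($update3) {
            $params[':field3'] = $field3;
        }
        $update->execute($params);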

    Read the article

  • Using Min/Max with conditional operator

    - by user638501
    Hello all, I am trying to run a query to find max and min values, and then use a conditional operator. However, when I try to run the following query, it gives me the error "misuse of aggregate: min()". My query is:

        SELECT a.prim_id, min(b.new_len*36) as min_new_len, max(b.new_len*36) as max_new_len
        FROM tb_first a, tb_second b
        WHERE a.sec_id = b.sec_id
          AND min_new_len > 1900
          AND max_new_len < 75000
        GROUP BY a.prim_id
        ORDER BY avg(b.new_len*36);

    Any suggestions?
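
    The usual cause is that aggregates cannot be used in a WHERE clause; a sketch of the same query with the aggregate conditions moved into HAVING:

        SELECT a.prim_id,
               MIN(b.new_len * 36) AS min_new_len,
               MAX(b.new_len * 36) AS max_new_len
        FROM tb_first a
        JOIN tb_second b ON a.sec_id = b.sec_id
        GROUP BY a.prim_id
        HAVING MIN(b.new_len * 36) > 1900
           AND MAX(b.new_len * 36) < 75000
        ORDER BY AVG(b.new_len * 36);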

    Read the article

  • Cell color change in Excel using conditional formatting in C#

    - by Suryakavitha
    Hi, I have exported a DataTable to Excel successfully. Now I have to change some cell colors using conditional formatting in the Excel sheet from C#. For example, if a cell contains the text "Cat" it should be displayed in green, and if a cell contains the text "Dog" it should be displayed in blue. How can I do this? Please, can anyone tell me the solution or give me the code for it? Thanks in advance.
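
    A hedged sketch using the Excel interop FormatConditions collection (the worksheet/range objects and the exact colours are assumptions, not part of the question):

        using Excel = Microsoft.Office.Interop.Excel;

        Excel.Range range = worksheet.get_Range("A1", "A100");

        Excel.FormatCondition catRule = (Excel.FormatCondition)range.FormatConditions.Add(
            Excel.XlFormatConditionType.xlCellValue,
            Excel.XlFormatConditionOperator.xlEqual,
            "=\"Cat\"");
        catRule.Interior.Color = System.Drawing.ColorTranslator.ToOle(System.Drawing.Color.Green);

        Excel.FormatCondition dogRule = (Excel.FormatCondition)range.FormatConditions.Add(
            Excel.XlFormatConditionType.xlCellValue,
            Excel.XlFormatConditionOperator.xlEqual,
            "=\"Dog\"");
        dogRule.Interior.Color = System.Drawing.ColorTranslator.ToOle(System.Drawing.Color.Blue);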

    Read the article

  • Using ant, rename a directory without knowing the full path?

    - by mixonic
    Howdy friends! Given a zipfile with an unknown directory, how can I rename or move that directory to a normalized path?

        <!-- Going to fetch some stuff -->
        <target name="get.remote">
            <!-- Get the zipfile -->
            <get src="http://myhost.com/package.zip" dest="package.zip"/>
            <!-- Unzip the file -->
            <unzip src="package.zip" dest="./"/>
            <!-- Now there is a package-3d28djh3 directory. The part after package-
                 is a hash and cannot be known ahead of time -->
            <!-- Remove the zipfile -->
            <delete file="package.zip"/>
            <!-- Now we need to rename "package-3d28djh3" to "package". My best attempt
                 is below, but it just moves package-3d28djh3 into package instead of
                 renaming the directory. -->
            <!-- Make a new home for the contents. -->
            <mkdir dir="package" />
            <!-- Move the contents -->
            <move todir="package/">
                <fileset dir=".">
                    <include name="package-*/*"/>
                </fileset>
            </move>
        </target>

    I'm not much of an ant user, any insight would be helpful. Thanks much, -Matt
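
    One possible approach, sketched under the assumption that exactly one package-* directory exists: resolve its name into a property with pathconvert, then rename it with move:

        <pathconvert property="package.dir">
            <dirset dir="." includes="package-*"/>
        </pathconvert>
        <move file="${package.dir}" tofile="package"/>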

    Read the article

  • How can I support conditional validation of model properties

    - by Jeff
    I currently have a form that I am building that needs to support two different versions. Each version might use a different subset of form fields. I have to do this to support two different clients, but I don't want to have entirely different controller actions for both. So, I am trying to come up with a way to use a strongly typed model with validation attributes, but have some of these attributes be conditional. One approach I can think of is similar to Steve Sanderson's partial validation approach, where I would clear the model errors in a filter's OnActionExecuting based on which version of the form was active. The other approach I was thinking of would be to break the model up into pieces using something like:

        class FormModel {
            public Form1 Form1Model {get; set;}
            public Form2 FormModel {get; set;}
        }

    and then we would validate appropriately depending on
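
    A rough sketch of the filter idea mentioned above (attribute and field names are made up for illustration): clear the ModelState errors for fields that do not apply to the active form version before the action runs:

        using System.Web.Mvc;

        public class IgnoreFieldsAttribute : ActionFilterAttribute
        {
            private readonly string[] fields;

            public IgnoreFieldsAttribute(params string[] fields)
            {
                this.fields = fields;
            }

            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                var modelState = filterContext.Controller.ViewData.ModelState;
                foreach (var field in fields)
                {
                    if (modelState.ContainsKey(field))
                        modelState[field].Errors.Clear();
                }
            }
        }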

    Read the article

  • How to do conditional comments in Drupal?

    - by wamp
        <!-- Additional IE/Win specific style sheet (Conditional Comments) -->
        <!--[if IE]>
        <style type="text/css" media="all">@import "files/tabs-ie.css";</style>
        <script type="text/javascript" src="files/DOMAssistantCompressed-2.7.4.js"></script>
        <script type="text/javascript" src="files/ie-css3.js"></script>
        <![endif]-->
        ..
        <!--[if IE 7.0]>
        <link rel="stylesheet" href="files/ie-7.css" type="text/css" media="all" charset="utf-8" />
        <![endif]-->

    How do I do the above in Drupal?
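
    One hedged sketch, assuming Drupal 7, where drupal_add_css() and drupal_add_js() accept a 'browsers' option that wraps the output in conditional comments ('mytheme' is a placeholder):

        <?php
        $path = drupal_get_path('theme', 'mytheme');
        drupal_add_css($path . '/files/tabs-ie.css',
          array('browsers' => array('IE' => TRUE, '!IE' => FALSE)));
        drupal_add_css($path . '/files/ie-7.css',
          array('browsers' => array('IE' => 'IE 7', '!IE' => FALSE)));
        drupal_add_js($path . '/files/ie-css3.js',
          array('browsers' => array('IE' => TRUE, '!IE' => FALSE)));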

    Read the article
