Search Results

Search found 7294 results on 292 pages for 'parameters'.


  • jQuery: how to pass additional parameters to the success callback for a $.ajax call?

    - by dotnetgeek
    Hello jQuery Ninjas! I am trying, in vain it seems, to be able to pass additional parameters back to the success callback method that I have created for a successful ajax call. A little background. I have a page with a number of dynamically created textbox / selectbox pairs. Each pair having a dynamically assigned unique name such as name="unique-pair-1_txt-url" and name="unique-pair-1_selectBox" then the second pair has the same but the prefix is different. In an effort to reuse code, I have crafted the callback to take the data and a reference to the selectbox. However when the callback is fired the reference to the selectbox comes back as 'undefined'. I read here that it should be doable. I have even tried taking advantage of the 'context' option but still nothing. Here is the script block that I am trying to use: <script type="text/javascript" language="javascript"> $j = jQuery.noConflict(); function getImages(urlValue, selectBox) { $j.ajax({ type: "GET", url: $j(urlValue).val(), dataType: "jsonp", context: selectBox, success:function(data){ loadImagesInSelect(data, $j(this)) } , error:function (xhr, ajaxOptions, thrownError) { alert(xhr.status); alert(thrownError); } }); } function loadImagesInSelect(data, selectBox) { //var select = $j('[name=single_input.<?cs var:op_unique_name ?>.selImageList]'); var select = selectBox; select.empty(); $j(data).each(function() { var theValue = $j(this)[0]["@value"]; var theId = $j(this)[0]["@name"]; select.append("<option value='" + theId + "'>" + theValue + "</option>"); }); select.children(":first").attr("selected", true); } From what I have read, I feel I am close but I just cant put my finger on the missing link. Please help in your typical ninja stealthy ways. TIA
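
    A hedged sketch of the closure approach (one common fix, not necessarily the only one): because the success handler is defined inside getImages(), it can simply close over the selectBox argument, so no context option is needed. The function names follow the question; the rest is illustrative.

    ```javascript
    function getImages(urlValue, selectBox) {
        $j.ajax({
            type: "GET",
            url: $j(urlValue).val(),
            dataType: "jsonp",
            success: function (data) {
                // selectBox is still in scope here via the closure
                loadImagesInSelect(data, $j(selectBox));
            },
            error: function (xhr, ajaxOptions, thrownError) {
                alert(xhr.status + " " + thrownError);
            }
        });
    }
    ```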

    Read the article

  • How can I place validating constraints on my method input parameters?

    - by rcampbell
    Here is the typical way of accomplishing this goal: public void myContractualMethod(final String x, final Set<String> y) { if ((x == null) || (x.isEmpty())) { throw new IllegalArgumentException("x cannot be null or empty"); } if (y == null) { throw new IllegalArgumentException("y cannot be null"); } // Now I can actually start writing purposeful // code to accomplish the goal of this method I think this solution is ugly. Your methods quickly fill up with boilerplate code checking the valid input parameters contract, obscuring the heart of the method. Here's what I'd like to have: public void myContractualMethod(@NotNull @NotEmpty final String x, @NotNull final Set<String> y) { // Now I have a clean method body that isn't obscured by // contract checking If those annotations look like JSR 303/Bean Validation Spec, it's because I borrowed them. Unfortunitely they don't seem to work this way; they are intended for annotating instance variables, then running the object through a validator. Which of the many Java design-by-contract frameworks provide the closest functionality to my "like to have" example? The exceptions that get thrown should be runtime exceptions (like IllegalArgumentExceptions) so encapsulation isn't broken.
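
    As a hedged sketch in plain Java (not one of the design-by-contract frameworks the question asks about), a small guard-clause helper already reduces the boilerplate to one line per parameter; the Contracts class and its method names are illustrative, not from any library.

    ```java
    import java.util.Set;

    public final class Contracts {
        private Contracts() {}

        public static <T> T notNull(T value, String name) {
            if (value == null) {
                throw new IllegalArgumentException(name + " cannot be null");
            }
            return value;
        }

        public static String notEmpty(String value, String name) {
            notNull(value, name);
            if (value.isEmpty()) {
                throw new IllegalArgumentException(name + " cannot be empty");
            }
            return value;
        }

        // Usage inside the method from the question:
        public static void myContractualMethod(final String x, final Set<String> y) {
            notEmpty(x, "x");
            notNull(y, "y");
            // purposeful code starts here
        }
    }
    ```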

    Read the article

  • How to test whether raising an event results in a method being called, depending on the values of the event's parameters

    - by MattC
    I'm trying to write a unit test that will raise an event on a mock object which my test class is bound to. What I'm keen to test though is that when my test class gets it's eventhandler called it should only call a method on certain values of the eventhandlers parameters. My test seems to pass even if I comment the code that calls ProcessPriceUpdate(price); I'm in VS2005 so no lambdas please :( So... public delegate void PriceUpdateEventHandler(decimal price); public interface IPriceInterface{ event PriceUpdateEventHandler PriceUpdate; } public class TestClass { IPriceInterface priceInterface = null; TestClass(IPriceInterface priceInterface) { this.priceInterface = priceInterface; } public void Init() { priceInterface.PriceUpdate += OnPriceUpdate; } public void OnPriceUpdate(decimal price) { if(price > 0) ProcessPriceUpdate(price); } public void ProcessPriceUpdate(decimal price) { //do something with price } } And my test so far :s public void PriceUpdateEvent() { MockRepository mock = new MockRepository(); IPriceInterface pi = mock.DynamicMock<IPriceInterface>(); TestClass test = new TestClass(pi); decimal prc = 1M; IEventRaiser raiser; using (mock.Record()) { pi.PriceUpdate += null; raiser = LastCall.IgnoreArguments().GetEventRaiser(); Expect.Call(delegate { test.ProcessPriceUpdate(prc); }).Repeat.Once(); } using (mock.Playback()) { test.Init(); raiser.Raise(prc); } }
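
    A hedged sketch of a framework-free alternative: the expectation in the question is set on the real TestClass rather than on a mock, so Rhino Mocks never verifies it. Assuming ProcessPriceUpdate and the constructor can be made public and virtual, a small recording subclass makes the conditional call observable; the assertion lines are NUnit-style and purely illustrative.

    ```csharp
    public class RecordingTestClass : TestClass
    {
        public int ProcessCalls;

        public RecordingTestClass(IPriceInterface priceInterface) : base(priceInterface) { }

        public override void ProcessPriceUpdate(decimal price)
        {
            ProcessCalls++;   // record the call instead of doing real work
        }
    }

    // In the test body, after raising the event (or calling OnPriceUpdate directly):
    //   recorder.OnPriceUpdate(0M);  Assert.AreEqual(0, recorder.ProcessCalls);
    //   recorder.OnPriceUpdate(1M);  Assert.AreEqual(1, recorder.ProcessCalls);
    ```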

    Read the article

  • How to send parameters with the same encoding from JavaScript?

    - by nimcap
    I have a javascript file that lots of people have embedded to their pages. Since I am hosting the file, I have control over that javascript file; I cannot control the way it is embedded because lots of people is using it already. This javascript file sends GET requests to my servlets, and the parameters passed with the request are recorded to DB. For example, javascript sends a request to http://myserver.com/servlet?p1=123&p2=aString and then servlet records 123 and aString to DB somehow. Before sending strings I use encodeURIComponent() to encode it. But what I figured out is every client sends the same string with different encodings depending on either their browser or the site they are visiting. As a result, same strings are represented with different characters when it reaches servlet (so they are different strings). What I am trying to do is to convert the strings to one kind of encoding from javascript so when they reach the client same words are represented with same characters. How is this possible? PS. If there is a way to convert the encoding from Java it is also applicable.
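
    A hedged sketch of the servlet side, assuming the JavaScript always uses encodeURIComponent() (which percent-encodes UTF-8 bytes): decode the raw query string explicitly as UTF-8 instead of relying on the container's default charset. The helper class and its name are illustrative; containers such as Tomcat can achieve the same thing with a connector-level URIEncoding setting.

    ```java
    import java.io.UnsupportedEncodingException;
    import java.net.URLDecoder;

    public final class QueryUtil {
        // Returns the named GET parameter decoded as UTF-8, or null if absent.
        public static String decodeParam(String rawQueryString, String name)
                throws UnsupportedEncodingException {
            if (rawQueryString == null) return null;
            for (String pair : rawQueryString.split("&")) {
                String[] kv = pair.split("=", 2);
                if (kv.length == 2 && kv[0].equals(name)) {
                    return URLDecoder.decode(kv[1], "UTF-8");
                }
            }
            return null;
        }
    }

    // In the servlet: String p2 = QueryUtil.decodeParam(request.getQueryString(), "p2");
    ```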

    Read the article

  • Is there a standard syntax for encoding structured objects as HTTP GET request parameters?

    - by lexicore
    Imagine we need to pass a a number structured objects to the web application - for instance, locale, layout settings and a definition of some query. This can be easily done with JSON or XML similar to the following fragment: <Locale>en</Locale> <Layout> <Block id="header">hide</Block> <Block id="footer">hide</Block> <Block id="navigation">minimize</Block> </Layout> <Query> <What>water</What> <When> <Start>2010-01-01</Start> </When> </Query> However, passing such structures with HTTP implies (roughly speaking) HTTP POST. Now assume we're limited to HTTP GET. Is there some kind of a standard solution for encoding structured data in HTTP GET request parameters? I can easily imagine something like: Locale=en& Layout.Block.header=hide& Layout.Block.footer=hide& Layout.Block.navigation=minimize& Query.What=water& Query.When.Start=2010-01-01 But what I'm looking for is a "standard" syntax, if there's any. ps. I'm surely aware of the problem with URL length. Please assume that it's not a problem in this case.
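
    There is no single standard, but here is a hedged sketch of one common convention: serialize the structure to JSON and send it as a single URL-encoded parameter (the parameter name q is arbitrary). The bracketed "deep object" style used by PHP and Rails form handling (Layout[Block][header]=hide) is another widespread, if informal, convention.

    ```javascript
    var request = {
        locale: "en",
        layout: { header: "hide", footer: "hide", navigation: "minimize" },
        query:  { what: "water", when: { start: "2010-01-01" } }
    };
    var url = "/search?q=" + encodeURIComponent(JSON.stringify(request));
    ```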

    Read the article

  • Function parameter evaluation order: is it undefined behaviour if we pass a reference?

    - by bolov
    This is undefined behaviour: void feedMeValue(int x, int a) { cout << x << " " << a << endl; } int main() { int a = 2; int &ra = a; feedMeValue(ra = 3, a); return 0; } because depending on what parameter gets evaluated first we could call (3, 2) or (3, 3). However this: void feedMeReference(int x, int const &ref) { cout << x << " " << ref << endl; } int main() { int a = 2; int &ra = a; feedMeReference(ra = 3, a); return 0; } will always output 3 3 since the second parameter is a reference and all parameters have been evaluated before the function call, so even if the second parameter is evaluated before of after ra = 3, the function received a reference to a wich will have a value of 2 or 3 at the time of the evaluation, but will always have the value 3 at the time of the function call. Is the second example UB? It is important to know because the compiler is free to do anything if he detects undefined behaviour, even if I know it would always yield the same results. *Note: I think that feedMeReference(a = 3, a) is the exact same situation as feedMeReference(ra = 3, a). However it seems not everybody agrees, in the addition to having 2 completely different answers.
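
    Whatever the answer on strict undefined behaviour, here is a hedged sketch of the safe rewrite: sequencing the assignment before the call removes any dependence on argument evaluation order.

    ```cpp
    #include <iostream>

    void feedMeReference(int x, int const& ref) {
        std::cout << x << " " << ref << std::endl;
    }

    int main() {
        int a = 2;
        int& ra = a;
        ra = 3;                   // side effect is sequenced before the call
        feedMeReference(ra, a);   // prints "3 3" with no ordering question
        return 0;
    }
    ```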

    Read the article

  • How to use two parameters pointing to the same structure in one function?

    - by ZaZu
    Hey guys, I have my code below that consits of a structure, a main, and a function. The function is supposed to display two parameters that have certain values, both of which point to the same structure. The problem I dont know how to add the SECOND parameter onto the following code : #include<stdio.h> #define first 500 #define sec 500 struct trial{ int f; int r; float what[first][sec]; }; int trialtest(trial *test); main(){ trial test; trialtest(&test); } int trialtest(trial *test){ int z,x,i; for(i=0;i<5;i++){ printf("%f,(*test).what[z][x]); } return 0; } I need to add a new parameter test_2 there (IN THE SAME FUNCTION) using this code : for(i=0;i<5;i++){ printf("%f,(*test_2).what[z][x]); How does int trialtest(trial *test) changes ? and how does it change in main ? I know that I should declare test_2 as well, like this : trial test,test_2; But what about passing the address in the function ? I do not need to edit it right ? trialtest(&test); --- This will remain the same ? So please, tell me how would I use test_2 as a parameter pointing to the same structure as test, both in the same function.. Thank you !! Please tell me if you need more clarification
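
    A hedged sketch of the two-parameter version, with the question's printf tidied up (it was missing a closing quote and used uninitialized loop indices). The structures are declared static because two of them occupy roughly 2 MB, which may not fit on the stack; the values printed are zero-initialized and only illustrative.

    ```c
    #include <stdio.h>

    #define first 500
    #define sec   500

    struct trial {
        int f;
        int r;
        float what[first][sec];
    };

    int trialtest(struct trial *test, struct trial *test_2)
    {
        int i;
        for (i = 0; i < 5; i++) {
            printf("%f %f\n", test->what[i][0], test_2->what[i][0]);
        }
        return 0;
    }

    int main(void)
    {
        static struct trial test, test_2;   /* two separate structures */
        trialtest(&test, &test_2);          /* pass the address of each one */
        return 0;
    }
    ```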

    Read the article

  • Is There a Better Way to Feed Different Parameters into Functions with If-Statements?

    - by FlowofSoul
    I've been teaching myself Python for a little while now, and I've never programmed before. I just wrote a basic backup program that writes out the progress of each individual file while it is copying. I wrote a function that determines buffer size so that smaller files are copied with a smaller buffer, and bigger files are copied with a bigger buffer. The way I have the code set up now doesn't seem very efficient, as there is an if loop that then leads to another if loops, creating four options, and they all just call the same function with different parameters. import os import sys def smartcopy(filestocopy, dest_path, show_progress = False): """Determines what buffer size to use with copy() Setting show_progress to True calls back display_progress()""" #filestocopy is a list of dictionaries for the files needed to be copied #dictionaries are used as the fullpath, st_mtime, and size are needed if len(filestocopy.keys()) == 0: return None #Determines average file size for which buffer to use average_size = 0 for key in filestocopy.keys(): average_size += int(filestocopy[key]['size']) average_size = average_size/len(filestocopy.keys()) #Smaller buffer for smaller files if average_size < 1024*10000: #Buffer sizes determined by informal tests on my laptop if show_progress: for key in filestocopy.keys(): #dest_path+key is the destination path, as the key is the relative path #and the dest_path is the top level folder copy(filestocopy[key]['fullpath'], dest_path+key, callback = lambda pos, total: display_progress(pos, total, key)) else: for key in filestocopy.keys(): copy(filestocopy[key]['fullpath'], dest_path+key, callback = None) #Bigger buffer for bigger files else: if show_progress: for key in filestocopy.keys(): copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600, callback = lambda pos, total: display_progress(pos, total, key)) else: for key in filestocopy.keys(): copy(filestocopy[key]['fullpath'], dest_path+key, 1024*2600) def display_progress(pos, total, filename): percent = round(float(pos)/float(total)*100,2) if percent <= 100: sys.stdout.write(filename + ' - ' + str(percent)+'% \r') else: percent = 100 sys.stdout.write(filename + ' - Completed \n') Is there a better way to accomplish what I'm doing? Sorry if the code is commented poorly or hard to follow. I didn't want to ask someone to read through all 120 lines of my poorly written code, so I just isolated the two functions. Thanks for any help.
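
    A hedged sketch of collapsing the four branches: decide the buffer size and the progress callback once, then run a single copy loop. copy() and display_progress() are the question's own functions and are assumed to accept these arguments.

    ```python
    def smartcopy(filestocopy, dest_path, show_progress=False):
        """Copy files, picking the buffer size once from the average file size."""
        if not filestocopy:
            return None

        average_size = sum(int(f['size']) for f in filestocopy.values()) // len(filestocopy)
        # None means "let copy() use its default (small) buffer"
        buffer_size = 1024 * 2600 if average_size >= 1024 * 10000 else None

        for key, info in filestocopy.items():
            if show_progress:
                callback = lambda pos, total, key=key: display_progress(pos, total, key)
            else:
                callback = None
            if buffer_size is None:
                copy(info['fullpath'], dest_path + key, callback=callback)
            else:
                copy(info['fullpath'], dest_path + key, buffer_size, callback=callback)
    ```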

    Read the article

  • How to update a table in C# using OleDb parameters?

    - by sameer
    I am having a table which has three fields, namely LM_code,M_Name,Desc. LC_code is a autogenerated string Id, keeping this i am updating M_Name and Desc. I used normal update command, the value is passing in runtime but the fields are not getting updated. I hope using oledb parameters the fileds can be updated. Any help will be appreciated... Here is my code. public void Modify() { String query = "Update Master_Accounts set (M_Name='" + M_Name + "',Desc='" + Desc + "') where LM_code='" + LM_code + "'"; DataManager.RunExecuteNonQuery(ConnectionString.Constr, query); } In DataManager Class i am executing the query string. public static void RunExecuteNonQuery(string Constr, string query) { OleDbConnection myConnection = new OleDbConnection(Constr); try { myConnection.Open(); OleDbCommand myCommand = new OleDbCommand(query, myConnection); myCommand.ExecuteNonQuery(); } catch (Exception ex) { string Message = ex.Message; throw ex; } finally { if (myConnection.State == ConnectionState.Open) myConnection.Close(); } } private void toolstModify_Click_1(object sender, EventArgs e) { txtamcode.Enabled = true; jewellery.LM_code = txtamcode.Text; jewellery.M_Name = txtaccname.Text; jewellery.Desc = txtdesc.Text; jewellery.Modify(); MessageBox.Show("Data Updated Succesfully"); }
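
    A hedged sketch of the parameterized version. Note that the OLE DB provider uses positional '?' placeholders (parameters are bound in the order they are added), the SET list must not be wrapped in parentheses, and Desc is bracketed because it can clash with the DESC keyword. The connection string and column names follow the question.

    ```csharp
    public void Modify()
    {
        const string query =
            "UPDATE Master_Accounts SET M_Name = ?, [Desc] = ? WHERE LM_code = ?";

        using (OleDbConnection connection = new OleDbConnection(ConnectionString.Constr))
        using (OleDbCommand command = new OleDbCommand(query, connection))
        {
            // bound in the same order as the '?' markers above
            command.Parameters.AddWithValue("@M_Name", M_Name);
            command.Parameters.AddWithValue("@Desc", Desc);
            command.Parameters.AddWithValue("@LM_code", LM_code);

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
    ```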

    Read the article

  • C#: Why can't open generic types be passed as parameters?

    - by Rich Oliver
    Why can't open generic types be passed as parameters. I frequently have classes like: public class Example<T> where T: BaseClass { public int a {get; set;} public List<T> mylist {get; set;} } Lets say BaseClass is as follows; public BaseClass { public int num; } I then want a method of say: public int MyArbitarySumMethod(Example example)//This won't compile Example not closed { int sum = 0; foreach(BaseClass i in example.myList)//myList being infered as an IEnumerable sum += i.num; sum = sum * example.a; return sum; } I then have to write an interface just to pass this one class as a parameter as follows: public interface IExample { public int a {get; set;} public IEnumerable<BaseClass> myIEnum {get;} } The generic class then has to be modified to: public class Example<T>: IExample where T: BaseClass { public int a {get; set;} public List<T> mylist {get; set;} public IEnumerable<BaseClass> myIEnum {get {return myList;} } } That's a lot of ceremony for what I would have thought the compiler could infer. Even if something can't be changed I find it psychologically very helpful if I know the reasons / justifications for the absence of Syntax short cuts.
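
    A hedged sketch of the usual workaround, using the class names from the question: make the consuming method generic with the same constraint, and let type inference close the type at the call site, so no extra interface is needed.

    ```csharp
    public static int MyArbitrarySumMethod<T>(Example<T> example) where T : BaseClass
    {
        int sum = 0;
        foreach (T i in example.mylist)   // T is known to derive from BaseClass
            sum += i.num;
        return sum * example.a;
    }

    // Usage: int total = MyArbitrarySumMethod(someExampleOfDerived);
    ```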

    Read the article

  • Performance of stored proc when updating columns selectively based on parameters?

    - by kprobst
    I'm trying to figure out if this is relatively well-performing T-SQL (this is SQL Server 2008). I need to create a stored procedure that updates a table. The proc accepts as many parameters as there are columns in the table, and with the exception of the PK column, they all default to NULL. The body of the procedure looks like this: CREATE PROCEDURE proc_repo_update @object_id bigint ,@object_name varchar(50) = NULL ,@object_type char(2) = NULL ,@object_weight int = NULL ,@owner_id int = NULL -- ...etc AS BEGIN update object_repo set object_name = ISNULL(@object_name, object_name) ,object_type = ISNULL(@object_type, object_type) ,object_weight = ISNULL(@object_weight, object_weight) ,owner_id = ISNULL(@owner_id, owner_id) -- ...etc where object_id = @object_id return @@ROWCOUNT END So basically: Update a column only if its corresponding parameter was provided, and leave the rest alone. This works well enough, but as the ISNULL call will return the value of the column if the received parameter was null, will SQL Server optimize this somehow? This might be a performance bottleneck on the application where the table might be updated heavily (insertion will be uncommon so the performance there is not a problem). So I'm trying to figure out what's the best way to do this. Is there a way to condition the column expressions with something like CASE WHEN or something? The table will be indexed up the wazoo as well for read performance. Is this the best approach? My alternative at this point is to create the UPDATE expression in code (e.g. inline SQL) and execute it against the server. This would solve my doubts about performance, but I'd rather leave this in a stored proc if possible.
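
    A hedged sketch of the dynamic-SQL alternative the question mentions, kept parameterized with sp_executesql so the values never end up as literals; only two columns are shown and the rest follow the same pattern. This is a fragment inside the procedure body, not a complete procedure.

    ```sql
    DECLARE @sql nvarchar(max) = N'';

    IF @object_name IS NOT NULL SET @sql += N', object_name = @object_name';
    IF @object_type IS NOT NULL SET @sql += N', object_type = @object_type';
    -- ...one IF per optional column

    IF @sql <> N''
    BEGIN
        SET @sql = N'UPDATE object_repo SET ' + STUFF(@sql, 1, 2, N'')
                 + N' WHERE object_id = @object_id';
        EXEC sp_executesql @sql,
             N'@object_id bigint, @object_name varchar(50), @object_type char(2)',
             @object_id, @object_name, @object_type;
    END
    ```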

    Read the article

  • How can I get the search parameters from jqgrid in the server side?

    - by Jack
    I've been visiting this forum a lot without registering since several months ago, and I really like it. So, thanks in advance to all the members. Now I'd like to make my first question. I've been using Jqgrid for a few time, and I've managed to have it display the rows and buttons, but now I need to do a search, a complex one, and I thought that "automatically" jqgrid would send the parameters to the server, I mean: sField, searchField, sOper, searchOper, sValue, searchString, sFilter and/or filters I'm not sure at all which ones it has to send, and I thought it would be just the same as it sends 'page', 'rows' and 'sord'. But I'm missing something, because, for example, I can get 'page', 'rows' and 'sord' using: $limit = $this->getRequest()->getParam('rows', 10); but I get nothing by using: $params = $_REQUEST['filters'] or $params = $this->getRequest()->getParam('sFilter'); I'm using PHP, Zend and json. I didn't post any code because my doubt is kind of generic, but I will do it if it was needed. I've searched a lot, and read the documentation, but I just don't see it. I will appreciate your help, thanks!
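
    A hedged sketch of the PHP side, assuming a standard jqGrid GET request: the search fields are only present when _search is "true". With multipleSearch enabled, jqGrid sends a single JSON string in "filters"; otherwise it sends searchField/searchOper/searchString.

    ```php
    <?php
    $isSearch = isset($_REQUEST['_search']) && $_REQUEST['_search'] === 'true';

    if ($isSearch) {
        if (!empty($_REQUEST['filters'])) {
            $filters = json_decode($_REQUEST['filters'], true);
            // $filters['groupOp'] is AND/OR, $filters['rules'] is a list of
            // ['field' => ..., 'op' => ..., 'data' => ...]
        } else {
            $field = isset($_REQUEST['searchField'])  ? $_REQUEST['searchField']  : null;
            $oper  = isset($_REQUEST['searchOper'])   ? $_REQUEST['searchOper']   : null;
            $value = isset($_REQUEST['searchString']) ? $_REQUEST['searchString'] : null;
        }
    }
    ```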

    Read the article

  • Is there a way to prevent SQL Server silently truncating data in local variables and stored procedure parameters?

    - by Luke Woodward
    I recently encountered an issue while porting an app to SQL Server. It turned out that this issue was caused by a stored procedure parameter being declared too short for the data being passed to it: the parameter was declared as VARCHAR(100) but in one case was being passed more than 100 characters of data. What surprised me was that SQL Server didn't report any errors or warnings -- it just silently truncated the data to 100 characters. The following SQLCMD session demonstrates this: 1 create procedure WhereHasMyDataGone (@data varchar(5)) as 2 begin 3 print 'Your data is ''' + @data + '''.'; 4 end; 5 go 1 exec WhereHasMyDataGone '123456789'; 2 go Your data is '12345'. Local variables also exhibit the same behaviour: 1 declare @s varchar(5) = '123456789'; 2 print @s; 3 go 12345 Is there an option I can enable to have SQL Server report errors (or at least warnings) in such situations? Or should I just declare all local variables and stored procedure parameters as VARCHAR(MAX) or NVARCHAR(MAX)?

    Read the article

  • Why are my bound parameters all identical (using Linq)?

    - by Scott Stafford
    When I run this snippet of code: string[] words = new string[] { "foo", "bar" }; var results = from row in Assets select row; foreach (string word in words) { results = results.Where(row => row.Name.Contains(word)); } I get this SQL: -- Region Parameters DECLARE @p0 VarChar(5) = '%bar%' DECLARE @p1 VarChar(5) = '%bar%' -- EndRegion SELECT ... FROM [Assets] AS [t0] WHERE ([t0].[Name] LIKE @p0) AND ([t0].[Name] LIKE @p1) Note that @p0 and @p1 are both bar, when I wanted them to be foo and bar. I guess Linq is somehow binding a reference to the variable word rather than a reference to the string currently referenced by word? What is the best way to avoid this problem? (Also, if you have any suggestions for a better title for this question, please put it in the comments.) Note that I tried this with regular Linq also, with the same results (you can paste this right into Linqpad): string[] words = new string[] { "f", "a" }; string[] dictionary = new string[] { "foo", "bar", "jack", "splat" }; var results = from row in dictionary select row; foreach (string word in words) { results = results.Where(row => row.Contains(word)); } results.Dump(); Dumps: bar jack splat
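
    A hedged sketch of the usual fix for this closure-over-loop-variable behaviour: capture a copy of the loop variable so each lambda closes over its own string rather than the shared 'word'.

    ```csharp
    string[] words = new string[] { "foo", "bar" };
    var results = from row in Assets select row;
    foreach (string word in words)
    {
        string w = word;   // fresh variable per iteration
        results = results.Where(row => row.Name.Contains(w));
    }
    // The generated SQL now binds @p0 = '%foo%' and @p1 = '%bar%'.
    ```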

    Read the article

  • ref and out parameters in C# cannot be marked as variant

    - by Water Cooler v2
    What does the statement mean? From here ref and out parameters in C# and cannot be marked as variant. 1) Does it mean that the following can not be done. public class SomeClass<R, A>: IVariant<R, A> { public virtual R DoSomething( ref A args ) { return null; } } 2) Or does it mean I cannot have the following. public delegate R Reader<out R, in A>(A arg, string s); public static void AssignReadFromPeonMethodToDelegate(ref Reader<object, Peon> pReader) { pReader = ReadFromPeon; } static object ReadFromPeon(Peon p, string propertyName) { return p.GetType().GetField(propertyName).GetValue(p); } static Reader<object, Peon> pReader; static void Main(string[] args) { AssignReadFromPeonMethodToDelegate(ref pReader); bCanReadWrite = (bool)pReader(peon, "CanReadWrite"); Console.WriteLine("Press any key to quit..."); Console.ReadKey(); } I tried (2) and it worked.

    Read the article

  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You’ve just delivered a database application that seems to be working fine in production, and you just run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn’t kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has, so far detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That’s why I’m interested in SQL code smells. SQL Code Smells aren’t necessarily bad practices, but just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle. SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time, this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless but we’re  concerned about the occasional time it isn’t. Let’s give an example: String truncation. Let’s give another even more frightening one, rounding errors on assignment to a number of different precision. Each requires a blog-post to explain in detail and I’m not now going to try. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren’t the same datatype, especially if you are relying on implicit conversion to work its magic.For details of the problem and the consequences, see here:  SR0014: Data loss might occur when casting from {Type1} to {Type2} . For any experienced Database Developer, this is a more frightening read than a Vampire Story. This is why one of the SQL Code Smells that makes me edgy, in my own or other peoples’ code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things have gone wrong. Either sloppy naming, or mixed datatypes. Sure it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters long, or the precision of a number. That is why a little check like this I’m going to show you is excellent for tidying up your code before you check it back into source Control! 1/ Checking Parameters only If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine) Even this little check can occasionally be scarily revealing. ;WITH userParameter AS  ( SELECT   c.NAME AS ParameterName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  t.name + ' '     + CASE     --we may have to put in the length            WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.max_length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.name IN ('nchar', 'nvarchar')                      THEN c.max_length / 2 ELSE c.max_length                    END)                END + ')'         WHEN t.name IN ('decimal', 'numeric')             THEN '(' + CONVERT(VARCHAR(4), c.precision)                   + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = c.XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType]  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'   AND parameter_id>0)SELECT CONVERT(CHAR(80),objectName+'.'+ParameterName),DataType FROM UserParameterWHERE ParameterName IN   (SELECT ParameterName FROM UserParameter    GROUP BY ParameterName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY ParameterName   so, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long, or even worse, a function that should be a char(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX) 2/ Columns and Parameters Actually, once we’ve cleared up the mess we’ve made of our parameter-naming in the database we’re inspecting, we’re going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn’t consistent for a datatype). We’ll have to leave them out for this check. Voila! A slight modification of the first routine ;WITH userObject AS  ( SELECT   Name AS DataName,--the actual name of the parameter or column ('@' removed)  --and the qualified object name of the routine  OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,  --now the harder bit: the definition of the datatype.  TypeName + ' '     + CASE     --we may have to put in the length. e.g. 
CHAR (10)           WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN MaxLength = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN TypeName IN ('nchar', 'nvarchar')                      THEN MaxLength / 2 ELSE MaxLength                    END)                END + ')'         WHEN TypeName IN ('decimal', 'numeric')--a BCD number!             THEN '(' + CONVERT(VARCHAR(4), Precision)                   + ',' + CONVERT(VARCHAR(4), Scale) + ')'         ELSE ''      END  --we've done with putting in the length      + CASE WHEN XML_collection_ID <> 0 --tush tush. XML         THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                    THEN 'DOCUMENT '                    ELSE 'CONTENT '                   END              + COALESCE(               (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.Name)                FROM sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE sc.xml_collection_ID = XML_collection_ID),'NULL') + ')'          ELSE ''         END        AS [DataType],       DataObjectType  FROM   (Select t.name AS TypeName, REPLACE(c.name,'@','') AS Name,          c.max_length AS MaxLength, c.precision AS [Precision],           c.scale AS [Scale], c.[Object_id] AS ObjectID, XML_collection_ID,          is_XML_Document,'P' AS DataobjectType  FROM sys.parameters c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  AND parameter_id>0  UNION all  Select t.name AS TypeName, c.name AS Name, c.max_length AS MaxLength,          c.precision AS [Precision], c.scale AS [Scale],          c.[Object_id] AS ObjectID, XML_collection_ID,is_XML_Document,          'C' AS DataobjectType            FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID   WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'  )f)SELECT CONVERT(CHAR(80),objectName+'.'   + CASE WHEN DataobjectType ='P' THEN '@' ELSE '' END + DataName),DataType FROM UserObjectWHERE DataName IN   (SELECT DataName FROM UserObject   GROUP BY DataName    HAVING MIN(Datatype)<>MAX(DataType))ORDER BY DataName     Hmm. I can tell you I found quite a few minor issues with the various tabases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a Varchar(10) in the Customer table. Hmm. odd. Why is a city fifty characters long in that view?  The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, but just mess. We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You’ll notice that we’ve delibarately removed the indication of whether a column is persisted, or is an identity column because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can quite help in some circumstances) then uncomment them! ;WITH userColumns AS  ( SELECT   c.NAME AS columnName,  OBJECT_SCHEMA_NAME(c.object_ID) + '.' 
+ OBJECT_NAME(c.object_ID) AS ObjectName,  REPLACE(t.name + ' '   + CASE WHEN is_computed = 1 THEN ' AS ' + --do DDL for a computed column          (SELECT definition FROM sys.computed_columns cc           WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)     --we may have to put in the length            WHEN t.Name IN ('char', 'varchar', 'nchar', 'nvarchar')             THEN '('               + CASE WHEN c.Max_Length = -1 THEN 'MAX'                ELSE CONVERT(VARCHAR(4),                    CASE WHEN t.Name IN ('nchar', 'nvarchar')                      THEN c.Max_Length / 2 ELSE c.Max_Length                    END)                END + ')'       WHEN t.name IN ('decimal', 'numeric')       THEN '(' + CONVERT(VARCHAR(4), c.precision) + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'       ELSE ''      END + CASE WHEN c.is_rowguidcol = 1          THEN ' ROWGUIDCOL'          ELSE ''         END + CASE WHEN XML_collection_ID <> 0            THEN --deal with object schema names             '(' + CASE WHEN is_XML_Document = 1                THEN 'DOCUMENT '                ELSE 'CONTENT '               END + COALESCE((SELECT                QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)                FROM                sys.xml_schema_collections sc                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID                WHERE                sc.xml_collection_ID = c.XML_collection_ID),                'NULL') + ')'            ELSE ''           END + CASE WHEN is_identity = 1             THEN CASE WHEN OBJECTPROPERTY(object_id,                'IsUserTable') = 1 AND COLUMNPROPERTY(object_id,                c.name,                'IsIDNotForRepl') = 0 AND OBJECTPROPERTY(object_id,                'IsMSShipped') = 0                THEN ''                ELSE ' NOT FOR REPLICATION '               END             ELSE ''            END + CASE WHEN c.is_nullable = 0               THEN ' NOT NULL'               ELSE ' NULL'              END + CASE                WHEN c.default_object_id <> 0                THEN ' DEFAULT ' + object_Definition(c.default_object_id)                ELSE ''               END + CASE                WHEN c.collation_name IS NULL                THEN ''                WHEN c.collation_name <> (SELECT                collation_name                FROM                sys.databases                WHERE                name = DB_NAME()) COLLATE Latin1_General_CI_AS                THEN COALESCE(' COLLATE ' + c.collation_name,                '')                ELSE ''                END,'  ',' ') AS [DataType]FROM sys.columns c  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID  WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys')SELECT CONVERT(CHAR(80),objectName+'.'+columnName),DataType FROM UserColumnsWHERE columnName IN (SELECT columnName FROM UserColumns  GROUP BY columnName  HAVING MIN(Datatype)<>MAX(DataType))ORDER BY columnName If you take a look down the results against Adventureworks, you'll see once again that there are things to investigate, mostly, in the illustration, discrepancies between null and non-null datatypes So I here you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata so we'll have to find a more subtle way of flushing these out, and that will, I'm afraid, have to wait!

    Read the article

  • How can I use functools.partial on multiple methods on an object, and freeze parameters out of order

    - by Joseph Garvin
    I find functools.partial to be extremely useful, but I would like to be able to freeze arguments out of order (the argument you want to freeze is not always the first one) and I'd like to be able to apply it to several methods on a class at once, to make a proxy object that has the same methods as the underlying object except with some of its methods parameter being frozen (think of it as generalizing partial to apply to classes). I've managed to scrap together a version of functools.partial called 'bind' that lets me specify parameters out of order by passing them by keyword argument. That part works: >>> def foo(x, y): ... print x, y ... >>> bar = bind(foo, y=3) >>> bar(2) 2 3 But my proxy class does not work, and I'm not sure why: >>> class Foo(object): ... def bar(self, x, y): ... print x, y ... >>> a = Foo() >>> b = PureProxy(a, bar=bind(Foo.bar, y=3)) >>> b.bar(2) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: bar() takes exactly 3 arguments (2 given) I'm probably doing this all sorts of wrong because I'm just going by what I've pieced together from random documentation, blogs, and running dir() on all the pieces. Suggestions both on how to make this work and better ways to implement it would be appreciated ;) One detail I'm unsure about is how this should all interact with descriptors. Code follows. from types import MethodType class PureProxy(object): def __init__(self, underlying, **substitutions): self.underlying = underlying for name in substitutions: subst_attr = substitutions[name] if hasattr(subst_attr, "underlying"): setattr(self, name, MethodType(subst_attr, self, PureProxy)) def __getattribute__(self, name): return getattr(object.__getattribute__(self, "underlying"), name) def bind(f, *args, **kwargs): """ Lets you freeze arguments of a function be certain values. Unlike functools.partial, you can freeze arguments by name, which has the bonus of letting you freeze them out of order. args will be treated just like partial, but kwargs will properly take into account if you are specifying a regular argument by name. """ argspec = inspect.getargspec(f) argdict = copy(kwargs) if hasattr(f, "im_func"): f = f.im_func args_idx = 0 for arg in argspec.args: if args_idx >= len(args): break argdict[arg] = args[args_idx] args_idx += 1 num_plugged = args_idx def new_func(*inner_args, **inner_kwargs): args_idx = 0 for arg in argspec.args[num_plugged:]: if arg in argdict: continue if args_idx >= len(inner_args): # We can't raise an error here because some remaining arguments # may have been passed in by keyword. break argdict[arg] = inner_args[args_idx] args_idx += 1 f(**dict(argdict, **inner_kwargs)) new_func.underlying = f return new_func
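
    A hedged sketch of the proxy idea using plain functools.partial (keyword arguments already let you freeze parameters out of order) and __getattr__, which, unlike __getattribute__, is only consulted for names the proxy does not define itself; the class names follow the question but the implementation is illustrative.

    ```python
    import functools


    class PureProxy(object):
        def __init__(self, underlying, **substitutions):
            self._underlying = underlying
            for name, frozen in substitutions.items():
                # bind the underlying object as the first (self) argument
                setattr(self, name, functools.partial(frozen, underlying))

        def __getattr__(self, name):
            # falls through to the wrapped object for everything not substituted
            return getattr(self._underlying, name)


    class Foo(object):
        def bar(self, x, y):
            print(x, y)


    a = Foo()
    b = PureProxy(a, bar=functools.partial(Foo.bar, y=3))
    b.bar(2)   # prints 2 3
    ```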

    Read the article

  • C++ vs. C++/CLI: Const qualification of virtual function parameters

    - by James McNellis
    [All of the following was tested using Visual Studio 2008 SP1] In C++, const qualification of parameter types does not affect the type of a function (8.3.5/3: "Any cv-qualifier modifying a parameter type is deleted") So, for example, in the following class hierarchy, Derived::Foo overrides Base::Foo: struct Base { virtual void Foo(const int i) { } }; struct Derived : Base { virtual void Foo(int i) { } }; Consider a similar hierarchy in C++/CLI: ref class Base abstract { public: virtual void Foo(const int) = 0; }; ref class Derived : public Base { public: virtual void Foo(int i) override { } }; If I then create an instance of Derived: int main(array<System::String ^> ^args) { Derived^ d = gcnew Derived; } it compiles without errors or warnings. When I run it, it throws the following exception and then terminates: An unhandled exception of type 'System.TypeLoadException' occurred in ClrVirtualTest.exe Additional information: Method 'Foo' in type 'Derived'...does not have an implementation. That exception seems to indicate that the const qualification of the parameter does affect the type of the function in C++/CLI (or, at least it affects overriding in some way). However, if I comment out the line containing the definition of Derived::Foo, the compiler reports the following error (on the line in main where the instance of Derived is instantiated): error C2259: 'Derived': cannot instantiate abstract class If I add the const qualifier to the parameter of Derived::Foo or remove the const qualifier from the parameter of Base::Foo, it compiles and runs with no errors. I would think that if the const qualification of the parameter affects the type of the function, I should get this error if the const qualification of the parameter in the derived class virtual function does not match the const qualification of the parameter in the base class virtual function. If I change the type of Derived::Foo's parameter from an int to a double, I get the following warning (in addition to the aforementioned error, C2259): warning C4490: 'override': incorrect use of override specifier; 'Derived::Foo' does not match a base ref class method So, my question is, effectively, does the const qualification of function parameters affect the type of the function in C++/CLI? If so, why does this compile and why are there no errors or warnings? If not, why is an exception thrown?

    Read the article

  • Using ClrProfiler

    - by Roman Dorevich
    Hello, I am trying to use CLRProfiler. I need to enter some parameters, so I used the File > Set Parameters option and added both the parameters and the working directory. When the application starts it reads some parameters from an INI file, but CLRProfiler fails to find the INI file because it concatenates its path with the working directory. Thanks.

    Read the article

  • Identifying which pattern fits better.

    - by Daniel Grillo
    I'm developing a software to program a device. I have some commands like Reset, Read_Version, Read_memory, Write_memory, Erase_memory. Reset and Read_Version are fixed. They don't need parameters. Read_memory and Erase_memory need the same parameters that are Length and Address. Write_memory needs Lenght, Address and Data. For each command, I have the same steps in sequence, that are something like this sendCommand, waitForResponse, treatResponse. I'm having difficulty to identify which pattern should I use. Factory, Template Method, Strategy or other pattern. Edit I'll try to explain better taking in count the given comments and answers. I've already done this software and now I'm trying to refactoring it. I'm trying to use patterns, even if it is not necessary because I'm taking advantage of this little software to learn about some patterns. Despite I think that one (or more) pattern fits here and it could improve my code. When I want to read version of the software of my device, I don't have to assembly the command with parameters. It is fixed. So I have to send it. After wait for response. If there is a response, treat (or parse) it and returns. To read a portion of the memory (maximum of 256 bytes), I have to assembly the command using the parameters Len and Address. So I have to send it. After wait for response. If there is a response, treat (or parse) it and returns. To write a portion in the memory (maximum of 256 bytes), I have to assembly the command using the parameters Len, Address and Data. So I have to send it. After wait for response. If there is a response, treat (or parse) it and returns. I think that I could use Template Method because I have almost the same algorithm for all. But the problem is some commands are fixes, others have 2 or 3 parameters. I think that parameters should be passed on the constructor of the class. But each class will have a constructor overriding the abstract class constructor. Is this a problem for the template method? Should I use other pattern?

    Read the article

  • Should accessible members of an internal class be internal too?

    - by Jeff Mercado
    I'm designing a set of APIs for some applications I'm working on. I want to keep the code style consistent in all the classes I write but I've found that there are a few inconsistencies that I'm introducing and I don't know what the best way to resolve them is. My example here is specific to C# but this would apply to any language with similar mechanisms. There are a few classes that I need for implementation purposes that I don't necessarily want to expose in the API so I make them internal whereever needed. Generally what I would do is design the class as I normally would (e.g., make members public/protected/private where necessary) and change the visibility level of the class itself to internal. So I might have a few classes that look like this: internal interface IMyItem { ItemSet AddTo(ItemSet set); } internal class _SmallItem : IMyItem { private readonly /* parameters */; public _SmallItem(/* small item parameters */) { /* ... */ } public ItemSet AddTo(ItemSet set) { /* ... */ } } internal abstract class _CompositeItem: IMyItem { private readonly /* parameters */; public _CompositeItem(/* composite item parameters */) { /* ... */ } public abstract object UsefulInformation { get; } protected void HelperMethod(/* parameters */) { /* ... */ } } internal class _BigItem : _CompositeItem { private readonly /* parameters */; public _BigItem(/* big item parameters */) { /* ... */ } public override object UsefulInformation { get { /* ... */ } } public ItemSet AddTo(ItemSet set) { /* ... */ } } In another generated class (part of a parser/scanner), there is a structure that contains fields for all possible values it can represent. The class generated is internal too but I have control over the visibility of the members and decided to make them internal as well. internal partial struct ValueType { internal string String; internal ItemSet ItemSet; internal IMyItem MyItem; } internal class TokenValue { internal static int EQ(ItemSetScanner scanner) { /* ... */ } internal static int NAME(ItemSetScanner scanner, string value) { /* ... */ } internal static int VALUE(ItemSetScanner scanner, string value) { /* ... */ } //... } To me, this feels odd because the first set of classes, I didn't necessarily have to make some members public, they very well could have been made internal. internal members of an internal type can only be accessed internally anyway so why make them public? I just don't like the idea that the way I write my classes has to change drastically (i.e., change all uses of public to internal) just because the class is internal. Any thoughts on what I should do here? It makes sense to me that I might want to make some members of a class declared public, internal. But it's less clear to me when the class is declared internal.

    Read the article

  • .NET datetime issue with SQL stored procedure

    - by DanO
    I am getting the below error when executing my application on a Windows XP machine with .NET 2.0 installed. On my computer Windows 7 .NET 2.0 - 3.5 I am not having any issues. The target SQL server version is 2005. This error started occurring when I added the datetime to the stored procedure. I have been reading alot about using .NET datetime with SQL datetime and I still have not figured this out. If someone can point me in the right direction I would appreciate it. Here is the where I believe the error is coming from. private static void InsertRecon(string computerName, int EncryptState, TimeSpan FindTime, Int64 EncryptSize, DateTime timeWritten) { SqlConnection DBC = new SqlConnection("server=server;UID=InventoryServer;Password=pass;database=Inventory;connection timeout=30"); SqlCommand CMD = new SqlCommand(); try { CMD.Connection = DBC; CMD.CommandType = CommandType.StoredProcedure; CMD.CommandText = "InsertReconData"; CMD.Parameters.Add("@CNAME", SqlDbType.NVarChar); CMD.Parameters.Add("@ENCRYPTEXIST", SqlDbType.Int); CMD.Parameters.Add("@RUNTIME", SqlDbType.Time); CMD.Parameters.Add("@ENCRYPTSIZE", SqlDbType.BigInt); CMD.Parameters.Add("@TIMEWRITTEN", SqlDbType.DateTime); CMD.Parameters["@CNAME"].Value = computerName; CMD.Parameters["@ENCRYPTEXIST"].Value = EncryptState; CMD.Parameters["@RUNTIME"].Value = FindTime; CMD.Parameters["@ENCRYPTSIZE"].Value = EncryptSize; CMD.Parameters["@TIMEWRITTEN"].Value = timeWritten; DBC.Open(); CMD.ExecuteNonQuery(); } catch (System.Data.SqlClient.SqlException e) { PostMessage(e.Message); } finally { DBC.Close(); CMD.Dispose(); DBC.Dispose(); } } Unhandled Exception: System.ArgumentOutOfRangeException: The SqlDbType enumeration value, 32, is invalid. Parameter name: SqlDbType at System.Data.SqlClient.MetaType.GetMetaTypeFromSqlDbType(SqlDbType target) at System.Data.SqlClient.SqlParameter.set_SqlDbType(SqlDbType value) at System.Data.SqlClient.SqlParameter..ctor(String parameterName, SqlDbType dbType) at System.Data.SqlClient.SqlParameterCollection.Add(String parameterName, SqlDbType sqlDbType) at ReconHelper.getFilesInfo.InsertRecon(String computerName, Int32 EncryptState, TimeSpan FindTime, Int64 EncryptSize, DateTime timeWritten) at ReconHelper.getFilesInfo.Main(String[] args)

    Read the article

  • How to bind parameters correctly in the example below using mysqli?

    - by user1421767
    In old mysql code, I had a query below which worked perfectly which is below: $questioncontent = (isset($_GET['questioncontent'])) ? $_GET['questioncontent'] : ''; $searchquestion = $questioncontent; $terms = explode(" ", $searchquestion); $questionquery = " SELECT q.QuestionId, q.QuestionContent, o.OptionType, an.Answer, r.ReplyType, FROM Answer an INNER JOIN Question q ON q.AnswerId = an.AnswerId JOIN Reply r ON q.ReplyId = r.ReplyId JOIN Option_Table o ON q.OptionId = o.OptionId WHERE "; foreach ($terms as $each) { $i++; if ($i == 1){ $questionquery .= "q.QuestionContent LIKE `%$each%` "; } else { $questionquery .= "OR q.QuestionContent LIKE `%$each%` "; } } $questionquery .= "GROUP BY q.QuestionId, q.SessionId ORDER BY "; $i = 0; foreach ($terms as $each) { $i++; if ($i != 1) $questionquery .= "+"; $questionquery .= "IF(q.QuestionContent LIKE `%$each%` ,1,0)"; } $questionquery .= " DESC "; But since that old mysql is fading away that people are saying to use PDO or mysqli (Can't use PDO because of version of php I have currently got), I tried changing my code to mysqli, but this is giving me problems. In the code below I have left out the bind_params command, my question is that how do I bind the parameters in the query below? It needs to be able to bind multiple $each because the user is able to type in multiple terms, and each $each is classed as a term. Below is current mysqli code on the same query: $questioncontent = (isset($_GET['questioncontent'])) ? $_GET['questioncontent'] : ''; $searchquestion = $questioncontent; $terms = explode(" ", $searchquestion); $questionquery = " SELECT q.QuestionId, q.QuestionContent, o.OptionType, an.Answer, r.ReplyType, FROM Answer an INNER JOIN Question q ON q.AnswerId = an.AnswerId JOIN Reply r ON q.ReplyId = r.ReplyId JOIN Option_Table o ON q.OptionId = o.OptionId WHERE "; foreach ($terms as $each) { $i++; if ($i == 1){ $questionquery .= "q.QuestionContent LIKE ? "; } else { $questionquery .= "OR q.QuestionContent LIKE ? "; } } $questionquery .= "GROUP BY q.QuestionId, q.SessionId ORDER BY "; $i = 0; foreach ($terms as $each) { $i++; if ($i != 1) $questionquery .= "+"; $questionquery .= "IF(q.QuestionContent LIKE ? ,1,0)"; } $questionquery .= " DESC "; $stmt=$mysqli->prepare($questionquery); $stmt->execute(); $stmt->bind_result($dbQuestionId,$dbQuestionContent,$dbOptionType,$dbAnswer,$dbReplyType); $questionnum = $stmt->num_rows();
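
    A hedged sketch of one common way to bind a variable number of parameters: each term appears twice in the query (once in the WHERE clause, once in the ORDER BY IF(...)), all as strings, so the values are collected in placeholder order and bound through call_user_func_array, the usual pre-PHP 5.6 workaround for bind_param's by-reference arguments (on PHP 5.6+, $stmt->bind_param($types, ...$values) also works). Note that the stray comma before FROM in the SELECT list will also need removing.

    ```php
    <?php
    $values = array();
    foreach ($terms as $each) {          // WHERE ... LIKE ?
        $values[] = '%' . $each . '%';
    }
    foreach ($terms as $each) {          // ORDER BY ... IF(... LIKE ?,1,0)
        $values[] = '%' . $each . '%';
    }
    $types = str_repeat('s', count($values));

    $stmt = $mysqli->prepare($questionquery);

    $params = array($types);
    foreach ($values as $k => $v) {
        $params[] = &$values[$k];        // bind_param needs references
    }
    call_user_func_array(array($stmt, 'bind_param'), $params);

    $stmt->execute();
    ```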

    Read the article

  • Terminating a long-executing thread and then starting a new one in response to user changing parameters via UI in an applet

    - by user1817170
    I have an applet which creates music using the JFugue API and plays it for the user. It allows the user to input a music phrase which the piece will be based on, or lets them choose to have a phrase generated randomly. I had been using the following method (successfully) to simply stop and start the music, which runs in a thread using the Player class from JFugue. I generate the music using my classes and user input from the applet GUI...then... private playerThread pthread; private Thread threadPlyr; private Player player; (from variables declaration) public void startMusic(Pattern p) // pattern is a JFugue object which holds the generated music { if (pthread == null) { pthread = new playerThread(); } else { pthread = null; pthread = new playerThread(); } if (threadPlyr == null) { threadPlyr = new Thread(pthread); } else { threadPlyr = null; threadPlyr = new Thread(pthread); } pthread.setPattern(p); threadPlyr.start(); } class playerThread implements Runnable // plays midi using jfugue Player { private Pattern pt; public void setPattern(Pattern p) { pt = p; } @Override public void run() { try { player.play(pt); // takes a couple mins or more to execute resetGUI(); } catch (Exception exception) { } } } And the following to stop music when user presses the stop/start button while Player.isPlaying() is true: public void stopMusic() { threadPlyr.interrupt(); threadPlyr = null; pthread = null; player.stop(); } Now I want to implement a feature which will allow the user to change parameters while the music is playing, create an updated music pattern, and then play THAT pattern. Basically, the idea is to make it simulate "real time" adjustments to the generated music for the user. Well, I have been beating my head against the wall on this for a couple of weeks. I've read all the standard java documentation, researched, read, and searched forums, and I have tried many different ideas, none of which have succeeded. The problem I've run into with all approaches I've tried is that when I start the new thread with the new, updated musical pattern, all the old threads ALSO start, and there is a cacophony of unintelligible noise instead of my desired output. From what I've gathered, the issue seems to be that all the methods I've come across require that the thread is able to periodically check the value of a "flag" variable and then shut itself down from within its "run" block in response to that variable. However, since my thread makes a call that takes several minutes minimum to execute (playing the music), and I need to terminate it WHILE it is executing this, there is really no safe way to do so. So, I'm wondering if there is something I'm missing when it comes to threads, or if perhaps I can accomplish my goal using a totally different approach. Any ideas or guidance is greatly appreciated! Thank you!
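
    A hedged sketch of one common arrangement, using only the JFugue calls already present in the question (player.play(pattern) blocks, player.stop() aborts it, player.isPlaying() reports state): create a fresh Player and thread for each playback, and always stop the old pair before starting new music, so old threads never resume with stale patterns. Imports for the JFugue classes are omitted because the package path differs between JFugue versions.

    ```java
    public class PlaybackController {
        private volatile Player currentPlayer;
        private volatile Thread currentThread;

        public synchronized void startMusic(final Pattern pattern) {
            stopMusic();                          // make sure nothing else is playing
            final Player player = new Player();   // fresh Player per playback
            currentPlayer = player;
            currentThread = new Thread(new Runnable() {
                public void run() {
                    player.play(pattern);         // blocks until finished or stop()
                }
            });
            currentThread.start();
        }

        public synchronized void stopMusic() {
            if (currentPlayer != null && currentPlayer.isPlaying()) {
                currentPlayer.stop();             // unblocks play() in the old thread
            }
            currentPlayer = null;
            currentThread = null;
        }
    }
    ```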

    Read the article

  • HLSL/XNA Ambient light texture mixed up with multi pass lighting

    - by Manu-EPITA
    I've been having some troubles lately with lighting. I have found a source on google which is working pretty good on the example. However, when I try to implement it to my current project, I am getting some very weird bugs. The main one is that my textures are "mixed up" when I only activate the ambient light, which means that a model gets the texture of another one . I am using the same effect for every meshes of my models. I guess this could be the problem, but I don't really know how to "reset" an effect for a new model. Is it possible? Here is my shader: float4x4 WVP; float4x4 WVP; float3x3 World; float3 Ke; float3 Ka; float3 Kd; float3 Ks; float specularPower; float3 globalAmbient; float3 lightColor; float3 eyePosition; float3 lightDirection; float3 lightPosition; float spotPower; texture2D Texture; sampler2D texSampler = sampler_state { Texture = <Texture>; MinFilter = anisotropic; MagFilter = anisotropic; MipFilter = linear; MaxAnisotropy = 16; }; struct VertexShaderInput { float4 Position : POSITION0; float2 Texture : TEXCOORD0; float3 Normal : NORMAL0; }; struct VertexShaderOutput { float4 Position : POSITION0; float2 Texture : TEXCOORD0; float3 PositionO: TEXCOORD1; float3 Normal : NORMAL0; }; VertexShaderOutput VertexShaderFunction(VertexShaderInput input) { VertexShaderOutput output; output.Position = mul(input.Position, WVP); output.Normal = input.Normal; output.PositionO = input.Position.xyz; output.Texture = input.Texture; return output; } float4 PSAmbient(VertexShaderOutput input) : COLOR0 { return float4(Ka*globalAmbient + Ke,1) * tex2D(texSampler,input.Texture); } float4 PSDirectionalLight(VertexShaderOutput input) : COLOR0 { //Difuze float3 L = normalize(-lightDirection); float diffuseLight = max(dot(input.Normal,L), 0); float3 diffuse = Kd*lightColor*diffuseLight; //Specular float3 V = normalize(eyePosition - input.PositionO); float3 H = normalize(L + V); float specularLight = pow(max(dot(input.Normal,H),0),specularPower); if(diffuseLight<=0) specularLight=0; float3 specular = Ks * lightColor * specularLight; //sum all light components float3 light = diffuse + specular; return float4(light,1) * tex2D(texSampler,input.Texture); } technique MultiPassLight { pass Ambient { VertexShader = compile vs_3_0 VertexShaderFunction(); PixelShader = compile ps_3_0 PSAmbient(); } pass Directional { PixelShader = compile ps_3_0 PSDirectionalLight(); } } And here is how I actually apply my effects: public void ApplyLights(ModelMesh mesh, Matrix world, Texture2D modelTexture, Camera camera, Effect effect, GraphicsDevice graphicsDevice) { graphicsDevice.BlendState = BlendState.Opaque; effect.CurrentTechnique.Passes["Ambient"].Apply(); foreach (ModelMeshPart part in mesh.MeshParts) { graphicsDevice.SetVertexBuffer(part.VertexBuffer); graphicsDevice.Indices = part.IndexBuffer; // Texturing graphicsDevice.BlendState = BlendState.AlphaBlend; if (modelTexture != null) { effect.Parameters["Texture"].SetValue( modelTexture ); } graphicsDevice.DrawIndexedPrimitives( PrimitiveType.TriangleList, part.VertexOffset, 0, part.NumVertices, part.StartIndex, part.PrimitiveCount ); // Applying our shader to all the mesh parts effect.Parameters["WVP"].SetValue( world * camera.View * camera.Projection ); effect.Parameters["World"].SetValue(world); effect.Parameters["eyePosition"].SetValue( camera.Position ); graphicsDevice.BlendState = BlendState.Additive; // Drawing lights foreach (DirectionalLight light in DirectionalLights) { effect.Parameters["lightColor"].SetValue(light.Color.ToVector3()); 
effect.Parameters["lightDirection"].SetValue(light.Direction); // Applying changes and drawing them effect.CurrentTechnique.Passes["Directional"].Apply(); graphicsDevice.DrawIndexedPrimitives( PrimitiveType.TriangleList, part.VertexOffset, 0, part.NumVertices, part.StartIndex, part.PrimitiveCount ); } } I am also applying this when loading the effect: effect.Parameters["lightColor"].SetValue(Color.White.ToVector3()); effect.Parameters["globalAmbient"].SetValue(Color.White.ToVector3()); effect.Parameters["Ke"].SetValue(0.0f); effect.Parameters["Ka"].SetValue(0.01f); effect.Parameters["Kd"].SetValue(1.0f); effect.Parameters["Ks"].SetValue(0.3f); effect.Parameters["specularPower"].SetValue(100); Thank you very much UPDATE: I tried to load an effect for each model when drawing, but it doesn't seem to have changed anything. I suppose it is because XNA detects that the effect has already been loaded before and doesn't want to load a new one. Any idea why?

    Read the article
