Search Results

Search found 239 results on 10 pages for 'interpretation'.

Page 5/10 | < Previous Page | 1 2 3 4 5 6 7 8 9 10  | Next Page >

  • Using Bitmap.LockBits and Marshal.Copy in IronPython not changing image as expected

    - by Leonard H Martin
    Hi all, I have written the following IronPython code:

        import clr
        clr.AddReference("System.Drawing")
        from System import *
        from System.Drawing import *
        from System.Drawing.Imaging import *

        originalImage = Bitmap("Test.bmp")

        def RedTint(bitmap):
            bmData = bitmap.LockBits(Rectangle(0, 0, bitmap.Width, bitmap.Height),
                                     ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb)
            ptr = bmData.Scan0
            bytes = bmData.Stride * bitmap.Height
            rgbValues = Array.CreateInstance(Byte, bytes)
            Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, bytes)
            for i in rgbValues[::3]:
                i = 255
            Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, bytes)
            bitmap.UnlockBits(bmData)
            return bitmap

        newPic = RedTint(originalImage)
        newPic.Save("New.bmp")

    which is my interpretation of this MSDN code sample: http://msdn.microsoft.com/en-us/library/5ey6h79d.aspx, except that I am saving the altered bitmap instead of displaying it in a Form. The code runs; however, the newly saved bitmap is an exact copy of the original image, with no sign of any changes having occurred (it is supposed to create a red tint). Could anyone advise what's wrong with my code? The image I'm using is simply a 24bpp bitmap I created in Paint (it's just a big white rectangle!), using IronPython 2.6 on Windows 7 (x64) with .NET Framework 3.5 SP1 installed.
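    The loop in the middle is the likely trouble spot. Here is a minimal sketch in plain Python (the name-binding semantics are the same in IronPython) of why assigning to a loop variable cannot write back into the array:

        # data[::3] produces a copy, and "b = 255" merely rebinds the local
        # name b; the original buffer is never touched. Writing through an
        # index mutates it in place.
        data = bytearray(9)

        for b in data[::3]:
            b = 255                        # rebinds b; data is unchanged

        print(list(data))                  # [0, 0, 0, 0, 0, 0, 0, 0, 0]

        for i in range(0, len(data), 3):
            data[i] = 255                  # index assignment mutates data

        print(list(data))                  # [255, 0, 0, 255, 0, 0, 255, 0, 0]

    A second thing worth checking: GDI+ lays out 24bpp pixel data as blue, green, red, so offset 0 of each triplet is the blue channel, not the red one.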

    Read the article

  • Chomsky hierarchy and programming languages

    - by dader51
    Hi, I'm trying to learn some aspects of the Chomsky hierarchy (CH) that relate to programming languages (PL), and I still have to read the Dragon Book. I've read that most programming languages can be parsed as a context-free grammar (CFG). In terms of computational power, that equals a non-deterministic pushdown automaton. Am I right? If it's true, then how can a CFG hold an unrestricted grammar (UG, which is Turing complete)? I'm asking because, even if programming languages are context-free, they are actually used to describe Turing machines, and thus unrestricted grammars. I think that's because there are at least two different levels of computation: the first, the parsing of a CFG, focuses on the syntax related to the structure (representation?) of the language, while the other focuses on the semantics (the sense, the interpretation of the data itself?) related to the capabilities of the language, which is Turing complete. Again, are these assumptions right? Thanks a lot.

    Read the article

  • WPF Localization Using LocBaml: Handling Special Symbols

    - by Aryeh
    Hello, I'm dealing with localization of a WPF application (Visual Studio 2010 under Windows 7). I've just accomplished the whole process of localization using the LocBaml tool, as explained in the WPF Globalization and Localization Overview and in related posts. The target language is Italian (the it-IT culture). When I run my application in Italian, I have a problem with the interpretation of the special symbols © and ™: they both appear as a white question mark on a black diamond-shaped background. The symbols © and ™ appear identically in both the English and Italian CSV files. I also tried the special letters (such as È, à, etc.) that are present in Italian but absent in English, and they too are rendered as the diamond-shaped question mark. In Region and Language, I changed the system locale to Italian [Italy], restarted the PC and ran the application again; this helped me in the past to cope with a similar problem in localization of C++ applications under Windows XP, but now it didn't help either. Does anybody have any idea what the catch is here?
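    The symptom (a white question mark on a black diamond is how the Unicode replacement character U+FFFD renders) usually means the file's bytes were written in one encoding and read back in another. A minimal Python sketch of that failure mode, assuming the CSV files passed through a legacy code page such as Windows-1252:

        # "©" and "™" saved as Windows-1252 are the single bytes A9 and 99;
        # neither is a valid UTF-8 sequence, so a UTF-8 reader substitutes
        # U+FFFD, the black-diamond question mark.
        raw = "©™".encode("cp1252")
        print(raw.decode("utf-8", errors="replace"))   # two replacement characters

    If that is what is happening here, re-saving the CSV files explicitly as UTF-8 before running them through LocBaml would be the first thing to try.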

    Read the article

  • How to determine what the Seemingly Unrelated Regression error means in R

    - by user2154571
    I'm using the systemfit() function to conduct a seemingly unrelated regression and am getting the following error:

        Error in solve(sigma, tol = solvetol) :
          Lapack routine dsptrf returned error code 1

    Yet I'm unable to find a meaningful interpretation of what the error suggests is going on. Below is some simulated code that shows which functions I'm using (the simulated code does not produce the error). Thanks for any thoughts on this error.

        library(systemfit)   # the package providing systemfit()

        y  <- sample(seq(1:4), 100, replace = TRUE)
        x1 <- sample(seq(0:1), 100, replace = TRUE) - 1
        x2 <- sample(seq(0:1), 100, replace = TRUE) - 1
        x3 <- sample(seq(1:4), 100, replace = TRUE)
        frame <- as.data.frame(cbind(y, x1, x2, x3))

        mod_1 <- y ~ x1 + x3 + x1:x3
        mod_2 <- y ~ x2 + x3 + x2:x3
        output <- systemfit(list(mod_1, mod_2), data = frame, method = "SUR")

    Read the article

  • Regex question: Why isn't this matching?

    - by AllenG
    I have the following regex: (?<=\.\d+?)0+(?=\D|$) I'm running it against a string which contains the following:

        SVC~NU^0270~313.3~329.18~~10~~6.00:

    When it runs, it matches the 6.00 (correctly), which my logic then trims by one zero to turn into 6.0. The regex then runs again (or should), but fails to pick up the 6.0. I'm by no means an expert on regex, but my understanding of my expression is that it's looking for a decimal with 1 or more optional (so, really, zero or more) digits prior to one or more zeros, which are then followed by any non-digit character or the end of the line. Assuming that interpretation is correct, I can't see why it wouldn't match on the second pass. For that matter, I'm not sure why my Regex.Replace isn't matching the full 6.00 on the first pass and removing both of the trailing zeros... Any suggestions?
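    A sketch of the behaviour using Python's third-party regex module (the standard re module rejects variable-length lookbehinds like this one; .NET accepts them). The key point is that \d+? is lazy but not optional: it still demands at least one digit between the '.' and the zeros, which is what blocks both the full 6.00 match and the second pass:

        import regex   # pip install regex; stdlib re cannot run this pattern

        pattern = r"(?<=\.\d+?)0+(?=\D|$)"
        s = "SVC~NU^0270~313.3~329.18~~10~~6.00:"

        # Only the final zero of 6.00 qualifies: the first zero is preceded
        # directly by '.', so the lookbehind's mandatory digit is missing.
        print(regex.findall(pattern, s))      # ['0']
        print(regex.sub(pattern, "", s))      # ...~~6.0:  (one zero removed)

        # On a second pass, the lone remaining zero has no digit before it.
        print(regex.findall(pattern, "6.0"))  # []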

    Read the article

  • String Field Sizes for Unicode database fields using different data access components

    - by Serg
    mjustin, in his question 1 and question 2, says that the TWideStringField.Size property for UTF8 fields in Delphi 2009 dbExpress is four times larger than the logical field size (the maximum number of characters in the field). I'm inclined to consider this a dbExpress bug. This is what the Delphi 2009 Help says:

        The interpretation of Size depends on the data type. The meaning of Size
        for data types that use it is given in the following table. For all other
        data types, Size is not used and its value is always 0.

        ftString - Size is the maximum number of characters in the string.

    I am using FIBPlus 6.9.9 and it follows the above documentation - the string field size is the maximum number of characters, not bytes. So the question also implies a second question: are dbExpress drivers in Delphi 2009 unusable for Unicode databases?

    Read the article

  • Subversion Repository Layout

    - by Tim Long
    Most Subversion tools create a default repository layout with /trunk, /branches and /tags. The documentation also recommends not using separate repositories for each project, so that code can be more easily shared. Following that advice has led to me having a repository with the following layout:

        /trunk
            /Project1
            /Project2
        /branches
            /Project1
            /Project2
        /tags
            /Project1
            /Project2

    and so on, you get the idea. Over time, I've found this structure a bit clumsy, and it occurred to me that there's an alternative interpretation of the recommendations, such as:

        /Project1
            /trunk
            /branches
            /tags
        /Project2
            /trunk
            /branches
            /tags

    So, which layout do people use, and why? Or is there another way to do things that I've completely missed?

    Read the article

  • Assembly Jump conditionals -- jae vs. jbe

    - by Raven Dreamer
    Hi, all! I'm working on an assembly program (Intel 8086). I'm trying to determine whether an input character (stored in dl) is within a certain range of hex values.

        cmp dl, 2Eh    ; checks for periods
        je  print      ; jumps to print a "." input
        cmp dl, 7Ah    ; checks for outside of wanted range
        jae input      ; returns to top

    Please confirm that this is a correct interpretation of my code:

        Step 1: if dl = 2E, goto print
        Step 2: if dl = 7A is false, goto input [if dl < 7A, goto input]

    Read the article

  • The problem of different treatment of __VA_ARGS__ by VS 2008 and GCC

    - by liuliu
    I am trying to identify a problem caused by an unusual usage of variadic macros. Here is the hypothetical macro:

        #define va(c, d, ...) c(d, __VA_ARGS__)
        #define var(a, b, ...) va(__VA_ARGS__, a, b)

        var(2, 3, printf, "%d %d %d\n", 1);

    For gcc, the preprocessor will output

        printf("%d %d %d\n", 1, 2, 3);

    but for VS 2008, the output is

        printf, "%d %d %d\n", 1(2, 3);

    I suspect the difference is caused by the different treatment of __VA_ARGS__: gcc will first expand the expression to va(printf, "%d %d %d\n", 1, 2, 3) and treat 1, 2, 3 as the __VA_ARGS__ for the macro va, but VS 2008 will first treat b as the __VA_ARGS__ for the macro va and then do the expansion. Which one is the correct interpretation for C99 variadic macros, or does my usage fall into undefined behavior?

    Read the article

  • Can I mark an Email as "High Importance" for Outlook using System.Net.Mail?

    - by ccornet
    Part of the application I'm working on for my client involves sending emails for events. Sometimes these are highly important. My client, and most of my client's clients, use Outlook, which has the ability to mark a mail message as High Importance. Now, I know it is callous to assume that all end users will be using the same interface, so I am not assuming that. But considering you can send email from Outlook as High Importance even if the recipient is not necessarily reading it through Outlook, there must be some data stored... somehow... that lets Outlook know whether a particular message was marked as High Importance. That's my interpretation, at least. The application currently uses System.Net.Mail to send out emails, using System.Net.Mail.MailMessage for writing them and System.Net.Mail.SmtpClient to send them. Is it possible to set this "High Importance" flag with System.Net.Mail's abilities? If not, is there any assembly available which can configure this setting?
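    The "data stored... somehow" is ordinary message headers: Outlook reads the Importance header (and the older X-Priority) from the mail itself, and System.Net.Mail exposes this as the MailMessage.Priority property. Purely to illustrate the headers involved, a hedged sketch in Python with hypothetical addresses:

        # Build a message carrying the importance headers Outlook looks at.
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "alerts@example.com"
        msg["To"] = "oncall@example.com"
        msg["Subject"] = "Event server down"
        msg["Importance"] = "high"    # read by Outlook and most clients
        msg["X-Priority"] = "1"       # legacy equivalent; 1 = highest
        msg.set_content("The event service stopped responding.")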

    Read the article

  • In the following implementation of static_strlen, why are the & and parentheses around str necessary

    - by Ben
    If I change the type to const char str[Len], I get the following error:

        error: no matching function for call to 'static_strlen(const char [5])'

    Am I correct that static_strlen expects an array of const char references? My understanding is that arrays are passed as pointers anyway, so what need is there for the elements to be references? Or is that interpretation completely off the mark?

        #include <iostream>

        template <size_t Len>
        size_t static_strlen(const char (&str)[Len])
        {
            return Len - 1;
        }

        int main()
        {
            std::cout << static_strlen("oyez") << std::endl;
            return 0;
        }

    Read the article

  • How to explain to a client that you've gone over-budget and you'll need more money/time to deliver w

    - by General Tapioca
    My situation is that I have agreed on a per-project proposal with the client. The proposal is vague, but still names functionality in a way that can be argued over as to whether it's included or not, leaving some room for interpretation. I originally pressed as hard as I could for a per-month contract, arguing that the project is mostly non-predictable, but the client refused. Being a small company, I had to fold, and signed a contract based on my group's estimates. At this point we have reached completion on about 85% of the features (we think), but we have run out of budget. We have been working for almost two years with this client on previous contracts, and we have delivered a good product that they are happy with, so we have a good standing relationship. More info:

        - There has been a bit of scope creep, but I don't think enough for me to hide behind that argument.
        - We've been delivering partial releases about monthly.
        - We don't have systematic user testing in place.

    Read the article

  • Screening (multi)collinearity in a regression model

    - by aL3xa
    I hope that this one is not going to be an "ask-and-answer" question... here goes: (multi)collinearity refers to extremely high correlations between predictors in a regression model. How to cure it... well, sometimes you don't need to "cure" collinearity, since it doesn't affect the regression model itself, but rather the interpretation of the effect of individual predictors. One way to spot collinearity is to put each predictor as a dependent variable and the other predictors as independent variables, determine R2, and if it's larger than .9 (or .95), consider the predictor redundant. This is one "method"... what about other approaches? Some of them are time-consuming, like excluding predictors from the model and watching for b-coefficient changes - they should be noticeably different. Of course, we must always bear in mind the specific context/goal of the analysis... Sometimes the only remedy is to repeat the research, but right now I'm interested in various ways of screening redundant predictors when (multi)collinearity occurs in a regression model.
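    The per-predictor R2 screen described above is exactly what the variance inflation factor packages up: VIF = 1 / (1 - R2). A hedged sketch of it in plain numpy, on made-up data with one deliberately near-duplicate column:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 3))
        # Append a near-copy of column 0 to manufacture collinearity.
        X = np.column_stack([X, X[:, 0] + 0.01 * rng.normal(size=100)])

        for j in range(X.shape[1]):
            y = X[:, j]
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(y)), others])   # intercept + others
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            r2 = 1 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
            flag = "redundant" if r2 > 0.9 else "ok"
            print(f"predictor {j}: R2 = {r2:.4f}, VIF = {1 / (1 - r2):.1f} ({flag})")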

    Read the article

  • Why do open source projects cling on 0.x versions for too long?

    - by ssg
    I see many open source projects insist on staying at 0.x versions for a very long time, despite the product having proven useful and very stable. Trac is one example. They even risked switching from 0.9 to 0.10, which might confuse a lot of users about which is more recent. I wonder if this is a cultural paradigm, an honor code in the open source community, or simply a strict interpretation of release cycle management. Would a person who releases a first version as "1.0 beta" be banished from the open source world, or, more realistically, appeal to fewer contributors? For some projects it even looks like they will never reach 1.0, only ever closing half the remaining distance, like Zeno's paradox.

    Read the article

  • XML multiline comments in C# - what am I doing wrong?

    - by Dave
    According to this article, it's possible to get multiline XML comments - instead of using ///, use /** */. This is my interpretation of what multiline comments are, and what I want to have happen:

        /**
         * <summary>
         * this comment is on line 1 in the tooltip
         * this comment is on line 2 in the tooltip
         * </summary>
         */

    However, when I use this form, the tooltip that pops up when I hover over my class name in my code is single-line, i.e. it looks exactly as if I had written my comment like this:

        /// <summary>
        /// this comment is on line 1 in the tooltip
        /// this comment is on line 2 in the tooltip
        /// </summary>

    Is this behavior actually still possible in VS2008?

    Read the article

  • Efficiently compute the row sums of a 3d array in R

    - by Gavin Simpson
    Consider the array a:

        > a <- array(c(1:9, 1:9), c(3,3,2))
        > a
        , , 1

             [,1] [,2] [,3]
        [1,]    1    4    7
        [2,]    2    5    8
        [3,]    3    6    9

        , , 2

             [,1] [,2] [,3]
        [1,]    1    4    7
        [2,]    2    5    8
        [3,]    3    6    9

    How do we efficiently compute the row sums of the matrices indexed by the third dimension, such that the result is:

             [,1] [,2]
        [1,]   12   12
        [2,]   15   15
        [3,]   18   18

    The column sums are easy via the 'dims' argument of colSums():

        > colSums(a, dims = 1)

    but I cannot find a way to use rowSums() on the array to achieve the desired result, as it has a different interpretation of 'dims' to that of colSums(). It is simple to compute the desired row sums using:

        > apply(a, 3, rowSums)
             [,1] [,2]
        [1,]   12   12
        [2,]   15   15
        [3,]   18   18

    but that is just hiding the loop. Are there other efficient, truly vectorised ways of computing the required row sums?
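    Not the base-R answer being asked for, but as a cross-check of the intended semantics, the same reduction in numpy (a hedged sketch; note that R fills arrays column-major):

        import numpy as np

        # Rebuild R's array(c(1:9, 1:9), c(3,3,2)).
        m = np.arange(1, 10).reshape(3, 3, order="F")   # column-major fill
        a = np.stack([m, m], axis=2)                    # shape (3, 3, 2)

        # Summing over axis 1 (the columns within each slice) yields the row
        # sums per slice, a (3, 2) result matching the desired output.
        print(a.sum(axis=1))
        # [[12 12]
        #  [15 15]
        #  [18 18]]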

    Read the article

  • AsyncTask Threading Rule - Can it really only be used once?

    - by stormin986
    The documentation on AsyncTask gives the following as a rule related to threading: "The task can be executed only once (an exception will be thrown if a second execution is attempted.)" All this means is that you have to create a new instance of the class every time you want to use it, right? In other words, it must be done like this:

        new DownloadFilesTask().execute(url1, url2, url3);
        new DownloadFilesTask().execute(url4, url5, url6);

    Or conversely, you can NOT do the following:

        DownloadFilesTask dfTask = new DownloadFilesTask();
        dfTask.execute(url1, url2, url3);
        dfTask.execute(url4, url5, url6);

    Can someone verify this is an accurate interpretation? I realize I pretty much just answered this for myself as I was typing this out... but it wasn't immediately obvious to me, so I think this would be useful to have posted nonetheless.

    Read the article

  • Interpreted vs. Compiled vs. Late-Binding

    - by zubin71
    Python is compiled into an intermediate bytecode (pyc) and then executed, so there is a compilation step followed by interpretation. However, long-time Python users say that Python is a "late-binding" language and that it shouldn't be referred to as an interpreted language. How would Python be different from another interpreted language? Could you tell me what "late binding" means in the Python context? Java is another language whose source code is first compiled into bytecode, which is then interpreted. Is Java an interpreted or a compiled language? How is it different from Python in terms of compilation/execution? Java is said not to have "late binding". Does this have anything to do with Java programs being slightly faster than Python? It'd be great if you could also give me links to places where people have already discussed this; I'd love to read more on it. Thank you.
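    As one concrete illustration of what "late binding" usually means in Python, a minimal sketch: names in a function body are looked up when the call executes, not when the function is compiled, so rebinding a name changes behaviour after the fact.

        def greet():
            return helper()    # 'helper' is resolved at call time

        def helper():
            return "hello"

        print(greet())         # hello

        def helper():          # rebinding the global name...
            return "goodbye"

        print(greet())         # ...changes what greet() calls: goodbye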

    Read the article

  • Visual C++: breakpoints disabled

    - by John
    I have a 'release with debug info' unmanaged C++ .exe (built with VS2005) deployed onto another PC; the .exe and .pdb are in the same folder. When I try to attach to the process from VS2005, either locally or remotely from my dev PC, all my breakpoints become disabled. I don't get any warning/error popups, which makes me think the PDB file is being found but not seen as 'good'. Is that the right interpretation? I think if it couldn't see the PDB at all, I'd get a "no debug information could be found" popup. Has anyone got any ideas about what could be wrong?

    Read the article

  • Compatible types and structures in C

    - by Oli Charlesworth
    I have the following code: int main(void) { struct { int x; } a, b; struct { int x; } c; struct { int x; } *p; b = a; /* OK */ c = a; /* Doesn't work */ p = &a; /* Doesn't work */ return 0; } which fails to compile under GCC (3.4.6), with the following error: test.c:8: error: incompatible types in assignment test.c:9: warning: assignment from incompatible pointer type Now, from what I understand (admittedly from the C99 standard), is that a and c should be compatible types, as they fulfill all the criteria in section 6.2.7, paragraph 1. I've tried compiling with std=c99, to no avail. Presumably my interpretation of the standard is wrong?

    Read the article

  • typedef resolution rule

    - by kumar_m_kiran
    Hi all, can you please tell me the resolution rule involved in working out the meaning of a variable in a typedef? Any link on the subject would be very useful. Example:

        typedef string* pstring;
        const pstring parr;

    Here confusion arises over whether the constness applies to the pointer or to what it points at. Now, based on what rule of thumb can we start resolving the above interpretation of pstring? Similarly, if I have a very complex typedef'd variable, like typedef void (*func)(int), I should be able to resolve it using the same rule of thumb. Thanks in advance for your suggestions.

    Read the article

  • C++ math evaluating incorrectly

    - by Hayden
    I thought I could make life a little easier in data statistics by writing a small program which returns the results of a sampling distribution of the mean (with standard error). It does this part successfully, but in an attempt to return the z-score using the formula I found here, it returns -1#IND. My interpretation of that formula is:

        (1 / (sqrt(2 * pi) * stdev)) * pow(e, normalpow)

    where

        double normalpow = -0.5 * ((mean - popmean) * (mean - popmean) / stdev);

    I did a little more investigating and found that (mean - popmean) * (mean - popmean) was evaluating to 0 no matter what. How can I get around this problem of normalpow evaluating to 0?
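    For reference, a hedged Python restatement of the density formula being implemented; note that the textbook form divides by the variance (stdev squared) in the exponent, which is worth checking against the linked formula. Also worth checking, as a guess: if mean and popmean are computed with integer arithmetic, they can truncate to the same value, making the squared difference 0.

        import math

        def normal_pdf(x, mu, sigma):
            # exp(-(x - mu)^2 / (2 sigma^2)) / (sqrt(2 pi) sigma)
            exponent = -0.5 * ((x - mu) ** 2) / (sigma ** 2)
            return math.exp(exponent) / (math.sqrt(2 * math.pi) * sigma)

        print(normal_pdf(0.0, 0.0, 1.0))   # 0.3989... i.e. 1/sqrt(2*pi)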

    Read the article

  • Spring Framework HttpRequestHandler failure

    - by sharadva
    We have an application which communicates via REST requests made by clients. The REST requests contain a "region name" and an "ID" as parameters, so a request would look something like this (for a DELETE), with the region name and ID concatenated into the path:

        http://host:port/regionnameID

    These REST requests between regions in a federation are properly URL-encoded. I find that these requests fail if the region name has a slash ("/") in it. Then the request would look like so:

        http://host:port/region/nameID

    This is due to incorrect interpretation of the REST URL by the HttpRequestHandler when there is a '/' in the region name. Now, we have no control over clients sending REST requests with "/" in the region name. Is there any method / configuration / workaround that can be done to prevent the HttpRequestHandler from returning 404?
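    For background, a small Python sketch of the encoding rule at work: a literal "/" inside a path segment has to travel as %2F, and a server that percent-decodes before route matching will still see it as a separator (whether the container decodes %2F before or after routing is configuration-dependent):

        from urllib.parse import quote, unquote

        region = "region/name"
        path = "/" + quote(region, safe="") + "42"   # region name + ID, as in the question
        print(path)            # /region%2Fname42 -- the slash no longer splits the path
        print(unquote(path))   # /region/name42   -- what a decode-then-route server sees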

    Read the article

  • Elegant PostgreSQL Group by for Ruby on Rails / ActiveRecord

    - by digitalfrost
    Trying to retrieve an array of ActiveRecord objects grouped by date with PostgreSQL. More specifically, I'm trying to translate the following MySQL query:

        @posts = Post.all(:group => "date(date)",
                          :conditions => ["location_id = ? and published = ?", @location.id, true],
                          :order => "created_at DESC")

    I am aware that PostgreSQL's interpretation of the SQL standard is stricter than MySQL's and that consequently this type of query won't work... and I have read a number of posts on Stack Overflow and elsewhere on the subject, but none of them seems to be the definitive answer. I've tried various combinations of queries with GROUP BY and DISTINCT clauses without much joy, and for the moment I have a rather inelegant hack which, although it works, makes me blush when I look at it. What is the proper way to make such a query with Rails and PostgreSQL? (Ignoring the fact that surely this should be abstracted away at the ActiveRecord level.)

    Read the article
