Search Results

Search found 8381 results on 336 pages for 'bad neighbor'.

Page 83/336 | < Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >

  • pinpointing "conditional jump or move depends on uninitialized value(s)" valgrind message

    - by kamziro
    So I've been getting some mysterious uninitialised-value messages from valgrind, and it's been quite the mystery as to where the bad value originated. It seems that valgrind shows the place where the uninitialised value ends up being used, but not the origin of the uninitialised value. ==11366== Conditional jump or move depends on uninitialised value(s) ==11366== at 0x43CAE4F: __printf_fp (in /lib/tls/i686/cmov/libc-2.7.so) ==11366== by 0x43C6563: vfprintf (in /lib/tls/i686/cmov/libc-2.7.so) ==11366== by 0x43EAC03: vsnprintf (in /lib/tls/i686/cmov/libc-2.7.so) ==11366== by 0x42D475B: (within /usr/lib/libstdc++.so.6.0.9) ==11366== by 0x42E2C9B: std::ostreambuf_iterator<char, std::char_traits<char> > std::num_put<char, std::ostreambuf_iterator<char, std::char_traits<char> > >::_M_insert_float<double>(std::ostreambuf_iterator<char, std::char_traits<char> >, std::ios_base&, char, char, double) const (in /usr/lib/libstdc++.so.6.0.9) ==11366== by 0x42E31B4: std::num_put<char, std::ostreambuf_iterator<char, std::char_traits<char> > >::do_put(std::ostreambuf_iterator<char, std::char_traits<char> >, std::ios_base&, char, double) const (in /usr/lib/libstdc++.so.6.0.9) ==11366== by 0x42EE56F: std::ostream& std::ostream::_M_insert<double>(double) (in /usr/lib/libstdc++.so.6.0.9) ==11366== by 0x81109ED: Snake::SnakeBody::syncBodyPos() (ostream:221) ==11366== by 0x810B9F1: Snake::Snake::update() (snake.cpp:257) ==11366== by 0x81113C1: SnakeApp::updateState() (snakeapp.cpp:224) ==11366== by 0x8120351: RoenGL::updateState() (roengl.cpp:1180) ==11366== by 0x81E87D9: Roensachs::update() (rs.cpp:321) As can be seen, it gets quite cryptic, especially because when it says "by Class::MethodX", it sometimes points straight to ostream etc. Perhaps this is due to optimization? ==11366== by 0x81109ED: Snake::SnakeBody::syncBodyPos() (ostream:221) Just like that. Is there something I'm missing? What is the best way to catch bad values without having to resort to super-long printf detective work?
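    A note on tracking this down: recent valgrind versions accept --track-origins=yes, which makes memcheck report where an uninitialised value was created, not just where it is used. A minimal sketch of the situation (file and type names made up), with the relevant commands in the comments:

      // uninit.cpp - minimal reproduction of this class of warning (names made up).
      // Build and run with, for example:
      //   g++ -g -O0 uninit.cpp -o uninit
      //   valgrind --track-origins=yes ./uninit
      // With --track-origins=yes, memcheck appends an "Uninitialised value was
      // created by a stack allocation" block that points at the allocation site.
      #include <iostream>

      struct Body {
          double x;      // never initialised; this is the "origin"
          Body() {}      // constructor forgets to set x
      };

      int main() {
          Body b;
          std::cout << b.x << "\n";  // formatting the double branches on it, giving
                                     // "conditional jump or move depends on uninitialised value(s)"
          return 0;
      }

    The odd (ostream:221) attribution is typically inlining from an optimized build; compiling the failing code with -g and a low optimization level usually keeps the frames pointing at your own methods.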

    Read the article

  • Java method keyword "final" and its use

    - by Lukas Eder
    When I create complex type hierarchies (several levels, several types per level), I like to use the final keyword on methods implementing some interface declaration. An example: interface Garble { int zork(); } interface Gnarf extends Garble { /** * This is the same as calling {@link #zblah(0)} */ int zblah(); int zblah(int defaultZblah); } And then abstract class AbstractGarble implements Garble { @Override public final int zork() { ... } } abstract class AbstractGnarf extends AbstractGarble implements Gnarf { // Here I absolutely want to fix the default behaviour of zblah // No Gnarf should be allowed to set 1 as the default, for instance @Override public final int zblah() { return zblah(0); } // This method is not implemented here, but in a subclass @Override public abstract int zblah(int defaultZblah); } I do this for several reasons: It helps me develop the type hierarchy. When I add a class to the hierarchy, it is very clear what methods I have to implement, and what methods I may not override (in case I forgot the details about the hierarchy). I think overriding concrete stuff is bad according to design principles and patterns, such as the template method pattern. I don't want other developers or my users to do it. So the final keyword works perfectly for me. My question is: Why is it used so rarely in the wild? Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?

    Read the article

  • Foosball result prediction

    - by Wolf
    In our office, we regularly enjoy some rounds of foosball / table football after work. I have put together a small Java program that generates random 2vs2 lineups from the available players and stores the match results in a database afterwards. The current prediction of the outcome uses a simple average of all previous match results from the 4 involved players. This gives a very rough estimation, but I'd like to replace it with something more sophisticated, taking into account things like: players may be good playing as attacker but bad as defender (or vice versa); players do well against a specific opponent / badly against others; some teams work well together, others don't; skills change over time. What would be the best algorithm to predict the game outcome as accurately as possible? Someone suggested using a neural network for this, which sounds quite interesting... but I do not have enough knowledge on the topic to say if that could work, and I also suspect it might take too many games to be reasonably trained. EDIT: Had to take a longer break from this due to some project deadlines. To make the question more specific: given the following MySQL table containing all matches played so far: table match_result: match_id int pk, match_start datetime, duration int (match length in seconds), blue_defense int fk to table player, blue_attack int fk to table player, red_defense int fk to table player, red_attack int fk to table player, score_blue int, score_red int. How would you write a function predictResult(blueDef, blueAtk, redDef, redAtk) {...} to estimate the outcome as closely as possible, executing any SQL, doing calculations or using external libraries?
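    A hedged illustration of one possible direction (this is an assumption, not something taken from the post): an Elo-style rating per player, kept separately for attack and defence, gives a simple expected-score predictor that also adapts as skills change. With team ratings built from the individual ratings:

      R_blue = (R_blueDef_defence + R_blueAtk_attack) / 2      (assumed team rating; same for red)
      E_blue = 1 / (1 + 10^((R_red - R_blue) / 400))           (expected share of the score for blue)
      R_i    = R_i + K * (S_blue - E_blue)                     (post-match update for blue's players, K roughly 16-32)

    predictResult() could then return E_blue mapped onto an expected score line, with opponent-specific and team-chemistry effects added as correction terms; whether any of this actually beats the simple average is something only the stored match history can confirm.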

    Read the article

  • homebrew path issue

    - by Shaun Stanislaus
    Master:~ shaunstanislaus$ ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go) ==> This script will install: /usr/local/bin/brew /usr/local/Library/... /usr/local/share/man/man1/brew.1 Press enter to continue ==> Downloading and Installing Homebrew... remote: Counting objects: 82368, done. remote: Compressing objects: 100% (39323/39323), done. remote: Total 82368 (delta 56782), reused 65301 (delta 42220) Receiving objects: 100% (82368/82368), 11.68 MiB | 1.59 MiB/s, done. Resolving deltas: 100% (56782/56782), done. From https://github.com/mxcl/homebrew * [new branch] master -> origin/master HEAD is now at 2ea1a0e smpeg: depends on gtk ==> Installation successful! You should run `brew doctor' *before* you install anything. Now type: brew help Master:~ shaunstanislaus$ brew doctor -bash: /usr/local/bin/brew: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby: bad interpreter: No such file or directory Master:~ shaunstanislaus$ echo $PATH /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin:/Users/shaunstanislaus/Library/Application Support/GoodSync:/opt/local/bin:/opt/local/sbin:/usr/local/sbin:/Users/shaunstanislaus/.ec2/bin:/Users/shaunstanislaus/.rvm/bin /usr/local/bin/brew: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby: bad interpreter: No such file or directory How do I fix this path issue? I can't use the brew command, and I think I previously symlinked to the wrong location. Please advise, thank you.

    Read the article

  • How to get JSON code into MYSQL database?

    - by the_boy_za
    I've created this form with a jQuery autocomplete function. The selected brand from the autocomplete list needs to get sent to a PHP file using the $.ajax function. My code doesn't seem to work but I can't find the error. I don't know why the data isn't getting inserted into the MySQL database. Here is my code: JQUERY: $(document).ready(function() { $("#autocomplete").autocomplete({ minLength: 2 }); $("#autocomplete").autocomplete({ source: ["Adidas", "Airforce", "Alpha Industries", "Asics", "Bikkemberg", "Birkenstock", "Bjorn Borg", "Brunotti", "Calvin Klein", "Cars Jeans", "Chanel", "Chasin", "Diesel", "Dior", "DKNY", "Dolce & Gabbana"] }); $("#add-brand").click(function() { var merk = $("#autocomplete").val(); $("#selected-brands").append(" <a class=\"deletemerk\" href=\"#\">" + merk + "</a>"); //Add your parameters here var param = JSON.stringify({ Brand: merk }); $.ajax({ type: "POST", async: true, url: "scripttohandlejson.php", contentType: "application/json", data: param, dataType: "json", success: function (good){ //handle success alert(good) }, failure: function (bad){ //handle any errors alert(bad) } }); return false; }); }); PHP FILE: scripttohandlejson.php <?PHP $getcontent = json_decode($json, true); $getcontent->{'Brand'}; $vraag ="INSERT INTO kijken (merk) VALUES ='$getcontent' "; $voerin = mysql_query($vraag) or die("couldnt put into db"); <?

    Read the article

  • Legitimate uses of the Function constructor

    - by Marcel Korpel
    As repeatedly said, it is considered bad practice to use the Function constructor (also see the ECMAScript Language Specification, 5th edition, § 15.3.2.1): new Function ([arg1[, arg2[, … argN]],] functionBody) (where all arguments are strings containing argument names and the last (or only) string contains the function body). To recapitulate, it is said to be slow, as explained by the Opera team: Each time […] the Function constructor is called on a string representing source code, the script engine must start the machinery that converts the source code to executable code. This is usually expensive for performance – easily a hundred times more expensive than a simple function call, for example. (Mark ‘Tarquin’ Wilton-Jones) Though it's not that bad, according to this post on MDC (I didn't test this myself using the current version of Firefox, though). Crockford adds that [t]he quoting conventions of the language make it very difficult to correctly express a function body as a string. In the string form, early error checking cannot be done. […] And it is wasteful of memory because each function requires its own independent implementation. Another difference is that a function defined by a Function constructor does not inherit any scope other than the global scope (which all functions inherit). (MDC) Apart from this, you have to be attentive to avoid injection of malicious code, when you create a new Function using dynamic contents. Lots of disadvantages and it is intelligible that ECMAScript 5 discourages the use of the Function constructor by throwing an exception when using it in strict mode (§ 13.1). That said, T.J. Crowder says in an answer that [t]here's almost never any need for the similar […] new Function(...), either, again except for some advanced edge cases. So, now I am wondering: what are these “advanced edge cases”? Are there legitimate uses of the Function constructor?

    Read the article

  • Returning true or error message in Ruby

    - by seaneshbaugh
    I'm wondering if writing functions like this is considered good or bad form. def test(x) if x == 1 return true else return "Error: x is not equal to one." end end And then to use it we do something like this: result = test(1) if result != true puts result end result = test(2) if result != true puts result end Which just displays the error message for the second call to test. I'm considering doing this because in a Rails project I'm working on, inside my controller code, I make calls to a model's instance methods, and if something goes wrong I want the model to return the error message to the controller; the controller then takes that error message, puts it in the flash and redirects. Kinda like this: def create @item = Item.new(params[:item]) if [email protected]? result = @item.save_image(params[:attachment][:file]) if result != true flash[:notice] = result redirect_to(new_item_url) and return end #and so on... That way I'm not constructing the error messages in the controller, merely passing them along, because I really don't want the controller to be concerned with what the save_image method itself does, just whether or not it worked. It makes sense to me, but I'm curious as to whether or not this is considered a good or bad way of writing methods. Keep in mind I'm asking this in the most general sense, pertaining mostly to Ruby; it just happens that I'm doing this in a Rails project, and the actual logic of the controller really isn't my concern.

    Read the article

  • Creating a smart text generator

    - by royrules22
    I'm doing this for fun (or as 4chan says "for teh lolz"), and if I learn something on the way, all the better. I took an AI course almost 2 years ago now and I really enjoyed it, but I managed to forget everything, so this is a way to refresh that. Anyway, I want to be able to generate text given a set of inputs. Basically this will read forum inputs (or maybe Twitter tweets) and then generate a comment based on the learning. Now the simplest way would be to use a Markov chain text generator, but I want something a little bit more complex than that, as the MKC basically only learns by word order (which word is more likely to appear after word x given the input text). I'm trying to see if there's something I can do to make it a little bit smarter. For example, I want it to do something like this: learn from a large selection of posts in a message board, but don't weight it too much; for each post, learn from the other comments in that post and weigh these inputs higher; generate a comment and post it; see what other users' reaction to your post was; if good, weigh it positively so you make more posts that are similar to the one made, and vice versa if negative. It's the weighing and learning-from-mistakes part that I'm not sure how to implement. I thought about artificial neural networks (mainly because I remember enjoying that chapter), but as far as I can tell that's mainly used to classify things (i.e. given a finite set of choices [x1...xn], which x is this given input?), not really to generate anything. I'm not even sure if this is possible, or, if it is, what I should go about learning/figuring out. What algorithm is best suited for this? To those worried that I will use this as a bot to spam or provide bad answers to SO, I promise that I will not use this to provide (bad) advice or to spam for profit. I definitely will not post its nonsensical thoughts on SO. I plan to use it for my own amusement. Thanks!
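    For reference, a minimal sketch of the word-order Markov chain baseline mentioned above (written in C++ purely for illustration; the corpus string and names are made up). It captures only "which word tends to follow word x", which is exactly the limitation being described:

      #include <iostream>
      #include <map>
      #include <random>
      #include <sstream>
      #include <string>
      #include <vector>

      int main() {
          std::string corpus = "the cat sat on the mat the cat ate the fish";  // stand-in for forum posts
          std::map<std::string, std::vector<std::string>> next;  // word -> observed successors

          // Learn word order only: record which word follows which.
          std::istringstream in(corpus);
          std::string prev, word;
          while (in >> word) {
              if (!prev.empty()) next[prev].push_back(word);
              prev = word;
          }

          // Generate: start somewhere and repeatedly pick a random observed successor.
          std::mt19937 rng(std::random_device{}());
          std::string current = "the";
          for (int i = 0; i < 8 && next.count(current); ++i) {
              std::cout << current << ' ';
              const auto& candidates = next[current];
              current = candidates[std::uniform_int_distribution<std::size_t>(0, candidates.size() - 1)(rng)];
          }
          std::cout << current << '\n';
          return 0;
      }

    The feedback-weighting part of the question would then amount to nudging those transition counts up or down according to the reactions a generated post receives, which is less a classification problem than a learning-from-reward one.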

    Read the article

  • Is it wise to use temporary tables?

    - by Industrial
    Hi guys, We have a MySQL database table for products. We are utilizing a cache layer to reduce database load, but we think that it's a good idea to minimize the actual data needed to be stored in the cache layer to speed up the application further. All the products in the database that are visible to visitors have a price attached to them. The prices are stored in a different table, called prices. There are multiple price categories depending on which discount level each visitor (customer) applies to. From time to time, there are campaigns, which means that a special price for each product is available. The special prices are stored in a table called specials. Is it bad to make a temp table that binds the tables together? It would only have the necessary information and would of course be cached. -------------|-------------|------------ | productId | hasPrice | hasSpecial -------------|-------------|------------ 1 | 1 | 0 2 | 1 | 1 By doing so, it would be super easy to know if a specific product really has a price, without having to iterate through the complete prices or specials table each time a product should be listed or presented. Are temp tables a common thing for web applications or is it just bad design?

    Read the article

  • Mercurial central server file discrepancy (using 'diff to local')

    - by David Montgomery
    Newbie alert! OK, I have a working central Mercurial repository that I've been working with for several weeks. Everything has been great until I hit a really bizarre problem: my central server doesn't seem to be synced to itself? I only have one file that seems to be out-of-sync right now, but I really need to know how this happened to prevent it from happening in the future. Scenario: 1) created Mercurial repository on server using an existing project directory. The directory contained the file 'mypage.aspx'. 2) On my workstation, I cloned the central repository 3) I made an edit to mypage.aspx 4) hg commit, then hg push from my workstation to the central server 5) now if I look at mypage.aspx on the server's repository using TortoiseHg's repository explorer, I see the change history for mypage.aspx -- an initial check-in and one edit. However, when I select 'Diff to local', it shows the current version on the server's disk is the original version, not the edited version! I have not experimented with branching at all yet, so I'm sure I'm not getting a branch problem. 'hg status' on the server or client returns no pending changes. If I create a clone of the server's repository to a new location, I see the same change history as I would expect, but the file on disk doesn't contain my edit. So, to recap: Central repository = original file, but shows change in revision history (bad) Local repository 'A' = updated file, shows change in revision history (good) Local repository 'B' = original file, but shows change in revision history (bad) Help please! Thanks, David

    Read the article

  • Are women worse developers than men? [closed]

    - by Ekaterina
    Hi people, I am a software engineer and a woman. I constantly keep hearing all these jokes around me about women in programming. They (by "they" I mean my male colleagues) keep pointing out the differences in thinking between men and women. The truth is that when I started working as a developer, my colleagues gave me a hard time only because I am a woman. They automatically assumed that I wanted to do only HTML and styling, and didn't even give me the chance to do something different. I am a .NET programmer and I really disliked (and still dislike) front-end development. I do agree men and women think differently, but I don't agree that this is necessarily a bad thing. A different approach to problems/goals brings more ideas and diversity. I really believe that there are good developers and bad developers regardless of the male/female factor. I am curious to hear the overall opinion though. Would you not hire a woman developer only because she is a woman? Cheers!

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document db, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products. The product model would be pretty bad for an RDBMS, as there'd be many different kinds of products with unique attributes. For example, there'd be books, which have ISBN, authors, title, pages etc., as well as DVDs with play time, directors, artists etc., and quite a few more types. In the end, I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used ways in an RDBMS would be: an EAV model, which would have pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products etc.); ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables with more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications; create a table for each product type: I personally don't like this approach as it complicates everything else. C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product DTO but only expose the relevant properties. My questions now: Are my performance concerns with the many-columns approach valid? Did I overlook something and there is a much better way to do it in an RDBMS? Is MongoDB on Windows stable enough? Does my approach with different behaviour wrappers make sense?

    Read the article

  • Risking the exception anti-pattern.. with some modifications

    - by Sridhar Iyer
    Let's say that I have a library which runs 24x7 on certain machines. Even if the code is rock solid, a hardware fault can sooner or later trigger an exception. I would like to have some sort of failsafe in place for events like this. One approach would be to write wrapper functions that encapsulate each API, along these lines: returnCode=DEFAULT; try { returnCode=libraryAPI1(); } catch(...) { returnCode=BAD; } return returnCode; The caller of the library then restarts the whole thread and reinitializes the module if the returnCode is bad. Things CAN go horribly wrong. E.g. if the try block (or libraryAPI1()) had: func1(); char *x=malloc(1000); func2(); and func2() throws an exception, x will never be freed. In a similar vein, file corruption is a possible outcome. Could you please tell me what other things can possibly go wrong in this scenario?
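    On the specific leak called out above (x never being freed when func2() throws): RAII avoids it, because stack unwinding runs destructors even when the exception escapes to the wrapper. A minimal sketch, with func1/func2 standing in for the calls from the post:

      #include <memory>
      #include <vector>

      void func1();
      void func2();   // may throw

      int libraryAPI1() {
          func1();
          std::vector<char> x(1000);   // owns the 1000 bytes instead of a raw malloc()
          // (or: auto x = std::make_unique<char[]>(1000);)
          func2();                     // if this throws, x's destructor still runs during unwinding
          return 0;
      }

    It does nothing for external side effects such as the half-written files mentioned, though; the restart path in the caller still needs explicit cleanup or recovery for those.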

    Read the article

  • Does a CS PhD Help for Software Engineering Career?

    - by SiLent SoNG
    I would like to seek advice on whether or not to take a PhD offer from a good university. My only concern is that the PhD will take at least 4 years' commitment. During that period I won't have a good monetary income. I am also concerned whether the PhD will help my future career development. My career goal is software engineering only. Some of the PhD info: The PhD is CS-related. The research area is Information Retrieval, Machine Learning, and Natural Language Processing. More specifically, the research topic is Deep Web search. Some background: I worked at Oracle for 3 years in database development after obtaining a CS degree from a good university. Last year I received an email from a professor describing an interesting project, and thereafter I was lured into his research team. The team consists of 4 PhD students; those students have little or no industry experience and their coding skills are really, really bad. By saying bad I mean they do not know some common patterns, and they do not know the pitfalls of the programming languages or the idioms for doing things right. I guess an at-least-4-year commitment is worthy of serious consideration. I am 27 at this moment. If I take the offer, that implies I will be 31+ upon graduation. Wah... becoming, what to say, no longer young. Hence, here I am seeking advice on whether it is good or not to take the PhD offer, and whether a CS PhD is good for my future career growth as a software engineer. I do not intend to go into academia.

    Read the article

  • Unable to HTTP PUT with libcurl to django-piston

    - by Jesse Beder
    I'm trying to PUT data using libcurl to mimic the command curl -u test:test -X PUT --data-binary @data.yaml "http://127.0.0.1:8000/foo/" which works correctly. My options look like: curl_easy_setopt(handle, CURLOPT_USERPWD, "test:test"); curl_easy_setopt(handle, CURLOPT_URL, "http://127.0.0.1:8000/foo/"); curl_easy_setopt(handle, CURLOPT_VERBOSE, 1); curl_easy_setopt(handle, CURLOPT_UPLOAD, 1); curl_easy_setopt(handle, CURLOPT_READFUNCTION, read_data); curl_easy_setopt(handle, CURLOPT_READDATA, &yaml); curl_easy_setopt(handle, CURLOPT_INFILESIZE, yaml.size()); curl_easy_perform(handle); I believe the read_data function works correctly, but if you ask, I'll post that code. I'm using Django with django-piston, and my update function is never called! (It is called when I use the command line version above.) libcurl's output is: * About to connect() to 127.0.0.1 port 8000 (#0) * Trying 127.0.0.1... * connected * Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0) * Server auth using Basic with user 'test' > PUT /foo/ HTTP/1.1 Authorization: Basic dGVzdDp0ZXN0 Host: 127.0.0.1:8000 Accept: */* Content-Length: 244 Expect: 100-continue * Done waiting for 100-continue ** this is where my read_data handler confirms: read 244 bytes ** * HTTP 1.0, assume close after body < HTTP/1.0 400 BAD REQUEST < Date: Thu, 13 May 2010 08:22:52 GMT < Server: WSGIServer/0.1 Python/2.5.1 < Vary: Authorization < Content-Type: text/plain < Bad Request* Closing connection #0
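    One detail that stands out in the verbose log (a guess, not a confirmed diagnosis): libcurl negotiates an Expect: 100-continue handshake before sending the body, and some simple WSGI development servers handle that poorly, which can end in a 400 even though the same body works from the command line. A hedged sketch of suppressing the header, added alongside the options shown above:

      #include <curl/curl.h>

      /* Disable the "Expect: 100-continue" handshake before curl_easy_perform().
         curl_slist_append / CURLOPT_HTTPHEADER are standard libcurl calls;
         'handle' is the same CURL* configured in the snippet above. */
      struct curl_slist* headers = NULL;
      headers = curl_slist_append(headers, "Expect:");   /* empty value removes the header */
      curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);

      /* ... curl_easy_perform(handle); ... */

      curl_slist_free_all(headers);   /* free after the transfer completes */

    If the 400 persists with the header removed, the problem is more likely on the server side than in the libcurl options.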

    Read the article

  • How can I properly handle 404s in ASP.NET MVC?

    - by Brian
    I am just getting started on ASP.NET MVC, so bear with me. I've searched around this site and various others and have seen a few implementations of this. EDIT: I forgot to mention I am using RC2. Using URL routing: routes.MapRoute( "Error", "{*url}", new { controller = "Errors", action = "NotFound" } //404s ); The above seems to take care of requests like this (assuming default route tables set up by the initial MVC project): "/blah/blah/blah/blah" Overriding HandleUnknownAction() in the controller itself: //404s - handle here (bad action requested) protected override void HandleUnknownAction(string actionName) { ViewData["actionName"] = actionName; View("NotFound").ExecuteResult(this.ControllerContext); } However, the previous strategies do not handle a request to a bad/unknown controller. For example, I do not have an "/IDoNotExist"; if I request this, I get the generic 404 page from the web server and not my 404 if I use routing + the override. So finally, my question is: Is there any way to catch this type of request using a route or something else in the MVC framework itself? OR should I just default to using Web.Config customErrors as my 404 handler and forget all this? I assume if I go with customErrors I'll have to store the generic 404 page outside of /Views due to the Web.Config restrictions on direct access. Anyway, any best practices or guidance is appreciated.

    Read the article

  • What is the preferred way in C++ for converting a builtin type (int) to bool?

    - by Martin
    When programming with Visual C++, I think every developer is used to seeing the warning warning C4800: 'BOOL' : forcing value to bool 'true' or 'false' from time to time. The reason obviously is that BOOL is defined as int, and directly assigning any of the built-in numerical types to bool is considered a bad idea. So my question is: given any built-in numerical type (int, short, ...) that is to be interpreted as a boolean value, what is the/your preferred way of actually storing that value in a variable of type bool? Note: While mixing BOOL and bool is probably a bad idea, I think the problem will inevitably pop up, whether on Windows or somewhere else, so I think this question is neither Visual C++ nor Windows specific. Given int nBoolean; I prefer this style: bool b = nBoolean?true:false; The following might be alternatives: bool b = !!nBoolean; bool b = (nBoolean != 0); Is there a generally preferred way? Rationale? I should add: Since I only work with Visual C++, I cannot really say if this is a VC++-specific question or if the same problem pops up with other compilers. So it would be interesting to specifically hear from g++ (or other compiler) users how they handle the int-to-bool case. Regarding Standard C++: As David Thornley notes in a comment, the C++ Standard does not require this behavior. In fact it seems to explicitly allow this, so one might consider this a VC++ weirdness. To quote the N3029 draft (which is what I have around atm.): 4.12 Boolean conversions [conv.bool] A prvalue of arithmetic, unscoped enumeration, pointer, or pointer to member type can be converted to a prvalue of type bool. A zero value, null pointer value, or null member pointer value is converted to false; any other value is converted to true. (...)
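    For what it's worth, a small compilable side-by-side of the idioms discussed (the warning is VC++-specific, but the conversions themselves are standard C++ per the [conv.bool] wording quoted above):

      #include <iostream>

      typedef int BOOL;                         // mimics the Windows typedef mentioned above

      int main() {
          BOOL nBoolean = 42;

          bool a = nBoolean ? true : false;     // the style preferred in the question
          bool b = !!nBoolean;                  // double negation
          bool c = (nBoolean != 0);             // explicit comparison, arguably most self-documenting
          bool d = static_cast<bool>(nBoolean); // the plain standard conversion ([conv.bool])

          std::cout << a << b << c << d << '\n';   // prints 1111
          return 0;
      }

    All four store the same value; which of them keep C4800 quiet on a particular VC++ version is something to verify locally, since the standard itself (as quoted) is happy with any of them.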

    Read the article

  • How to get pixel information inside a fragment shader?

    - by user697111
    In my fragment shader I can load a texture, then do this: uniform sampler2D tex; void main(void) { vec4 color = texture2D(tex, gl_TexCoord[0].st); gl_FragColor = color; } That sets the current pixel to the color value of the texture. I can modify these, etc., and it works well. But I have a few questions. How do I tell "which" pixel I am? For example, say I want to set pixel 100,100 (x,y) to red and everything else to black. How do I do a "if currentSelf.Position() == (100,100) then color=red; else color=black"? I know how to set colors, but how do I get "my" location? Secondly, how do I get values from a neighbor pixel? I tried this: vec4 nextColor = texture2D(tex, gl_TexCoord[1].st); But it's not clear what that is returning. If I'm pixel 100,100, how do I get the values from 101,100 or 100,101?

    Read the article

  • Creating a Function in SQL Server that takes a Phone Number as a parameter and returns a Random Number

    - by Emer
    Hi Guys, I am hoping someone can help me here as Google is not being as forthcoming as I would have liked. I am relatively new to SQL Server and so this is the first function I have set myself to do. The outline of the function is that it has a phone number varchar(15) as a parameter; it checks that this number is a proper number, i.e. it is 8 digits long and contains only numbers. The main character I am trying to avoid is '+'. Good Number = 12345678 Bad Number = +12345678. Once the number is checked I would like to produce a random number for each phone number that is passed in. I have looked at substrings, the like operator, Rand(), left(), Right() in order to search through the number and then produce a random number. I understand that Rand() will produce the same random number unless alterations are done to it, but right now it is about actually getting some working code. Any hints on this would be great, or even a pointer towards some more documentation. I have read books online and they haven't helped me; maybe I am not looking in the right places. Here is a snippet of code I was working on for the Rand part: declare @Phone Varchar (15) declare @Counter Varchar (1) declare @NewNumber Varchar(15) set @Phone = '12345678' set @Counter = len(@Phone) while @Counter > 0 begin select case when @Phone like '%[0-9]%' then cast(rand()*100000000 as int) else 'Bad Number' end set @counter = @counter - 1 end return Thanks for the help in advance Emer

    Read the article

  • What is wrong with accessing DBI directly?

    - by canavanin
    Hi everyone! I'm currently reading Effective Perl Programming (2nd edition). I have come across a piece of code which was described as being poorly written, but I don't yet understand what's so bad about it, or how it should be improved. It would be great if someone could explain the matter to me. Here's the code in question: sub sum_values_per_key { my ( $class, $dsn, $user, $password, $parameters ) = @_; my %results; my $dbh = DBI->connect( $dsn, $user, $password, $parameters ); my $sth = $dbh->prepare( 'select key, calculate(value) from my_table'); $sth->execute(); # ... fill %results ... $sth->finish(); $dbh->disconnect(); return \%results; } The example comes from the chapter on testing your code (p. 324/325). The sentence that has left me wondering about how to improve the code is the following: Since the code was poorly written and accesses DBI directly, you'll have to create a fake DBI object to stand in for the real thing. I have probably not understood a lot of what the book has so far been trying to teach me, or I have skipped the section relevant for understanding what's bad practice about the above code... Well, thanks in advance for your help!

    Read the article

  • Learning Basic Mathematics

    - by NeedsToKnow
    I'm going to just come out and say it. I'm 20 and can't do maths. Two years ago I passed the end-of-high-school mathematics exam (but not at school), and did pretty well. Since then, I haven't done a scrap of mathematics. I wondered just how bad I had gotten, so I was looking at some simple algebra problems. You know, the kind you learn halfway through high school. 5(-3x - 2) - (x - 3) = -4(4x + 5) + 13 Couldn't do them. I've got half a year left until I start a Computer Science undergraduate degree. I love designing and creating programs, and I remember I loved mathematics back when I did it. Basically, I've had a pretty bad education, but I want to be knowledgeable in these areas. I was thinking of buying some high school textbooks and reading them, but I'm not sure this is the right way to go. I need to start off at some basic level and work towards a greater understanding. My question is: What should I study, how should I study, and what books can you recommend? Thanks!
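    Incidentally, working the quoted problem through shows why it may have felt like a trick question: both sides simplify to the same expression, so it is an identity rather than an equation with a single solution.

      5(-3x - 2) - (x - 3) = -15x - 10 - x + 3 = -16x - 7
      -4(4x + 5) + 13      = -16x - 20 + 13    = -16x - 7

    Every value of x satisfies it.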

    Read the article

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode): for each message in message_sequence <- SEQUENTIAL for each handler in (handler_table for message.type) apply handler to message <- SEQUENTIAL The first approach which I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently. Pros: predictable ordering of messages (i.e., we are guaranteed a FIFO processing order); (potentially) lower latency of processing each message. Cons: more processing resources available than handlers for a single message type (bad parallelization); bad use of processor cache since message objects need to be copied for each handler to use; large overhead for small handlers. The pseudocode of this approach would be as follows: for each message in message_sequence <- SEQUENTIAL parallel_for each handler in (handler_table for message.type) apply handler to message <- PARALLEL The second approach is to process the messages in parallel and apply the handlers to each message sequentially. Pros: better use of processor cache (keeps the message object local to all handlers which will use it); small handlers don't impose as much overhead (as long as there are other handlers also to be run); more messages are expected than there are handlers, so the potential for parallelism is greater. Cons: unpredictable ordering - if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic). The pseudocode is as follows: parallel_for each message in message_sequence <- PARALLEL for each handler in (handler_table for message.type) apply handler to message <- SEQUENTIAL The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose, and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
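    As a concrete sketch of the second approach (messages in parallel, handlers for each message in sequence), assuming TBB's parallel_for_each and a handler table that is read-only while processing runs; all type and function names here are made up for illustration:

      #include <tbb/parallel_for_each.h>
      #include <functional>
      #include <map>
      #include <vector>

      struct Message { int type; /* payload... */ };
      using Handler = std::function<void(Message&)>;

      // Built once before processing and treated as read-only here,
      // so concurrent lookups from worker threads are safe.
      using HandlerTable = std::map<int, std::vector<Handler>>;

      void process(std::vector<Message>& message_sequence, const HandlerTable& handler_table) {
          tbb::parallel_for_each(message_sequence.begin(), message_sequence.end(),
              [&](Message& message) {                        // PARALLEL over messages
                  auto it = handler_table.find(message.type);
                  if (it == handler_table.end()) return;
                  for (const Handler& handler : it->second)  // SEQUENTIAL over handlers
                      handler(message);
              });
      }

    The FIFO guarantee of the first approach is indeed lost. If only messages of the same type (or destined for the same target) need to stay ordered, one middle ground is to group messages by that key and run the groups in parallel while keeping each group internally sequential.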

    Read the article

  • How does mercurial's bisect work when the range includes branching?

    - by Joshua Goldberg
    If the bisect range includes multiple branches, how does hg bisect's search work? Does it effectively bisect each sub-branch (I would think that would be inefficient)? For instance, borrowing, with gratitude, a diagram from an answer to this related question, what if the bisect got to changeset 7 on the "good" right-side branch first?

    @ 12:8ae1fff407c8:bad6
    |
    o 11:27edd4ba0a78:bad5
    |
    o 10:312ba3d6eb29:bad4
    |\
    | o 9:68ae20ea0c02:good33
    | |
    | o 8:916e977fa594:good32
    | |
    | o 7:b9d00094223f:good31
    | |
    o | 6:a7cab1800465:bad3
    | |
    o | 5:a84e45045a29:bad2
    | |
    o | 4:d0a381a67072:bad1
    | |
    o | 3:54349a6276cc:good4
    |/
    o 2:4588e394e325:good3
    |
    o 1:de79725cb39a:good2
    |
    o 0:2641cc78ce7a:good1

    Will it then look only between 7 and 12, missing the real first-bad that we care about (thus using "dumb" numerical order)? Or is it smart enough to use the full topology and to know that the first bad could be below 7 on the right-side branch, or could still be anywhere on the left-side branch? The purpose of my question is both (a) just to understand the algorithm better, and (b) to understand whether I can liberally extend my initial bisect range without thinking hard about which branch I go to. I've been in high-branching bisect situations where it kept asking me after every test to extend beyond the next merge, so that the whole procedure was essentially O(n). I'm wondering if I can just throw the first "good" marker way back past some nest of merges without thinking about it much, and whether that would save time and give correct results.

    Read the article

  • When to drop an IT job

    - by Nippysaurus
    In my career I have had two programming jobs. Both these jobs were in a field that I am most familiar with (C# / MSSQL), but I have quit both jobs for the same reason: unmanageable code and bad (loose) company structure. There was something in common with both these jobs: small companies (in one I was the only developer). Currently I am in the following position: being given written instructions which are almost impossible to follow (somewhat of a fool's errand); we are given short time constraints, but seldom asked how long work will take, and when we are asked, the estimate is always too long and needs to be shorter (and when it ends up taking longer than they need it to take, it's always our fault); there is no time for proper documenting, but we get blamed for not documenting (see previous point); management is constantly screwing me around, saying I'm underperforming on a given task (which is not true) and switching me to a task which is much more confusing. So I must ask my fellow developers: how bad does a job need to be before you would consider jumping ship? And what should you look out for when considering taking a job? In future I will be asking about documented procedures, release control, bug management and adoption of new technologies. EDIT: Let me add some more fuel to the fire ... I have been in my current job for just over a year, and the work I am doing almost never uses any of the knowledge I have gained from the other work I have been doing here. Everything is a giant learning curve. Because of this, about 30% of my time is spent learning what is going on with this new product (whose owner / original developer has left the company), 30% trying to find the relevant documentation that helps the whole thing make sense, 30% actually finding where to make the change, and 10% actually making the change.

    Read the article

< Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >