Search Results

Search found 6580 results on 264 pages for 'require'.


  • MVC3 custom validation attribute for an "at least one is required" situation

    - by Pricey
    Hi I have found this answer already: MVC3 Validation - Require One From Group Which is fairly specific to the checking of group names and uses reflection. My example is probably a bit simpler and I was just wondering if there was a simpler way to do it. I have the below: public class TimeInMinutesViewModel { private const short MINUTES_OR_SECONDS_MULTIPLIER = 60; //public string Label { get; set; } [Range(0,24, ErrorMessage = "Hours should be from 0 to 24")] public short Hours { get; set; } [Range(0,59, ErrorMessage = "Minutes should be from 0 to 59")] public short Minutes { get; set; } /// <summary> /// /// </summary> /// <returns></returns> public short TimeInMinutes() { // total minutes should not be negative if (Hours <= 0 && Minutes <= 0) { return 0; } // multiplier operater treats the right hand side as an int not a short int // so I am casting the result to a short even though both properties are already short int return (short)((Hours * MINUTES_OR_SECONDS_MULTIPLIER) + (Minutes * MINUTES_OR_SECONDS_MULTIPLIER)); } } I want to add a validation attribute either to the Hours & Minutes properties or the class itself.. and the idea is to just make sure at least 1 of these properties (Hours OR minutes) has a value, server and client side validation using a custom validation attribute. Does anyone have an example of this please? Thanks
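
    A minimal server-side sketch of one way to do this, assuming a class-level ValidationAttribute on the view model (the attribute name and error message are made up; client-side validation would additionally need IClientValidatable and a matching unobtrusive jQuery adapter):

        using System;
        using System.ComponentModel.DataAnnotations;

        // Hypothetical class-level attribute: the model is valid when at least
        // one of Hours or Minutes is greater than zero.
        [AttributeUsage(AttributeTargets.Class)]
        public class RequireHoursOrMinutesAttribute : ValidationAttribute
        {
            public override bool IsValid(object value)
            {
                var model = value as TimeInMinutesViewModel;
                if (model == null) return true;   // not our model; leave it to other validators
                return model.Hours > 0 || model.Minutes > 0;
            }
        }

        // Usage (decorate the view model):
        // [RequireHoursOrMinutes(ErrorMessage = "Enter at least hours or minutes")]
        // public class TimeInMinutesViewModel { ... }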


  • How should I organize complex SQL views in Rails?

    - by Benjamin Oakes
    I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I'm currently presenting it as CSV, as it's very straightforward to do and compatible with the tools people want to use. I've written many views (the SQL kind, not the Rails HTML/ERB kind) to make the output they expect a reality. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because there are many calculations and comparisons that are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql. To get the requested data, I do a select * from my_view;. The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so there's a lot of changes being made to the views all the time -- and a ton of them are being created. Many of them need to be repeatable. I've recently run into issues organizing and testing these views. Rails works great for user interface stuff and business logic, but I'm not aware of much existing structure for handling the reporting we require. Some options I've thought of: Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model. Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.) How would you deal with a complex reporting problem like this?
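
    One option, sketched here on the assumption that each report already exists as a SQL view in the database: give every view a thin, read-only ActiveRecord model so the reports get a testable home under app/models. The class, view and file names below are invented:

        # app/models/study_report_row.rb -- a sketch; "study_report_rows" is an assumed SQL view
        require 'csv'   # FasterCSV on Ruby 1.8 / Rails 2.x

        class StudyReportRow < ActiveRecord::Base
          self.table_name = "study_report_rows"   # older Rails: set_table_name "study_report_rows"

          def readonly?
            true                                  # rows come from a view; never write back
          end

          # Dump the whole view as CSV for the statistics packages.
          def self.to_csv
            CSV.generate do |csv|
              csv << column_names
              find_each { |row| csv << row.attributes.values_at(*column_names) }
            end
          end
        end

    This keeps the SQL itself in the database (or in per-view migration files), while the models give you a natural place to hang unit tests for each report.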


  • How to detect changing directory size in Perl

    - by materiamage
    Hello, I am trying to find a way of monitoring directories in Perl, in particular the size of a directory, and upon detecting a change in directory size, performing a particular action. The issue I have is with large files that require a noticeable amount of time to copy into this directory, i.e. 100MB. What happens (in Windows, not Unix) is that the system reserves enough disk space for the entire file even though the copy is still in progress. This causes problems for me, because my script will try to perform an action on a file that has not finished copying over. I can easily detect directory size changes in Unix via 'du', but 'du' in Windows does not behave the same way. Are there any accurate methods of detecting directory size changes in Perl? Edit: some points to clarify:
    - My Perl script is only monitoring a particular directory and, upon detecting a new file or a new directory, performs an action on it. It is not copying any files; users on the network will be copying files into the directory I am monitoring.
    - The problem occurs when a new file or directory appears (copied, not moved) that is significantly large (100MB, but usually a couple of GB) and my program fires before this copy completes.
    - In Unix I can easily 'du' to see that the file/directory in question is growing in size, and take the appropriate action.
    - In Windows the size is static, so I cannot detect this change.
    - opendir/readdir/closedir is not feasible, as some of the directories that appear may contain thousands of files, and I want to avoid that overhead.
    Ideally I would like my program to be triggered on change, but I am not sure how to do this. As of right now it busy-waits until it detects a change. The change in file/directory size is not in my control.
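
    Because the reserved size on Windows never changes during the copy, polling the size is a dead end; a common workaround is to test whether the file can be opened for writing, since the copying process usually keeps it locked until it finishes. A rough sketch follows; the locking heuristic is an assumption about how your network clients copy files, so test it against the real ones:

        use strict;
        use warnings;

        # True when $path can be opened read/write, i.e. the copier has
        # (probably) released it. While a copy is still in progress on
        # Windows the file is normally locked and this open fails.
        sub copy_probably_finished {
            my ($path) = @_;
            open(my $fh, '+<', $path) or return 0;
            close $fh;
            return 1;
        }

        # Poll until the new file is released, then act on it.
        sub wait_and_process {
            my ($path, $poll_seconds) = @_;
            sleep $poll_seconds until copy_probably_finished($path);
            # ... perform the real action on $path here ...
        }

    For being triggered on change rather than busy-waiting, a Windows-specific module such as Win32::ChangeNotify (if it is available in your environment) can block until the directory contents change.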


  • PHP Facebook Cronjob with offline access

    - by Mohamed Salem
    1:the code to greet the user, ask for his permission and store his session data so that we can use a cronjob with his session data afterwards. <?php $db_server = "localhost"; $db_username = "username"; $db_password = "password"; $db_name = "databasename"; #go to line 85, the script actually starts there mysql_connect($db_server,$db_username,$db_password); mysql_select_db($db_name); #you have to create a database to store session values. #if you do not know what columns there should be look at line 76 to see column names. #make them all varchars # Now lets load the FB GRAPH API require './facebook.php'; // Create our Application instance. global $facebook; $facebook = new Facebook(array( 'appId' => '121036530138', 'secret' => '9bbec378147064', 'cookie' => false,)); # Lets set up the permissions we need and set the login url in case we need it. $par['req_perms'] = "friends_about_me,friends_education_history,friends_likes, friends_interests,friends_location,friends_religion_politics, friends_work_history,publish_stream,friends_activities, friends_events, friends_hometown,friends_location ,user_interests,user_likes,user_events, user_about_me,user_status,user_work_history,read_requests, read_stream,offline_access,user_religion_politics,email,user_groups"; $loginUrl = $facebook->getLoginUrl($par); function save_session($session){ global $facebook; # OK lets go to the database and see if we have a session stored $sid=mysql_query("Select access_token from facebook_user WHERE uid =".$session['uid']); $session_id=mysql_fetch_row($sid); if (is_array($session_id)) { # We have a stored session, but is it valid? echo " We have a session, but is it valid?"; try { $attachment = array('access_token' => $session_id[0]); $ret_code=$facebook->api('/me', 'GET', $attachment); } catch (Exception $e) { # We don't have a good session so echo " our old session is not valid, let's delete saved invalid session data "; $res = mysql_query("delete from facebook_user WHERE uid =".$session['uid']); #save new good session #to see what is our session data: print_r($session); if (is_array($session)) { $sql="insert into facebook_user (session_key,uid,expires,secret,access_token,sig) VALUES ('".$session['session_key']."','".$session['uid']."','". $session['expires']."','". $session['secret'] ."','" . $session['access_token']."','". $session['sig']."');"; $res = mysql_query($sql); return $session['access_token']; } # this should never ever happen echo " Something is terribly wrong: Our old session was bad, and now we cannot get the new session"; return; } echo " Our old stored session is valid "; return $session_id[0]; } else { echo " no stored session, this means the user never subscribed to our application before. "; # let's store the session $session = $facebook->getSession(); if (is_array($session)) { # Yes we have a session! so lets store it! $sql="insert into facebook_user (session_key,uid,expires,secret,access_token,sig) VALUES ('".$session['session_key']."','".$session['uid']."','". $session['expires']."','". $session['secret'] ."','". $session['access_token']."','". $session['sig']."');"; $res = mysql_query($sql); return $session['access_token']; } } } #this is the first meaningful line of this script. $session = $facebook->getSession(); # Is the user already subscribed to our application? 
if ( is_null($session) ) { # no he is not #send him to permissions page header( "Location: $loginUrl" ); } else { #yes, he is already subscribed, or subscribed just now #in case he just subscribed now, save his session information $access_token=save_session($session); echo " everything is ok"; # write your code here to do something afterwards } ?> error Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/content/28/9687528/html/ss/src/indexx.php:1) in /home/content/28/9687528/html/ss/src/facebook.php on line 49 Fatal error: Call to undefined method Facebook::getSession() in /home/content/28/9687528/html/ss/src/indexx.php on line 86 2:A cronjob template that reads the stored session of a user from database, uses his session data to work on his behalf, like reading status posts or publishing posts etc. <?php $db_server = "localhost"; $db_username = "username"; $db_password = "pass"; $db_name = "database"; # Lets connect to the Database and set up the table $link = mysql_connect($db_server,$db_username,$db_password); mysql_select_db($db_name); # Now lets load the FB GRAPH API require './facebook.php'; // Create our Application instance. global $facebook; $facebook = new Facebook(array( 'appId' => 'appid', 'secret' => 'secret', 'cookie' => false, )); function get_check_session($uidCheck){ global $facebook; # This function basically checks for a stored session and if we have one it returns it # OK lets go to the database and see if we have a session stored $sid=mysql_query("Select access_token from facebook_user WHERE uid =".$uidCheck); $session_id=mysql_fetch_row($sid); if (is_array($session_id)) { # We have a session # but, is it valid? try { $attachment = array('access_token' => $session_id[0],); $ret_code=$facebook->api('/me', 'GET', $attachment); } catch (Exception $e) { # We don't have a good session so echo " User ".$uidCheck." removed the application, or there is some other access problem. "; # let's delete stored data $res = mysql_query("delete from facebook_user where WHERE uid =".$uidCheck); return; } return $session_id[0]; } else { # "no stored session"; echo " error:newsFeedcrontab.php No stored sessions. This should not have happened "; } } # get all users that have given us offline access $users = getUsers(); foreach($users as $user){ # now for each user, check if they are still subscribed to our application echo " Checking user".$user; $access_token=get_check_session($user); # If we've not got an access_token we actually need to login. # but in the crontab, we just log the error, there is no way we can find the user to give us permission here. if ( is_null($access_token) ) { echo " error: newsFeedcrontab.php There is no access token for the user ".$user." "; } else { #we are going to read the newsfeed of user. There are user's friends' posts in this newsfeed try{ $attachment = array('access_token' => $access_token); $result=$facebook->api('/me/home', 'GET', $attachment); }catch(Exception $e){ echo " error: newsfeedcrontab.php, cannot get feed of ".$user.$e; } #do something with the result here #but what does the result look like? #go to http://developers.facebook.com/docs/reference/api/user/ and click on the "home" link under connections #we can also read the home of user. Home is the wall of the user who has given us offline access. 
try{ $attachment = array('access_token' => $access_token); $result=$facebook->api('/me/feed', 'GET', $attachment); }catch(Exception $e){ echo " error: newsfeedcrontab.php, cannot get wall of ".$user.$e; } #do something with the result here # #but what does the result look like? #go to http://developers.facebook.com/docs/reference/api/user/ and click on the "feed" link under connections } } function getUsers(){ $sql = "SELECT distinct(uid) from facebook_user Where 1"; $result = mysql_query($sql); while($row = mysql_fetch_array($result)){ $rows [] = $row['uid']; } print_r($rows); return $rows; } mysql_close($link); ?> error Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/content/28/9687528/html/ss/src/cron.php:1) in /home/content/28/9687528/html/ss/src/facebook.php on line 49 Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource in /home/content/28/9687528/html/ss/src/cron.php on line 110 Warning: Invalid argument supplied for foreach() in /home/content/28/9687528/html/ss/src/cron.php on line 64


  • Retrieve values from multidimensional array

    - by vincentlerry
    I have a great difficulty. I need to retrieve the [title], [url] and [abstract] values from this multidimensional array. Also, I have to store those values in a MySQL database. Thanks in advance!!! Array ( [bossresponse] => Array ( [responsecode] => 200 [limitedweb] => Array ( [start] => 0 [count] => 20 [totalresults] => 972000 [results] => Array ( [0] => Array ( [date] => [clickurl] => http://www.torchlake.com/ [url] => http://www.torchlake.com/ [dispurl] => www.torchlake.com [title] => Torch Lake, COLI Inc, Highspeed, Dial-up, Wireless ... [abstract] => Welcome to COLI Inc. Chain O' Lake Internet. Local Northern Michigan ISP, offering Dialup Internet access, Wireless access, Web design, and T1 services in Northern ... ) [1] => Array ( [date] => [clickurl] => http://en.wikipedia.org/wiki/Torch_Lake_(Antrim_County,_Michigan) [url] => http://en.wikipedia.org/wiki/Torch_Lake_(Antrim_County,_Michigan) [dispurl] => en.wikipedia.org/wiki/Torch_Lake_(Antrim_County,_Michigan) [title] => Torch Lake (Antrim County, Michigan) - Wikipedia, the free ... [abstract] => Torch Lake at 19 miles (31 km) long is Michigan's longest lake and at approximately 18,770 acres (76 km²) is Michigan's second largest lake. Within it are several ... ) This is the entire code that generates this array: require("OAuth.php"); $cc_key = ""; $cc_secret = ""; $url = "http://yboss.yahooapis.com/ysearch/limitedweb"; $args = array(); $args["q"] = "car"; $args["format"] = "json"; $args["count"] = 20; $consumer = new OAuthConsumer($cc_key, $cc_secret); $request = OAuthRequest::from_consumer_and_token($consumer, NULL, "GET", $url, $args); $request->sign_request(new OAuthSignatureMethod_HMAC_SHA1(), $consumer, NULL); $url = sprintf("%s?%s", $url, OAuthUtil::build_http_query($args)); $ch = curl_init(); $headers = array($request->to_header()); curl_setopt($ch, CURLOPT_HTTPHEADER, $headers); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); $rsp = curl_exec($ch); $results = json_decode($rsp, true);
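
    Assuming the decoded structure shown above, the wanted values live under $results['bossresponse']['limitedweb']['results']; here is a sketch of pulling them out and inserting them with PDO (the table and column names are made up):

        <?php
        // a sketch: walk the decoded BOSS response and store title/url/abstract
        $pdo  = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass');
        $stmt = $pdo->prepare(
            'INSERT INTO search_results (title, url, abstract) VALUES (:title, :url, :abstract)'
        );

        $rows = isset($results['bossresponse']['limitedweb']['results'])
              ? $results['bossresponse']['limitedweb']['results']
              : array();

        foreach ($rows as $row) {
            $stmt->execute(array(
                ':title'    => $row['title'],
                ':url'      => $row['url'],
                ':abstract' => $row['abstract'],
            ));
        }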


  • Write a C++ program to encrypt and decrypt certain codes.

    - by Amber
    Step 1: Write a function int GetText(char[],int); which fills a character array from a requested file. That is, the function should prompt the user to input the filename, and then read up to the number of characters given as the second argument, terminating when the number has been reached or when the end of file is encountered. The file should then be closed. The number of characters placed in the array is then returned as the value of the function. Every character in the file should be transferred to the array. Whitespace should not be removed. When testing, assume that no more than 5000 characters will be read. The function should be placed in a file called coding.cpp while the main will be in ass5.cpp. To enable the prototypes to be accessible, the file coding.h contains the prototypes for all the functions that are to be written in coding.cpp for this assignment. (You may write other functions. If they are called from any of the functions in coding.h, they must appear in coding.cpp where their prototypes should also appear. Do not alter coding.h. Any other functions written for this assignment should be placed, along with their prototypes, with the main function.) Step 2: Write a function int SimplifyText(char[],int); which simplifies the text in the first argument, an array containing the number of characters as given in the second argument, by converting all alphabetic characters to lower case, removing all non-alpha characters, and replacing multiple whitespace by one blank. Any leading whitespace at the beginning of the array should be removed completely. The resulting number of characters should be returned as the value of the function. Note that another array cannot appear in the function (as the file does not contain one). For example, if the array contained the 29 characters "The 39 Steps" by John Buchan (with the " appearing in the array), the simplified text would be the steps by john buchan of length 24. The array should not contain a null character at the end. Step 3: Using the file test.txt, test your program so far. You will need to write a function void PrintText(const char[],int,int); that prints out the contents of the array, whose length is the second argument, breaking the lines to exactly the number of characters in the third argument. Be warned that, if the array contains newlines (as it would when read from a file), lines will be broken earlier than the specified length. Step 4: Write a function void Caesar(const char[],int,char[],int); which takes the first argument array, with length given by the second argument and codes it into the third argument array, using the shift given in the fourth argument. The shift must be performed cyclicly and must also be able to handle negative shifts. Shifts exceeding 26 can be reduced by modulo arithmetic. (Is C++'s modulo operations on negative numbers a problem here?) Demonstrate that the test file, as simplified, can be coded and decoded using a given shift by listing the original input text, the simplified text (indicating the new length), the coded text and finally the decoded text. Step 5: The permutation cypher does not limit the character substitution to just a shift. In fact, each of the 26 characters is coded to one of the others in an arbitrary way. So, for example, a might become f, b become q, c become d, but a letter never remains the same. How the letters are rearranged can be specified using a seed to the random number generator. 
The code can then be decoded, if the decoder has the same random number generator and knows the seed. Write the function void Permute(const char[],int,char[],unsigned long); with the same first three arguments as Caesar above, with the fourth argument being the seed. The function will have to make up a permutation table as follows: To find what a is coded as, generate a random number from 1 to 25. Add that to a to get the coded letter. Mark that letter as used. For b, generate 1 to 24, then step that many letters after b, ignoring the used letter if encountered. For c, generate 1 to 23, ignoring a or b's codes if encountered. Wrap around at z. Here's an example, for only the 6 letters a, b, c, d, e, f. For the letter a, generate, from 1-5, a 2. Then a -> c. c is marked as used. For the letter b, generate, from 1-4, a 3. So count 3 from b, skipping c (since it is marked as used), yielding the coding of b -> f. Mark f as used. For c, generate, from 1-3, a 3. So count 3 from c, skipping f, giving a. Note the wrap at the last letter back to the first. And so on, yielding a -> c, b -> f, c -> a, d -> b (it got a 2), e -> d, f -> e. Thus, for a given seed, a translation table is required. To decode a piece of text, we need the table generated to be re-arranged so that the right hand column is in order. In fact you can just store the table in the reverse way (e.g., if a gets encoded to c, put a opposite c in the table). Write a function called void DePermute(const char[],int,char[], unsigned long); to reverse the permutation cypher. Again, test your functions using the test file. At this point, any main program used to test these functions will not be required as part of the assignment. The remainder of the assignment uses some of these functions, and needs its own main function. When submitted, all the above functions will be tested by the marker's own main function. Step 6: If the seed number is unknown, decoding is difficult. Write a main program which: (i) reads in a piece of text using GetText; (ii) simplifies the text using SimplifyText; (iii) prints the text using PrintText; (iv) requests two letters to swap. If we think 'a' in the text should be 'q' we would type aq as input. The text would be modified by swapping the a's and q's, and the text reprinted. Repeat this last step until the user considers the text is decoded, when the input of the same letter twice (requesting a letter to be swapped with itself) terminates the program. Step 7: If we have a large enough sample of coded text, we can use knowledge of English to aid in finding the permutation. The first clue is in the frequency of occurrence of each letter. Write a function void LetterFreq(const char[],int,freq[]); which takes the piece of text given as the first two arguments (same as above) and returns in the 26 long array of structs (the third argument) the table of the frequency of the 26 letters. This frequency table should be in decreasing order of popularity. A simple Selection Sort will suffice. (This will be described in lectures.) When printed, this summary would look something like v x r s z j p t n c l h u o i b w d g e a q y k f m 168 106 68 66 59 54 48 45 44 35 26 24 22 20 20 20 17 13 12 12 4 4 1 0 0 0 The formatting will require the use of input/output manipulators. See the header file for the definition of the struct called freq. Modify the program so that, before each swap is requested, the current frequency of the letters is printed. This does not require further calls to LetterFreq, however.
You may use the traditional order of regular letter frequencies (E T A I O N S H R D L U) as a guide when deciding what characters to exchange. Step 8: The decoding process can be made more difficult if blank is also coded. That is, consider the alphabet to be 27 letters. Rewrite LetterFreq and your main program to handle blank as another character to code. In the above frequency order, space usually comes first.
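
    As an illustration of Step 4 only (a sketch, not an official solution), one way the cyclic shift can be written so that negative and oversized shifts are handled, assuming the text has already been simplified to lowercase letters and blanks:

        // Caesar cipher over the simplified text: shifts letters cyclically and
        // copies any other character (e.g. blanks) through unchanged.
        void Caesar(const char in[], int len, char out[], int shift)
        {
            shift %= 26;
            if (shift < 0)
                shift += 26;    // C++ % can yield a negative result for negative operands

            for (int i = 0; i < len; ++i)
            {
                char c = in[i];
                if (c >= 'a' && c <= 'z')
                    out[i] = static_cast<char>('a' + (c - 'a' + shift) % 26);
                else
                    out[i] = c;
            }
        }

        // Decoding is simply the opposite shift: Caesar(coded, len, decoded, -shift);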


  • Spawning vim from a node git hook

    - by Lawrence Jones
    I've got a project purely in coffeescript, with git hooks for deployment also written in cs. I don't really want to break away from the language just to use bash for a quick commit message formatter, but I've got a problem spawning vim from the commit-msg hook. I've seen here that when piping to vim, the stdio is not necessarily set correctly to the tty streams. I get how that could cause a problem, but I don't exactly know how to get vim to load correctly using nodes spawn command. At the moment I have... vim = (require 'child_process').spawn('vim', [file], stdio: 'inherit') vim.on 'exit', (err) -> console.log "Exited! [#{err}]" cb?() ...which works fine to spawn a vim process that can r/w from the parents stdio, but when I use this in the hook things go wrong. Vim states that the stdio is not from terminal, and then once opened typing causes escape characters to pop up all over the place. Backspace for example, will produce ^?. Any help would be appreciated!
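
    One approach worth trying, since git hooks are typically run without a terminal on their standard streams: open /dev/tty explicitly and hand that descriptor to the child instead of relying on 'inherit'. A sketch in CoffeeScript, assuming a Unix-like system where /dev/tty exists:

        fs      = require 'fs'
        {spawn} = require 'child_process'

        # Attach vim directly to the controlling terminal rather than to the
        # hook's inherited stdio, which may be a pipe instead of a tty.
        tty = fs.openSync '/dev/tty', 'r+'

        vim = spawn 'vim', [file], stdio: [tty, tty, tty]
        vim.on 'exit', (code) ->
          fs.closeSync tty
          console.log "Exited! [#{code}]"
          cb?()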


  • An online php debugger/code editor

    - by Zirak
    It's a simple deal: I'm sometimes in places where I don't have my laptop, and find myself with spare time and an idea for a project. But unfortunately, I can't do anything about it. I tried a variety of solutions, which include running IDEs (like phpstorm or Aptana) on a disc-on-key or cd (very slow and unappealing), trying several online solutions (like http://phpanywhere.net) and found that all of them are either buggy, overloaded or underloaded with features, just difficult to use, require FTP etc etc. All that is required here is a syntax highlighting and debugging alerts; no actual running of code. So the question is split into two: 1)Do you know of a good online php editor that you've used and enjoyed? 2)If no, then how would you go about making one? The second one seems a bit general, so I'll try and expand...It might be a good idea; if you can't find one, make one. The question is about the concept of making a syntax highlighter (shouldn't be too difficult), and the difficult part of catching php errors WITHOUT executing any php code. Thank you in advance.
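
    For the "catching PHP errors without executing any PHP code" half, the usual trick is the interpreter's lint mode, php -l, which parses a file and reports syntax errors but never runs it. A rough sketch of a check endpoint built on that (the field and path names are assumptions):

        <?php
        // a sketch: syntax-check posted code without running it
        $code = isset($_POST['code']) ? $_POST['code'] : '';

        $tmp = tempnam(sys_get_temp_dir(), 'lint_');
        file_put_contents($tmp, $code);

        // "php -l" only parses the file; nothing in it is executed.
        exec('php -l ' . escapeshellarg($tmp) . ' 2>&1', $output, $status);
        unlink($tmp);

        header('Content-Type: text/plain');
        echo ($status === 0) ? "No syntax errors detected.\n" : implode("\n", $output);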


  • BCrypt says long, similar passwords are equivalent - problem with me, the gem, or the field of crypt

    - by PreciousBodilyFluids
    I've been experimenting with BCrypt, and found the following. If it matters, I'm running ruby 1.9.2dev (2010-04-30 trunk 27557) [i686-linux] require 'bcrypt' # bcrypt-ruby gem, version 2.1.2 @long_string_1 = 'f287ed6548e91475d06688b481ae8612fa060b2d402fdde8f79b7d0181d6a27d8feede46b833ecd9633b10824259ebac13b077efb7c24563fce0000670834215' @long_string_2 = 'f6ebeea9b99bcae4340670360674482773a12fd5ef5e94c7db0a42800813d2587063b70660294736fded10217d80ce7d3b27c568a1237e2ca1fecbf40be5eab8' def salted(string) @long_string_1 + string + @long_string_2 end encrypted_password = BCrypt::Password.create(salted('password'), :cost => 10) puts encrypted_password #=> $2a$10$kNMF/ku6VEAfLFEZKJ.ZC.zcMYUzvOQ6Dzi6ZX1UIVPUh5zr53yEu password = BCrypt::Password.new(encrypted_password) puts password.is_password?(salted('password')) #=> true puts password.is_password?(salted('passward')) #=> true puts password.is_password?(salted('75747373')) #=> true puts password.is_password?(salted('passwor')) #=> false At first I thought that once the passwords got to a certain length, the dissimilarities would just be lost in all the hashing, and only if they were very dissimilar (i.e. a different length) would they be recognized as different. That didn't seem very plausible to me, from what I know of hash functions, but I didn't see a better explanation. Then, I tried shortening each of the long_strings to see where BCrypt would start being able to tell them apart, and I found that if I shortened each of the long strings to 100 characters or so, the final attempt ('passwor') would start returning true as well. So now I don't know what to think. What's the explanation for this?
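
    Two things are worth checking here, offered as hypotheses rather than a diagnosis. First, bcrypt never looks at more than the first 72 bytes of its input, and the two long strings alone are already 256 characters, so the password sits far past anything the algorithm can see. Second, the observed pattern (any two passwords of the same length compare equal, a shorter one does not) is what you would expect if the key length wraps around an 8-bit counter in the underlying C code, since 256 mod 256 is 0. Either way, pre-hashing the salted string keeps the variable part inside bcrypt's usable input, as in this sketch:

        require 'digest'
        require 'bcrypt'

        # Collapse the long salted string to a fixed 64-character digest first, so
        # the whole input (including the password) falls inside the 72 bytes that
        # bcrypt actually uses.
        def salted(string)
          Digest::SHA256.hexdigest(@long_string_1 + string + @long_string_2)
        end

        encrypted_password = BCrypt::Password.create(salted('password'), :cost => 10)
        password = BCrypt::Password.new(encrypted_password)

        puts password.is_password?(salted('password'))  #=> true
        puts password.is_password?(salted('passward'))  #=> should now be false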


  • Optimizing a bin-placement algorithm

    - by user258651
    Alright, I've got two collections, and I need to place elements from collection1 into the bins (elements) of collection2, based on whether their value falls within a given bin's range. For a concrete example, assume I have a sorted collection objects (bins) which have an int range ([1...4], [5..10], etc). I need to determine the range an int falls in, and place it in the appropriate bin. foreach(element n in collection1) { foreach(bin m in collection2) { if (m.inRange(n)) { m.add(n); break; } } } So the obvious NxM complexity algorithm is there, but I really would like to see Nxlog(M). To do this I'd like to use BinarySearch in place of the inner foreach loop. To use BinarySearch, I need to implement an IComparer class to do the searching for me. The problem I'm running into is this approach would require me to make an IComparer.Compare function that compares two different types of objects (an element to its bin), and that doesn't seem possible or correct. So I'm asking, how should I write this algorithm?
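
    IComparer<T> compares two values of the same type, so one way around the mismatch is to skip List<T>.BinarySearch and write the binary search over the bins directly; a sketch, assuming a Bin type that exposes its inclusive Low/High bounds and that collection2 is sorted with non-overlapping ranges:

        // Returns the bin whose [Low, High] range contains value, or null if none does.
        static Bin FindBin(IList<Bin> bins, int value)
        {
            int lo = 0, hi = bins.Count - 1;
            while (lo <= hi)
            {
                int mid = lo + (hi - lo) / 2;
                if (value < bins[mid].Low)       hi = mid - 1;
                else if (value > bins[mid].High) lo = mid + 1;
                else                             return bins[mid];
            }
            return null;
        }

        // foreach (var n in collection1)
        // {
        //     var bin = FindBin(collection2, n.Value);   // n.Value is an assumption
        //     if (bin != null) bin.Add(n);
        // }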


  • Configuring TeamCity + NUnit unit tests so files can be loaded properly

    - by Dave
    In a nutshell, I have a solution that builds fine in the IDE, and the unit tests all run fine with the NUnit GUI (via the NUnitit VS2008 plugin). However, when I execute my TeamCity build runner, all unit tests that require file access (e.g. for running tests against specific XML files), I just get System.IO.DirectoryNotFoundExceptions. The reason for this is clear: it's looking for those supporting XML files loaded by various unit tests in the wrong folder. The way my unit tests are structured looks like this: +-- project folder +-- unit tests folder +-- test.xml +-- test.cs +-- project file.xaml +-- project file.xaml.cs All of my projects own their own UnitTests folder, which contains the .cs file and any XML files, XML Schemas, etc that are necessary to run the tests. So when I write my test.cs, I have it look for "test.xml" in the code because they are in the same folder (actually, I do something like ....\unit tests\test.xml, but that's kind of silly). As I said before, the tests run great in NUnit. But that's because the unit tests are part of the project. When running the unit tests from TeamCity, I am executing them against the assemblies that get copied to the main app's output folder. These unit test XML files should not be copied willy-nilly to the output folder just to make the tests pass. Can anyone suggest a better method of organizing my unit tests in each project (which are dependencies for the main app), such that I can execute the unit tests from NUnit and from the TeamCity build runner? The only other option I can come up with is to just put the testing XML data in code, rather than loading it from a file. I would rather not do this.
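
    One low-friction option is to keep the XML files in each project's unit tests folder marked as content that gets copied to the output directory (Copy to Output Directory = Copy if newer) and then resolve them relative to the test assembly rather than the current working directory, which is the thing that differs under TeamCity. A sketch; the folder name is an assumption:

        using System;
        using System.IO;
        using System.Reflection;

        // Resolve test data relative to wherever the test assembly actually runs from,
        // instead of relying on the runner's working directory.
        static string TestDataPath(string fileName)
        {
            var codeBase    = new Uri(Assembly.GetExecutingAssembly().CodeBase).LocalPath;
            var assemblyDir = Path.GetDirectoryName(codeBase);
            return Path.Combine(assemblyDir, "UnitTests", fileName);
        }

        // var doc = XDocument.Load(TestDataPath("test.xml"));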


  • C++: calling non-member functions with the same syntax of member ones

    - by peoro
    One thing I'd like to do in C++ is to call non-member functions with the same syntax you call member functions: class A { }; void f( A & this ) { /* ... */ } // ... A a; a.f(); // this is the same as f(a); Of course this could only work as long as f is not virtual (since it cannot appear in A's virtual table. f doesn't need to access A's non-public members. f doesn't conflict with a function declared in A (A::f). I'd like such a syntax because in my opinion it would be quite comfortable and would push good habits: calling str.strip() on a std::string (where strip is a function defined by the user) would sound a lot better than calling strip( str );. most of the times (always?) classes provide some member functions which don't require to be member (ie: are not virtual and don't use non-public members). This breaks encapsulation, but is the most practical thing to do (due to point 1). My question here is: what do you think of such feature? Do you think it would be something nice, or something that would introduce more issues than the ones it aims to solve? Could it make sense to propose such a feature to the next standard (the one after C++0x)? Of course this is just a brief description of this idea; it is not complete; we'd probably need to explicitly mark a function with a special keyword to let it work like this and many other stuff.


  • 2D Histogram in R: Converting from Count to Frequency within a Column

    - by Jac
    Would appreciate help with generating a 2D histogram of frequencies, where frequencies are calculated within a column. My main issue: converting from counts to column based frequency. Here's my starting code: # expected packages library(ggplot2) library(plyr) # generate example data corresponding to expected data input x_data = sample(101:200,10000, replace = TRUE) y_data = sample(1:100,10000, replace = TRUE) my_set = data.frame(x_data,y_data) # define x and y interval cut points x_seq = seq(100,200,10) y_seq = seq(0,100,10) # label samples as belonging within x and y intervals my_set$x_interval = cut(my_set$x_data,x_seq) my_set$y_interval = cut(my_set$y_data,y_seq) # determine count for each x,y block xy_df = ddply(my_set, c("x_interval","y_interval"),"nrow") # still need to convert for use with dplyr # convert from count to frequency based on formula: freq = count/sum(count in given x interval) ################ TRYING TO FIGURE OUT ################# # plot results fig_count <- ggplot(xy_df, aes(x = x_interval, y = y_interval)) + geom_tile(aes(fill = nrow)) # count fig_freq <- ggplot(xy_df, aes(x = x_interval, y = y_interval)) + geom_tile(aes(fill = freq)) # frequency I would appreciate any help in how to calculate the frequency within a column. Thanks! jac EDIT: I think the solution will require the following steps 1) Calculate and store overall counts for each x-interval factor 2) Divide the individual bin count by its corresponding x-interval factor count to obtain frequency. Not sure how to carry this out though. .
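
    Since ddply has already produced one count per (x_interval, y_interval) cell, the column-wise frequency can be computed with one more plyr pass that divides each cell's count by the total of its x_interval; a sketch continuing from the code above (the count column is the one ddply named nrow):

        # convert count to within-column (per x_interval) frequency
        xy_df <- ddply(xy_df, "x_interval", transform, freq = nrow / sum(nrow))

        # the frequency plot then works as written
        fig_freq <- ggplot(xy_df, aes(x = x_interval, y = y_interval)) +
          geom_tile(aes(fill = freq))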


  • LINQ-SQL Updating Multiple Rows in a single transaction

    - by RPM1984
    Hi guys, I need help re-factoring this legacy LINQ-SQL code which is generating around 100 update statements. I'll keep playing around with the best solution, but would appreciate some ideas/past experience with this issue. Here's my code: List<Foo> foos; int userId = 123; using (DataClassesDataContext db = new FooDatabase()) { foos = (from f in db.FooBars where f.UserId = userId select f).ToList(); foreach (FooBar fooBar in foos) { fooBar.IsFoo = false; } db.SubmitChanges() } Essentially i want to update the IsFoo field to false for all records that have a particular UserId value. Whats happening is the .ToList() is firing off a query to get all the FooBars for a particular user, then for each Foo object, its executing an UPDATE statement updating the IsFoo property. Can the above code be re-factored to one single UPDATE statement? Ideally, the only SQL i want fired is the below: UPDATE FooBars SET IsFoo = FALSE WHERE UserId = 123 EDIT Ok so looks like it cant be done without using db.ExecuteCommand. Grr...! What i'll probably end up doing is creating another extension method for the DLINQ namespace. Still require some hardcoding (ie writing "WHERE" and "UPDATE"), but at least it hides most of the implementation details away from the actual LINQ query syntax.
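
    For reference, a sketch of the single-statement route via DataContext.ExecuteCommand, which sends exactly one UPDATE and passes {0} as a SQL parameter:

        using (var db = new FooDatabase())
        {
            // One UPDATE, no row materialization, no per-row change tracking.
            db.ExecuteCommand("UPDATE FooBars SET IsFoo = 0 WHERE UserId = {0}", userId);
        }

    The trade-off is that any FooBar entities already loaded into that DataContext will not see the change until they are re-queried.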


  • Dangers when deploying Flash/Flex UI test automation hooks to production?

    - by Merlyn Morgan-Graham
    I am interested in doing automated testing against a Flex based UI. I have found out that my best options for UI automation (due to being C# controllable, good licensing conditions, etc) all seem to require that I compile test hooks into my application. Because of this, I am thinking of recommending that these hooks be compiled into our build. I have found a few places on the net that recommend not deploying bits with this instrumentation enabled, and I'd like to know why. Is it a performance drain, or a security risk? If it is a security risk, can you explain how the attack surface is increased? I am not a Flash or Flex developer, though I have some experience with threat modeling. For reference, here's the tools I'm specifically considering: QTP Selenium-Flex API I am having problems finding all the warnings/suggestions I found last night, but here's an example that I can find: http://www.riatest.com/products/getting-started.html Warning! Automation enabled applications expose all properties of all GUI components. This makes them vulnerable to malicious use. Never make automation enabled application publicly available. Always restrict access to such applications and to RIATest Loader to trusted users only. Related question (how to do conditional compilation to insert/remove those hooks): Conditionally including Flex libraries (SWCs) in mxmlc/compc ant tasks


  • How to get a list of all Subversion commit author usernames?

    - by Quinn Taylor
    I'm looking for an efficient way to get the list of unique commit authors for an SVN repository as a whole, or for a given resource path. I haven't been able to find an SVN command specifically for this (and don't expect one) but I'm hoping there may be a better way that what I've tried so far in Terminal (on OS X): svn log --quiet | grep "^r" | awk '{print $3}' svn log --quiet --xml | grep author | sed -E "s:</?author>::g" Either of these will give me one author name per line, but they both require filtering out a fair amount of extra information. They also don't handle duplicates of the same author name, so for lots of commits by few authors, there's tons of redundancy flowing over the wire. More often than not I just want to see the unique author usernames. (It actually might be handy to infer the commit count for each author on occasion, but even in these cases it would be better if the aggregated data were sent across instead.) I'm generally working with client-only access, so svnadmin commands are less useful, but if necessary, I might be able to ask a special favor of the repository admin if strictly necessary or much more efficient. The repositories I'm working with have tens of thousands of commits and many active users, and I don't want to inconvenience anyone.
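
    Appending sort -u (or sort | uniq -c for per-author counts) deduplicates the names locally; the log still crosses the wire once per revision, but the output collapses to one line per author. A sketch that splits on the | separators so the date field never gets in the way:

        # unique author usernames
        svn log --quiet | awk -F'|' '/^r[0-9]/ { gsub(/ /, "", $2); print $2 }' | sort -u

        # the same, with a commit count per author
        svn log --quiet | awk -F'|' '/^r[0-9]/ { gsub(/ /, "", $2); print $2 }' | sort | uniq -c | sort -rn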


  • Why does Hibernate 2nd level cache only cache queries within a session?

    - by Synesso
    Using a named query in our application and with ehcache as the provider, it seems that the query results are tied to the session within the cache. Any attempt to access the value from the cache for a second time results in a LazyInitializationException We have set lazy = true for the following mapping because this object is also used by another part of the system which does not require the reference... and we want to keep it lean. <class name="domain.ReferenceAdPoint" table="ad_point" mutable="false" lazy="false"> <cache usage="read-only"/> <id name="code" type="long" column="ad_point_id"> <generator class="assigned" /> </id> <property name="name" column="ad_point_description" type="string"/> <set name="synonyms" table="ad_point_synonym" cascade="all-delete-orphan" lazy="true"> <cache usage="read-only"/> <key column="ad_point_id" /> <element type="string" column="synonym_description" /> </set> </class> <query name="find.adpoints.by.heading">from ReferenceAdPoint adpoint left outer join fetch adpoint.synonyms where adpoint.adPointField.headingCode = ?</query> Here's a snippet from our hibernate.cfg.xml <property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</property> <property name="hibernate.cache.use_query_cache">true</property> It doesn't seem to make sense that the cache would be constrained to the session. Why are the cached queries not usable outside of the (relatively short-lived) sessions?
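
    The query cache stores only identifiers, and the second-level cache stores entity state, so a cache hit in a later session hands back ReferenceAdPoint instances whose lazy synonyms collection has never been initialized; touching that collection after its session closes is what throws LazyInitializationException. A sketch of forcing initialization while the session is still open (the accessor name is an assumption):

        // Inside the session/transaction that runs the cached named query:
        List<ReferenceAdPoint> adPoints = session
                .getNamedQuery("find.adpoints.by.heading")
                .setParameter(0, headingCode)
                .setCacheable(true)
                .list();

        for (ReferenceAdPoint adPoint : adPoints) {
            // Load the lazy collection before the session closes, so callers
            // holding these objects later never hit an uninitialized proxy.
            Hibernate.initialize(adPoint.getSynonyms());
        }

    The alternative is to map the synonyms set with lazy="false" for this use case, at the cost of always fetching it.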


  • How to make HTML layout whitespace-agnostic?

    - by ssg
    If you have consecutive inline-blocks white-space becomes significant. It adds some level of space between elements. What's the "correct" way of avoiding whitespace effect to HTML layout if you want those blocks to look stuck to each other? Example: <span>a</span> <span>b</span> This renders differently than: <span>a</span><span>b</span> because of the space inbetween. I want whitespace-effect to go away without compromising HTML source code layout. I want my HTML templates to stay clean and well-indented. I think these options are ugly: 1) Tweaking text-indent, margin, padding etc. (Because it would be dependent on font-size, default white-space width etc) 2) Putting everything on a single line, next to each other. 3) Zero font-size. That would require overriding font-size in blocks, which would otherwise be inherited. 4) Possible document-wide solutions. I want the solution to stay local for a certain block of HTML. Any ideas, any obvious points which I'm missing?
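
    One local trick that keeps the template indented is to swallow the inter-element whitespace with comments, or by breaking the line inside a tag rather than between tags, so no font-size or margin tweaks are involved; a sketch:

        <!-- the comment eats the line break and indentation between the inline blocks -->
        <span>a</span><!--
        --><span>b</span>

        <!-- equivalent: split the line inside the closing bracket of the first tag -->
        <span>a</span
        ><span>b</span>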


  • Qt cross thread call

    - by QLatvia
    I have a Qt/C++ application, with the usual GUI thread and a network thread. The network thread is using an external library which has its own select() based event loop... so the network thread isn't using Qt's event system. At the moment, the network thread just emit()s signals when various events occur, such as a successful connection. I think this works okay, as the signals/slots mechanism posts the signals correctly for the GUI thread. Now, I need the network thread to be able to call the GUI thread to ask questions. For example, the network thread may require the GUI thread to put up a dialog to request a password. Does anyone know a suitable mechanism for doing this? My current best idea is to have the network thread wait using a QWaitCondition after emitting an object (emit passwordRequestedEvent(passwordRequest);). The passwordRequest object would have a handle on the particular QWaitCondition, and so can signal it when a decision has been made. Is this sort of thing sensible, or is there another option?
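
    A QWaitCondition handshake works, but Qt can do the blocking for you: QMetaObject::invokeMethod with Qt::BlockingQueuedConnection runs a slot in the GUI object's thread and suspends the caller until the slot returns, including a return value. A sketch (the slot name and types are assumptions; never invoke it this way from the GUI thread itself, or it will deadlock):

        // In the network thread. "gui" lives in the GUI thread and declares a slot:
        //     QString requestPassword(const QString &prompt);
        QString password;
        QMetaObject::invokeMethod(gui,
                                  "requestPassword",
                                  Qt::BlockingQueuedConnection,
                                  Q_RETURN_ARG(QString, password),
                                  Q_ARG(QString, QString("Password required")));
        // Execution resumes here once the GUI thread's slot has returned.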


  • PHP Commercial Project Function define

    - by Shiro
    Currently I am working on a commercial PHP project. This question is not really specific to PHP; it applies to any programming language, and I just want to discuss how you solve it. I work in an MVC framework (CodeIgniter), with all the database transaction code in model classes. Previously, I separated different search criteria into differently named functions, for example: function get_student_detail_by_ID($id){} function get_student_detail_by_name($name){} As you can see, these functions could be merged into one by adding a parameter. But when you are rushing a project, you don't look back to check whether a similar function already exists that could be adapted to meet the goal. As a result, we found there were a lot of functions and they were hard to maintain. Recently, we tried to group everything for an entity into one ultimate search function, something like this: function get_ResList($is_row_count=FALSE, $record_start=0, $arr_search_criteria='', $paging_limit=20, $orderby='name', $sortdir='ASC') We tried to make this one function fit all the search criteria. However, as our system gets bigger and bigger, the search criteria span more than one or two tables and require joins with other tables for different purposes. What we ended up doing is using IF ELSE: if (bla bla bla) { $sql_join = JOIN_SOME_TABLE; $sql_where = CONDITION; } In the end, we found the function very hard to maintain, and it is very hard to debug as well. I would like to ask your opinion: how do commercial projects solve this kind of issue, and how should such a function be defined and revised? I think this is also linked to project management skill. I hope you are willing to share with us. Thanks.


  • SVN 409 conflict on commits and updates

    - by bhefny
    We have been using SVN for the past year now and when we migrated to an online server we started getting this error: Commit: Commit failed (details follow): File or directory 'x.php' is out of date; try updating resource out of date; try updating CHECKOUT of '/!svn/ver/491/x.php': 409 Conflict (http://svn.example.com) We are currently using SmartSVN 6.5 and we have also tested with RapidSVN & Syncro (but we can't use tortoise as we have a lot of Ubunutu users) at the begining I though this How do you fix an SVN 409 Conflict Error would help, but it didn't we are still facing the same error and it's even more absurd now. the main problem is that after you get the error, you can't shake it of. Updating doesn't solve, reverting doesn't solve. You are just stuck with the error. The only thing that could work is removing the file from SVN and adding your version but that would be against why we are using SVN in the first place This is our apache config (and yes autoversioning is ON) <Location /> DAV svn SVNPath /home/example/svn SVNAutoversioning on AuthType Basic AuthName "Access Restricted" AuthUserFile /home/example/svn-auth-file Require valid-user </Location> <Directory /> <Files ~ "^\.ht"> Order allow,deny Allow from all Satisfy All </Files> <Files ~ "^error_log"> Order allow,deny Allow from all Satisfy All </Files> </Directory> And here are some observation: We don't receive conflicts anymore, we just get this 409 conflict you can somehow avoid the error if you always update before committing When committing a modified file + a newly added file, you get the error. As if the added file incremented the version by one and then you are committing another file with a older version. Please advise, we are about to go insane
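
    One configuration detail worth ruling out, offered as a hypothesis rather than a confirmed fix: SVNAutoversioning on lets plain WebDAV clients (and some tools that merely write to the share) create revisions of their own, which silently bumps HEAD and leaves every checked-out working copy out of date. If autoversioning is not actually needed, a minimal change is:

        <Location />
          DAV svn
          SVNPath /home/example/svn
          SVNAutoversioning off    # was "on"; generic DAV writes no longer create revisions
          AuthType Basic
          AuthName "Access Restricted"
          AuthUserFile /home/example/svn-auth-file
          Require valid-user
        </Location>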


  • Do variable references (aliases) incur runtime costs in C++?

    - by cheshirekow
    Maybe this is a compiler specific thing. If so, how about for gcc (g++)? If you use a variable reference/alias like this: int x = 5; int& y = x; y += 10; Does it actually require more cycles than if we didn't use the reference. int x = 5; x += 10; In other words, does the machine code change, or does the "alias" happen only at the compiler level? This may seem like a dumb question, but I am curious. Especially in the case where maybe it would be convenient to temporarily rename some member variables just so that the math code is a little easier to read. Sure, we're not exactly talking about a bottleneck here... but it's something that I'm doing and so I'm just wondering if there is any 'actual' difference... or if it's only cosmetic.


  • Accept All Cookies via HttpClient

    - by Vinay
    So this is currently how my app is set up: 1.) Login Activity. 2.) Once logged in, other activities may be fired up that use PHP scripts that require the cookies sent from logging in. I am using one HttpClient across my app to ensure that the same cookies are used, but my problem is that I am getting 2 of the 3 cookies rejected. I do not care about the validity of the cookies, but I do need them to be accepted. I tried setting the CookiePolicy, but that hasn't worked either. This is what logcat is saying: 11-26 10:33:57.613: WARN/ResponseProcessCookies(271): Cookie rejected: "[version: 0] [name: cookie_user_id][value: 1][domain: www.trackallthethings.com][path: trackallthethings][expiry: Sun Nov 25 11:33:00 CST 2012]". Illegal path attribute "trackallthethings". Path of origin: "/mobile-api/login.php" 11-26 10:33:57.593: WARN/ResponseProcessCookies(271): Cookie rejected: "[version: 0][name: cookie_session_id][value: 1985208971][domain: www.trackallthethings.com][path: trackallthethings][expiry: Sun Nov 25 11:33:00 CST 2012]". Illegal path attribute "trackallthethings". Path of origin: "/mobile-api/login.php" I am sure that my actual code is correct (my app still logs in correctly, just doesn't accept the aforementioned cookies), but here it is anyway: HttpGet httpget = new HttpGet(//MY URL); HttpResponse response; response = Main.httpclient.execute(httpget); HttpEntity entity = response.getEntity(); InputStream in = entity.getContent(); BufferedReader reader = new BufferedReader(new InputStreamReader(in)); StringBuilder sb = new StringBuilder(); From here I use the StringBuilder to simply get the String of the response. Nothing fancy. I understand that the reason my cookies are being rejected is because of an "Illegal path attribute" (I am running a script at /mobile-api/login.php whereas the cookie will return with a path of just "/" for trackallthethings), but I would like to accept the cookies anyhow. Is there a way to do this?
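
    If no built-in cookie policy will tolerate the malformed path attribute, a workaround is to stop relying on HttpClient's cookie store for these three cookies and carry the raw Set-Cookie values yourself; a sketch (where you keep the shared list is up to you; Header comes from org.apache.http):

        // After the login request: keep the raw name=value pairs and ignore the
        // attributes (such as the path) that the cookie spec is rejecting.
        List<String> rawCookies = new ArrayList<String>();
        for (Header header : response.getHeaders("Set-Cookie")) {
            rawCookies.add(header.getValue().split(";", 2)[0]);   // e.g. "cookie_user_id=1"
        }

        // On every later request to the same site, send them back by hand.
        HttpGet httpget = new HttpGet(url);
        StringBuilder cookieHeader = new StringBuilder();
        for (String c : rawCookies) {
            if (cookieHeader.length() > 0) cookieHeader.append("; ");
            cookieHeader.append(c);
        }
        httpget.setHeader("Cookie", cookieHeader.toString());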


  • Rack, FastCGI, Lighttpd configuration

    - by zacsek
    Hi! I want to run a simple application using Rack, FastCGI and Lighttpd, but I cannot get it working. I get the following error: /usr/lib/ruby/1.8/rack/handler/fastcgi.rb:23:in `initialize': Address already in use - bind(2) (Errno::EADDRINUSE) from /usr/lib/ruby/1.8/rack/handler/fastcgi.rb:23:in `new' from /usr/lib/ruby/1.8/rack/handler/fastcgi.rb:23:in `run' from /www/test.rb:7 Here is the application: #!/usr/bin/ruby app = Proc.new do |env| [200, {'Content-Type' => 'text/plain'}, "Hello World!"] end require 'rack' Rack::Handler::FastCGI.run app, :Port => 4000 ... and the lighttpd.conf: server.modules += ( "mod_access", "mod_accesslog", "mod_fastcgi" ) server.port = 80 server.document-root = "/www" mimetype.assign = ( ".html" => "text/html", ".txt" => "text/plain", ".jpg" => "image/jpeg", ".png" => "image/png" ) index-file.names = ( "test.rb" ) fastcgi.debug = 1 fastcgi.server = ( ".rb" => (( "host" => "127.0.0.1", "port" => 4000, "bin-path" => "/www/test.rb", "check-local" => "disable", "max-procs" => 1 )) ) Can someone help me? What am I doing wrong?


  • Hashtable is that fast

    - by Costa
    Hi. s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1] is the hash function of the Java String; I assume the rest of the languages are similar or close to this implementation. Suppose we have a hash table and a list of 50 elements, where each element is 7 chars: ABCDEF1, ABCDEF2, ABCDEF3, ..., ABCDEFn. Assume each bucket of the hash table contains 5 strings (I think this function will make it one string per bucket, but let us assume it is 5). If we call col.Contains("ABCDEFn"), the list will do 6 comparisons per element and discover the difference on the 7th. The hash table will take around 70 operations (multiplications and additions) to get the hash code and to compare with the 5 strings in the bucket, and bang, it is found. For the list it will take around 300 comparisons to find it. For the case where there are only 10 elements, the list will take around 70 operations but the hash table will take around 50 operations, and note that the hash table operations are more time consuming (they are multiplications). I conclude that HybridDictionary in .NET is probably the best choice for most cases that require a hashtable of unknown size, because it will let me use a list until the list grows to more than 10 elements. I still need something like a HashSet rather than a Dictionary of keys and values, and I wonder why there is no HybridSet!! So what do you think? Thanks
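
    On the last point: .NET has had HashSet<T> since 3.5, which covers the "keys only" case without needing a HybridSet, and for a lookup the key is hashed once rather than compared character by character against every element; a sketch of the membership test being discussed:

        using System.Collections.Generic;

        var col = new HashSet<string>();
        for (int i = 1; i <= 50; i++)
            col.Add("ABCDEF" + i);

        // One hash computation, then string comparisons only within a single bucket.
        bool found = col.Contains("ABCDEF7");

    Whether a plain list wins below roughly ten elements is best settled by measuring with your real key lengths rather than by counting operations.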

