Search Results

Search found 24353 results on 975 pages for 'test coverage'.


  • Is there a supplementary guide/answer key for ruby koans?

    - by corroded
    I have recently tried sharpening my Rails skills with this tool: http://github.com/edgecase/ruby_koans but I am having trouble passing some tests. Also, I am not sure if I'm doing some things correctly: since the objective is just to pass the test, there are a lot of ways of passing it, and I may be doing something that isn't up to standards. Is there a way to confirm that I'm doing things right? A specific example, in about_nil:

        def test_nil_is_an_object
          assert_equal __, nil.is_a?(Object), "Unlike NULL in other languages"
        end

    So is it telling me to check if that second clause is equal to an object (so I can say nil is an object), or to just put assert_equal true, nil.is_a?(Object) because the statement is true? And the next test:

        def test_you_dont_get_null_pointer_errors_when_calling_methods_on_nil
          # What happens when you call a method that doesn't exist. The
          # following begin/rescue/end code block captures the exception and
          # make some assertions about it.
          begin
            nil.some_method_nil_doesnt_know_about
          rescue Exception => ex
            # What exception has been caught?
            assert_equal __, ex.class

            # What message was attached to the exception?
            # (HINT: replace __ with part of the error message.)
            assert_match(/__/, ex.message)
          end
        end

    I'm guessing I should put a "No method error" string in the assert_match, but what about the assert_equal?
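
    For the first koan, the blank is just the literal value that makes the assertion true (nil.is_a?(Object) evaluates to true in Ruby), and the second koan expects the exception class (NoMethodError) plus a fragment of its message. As a cross-language illustration of that second pattern, here is my own Python analog (not part of the koans) using unittest:

        import unittest

        class NoNullPointerErrorsTest(unittest.TestCase):
            def test_calling_a_missing_method_on_none(self):
                # Calling a method that doesn't exist raises a specific exception
                # class, and the exception message names the missing method.
                # Those are the two facts the koan wants asserted.
                with self.assertRaises(AttributeError) as ctx:
                    None.some_method_none_doesnt_know_about()
                self.assertIn("some_method_none_doesnt_know_about", str(ctx.exception))

        if __name__ == "__main__":
            unittest.main()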


  • Greasemonkey script not executed when unusual content loading is being used

    - by Sam Brightman
    I'm trying to write a Greasemonkey script for Facebook and having some trouble with the funky page/content loading that they do (I don't quite understand this: a lot of the links are actually just changing the GET parameters, but I think they do some kind of server redirect to make the URL look the same to the browser too?). Essentially the only test required is putting a GM_log() on its own in the script. If you click around Facebook, even with facebook.com/* as the pattern, it is often not executed. Is there anything I can do, or is the idea of a "page load" fixed in Greasemonkey, and FB is "tricking" it into not running by using a single URL? If I try to do some basic content manipulation like this:

        GM_log("starting");

        var GM_FB = new Object;
        GM_FB.birthdays = document.evaluate("//div[@class='UIUpcoming_Item']",
            document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);
        for (i = GM_FB.birthdays.snapshotLength - 1; i >= 0; i--) {
          if (GM_FB.birthdayRegex.test(GM_FB.birthdays.snapshotItem(i).innerHTML)) {
            GM_FB.birthdays.snapshotItem(i).setAttribute('style',
                'font-weight: bold; background: #fffe88');
          }
        }

    the result is that sometimes only a manual page refresh will make it work. Pulling up the Firebug console and forcing the code to run works fine. Note that this isn't due to late loading of certain parts of the DOM: I have added some code to wait for the relevant elements and, crucially, the message never gets logged for certain transitions. For example, when I switch from Messages to News Feed and back.


  • Write raw struct contents (bytes) to a file in C. Confused about actual size written

    - by d11wtq
    Basic question, but I expected this struct to occupy 13 bytes of space (1 for the char, 12 for the 3 unsigned ints). Instead, sizeof(ESPR_REL_HEADER) gives me 16 bytes:

        typedef struct {
          unsigned char version;
          unsigned int  root_node_num;
          unsigned int  node_size;
          unsigned int  node_count;
        } ESPR_REL_HEADER;

    What I'm trying to do is initialize this struct with some values and write the data it contains (the raw bytes) to the start of a file, so that when I open the file later I can reconstruct the struct and gain some metadata about what the rest of the file contains. I'm initializing the struct and writing it to the file like this:

        int esprime_write_btree_header(FILE * fp, unsigned int node_size) {
          ESPR_REL_HEADER header = {
            .version       = 1,
            .root_node_num = 0,
            .node_size     = node_size,
            .node_count    = 1
          };

          return fwrite(&header, sizeof(ESPR_REL_HEADER), 1, fp);
        }

    where node_size is currently 4 while I experiment. The file contains the following data after I write the struct to it:

        -bash$ hexdump test.dat
        0000000 01 bf f9 8b 00 00 00 00 04 00 00 00 01 00 00 00
        0000010

    I expect it to actually contain:

        -bash$ hexdump test.dat
        0000000 01 00 00 00 00 04 00 00 00 01 00 00 00
        0000010

    Excuse the newbiness. I am trying to learn :) How do I efficiently write just the data components of my struct to a file?
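
    The three extra bytes are alignment padding the compiler inserts after version so that root_node_num starts on a 4-byte boundary; their contents (the bf f9 8b above) are indeterminate. The usual C-side fixes are to fwrite each field separately or to define an explicitly packed on-disk layout. A minimal sketch of the layout arithmetic, using Python's struct module rather than the C code itself:

        import struct

        # Native alignment ("@BIII") pads the 1-byte field to a 4-byte
        # boundary, matching the C compiler's 16-byte sizeof.
        print(struct.calcsize("@BIII"))   # typically 16

        # A packed little-endian layout ("<BIII") has no padding: the
        # 13 bytes the question expected.
        print(struct.calcsize("<BIII"))   # 13

        # Packing version=1, root_node_num=0, node_size=4, node_count=1
        # reproduces exactly the hexdump the poster hoped to see.
        header = struct.pack("<BIII", 1, 0, 4, 1)
        print(header.hex(" "))  # 01 00 00 00 00 04 00 00 00 01 00 00 00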


  • How to repeatedly show a Dialog with PyGTK / Gtkbuilder?

    - by Julian
    I have created a PyGTK application that shows a dialog when the user presses a button. The dialog is loaded in my __init__ method with:

        builder = gtk.Builder()
        builder.add_from_file("filename")
        builder.connect_signals(self)
        self.myDialog = builder.get_object("dialog_name")

    In the event handler, the dialog is shown with the command self.myDialog.run(), but this only works once, because after run() the dialog is automatically destroyed. If I click the button a second time, the application crashes. I read that there is a way to use show() instead of run(), where the dialog is not destroyed, but I feel like this is not the right way for me, because I would like the dialog to behave modally and to return control to the code only after the user has closed it. Is there a simple way to repeatedly show a dialog using the run() method and GtkBuilder? I tried reloading the whole dialog using the builder, but that did not really seem to work; the dialog was missing all child elements (and I would prefer to have to use the builder only once, at the beginning of the program).

    [SOLUTION] As pointed out by the answer below, using hide() does the trick. But one has to take care that the dialog is in fact destroyed if one does not catch its "delete-event". A simple example that works:

        import pygtk
        import gtk

        class DialogTest:

            def rundialog(self, widget, data=None):
                self.dia.show_all()
                result = self.dia.run()

            def destroy(self, widget, data=None):
                gtk.main_quit()

            def closedialog(self, widget, data=None):
                self.dia.hide()
                return True

            def __init__(self):
                self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
                self.window.connect("destroy", self.destroy)

                self.dia = gtk.Dialog('TEST DIALOG', self.window,
                                      gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT)
                self.dia.vbox.pack_start(gtk.Label('This is just a Test'))
                self.dia.connect("delete-event", self.closedialog)

                self.button = gtk.Button("Run Dialog")
                self.button.connect("clicked", self.rundialog, None)
                self.window.add(self.button)
                self.button.show()
                self.window.show()

        if __name__ == "__main__":
            testApp = DialogTest()
            gtk.main()


  • Attempting to Convert Byte[] into Image... but are there platform issues involved?

    - by user305535
    Greetings. Currently, I'm attempting to develop an application that takes a byte array that is streamed to us from a Linux C program across a TcpClient (stream) and reassembles it back into an image/JPEG. The "sending" application was developed by an off-site developer who claims that the image reassembles back into an image without any problems or errors in his test environment (all Linux)... However, we are not so fortunate. I believe we successfully get all of the data sent, storing it as a string (which lets us append the stream until it is complete), and then we convert it back into a byte[]. This appears to be working fine... But when we take the byte[] we get from the streaming (and our string assembly) and try to convert it into an image using System.Drawing.Image.FromStream(), we get errors.

    Anyone have any idea what we're doing wrong? Or does anyone know if this is a cross-platform issue? We're developing our app for Windows XP and C# .NET, but the off-site developer did his work in C on Linux... Perhaps there's some difference as to how each operating system converts images into byte arrays? Anyway, here's the code for converting our received byte array (from the TcpClient stream) into an image. This code works when we send an image from a test machine we built that runs on XP, but not from the Linux box:

        System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
        byte[] imageBytes = encoding.GetBytes(data);

        // Convert byte[] to Image
        MemoryStream ms = new MemoryStream(imageBytes, 0, imageBytes.Length);
        ms.Write(imageBytes, 0, imageBytes.Length);
        System.Drawing.Image image = System.Drawing.Image.FromStream(ms, false);
        // ^ DIES here, throws a {System.ArgumentException: Parameter is not valid.} error

    Any advice, suggestions, theories, or HELP would be GREATLY appreciated! Please let me know??? Best wishes all! Thanks in advance! Greg
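
    A likely culprit (my assumption, not confirmed in the question): JPEG data is binary, and round-tripping it through a text string with ASCIIEncoding replaces every byte above 0x7F with '?', so the reassembled array no longer parses as an image. The fix along those lines would be to accumulate the raw bytes from the stream directly (e.g. into a MemoryStream) and never touch a string. A small Python sketch of the corruption itself:

        # JPEG streams start with 0xFF 0xD8 and are full of bytes > 0x7F.
        original = bytes([0xFF, 0xD8, 0xFF, 0xE0, 0x10, 0x4A])

        # Decoding to text and re-encoding as ASCII (roughly what routing the
        # data through a string plus ASCIIEncoding.GetBytes does) replaces
        # every non-ASCII byte with a replacement character.
        round_tripped = original.decode("ascii", errors="replace") \
                                .encode("ascii", errors="replace")

        print(original.hex(" "))       # ff d8 ff e0 10 4a
        print(round_tripped.hex(" "))  # 3f 3f 3f 3f 10 4a  (the JPEG header is gone)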


  • Dynamic stack allocation in C++

    - by Poni
    I want to allocate memory on the stack. I've heard of _alloca / alloca and I understand that these are compiler-specific, which I don't like. So I came up with my own solution (which might have its own flaws) and I want you to review/improve it so that once and for all we'll have this code working:

        /*
        #define allocate_on_stack(pointer, size) \
            __asm \
            { \
                mov [pointer], esp; \
                sub esp, [size]; \
            }
        */

        /*
        #define deallocate_from_stack(size) \
            __asm \
            { \
                add esp, [size]; \
            }
        */

        void test()
        {
            int buff_size = 4 * 2;
            char *buff = 0;

            __asm {
                // allocate
                mov [buff], esp;
                sub esp, [buff_size];
            }

            // playing with the stack-allocated memory
            for(int i = 0; i < buff_size; i++)
                buff[i] = 0x11;

            __asm {
                // deallocate
                add esp, [buff_size];
            }
        }

        void main()
        {
            __asm int 3h;
            test();
        }

    Compiled with VC9. What flaws do you see in it? Me, for example: I'm not sure that subtracting from ESP is the solution for "any kind of CPU". Also, I'd like to make the commented-out macros work, but for some reason I can't.


  • Registry entry using VC++, data corruption

    - by sijith
    Hi, I want to write the system time to the registry. I did it like the code below, but only null characters are ending up there. When I instead write

        LPCTSTR data = TEXT("24/3/2010\0");
        LONG setRes = RegSetValueEx(hkey, value, 0, REG_SZ, (LPBYTE)data, 100);

    it is successfully added into the registry. How do I trace the issue? If possible, please check my code:

        // [three #include directives; the header names were lost in the original post]

        void Regkey::create_Registry()
        {
            HKEY hkey;
            DWORD dwDisposition, lpData;
            SYSTEMTIME time;
            GetLocalTime(&time);

            int hour = time.wHour;
            if (hour > 12)
                hour -= 12;

            char szData[20];
            sprintf(szData, "%02d/%02d/%04d", time.wDay, time.wMonth, time.wYear);

            if (RegCreateKeyEx(HKEY_LOCAL_MACHINE, TEXT("Software\\Sijith\\Test"), 0,
                               NULL, 0, 0, NULL, &hkey, &dwDisposition) == ERROR_SUCCESS)
            {
                LPCTSTR sk = TEXT("Software\\Sijith\\Test");
                LONG openRes = RegOpenKeyEx(HKEY_LOCAL_MACHINE, sk, 0, KEY_ALL_ACCESS, &hkey);

                LPCTSTR value = TEXT("CheckSoftwareKey");
                LONG setRes = RegSetValueEx(hkey, value, 0, REG_SZ, (CONST BYTE *)szData,
                                            sizeof(TCHAR) * (_tcslen(szData) + 1));
                RegCloseKey(hkey);
            }
        }

    Output:

        value name: CheckSoftwareKey
        valueData:  ?????
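
    The "?????" output is consistent with a narrow/wide string mismatch: szData is a plain char buffer, but the value is written as REG_SZ with a byte count of sizeof(TCHAR) * (_tcslen(szData) + 1), which only lines up if the build treats TCHAR as char. That diagnosis is my assumption from the symptoms, not something stated in the question. For comparison, here is what a correct REG_SZ write looks like when the API handles the encoding for you, sketched with Python's standard winreg module (run as administrator, since it writes under HKLM):

        import winreg

        # Create (or open) HKLM\Software\Sijith\Test and write the date string.
        # winreg encodes the Python string to the registry's native format,
        # the step the C++ code gets wrong by hand.
        key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE,
                                 r"Software\Sijith\Test",
                                 0, winreg.KEY_SET_VALUE)
        winreg.SetValueEx(key, "CheckSoftwareKey", 0, winreg.REG_SZ, "24/03/2010")
        winreg.CloseKey(key)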


  • Typical Hadoop setup for remote job submission

    - by Artii
    So I am still a bit new to Hadoop and am currently in the process of setting up a small test cluster on Amazon AWS. My question is about how to structure the cluster so that it is possible to submit jobs from remote machines. Currently I have 5 machines: 4 form the Hadoop cluster proper, with the NameNode, YARN, etc., and one machine is used as a manager machine (Cloudera Manager). I am going to describe my thinking on the setup, and if anyone can chime in on the points I am not clear about, that would be great.

    I was thinking about the best setup for a small cluster, so I decided to expose only the one manager machine and probably use that to submit all the jobs through it. The other machines will see each other, etc., but not be accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about it, so if anyone could point me in the right direction that would be great.

    Another big point is that I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this setup either. Do I need to have Hadoop installed on that machine in order to use the normal hadoop commands, and to write/submit jobs, say, from Eclipse or something similar?

    So to sum it up, my questions are:

    1. Is this an OK setup for a small test cluster?
    2. How can I go about using one exposed machine to submit/route jobs to the cluster, without having any of the Hadoop nodes on it?
    3. How do I set up a client machine to submit jobs to a remote cluster, and an example of how to do it on Windows? Also, are there any reasons not to use Windows as a client machine in this setup?

    Thanks, I would greatly appreciate any advice or help on this.


  • Preserve HTML font-size when iPhone orientation changes from portrait to landscape

    - by DShultz
    I have a mobile web application with an unordered list containing multiple list items, each with a hyperlink inside:

        <ul>
          <li id="home" class="active">
            <a href="home.html">HOME</a>
          </li>
          <li id="home" class="active">
            <a href="test.html">TEST</a>
          </li>
        </ul>

    My question is: how can I format the hyperlinks so that they DON'T change size when viewed on an iPhone and the accelerometer switches from portrait to landscape? Right now I have the hyperlink font size spec'ed at 14px, but when switching to landscape it blows way up to something like 20px. I want the font-size to stay the same. Here is the selector:

        ul li a {
          font-size: 14px;
          text-decoration: none;
          color: #cc9999;
        }


  • Same code, not the same place, different results

    - by Tom
    Hi! I'm trying to do something pretty simple, but I don't understand why the second bit of code is giving me a bad access error... First:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";

            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero
                                               reuseIdentifier:CellIdentifier] autorelease];
            }

            // Tom: Converting the row int to a string and adding 1 to it so that
            // the rows fit with the array indexes.
            NSString *key = [NSString stringWithFormat:@"%d", indexPath.row+1];
            NSArray *array = [self.tableDataSource objectForKey:key];

            NSLog(@"%@", key);
            NSLog(@"%@", [array description]);

            cell.textLabel.text = [array objectAtIndex:1];
            return cell;
        }

    That code gives me exactly what I want:

        2010-03-18 15:18:12.884 PremierSoins[28005:40b] 1
        2010-03-18 15:18:12.885 PremierSoins[28005:40b] ( 7, Blessures )
        2010-03-18 15:18:12.892 PremierSoins[28005:40b] 2
        2010-03-18 15:18:12.893 PremierSoins[28005:40b] ( 18, "Test 2" )

    But then, the second:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            NSString *key = [NSString stringWithFormat:@"%d", indexPath.row+1];
            NSArray *array = [self.tableDataSource objectForKey:key];

            NSLog(@"%@", key);
            NSLog(@"%@", [array description]);
        }

    is only able to get me the key; the description of the array makes it crash... tableDataSource is an NSMutableDictionary containing multiple arrays, like this:

        {
            1 = ( 7, Blessures );
            2 = ( 18, "Test 2" );
        }

    Do you have any clue? I've been looking at this since yesterday... Thanks! Tom


  • Are function-local typedefs visible inside C++0x lambdas?

    - by GMan - Save the Unicorns
    I've run into a strange problem. The following simplified code reproduces it in MSVC 2010 Beta 2:

        template <typename T>
        struct dummy {
            static T foo(void) { return T(); }
        };

        int main(void) {
            typedef dummy<bool> dummy_type;
            auto x = [](void){ bool b = dummy_type::foo(); };
            // auto x = [](void){ bool b = dummy<bool>::foo(); }; // works
        }

    The typedef I created locally in the function doesn't seem to be visible in the lambda. If I replace the typedef with the actual type, it works as expected. Here are some other test cases:

        // crashes the compiler, credit to Tarydon
        int main(void) {
            struct dummy {};
            auto x = [](void){ dummy d; };
        }

        // works as expected
        int main(void) {
            typedef int integer;
            auto x = [](void){ integer i = 0; };
        }

    I don't have g++ 4.5 available to test it right now. Is this some strange rule in C++0x, or just a bug in the compiler? From the results above, I'm leaning towards bug. Though the crash is definitely a bug. For now, I have filed two bug reports. All code snippets above should compile. The error has to do with using scope resolution on locally defined scopes. (Spotted by dvide.) And the crash bug has to do with... who knows. :)

    Update: According to the bug reports, they have both been fixed for the next release of Visual Studio 2010.


  • How can I sign a Windows Mobile application for internal use?

    - by AR
    I'm developing a Windows Mobile application for internal company use, using the Windows Mobile 6 Professional SDK. Same old story: I've developed and tested on the emulator and all is well, but as soon as I deploy to a device I get an UnauthorizedAccessException when writing files or creating directories. I'm aware that an application installed to a device needs to be signed, but I'm running into roadblocks at every turn:

    1. Using the project properties 'Devices' window, I select 'Sign the project output with this certificate' and choose one of the sample certificates from the SDK. This results in a build error: "The signer's certificate is not valid for signing" when running SignTool.
    2. If I try to run SignTool.exe from the command line, I get an error telling me to run SignTool.exe from a location in the system's PATH.
    3. I can't use the 'Signing' tab in the Project Properties to create a test certificate; this is greyed out (presumably for WinMobile projects?).

    If at all possible, I would like to avoid having to go through VeriSign or the like to get a Mobile2Market certificate. If I have to go this route for a final version that's fine, but I need to at least be able to test the app on real devices. Any advice would be most welcome!


  • Assign Multiple Custom User Roles to a Custom Post Type

    - by OUHSD Webmaster
    Okay, here's the situation... I'm working on my business website. There will be a work/portfolio area. "Work" is a custom post type. "Designer" is a custom user role. "Client" is a custom user role. When creating a new "Work" post, I would like to be able to select both a "designer" and a "client" to assign to the piece of work, as I would assign an author to a regular ol' post. I tried the method from this answer but it did not work for me. I placed it in my functions.php file:

        add_filter('wp_dropdown_users', 'test');

        function test($output) {
            global $post;

            // Doing it only for the custom post type
            if ($post->post_type == 'work') {
                $users = get_users(array('role' => 'designer'));

                // We're forming a new select with our values. You can add an option
                // with value 1 and text 'admin' if you want the admin to be listed
                // as well; optionally you can use a simple string-replace trick to
                // insert your options, if you don't want to override the defaults.
                $output .= "<select id='post_author_override' name='post_author_override' class=''>";
                foreach ($users as $user) {
                    $output .= "<option value='" . $user->id . "'>" . $user->user_login . "</option>";
                }
                $output .= "</select>";
            }

            return $output;
        }

    Any help would be extremely appreciated!


  • Sweave can't see a vector if run from a function?

    - by PaulHurleyuk
    I have a function that sets a vector to a string, copies a Sweave document under a new name, and then runs that Sweave. Inside the Sweave document I want to use the vector I set in the function, but it doesn't seem to see it. (Edit: I changed this function to use tempdir() as suggested by Dirk.)

    I created a Sweave file, test_sweave.rnw:

        \documentclass[a4paper]{article}
        \usepackage[OT1]{fontenc}
        \usepackage{Sweave}
        \begin{document}

        \title{Test Sweave Document}
        \author{gb02413}
        \maketitle

        <<>>=
        ls()
        Sys.time()
        print(paste("The chosen study was ",chstud,sep=""))
        @

        \end{document}

    and I have this function:

        onOK <- function(){
            chstud <- "test"
            message(paste("Chosen Study is ", chstud, sep=""))

            newfile <- paste(chstud, "_report", sep="")
            mypath <- paste(tempdir(), "\\", sep="")
            setwd(mypath)

            message(paste("Copying test_sweave.Rnw to ",
                          paste(mypath, newfile, ".Rnw", sep=""), sep=""))
            file.copy("c:\\local\\test_sweave.Rnw",
                      paste(mypath, newfile, ".Rnw", sep=""),
                      overwrite=TRUE)

            Sweave(paste(mypath, newfile, ".Rnw", sep=""))

            require(tools)
            texi2dvi(file = paste(mypath, newfile, ".tex", sep=""), pdf = TRUE)
        }

    If I run the code from the function directly, the resulting file has this output for ls():

        > ls()
        [1] "chstud"  "mypath"  "newfile" "onOK"

    However, if I call onOK() I get this output:

        > ls()
        [1] "onOK"

    and the print(...chstud...) call generates an error. I suspect this is an environment problem, but I assumed that because the call to Sweave occurs within the onOK function, it would be in the same environment and would see all the objects created within the function. How can I get the Sweave process to see the chstud vector? Thanks, Paul.


  • Testing InlineFormset clean methods

    - by Rory
    I have a Django project with two models, Structure and Bracket. Bracket has a ForeignKey to Structure (i.e. one-to-many: one Structure has many Brackets). I created a TabularInline for the admin site, so that there would be a table of Brackets on the Structure page. I added a custom formset with a custom clean method to do some extra validation: you can't have a Bracket that conflicts with another Bracket on the same Structure, etc. The admin looks like this:

        class BracketInline(admin.TabularInline):
            model = Bracket
            formset = BracketInlineFormset

        class StructureAdmin(admin.ModelAdmin):
            inlines = [ BracketInline ]

        admin.site.register(Structure, StructureAdmin)

    That all works, and the validation works. However, now I want to write some unit tests for my complex formset validation logic. My first attempt to validate known-good values is:

        data = {'form-TOTAL_FORMS': '1',
                'form-INITIAL_FORMS': '0',
                'form-MAX_NUM_FORMS': '',
                'form-0-field1': 'good-value',
                …
               }
        formset = BracketInlineFormset(data)
        self.assertTrue(formset.is_valid())

    However, that doesn't work and raises this exception:

        ======================================================================
        ERROR: testValid (appname.tests.StructureTestCase)
        ----------------------------------------------------------------------
        Traceback (most recent call last):
          File "/paht/to/project/tests.py", line 494, in testValid
            formset = BracketInlineFormset(data)
          File "/path/to/django/forms/models.py", line 672, in __init__
            self.instance = self.fk.rel.to()
        AttributeError: 'BracketInlineFormset' object has no attribute 'fk'
        ----------------------------------------------------------------------

    The Django documentation (for formset validation) implies one can do this. How come this isn't working? How do I test the custom clean()/validation for my inline formset?
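
    The traceback points at the constructor expecting foreign-key bookkeeping (self.fk) that only gets attached when the class is built through inlineformset_factory; a bare BracketInlineFormset(data) never gets it. A hedged sketch of how the test might construct the formset instead (the prefix and field name below are my assumptions: inline formsets default to the related accessor name, here presumably bracket_set, and 'field1' is hypothetical):

        from django.forms.models import inlineformset_factory

        # Bind the custom formset class to the Structure -> Bracket relation.
        # (On modern Django versions, also pass fields='__all__' or an
        # explicit field list.)
        BracketFormSet = inlineformset_factory(Structure, Bracket,
                                               formset=BracketInlineFormset)

        structure = Structure.objects.create()  # the parent the inlines attach to

        data = {
            'bracket_set-TOTAL_FORMS': '1',
            'bracket_set-INITIAL_FORMS': '0',
            'bracket_set-MAX_NUM_FORMS': '',
            'bracket_set-0-field1': 'good-value',  # hypothetical field name
        }

        formset = BracketFormSet(data, instance=structure)
        self.assertTrue(formset.is_valid())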


  • [Ruby] Object assignment and pointers

    - by Jergason
    I am a little confused about object assignment and pointers in Ruby, and coded up this snippet to test my assumptions:

        class Foo
          attr_accessor :one, :two

          def initialize(one, two)
            @one = one
            @two = two
          end
        end

        bar = Foo.new(1, 2)
        beans = bar

        puts bar
        puts beans

        beans.one = 2

        puts bar
        puts beans
        puts beans.one
        puts bar.one

    I had assumed that when I assigned bar to beans, it would create a copy of the object, and modifying one would not affect the other. Alas, the output shows otherwise:

        ^_^[jergason:~]$ ruby test.rb
        #<Foo:0x100155c60>
        #<Foo:0x100155c60>
        #<Foo:0x100155c60>
        #<Foo:0x100155c60>
        2
        2

    I believe that the numbers have something to do with the address of the object; they are the same for both beans and bar, and when I modify beans, bar gets changed as well, which is not what I had expected. It appears that I am only creating a pointer to the object, not a copy of it. What do I need to do to copy the object on assignment, instead of creating a pointer? Tests with the Array class show some strange behavior as well:

        foo = [0, 1, 2, 3, 4, 5]
        baz = foo

        puts "foo is #{foo}"
        puts "baz is #{baz}"

        foo.pop

        puts "foo is #{foo}"
        puts "baz is #{baz}"

        foo += ["a hill of beans is a wonderful thing"]

        puts "foo is #{foo}"
        puts "baz is #{baz}"

    This produces the following wonky output:

        foo is 012345
        baz is 012345
        foo is 01234
        baz is 01234
        foo is 01234a hill of beans is a wonderful thing
        baz is 01234

    This blows my mind. Calling pop on foo affects baz as well, so it isn't a copy, but concatenating something onto foo only affects foo, and not baz. So when am I dealing with the original object, and when am I dealing with a copy? In my own classes, how can I make sure that assignment copies, and doesn't make pointers? Help this confused guy out.
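
    In Ruby, assignment never copies: both names reference the same object, pop mutates that shared object, and foo += [...] is sugar for foo = foo + [...], which builds a new array and rebinds only foo (an explicit shallow copy is made with dup or clone). The same reference semantics are easy to watch in Python, as a cross-language sketch:

        import copy

        foo = [0, 1, 2, 3, 4, 5]
        baz = foo                   # binds a second name to the SAME list

        foo.pop()                   # mutation is visible through every name
        print(baz)                  # [0, 1, 2, 3, 4]

        foo = foo + ["a hill of beans"]  # rebinding: foo now names a NEW list
        print(foo)                  # [0, 1, 2, 3, 4, 'a hill of beans']
        print(baz)                  # [0, 1, 2, 3, 4]  (still the old object)

        baz = copy.copy(foo)        # explicit shallow copy, like Ruby's dup
        baz.pop()
        print(foo)                  # unchanged by the copy's mutation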


  • Hibernate not saving foreign key, but with JUnit it's OK

    - by Leonardo
    Hi all, I have a strange problem. In a J2EE web app with Spring, SmartGWT and Hibernate, I have a class A which has a set of class B, both of them mapped to table A and table B. I wrote a simple test case for testing the service manager which is supposed to do insert, update and delete, and everything works as expected, especially during insert: in the end I have one record in A and records in B with a foreign key to A. But when I try to call the service from the web app, the entities in B are saved without a foreign key reference. I am sure that the service is the same.

    One thing I noticed, after enabling Hibernate logging, is that when the service is called from the application, one more update is made:

        insert A
        insert B
        update A
        update B
        update B (foreign key only)
        update A   <--- ???
        update B   <--- ???

    Instead, when the JUnit test case is run, the updates are as follows:

        insert A
        insert B
        update A
        update B
        update B (foreign key only)

    I suppose the last update is what is causing the error; maybe it is overwriting values. Considering that the app is using Spring, with the well-known mechanism of DAO + Manager, where can I investigate to solve this issue? Someone told me that the session is not closed, so Hibernate would do one more update before releasing the objects by itself. I am pretty sure that all the configuration (hbm, XML, and the rest) is fine... but I may be wrong. Thanks.


  • Problems with async jQuery and loops

    - by Seth Vargo
    I am so confused. I am trying to append portals to a page by looping through an array and calling a method I wrote called addModule(). The method gets called the right number of times (checked via an alert statement), in the correct order, but only one or two of the portals actually populate. I have a feeling it's something with the loop and async, but it's easier explained with the code:

        moduleList = [['weather','test'],['test']];

        for(i in moduleList) {
            $('#content').append('');
            for(j in moduleList[i]) {
                addModule(i, moduleList[i][j]); // column, name
            }
        }

        function addModule(column, name) {
            alert('adding module ' + name);
            $.get('/modules/' + name.replace(' ', '-') + '.php', function(data) {
                $('#' + column).append(data);
            });
        }

    For each array in the main array, I append a new column, since that's what each sub-array is: a column of portals. Then I loop through that sub-array and call addModule() with that column and the name of that module (which works correctly). Something buggy happens in my addModule method such that it only adds the first and last modules, or sometimes a middle one, or sometimes none at all... I'm so confused!


  • Excess errors on model from somewhere

    - by gmile
    I have a User model and use acts_as_authentic (from Authlogic) on it. My User model has 3 validations on username and looks as follows:

        class User < ActiveRecord::Base
          acts_as_authentic

          validates_presence_of :username
          validates_length_of :username, :within => 4..40
          validates_uniqueness_of :username
        end

    I'm writing a test to see my validations in action. Somehow, I get 2 errors instead of one when validating the uniqueness of a name. To see the excess error, I do the following test:

        describe User do
          before(:each) do
            @user = Factory.build(:user)
          end

          it "should have a username longer then 3 symbols" do
            @user2 = Factory(:user)
            @user.username = @user2.username
            @user.save
            puts @user.errors.inspect
          end
        end

    I get 2 errors on username:

        @errors={"username"=>["has already been taken", "has already been taken"]}

    Somehow the validation passes two times. I think Authlogic causes that, but I don't have a clue how to avoid it. Another case of the problem is when I set username to nil. Somehow I get four validation errors instead of three:

        @errors={"username"=>["is too short (minimum is 3 characters)",
                              "should use only letters, numbers, spaces, and .-_@ please.",
                              "can't be blank",
                              "is too short (minimum is 4 characters)"]}

    I think Authlogic is the one causing this strange behaviour, but I can't even imagine how to solve it. Any ideas?


  • JSF dynamic ui:include

    - by Ray
    In my app I have tutor and student as user roles, and I decided that the main page will be the same for both, but the menu will be different for tutors and students. I made two .xhtml pages, tutorMenu.xhtml and studentMenu.xhtml, and want to include the right menu depending on the role. For the whole page I use a layout and just change the "content part" in ui:composition on every page. In menu.xhtml:

        <h:body>
          <ui:composition>
            <div class="menu_header">
              <h2>
                <h:outputText value="#{msg['menu.title']}" />
              </h2>
            </div>
            <div class="menu_content">
              <c:if test="#{authenticationBean.user.role.roleId eq '2'}">
                <ui:include src="/pages/content/body/student/studentMenu.xhtml"/>
              </c:if>
              <c:if test="#{authenticationBean.user.role.roleId eq '1'}">
                <ui:include src="/pages/content/body/tutor/tutorMenu.xhtml" />
              </c:if>
            </div>
          </ui:composition>
        </h:body>

    I know that using JSTL may not be the best solution, but I can't find another. What is the best solution to my problem?


  • WSDL XML parsing, maxlength problem after encoding of text

    - by MichaelD
    We are working together with another firm. Our application communicates with theirs through WCF on our side and a custom Java WSDL handler on their side. They specify the WSDL format, and one of the rules is that a specific string cannot contain more than 15 characters (normally it's 60, but I take 15 for an easy example).

    When we try to send the string "example & test" to them, we get an error that the string is too long according to the WSDL. This is a string of 14 characters, so it should be allowed, but the Microsoft WCF serializer translates it to "example &amp; test", and this encoded string is 18 characters long.

    Now, what is the standard behavior when checking a maxlength defined in a message: is it checked against the encoded message or the decoded message? I would think it's the decoded message, but I'm not sure. If it is the encoded message, how should we handle this so we know where we have to split the string?
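
    For what it's worth, XML Schema length facets are defined on the parsed value, after entity references like &amp; have been resolved, so a conforming validator should count 14 characters here; counting the escaped wire form looks like a bug on the receiving side. A quick sketch of the arithmetic (Python, just to make the counting concrete):

        from xml.sax.saxutils import escape, unescape

        raw = "example & test"
        encoded = escape(raw)            # the XML escaping a serializer applies

        print(raw, len(raw))             # example & test 14
        print(encoded, len(encoded))     # example &amp; test 18
        print(len(unescape(encoded)))    # 14 -- what the receiver's parser should see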


  • Why is a minus sign prepended to my BigInteger?

    - by kyrogue
        package ewa;

        import java.io.UnsupportedEncodingException;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;
        import java.util.logging.Level;
        import java.util.logging.Logger;
        import java.math.BigInteger;

        /**
         *
         * @author Lotus
         */
        public class md5Hash {
            public static void main(String[] args) throws NoSuchAlgorithmException {
                String test = "abc";
                MessageDigest md = MessageDigest.getInstance("MD5");

                try {
                    md.update(test.getBytes("UTF-8"));
                    byte[] result = md.digest();
                    BigInteger bi = new BigInteger(result);
                    String hex = bi.toString(16);
                    System.out.println("Printing result");
                    System.out.println(hex);
                } catch (UnsupportedEncodingException ex) {
                    Logger.getLogger(md5Hash.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

    I am testing conversion of bytes to hex, and when done, the end result has a minus sign at the beginning of the string. Why does this happen? I have read the docs and they say it will add a minus sign, but I do not understand it. And will the minus sign affect the hash result? I am going to use this to hash passwords stored in my database.
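
    The sign comes from the constructor used: BigInteger(byte[]) interprets the array as a signed, big-endian two's-complement number, and MD5("abc") begins with the byte 0x90, whose high bit is set, so the whole value reads as negative (the two-argument constructor, new BigInteger(1, result), forces a positive sign and is the usual fix; the digest bytes themselves are unaffected either way). The same arithmetic, sketched in Python:

        import hashlib

        digest = hashlib.md5(b"abc").digest()
        print(digest.hex())  # 900150983cd24fb0d6963f7d28e17f72

        # First byte 0x90 has its high bit set, so a signed two's-complement
        # read (what Java's BigInteger(byte[]) does) comes out negative...
        print(int.from_bytes(digest, "big", signed=True))

        # ...while an unsigned read (Java's new BigInteger(1, digest)) is
        # positive and shows the expected hex digits.
        print(hex(int.from_bytes(digest, "big", signed=False)))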


  • Nose2 multiprocess error on Windows 7

    - by tt293
    I was looking into nose2 as a way to get around the restriction of having both xunit output and multiprocessing in nose 1.3. However, when always-on is set to False in the [multiprocess] section, I can only get a single process running, while with always-on set to True, I get the following error:

        ----------------------------------------------------------------------
        Ran 0 tests in 0.043s

        OK
        Traceback (most recent call last):
          File "C:\dev\testing\Tests\PythonTests\venv\Scripts\nose2-script.py", line 8, in <module>
            load_entry_point('nose2==0.4.7', 'console_scripts', 'nose2')()
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 284, in discover
            return main(*args, **kwargs)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 98, in __init__
            super(PluggableTestProgram, self).__init__(**kw)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\unittest2-0.5.1-py2.7.egg\unittest2\main.py", line 98, in __init__
            self.runTests()
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 260, in runTests
            self.result = runner.run(self.test)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\runner.py", line 53, in run
            executor(test, result)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\plugins\mp.py", line 60, in _runmp
            ready, _, _ = select.select(rdrs, [], [], self.testRunTimeout)
        select.error: (10038, 'An operation was attempted on something that is not a socket')

    This is running Python 2.7.5 (32-bit) on Windows 7 in a virtualenv with six 1.1.0, unittest2 0.5.1 and nose2 0.4.7 (I get the same behavior outside of the venv, so I don't think that is the issue here).
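
    Error 10038 is Winsock's WSAENOTSOCK: on Windows, select() only accepts sockets, and the traceback shows nose2's multiprocess plugin calling select.select() on its worker pipe readers. A minimal sketch of the underlying limitation (this snippet deliberately behaves differently per platform: it succeeds on POSIX and fails on Windows):

        import os
        import select

        r, w = os.pipe()  # an anonymous pipe, like the readers the mp plugin polls
        try:
            # POSIX select() accepts any file descriptor; the Winsock version
            # accepts only sockets and fails with error 10038 on a pipe.
            ready, _, _ = select.select([r], [], [], 0)
            print("select on a pipe worked:", ready)
        except (OSError, select.error) as exc:  # select.error on Py2, OSError on Py3
            print("select on a pipe failed:", exc)
        finally:
            os.close(r)
            os.close(w)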


  • Efficient Clojure workflow?

    - by Alex B
    I am developing a pet project with Clojure, but wonder if I can speed up my workflow a bit. My current workflow (with Compojure) is:

    1. Start Swank with lein swank.
    2. Go to Emacs and connect with M-x slime-connect.
    3. Load all existing source files one by one. This also starts a Jetty server and the application.
    4. Write some code in the REPL.
    5. When satisfied with experiments, write a full version of the construct I had in mind.
    6. Eval (C-c C-c) it.
    7. Switch the REPL to the namespace where this construct resides and test it.
    8. Switch to the browser and reload the browser tab with the affected page.
    9. Tweak the code, eval it, check in the browser.
    10. Repeat any of the above.

    There are a number of annoyances with it:

    1. I have to switch between Emacs and the browser (or browsers, if I am testing things like templating with multiple browsers) all the time. Is there a common idiom to automate this? I used to have a JavaScript bit that reloads the page continuously, but it's of limited utility, obviously, when I have to interact with the page for more than a few seconds.
    2. My JVM instance becomes "dirty" when I experiment and write test functions. Basically, namespaces become polluted, especially if I'm refactoring and moving functions between namespaces. This can lead to symbol collisions, and then I need to restart Swank. Can I undef a symbol?
    3. I load all source files one by one (C-c C-k) upon restarting Swank. I suspect I'm doing it all wrong.
    4. Switching between the REPL and the file editor can be a bit irritating, especially when I have a lot of Emacs tabs open, alongside the browser(s).

    I'm looking for ways to improve the above points and the entire workflow in general, so I'd appreciate it if you'd share yours.

    P.S. I have also used VimClojure before, so VimClojure-based workflows are welcome too.


  • How do you prevent Git from printing 'remote:' on each line of the output of a post-recieve hook?

    - by Matt Hodan
    I recently configured an EC2 instance with a Git deployment workflow that resembles Heroku's, but I can't seem to figure out how Heroku prevents the Git post-receive hook from outputting 'remote:' on each line. Consider the following two examples (one from my EC2 project and one from a Heroku project).

    My EC2 project:

        git push prod master
        Counting objects: 9, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (5/5), done.
        Writing objects: 100% (5/5), 456 bytes, done.
        Total 5 (delta 3), reused 0 (delta 0)
        remote:
        remote: Receiving push
        remote: Deploying updated files (by resetting HEAD)
        remote: HEAD is now at bf17da8 test commit
        remote: Running bundler to install gem dependencies
        remote: Fetching source index for http://rubygems.org/
        remote: Installing rake (0.8.7)
        remote: Installing abstract (1.0.0)
        ...
        remote: Installing railties (3.0.0)
        remote: Installing rails (3.0.0)
        remote: Your bundle is complete! It was installed into ./.bundle/gems
        remote: Launching (by restarting Passenger)... done
        remote: To ssh://[email protected]/~/apps/app_name
           e8bd06f..bf17da8  master -> master

    Heroku:

        $> git push heroku master
        Counting objects: 179, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (89/89), done.
        Writing objects: 100% (105/105), 42.70 KiB, done.
        Total 105 (delta 53), reused 0 (delta 0)

        -----> Heroku receiving push
        -----> Rails app detected
        -----> Gemfile detected, running Bundler version 1.0.3
               Unresolved dependencies detected; Installing...
               Using --without development:test
               Fetching source index for http://rubygems.org/
               Installing rake (0.8.7)
               Installing abstract (1.0.0)
               ...
               Installing railties (3.0.0)
               Installing rails (3.0.0)
               Your bundle is complete! It was installed into ./.bundle/gems
               Compiled slug size is 4.8MB
        -----> Launching... done
               http://your_app_name.heroku.com deployed to Heroku
        To [email protected]:your_app_name.git
           3bf6e8d..642f01a  master -> master

