Search Results

Search found 26059 results on 1043 pages for 'spring test'.

  • LuaEdit can't find module when Lua files all in the same folder

    - by joverboard
    I downloaded LuaEdit to use as an IDE and debug tool, however I'm having trouble using it for even the simplest things. I've created a solution with 2 files in it, all of which are stored in the same folder. My files are as follows:

        -- startup.lua
        require("foo")
        test("Testing", "testing", "one, two, three")

        -- foo.lua
        foo = {}
        print("In foo.lua")
        function test(a,b,c)
            print(a,b,c)
        end

    This works fine in my C++ compiler when accessed through some embed code. However, when I attempt to use the same code in LuaEdit, it crashes on line 3 - require("foo") - with an error stating:

        module 'foo' not found:
            no field package.preload['foo']
            no file 'C:\Program Files (x86)\LuaEdit 2010\lua\foo.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\lua\foo\init.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\foo.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\foo\init.lua'
            no file '.\foo.lua'
            no file 'C:\Program Files (x86)\LuaEdit 2010\foo.dll'
            no file 'C:\Program Files (x86)\LuaEdit 2010\loadall.dll'
            no file '.\battle.dll'

    I have also tried creating these files prior to adding them to a solution and still get the same error. Is there some setting I'm missing? It would be great to have an IDE/debugger, but it's useless to me if it can't run linked functions.
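
    The failing paths in that error are informative: the search covers only LuaEdit's install directory and the current directory, not the solution's folder. A hedged sketch of a workaround - the folder path below is a placeholder, not something from the post - is to put the project folder on the module search path before the require:

        -- sketch: extend package.path so require("foo") can find foo.lua
        -- (C:\MyLuaProjects stands in for the real solution folder)
        package.path = package.path .. ";C:\\MyLuaProjects\\?.lua"
        require("foo")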

  • Greasemonkey script not executed when unusual content loading is being used

    - by Sam Brightman
    I'm trying to write a Greasemonkey script for Facebook and having some trouble with the funky page/content loading that they do (I don't quite understand this - a lot of the links are actually just changing the GET, but I think they do some kind of server redirect to make the URL look the same to the browser too?). Essentially the only test required is putting a GM_log() on its own in the script. If you click around Facebook, even with facebook.com/* as the pattern, it is often not executed. Is there anything I can do, or is the idea of a "page load" fixed in Greasemonkey, and FB is "tricking" it into not running by using a single URL?

    If I try to do some basic content manipulation like this:

        GM_log("starting");
        var GM_FB = new Object;
        GM_FB.birthdays = document.evaluate("//div[@class='UIUpcoming_Item']",
            document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null);
        for (i = GM_FB.birthdays.snapshotLength - 1; i >= 0; i--) {
            if (GM_FB.birthdayRegex.test(GM_FB.birthdays.snapshotItem(i).innerHTML)) {
                GM_FB.birthdays.snapshotItem(i).setAttribute('style',
                    'font-weight: bold; background: #fffe88');
            }
        }

    the result is that sometimes only a manual page refresh will make it work. Pulling up the Firebug console and forcing the code to run works fine. Note that this isn't due to late loading of certain parts of the DOM: I have added some code later to wait for the relevant elements and, crucially, the message never gets logged for certain transitions - for example, when I switch from Messages to News Feed and back.
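
    Since Facebook swaps content in place, the full page load that Greasemonkey fires on often never happens. A common workaround, sketched here under the assumption that the work above is wrapped in a function named highlightBirthdays() (a hypothetical name), is to poll for location changes and re-run the work by hand:

        // sketch: re-run the script whenever AJAX navigation changes the URL,
        // since no fresh page load will ever fire for these transitions
        var lastHref = location.href;
        setInterval(function () {
            if (location.href !== lastHref) {
                lastHref = location.href;
                highlightBirthdays();   // hypothetical wrapper for the code above
            }
        }, 500);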

  • Testing ActionFilterAttributes with MSpec

    - by Tomas Lycken
    I'm currently trying to grasp MSpec, mainly to learn new ways of (T/B)DD to be able to make an educated decision on which technology to use. Previously, I've mostly (read: only) used the built-in MSTest framework with Moq, so BDD is quite new for me.

    I'm writing an ASP.NET MVC app, and I want to implement PRG. Last time I did this, I used action filters to export and import ModelState via TempData, so that I could return a RedirectResult and the validation errors would still be there when the user got the view. I tested that scenario by verifying two things:

    a) that the ExportModelStateAttribute I had written was applied (among tests for my controller)
    b) that the attribute worked (among tests for action filter attributes)

    However, in BDD I've understood I should be even more concerned with behavior, and even less with implementation. This means I should probably just verify that the model state is in TempData when the action has finished executing - not necessarily that it's done via an attribute. To further complicate things, attributes are not run when calling the action directly in the test, so I can't just call the action and see if the job's been done. How should I spec/test this in MSpec?
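
    One hedged way to frame it in MSpec is to spec the filter itself against a hand-built context, which stays at the behavior level ("model state ends up in TempData") without invoking the action. This is a sketch only: the stub controller, the "ModelState" key, and the assumption that the filter overrides OnActionExecuted are all illustrative, not the poster's actual code.

        using Machine.Specifications;
        using System.Web.Mvc;

        [Subject(typeof(ExportModelStateAttribute))]
        public class when_an_action_with_the_export_filter_has_executed
        {
            static ExportModelStateAttribute filter;
            static ActionExecutedContext context;

            Establish context_setup = () =>
            {
                var controller = new StubController();   // hypothetical Controller stub
                controller.ViewData.ModelState.AddModelError("Name", "required");
                filter = new ExportModelStateAttribute();
                context = new ActionExecutedContext
                {
                    Controller = controller,
                    Result = new RedirectResult("/anywhere")
                };
            };

            Because of = () => filter.OnActionExecuted(context);

            It should_export_model_state_to_temp_data = () =>
                context.Controller.TempData.ContainsKey("ModelState").ShouldBeTrue();
        }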

  • Unit Testing-- fundamental goal?

    - by David
    My co-workers and I had a bit of a disagreement last night about unit testing in our PHP/MySQL application.

    Half of us argued that when unit testing a function within a class, you should mock everything outside of that class and its parents. The other half of us argued that you SHOULDN'T mock anything that is a direct dependency of the class either.

    The specific example was our logging mechanism, which happened through a static Logging class, and we had a number of Logging::log() calls in various locations throughout our application. The first half of us said the logging mechanism should be faked (mocked) because it would be tested in the Logging unit tests. The second half argued that we should include the original Logging class in our unit tests, so that if we make a change to our logging interface, we'll be able to see if it creates problems in other parts of the application due to failing to update the call interface.

    So I guess the fundamental question is: do unit tests serve to test the functionality of a single unit in a closed environment, or to show the consequences of changes to a single unit in a larger environment? If it's one of these, how do you accomplish the other?

  • Is there a supplementary guide/answer key for ruby koans?

    - by corroded
    I have recently tried sharpening my Rails skills with this tool: http://github.com/edgecase/ruby_koans but I am having trouble passing some tests. Also, I am not sure if I'm doing some things correctly: since the objective is just to pass the test, there are a lot of ways of passing it, and I may be doing something that isn't up to standards. Is there a way to confirm if I'm doing things right?

    A specific example, in about_nil:

        def test_nil_is_an_object
          assert_equal __, nil.is_a?(Object), "Unlike NULL in other languages"
        end

    So is it telling me to check if that second clause is equal to an object (so I can say nil is an object), or just to put

        assert_equal true, nil.is_a?(Object)

    because the statement is true? And the next test:

        def test_you_dont_get_null_pointer_errors_when_calling_methods_on_nil
          # What happens when you call a method that doesn't exist. The
          # following begin/rescue/end code block captures the exception and
          # make some assertions about it.
          begin
            nil.some_method_nil_doesnt_know_about
          rescue Exception => ex
            # What exception has been caught?
            assert_equal __, ex.class

            # What message was attached to the exception?
            # (HINT: replace __ with part of the error message.)
            assert_match(/__/, ex.message)
          end
        end

    I'm guessing I should put a "No method error" string in the assert_match, but what about the assert_equal?
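
    For what it's worth, a hedged sketch of how these two koans are usually completed - nil really is an Object in Ruby, and calling an undefined method on nil raises NoMethodError:

        # sketch: the blanks filled in
        assert_equal true, nil.is_a?(Object), "Unlike NULL in other languages"

        assert_equal NoMethodError, ex.class
        assert_match(/undefined method/, ex.message)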

  • How to repeatedly show a Dialog with PyGTK / Gtkbuilder?

    - by Julian
    I have created a PyGTK application that shows a dialog when the user presses a button. The dialog is loaded in my __init__ method with:

        builder = gtk.Builder()
        builder.add_from_file("filename")
        builder.connect_signals(self)
        self.myDialog = builder.get_object("dialog_name")

    In the event handler, the dialog is shown with the command self.myDialog.run(), but this only works once, because after run() the dialog is automatically destroyed. If I click the button a second time, the application crashes.

    I read that there is a way to use show() instead of run() where the dialog is not destroyed, but I feel like this is not the right way for me, because I would like the dialog to behave modally and to return control to the code only after the user has closed it.

    Is there a simple way to repeatedly show a dialog using the run() method and gtk.Builder? I tried reloading the whole dialog using the builder, but that did not really seem to work - the dialog was missing all child elements (and I would prefer to have to use the builder only once, at the beginning of the program).

    [SOLUTION] As pointed out by the answer below, using hide() does the trick, but one has to take care: the dialog is in fact destroyed if one does not catch its "delete-event". A simple example that works is:

        import pygtk
        import gtk

        class DialogTest:
            def rundialog(self, widget, data=None):
                self.dia.show_all()
                result = self.dia.run()

            def destroy(self, widget, data=None):
                gtk.main_quit()

            def closedialog(self, widget, data=None):
                self.dia.hide()
                return True

            def __init__(self):
                self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
                self.window.connect("destroy", self.destroy)

                self.dia = gtk.Dialog('TEST DIALOG', self.window,
                                      gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT)
                self.dia.vbox.pack_start(gtk.Label('This is just a Test'))
                self.dia.connect("delete-event", self.closedialog)

                self.button = gtk.Button("Run Dialog")
                self.button.connect("clicked", self.rundialog, None)
                self.window.add(self.button)
                self.button.show()
                self.window.show()

        if __name__ == "__main__":
            testApp = DialogTest()
            gtk.main()

  • Attempting to Convert Byte[] into Image... but are there platform issues involved?

    - by user305535
    Greetings,

    Currently, I'm attempting to develop an application that takes a byte array that is streamed to us from a Linux C-language program across a TCPClient (stream) and reassembles it back into an image/jpg. The "sending" application was developed by an off-site developer who claims that the image reassembles back into an image without any problems or errors in his test environment (all Linux)... However, we are not so fortunate.

    I (believe) we successfully get all of the data sent, storing it as a string (this lets us append the stream until it is complete), and then we convert it back into a byte[]. This appears to be working fine... But when we take the byte[] we get from the streaming (and our string assembly) and try to convert it into an image using System.Drawing.Image.FromStream(), we get errors.

    Anyone have any idea what we're doing wrong? Or does anyone know if this is a cross-platform issue? We're developing our app for Windows XP and C#/.NET, but the off-site developer did his work in C on Linux... Perhaps there's some difference as to how each operating system converts images into byte arrays?

    Anyway, here's the code for converting our received byte array (from the TCPClient stream) into an image. This code works when we send an image from a test machine we built that runs on XP, but not from the Linux box...

        System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
        byte[] imageBytes = encoding.GetBytes(data);

        // Convert byte[] to Image
        MemoryStream ms = new MemoryStream(imageBytes, 0, imageBytes.Length);
        ms.Write(imageBytes, 0, imageBytes.Length);
        System.Drawing.Image image = System.Drawing.Image.FromStream(ms, false);
        // ^ DIES here, throws a System.ArgumentException: Parameter is not valid.

    Any advice, suggestions, theories, or HELP would be GREATLY appreciated! Please let me know???

    Best wishes all! Thanks in advance!
    Greg
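
    One hedged observation: round-tripping binary JPEG data through a string with ASCIIEncoding is lossy - every byte above 127 comes back as '?' - so the image may be corrupted before FromStream() ever sees it, regardless of platform. A minimal sketch that buffers the raw bytes instead, assuming stream is the TcpClient's NetworkStream:

        // sketch: accumulate the raw bytes; never convert image data to string
        using (MemoryStream ms = new MemoryStream())
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                ms.Write(buffer, 0, read);
            ms.Position = 0;
            System.Drawing.Image image = System.Drawing.Image.FromStream(ms);
        }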

  • Dynamic stack allocation in C++

    - by Poni
    I want to allocate memory on the stack. I've heard of _alloca / alloca and I understand that these are compiler-specific, which I don't like. So I came up with my own solution (which might have its own flaws) and I want you to review/improve it so that once and for all we'll have this code working:

        /*#define allocate_on_stack(pointer, size) \
        __asm \
        { \
            mov [pointer], esp; \
            sub esp, [size]; \
        }*/

        /*#define deallocate_from_stack(size) \
        __asm \
        { \
            add esp, [size]; \
        }*/

        void test()
        {
            int buff_size = 4 * 2;
            char *buff = 0;

            __asm
            {
                // allocate
                mov [buff], esp;
                sub esp, [buff_size];
            }

            // playing with the stack-allocated memory
            for(int i = 0; i < buff_size; i++)
                buff[i] = 0x11;

            __asm
            {
                // deallocate
                add esp, [buff_size];
            }
        }

        void main()
        {
            __asm int 3h;
            test();
        }

    Compiled with VC9. What flaws do you see in it? Me, for example - I'm not sure that subtracting from ESP is the solution for "any kind of CPU". Also, I'd like to make the commented-out macros work, but for some reason I can't.
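
    One flaw that stands out, offered as a hedged review comment: the allocate block records ESP before subtracting, so buff ends up pointing at the old top of stack, and the loop then writes above the reserved region. A corrected sketch of just that step:

        __asm
        {
            // reserve the block first, then take its (lower) starting address
            sub esp, [buff_size];
            mov [buff], esp;
        }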

  • Moving front page (cursebird or foursquare)

    - by Dan Samuels
    OK, so I'll be honest: I have a good amount of experience with PHP/MySQL, I've just started learning jQuery, and I've done very little (but some) work with Ajax. So using the terms ajax/jquery interchangeably is a bit confusing to me.

    Anyway, as the title suggests, I have a website with 5 items, and I want them to move (meaning, if a more recent one is entered, remove the last item and put the new one on top). They are the 5 most recent items in the database table. Now, I've coded jQuery as a test so it fades out the last one, the whole thing moves down, makes room at the top, and fades in a new one. However, it's a test and has zero interaction with the database; the one that fades in is just in a hidden div. So the jQuery part is taken care of.

    I'm unsure how to go about the rest. I was thinking maybe have Ajax check a page off the site that has those 5 items in raw format, and if they change then refresh? Not looking for a "plz code 4 me" answer, just the concept of how it would work, or some links to get off to the right start.

    edit - Also, the 5 items are ranked, so if I click item 3 I need it to move above item 2 refreshlessly, so this causes a whole other issue I assume.
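
    As a hedged sketch of the polling concept (the /latest.php endpoint and the JSON shape are made up for illustration): keep the newest item's id in a variable, ask the server for the newest item every few seconds, and only touch the DOM when the id moves:

        // sketch: poll for a newer item; prepend it and drop the last one
        var newestId = 5;   // id of the item currently on top
        setInterval(function () {
            $.getJSON('/latest.php', function (item) {
                if (item.id > newestId) {
                    newestId = item.id;
                    $('#items li:last').fadeOut(function () { $(this).remove(); });
                    $(item.html).hide().prependTo('#items').fadeIn();
                }
            });
        }, 5000);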

  • Same code, not the same place, different results

    - by Tom
    Hi! I'm trying to do something pretty simple, but I don't understand why the second bit of code is giving me a bad access error...

    First:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";

            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero
                                               reuseIdentifier:CellIdentifier] autorelease];
            }

            // Tom: Converting the row int to a string and adding 1 to it
            // so that the rows fit with the array indexes.
            NSString *key = [NSString stringWithFormat:@"%d", indexPath.row+1];
            NSArray *array = [self.tableDataSource objectForKey:key];

            NSLog(@"%@", key);
            NSLog(@"%@", [array description]);

            cell.textLabel.text = [array objectAtIndex:1];
            return cell;
        }

    That code gives me exactly what I want:

        2010-03-18 15:18:12.884 PremierSoins[28005:40b] 1
        2010-03-18 15:18:12.885 PremierSoins[28005:40b] ( 7, Blessures )
        2010-03-18 15:18:12.892 PremierSoins[28005:40b] 2
        2010-03-18 15:18:12.893 PremierSoins[28005:40b] ( 18, "Test 2" )

    But then, the second:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            NSString *key = [NSString stringWithFormat:@"%d", indexPath.row+1];
            NSArray *array = [self.tableDataSource objectForKey:key];

            NSLog(@"%@", key);
            NSLog(@"%@", [array description]);
        }

    is only able to get me the key; the description of the array makes it crash...

    tableDataSource is an NSMutableDictionary containing multiple arrays, like this:

        {
            1 = ( 7, Blessures );
            2 = ( 18, "Test 2" );
        }

    Do you have any clue? I've been looking at this since yesterday... Thanks!
    Tom
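
    Purely a hedged guess from the symptoms (the same lookup works while cells are built, then crashes on the first touch): under manual reference counting this is the classic signature of tableDataSource being over-released - for example, assigned from an autoreleased object without retaining it. Worth checking that the assignment goes through a retaining property; the path variable below is hypothetical:

        // sketch: a retain-style property keeps the dictionary alive;
        // a bare ivar assignment of an autoreleased object would not
        self.tableDataSource = [NSMutableDictionary dictionaryWithContentsOfFile:path];
        // rather than:
        // tableDataSource = [NSMutableDictionary dictionaryWithContentsOfFile:path];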

  • Preserve HTML font-size when iPhone orientation changes from portrait to landscape

    - by DShultz
    I have a mobile web application with an unordered list containing multiple list items, each with a hyperlink inside the li:

        <ul>
            <li id="home" class="active">
                <a href="home.html">HOME</a>
            </li>
            <li id="home" class="active">
                <a href="test.html">TEST</a>
            </li>
        </ul>

    My question is: how can I format the hyperlinks so that they DON'T change size when viewed on an iPhone and the accelerometer switches from portrait to landscape? Right now I have the hyperlink font size spec'ed at 14px, but when switching to landscape it blows way up to around 20px. I want the font-size to stay the same. Here is the selector:

        ul li a {
            font-size: 14px;
            text-decoration: none;
            color: #cc9999;
        }
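
    The usual culprit here, offered as a hedged suggestion since it can't be confirmed from the snippet alone, is Mobile Safari's automatic text inflation on rotation rather than anything in the list markup; it can be switched off with a vendor property:

        /* sketch: disable Mobile Safari's font boosting when rotating */
        html { -webkit-text-size-adjust: none; }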

  • Write raw struct contents (bytes) to a file in C. Confused about actual size written

    - by d11wtq
    Basic question, but I expected this struct to occupy 13 bytes of space (1 for the char, 12 for the 3 unsigned ints). Instead, sizeof(ESPR_REL_HEADER) gives me 16 bytes.

        typedef struct {
            unsigned char version;
            unsigned int root_node_num;
            unsigned int node_size;
            unsigned int node_count;
        } ESPR_REL_HEADER;

    What I'm trying to do is initialize this struct with some values and write the data it contains (the raw bytes) to the start of a file, so that when I open this file later I can reconstruct this struct and gain some metadata about what the rest of the file contains.

    I'm initializing the struct and writing it to the file like this:

        int esprime_write_btree_header(FILE * fp, unsigned int node_size) {
            ESPR_REL_HEADER header = {
                .version       = 1,
                .root_node_num = 0,
                .node_size     = node_size,
                .node_count    = 1
            };

            return fwrite(&header, sizeof(ESPR_REL_HEADER), 1, fp);
        }

    where node_size is currently 4 while I experiment. The file contains the following data after I write the struct to it:

        -bash$ hexdump test.dat
        0000000 01 bf f9 8b 00 00 00 00 04 00 00 00 01 00 00 00
        0000010

    I expect it to actually contain:

        -bash$ hexdump test.dat
        0000000 01 00 00 00 00 04 00 00 00 01 00 00 00
        0000010

    Excuse the newbieness - I am trying to learn :) How do I efficiently write just the data components of my struct to a file?
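
    The three extra bytes are alignment padding inserted after version so that each unsigned int starts on a 4-byte boundary. A hedged sketch of one way to get a stable 13-byte layout on disk - writing the members individually so the padding never reaches the file:

        /* sketch: write fields one by one; returns 1 on success, 0 on failure */
        int esprime_write_btree_header2(FILE *fp, const ESPR_REL_HEADER *h) {
            return fwrite(&h->version,       sizeof h->version,       1, fp) == 1
                && fwrite(&h->root_node_num, sizeof h->root_node_num, 1, fp) == 1
                && fwrite(&h->node_size,     sizeof h->node_size,     1, fp) == 1
                && fwrite(&h->node_count,    sizeof h->node_count,    1, fp) == 1;
        }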

  • How can I sign a Windows Mobile application for internal use?

    - by AR
    I'm developing a Windows Mobile application for internal company use, using the Windows Mobile 6 Professional SDK. Same old story: I've developed and tested on the emulator and all is well, but as soon as I deploy to a device I get an UnauthorizedAccessException when writing files or creating directories.

    I'm aware that an application installed to a device needs to be signed, but I'm running into roadblocks at every turn:

    - Using the project properties 'Devices' window, I select 'Sign the project output with this certificate' and choose one of the sample certificates from the SDK. This results in a build error when running SignTool: "The signer's certificate is not valid for signing".
    - If I try to run SignTool.exe from the command line, I get an error telling me to run SignTool.exe from a location in the system's PATH.
    - I can't use the 'Signing' tab in the Project Properties to create a test certificate - this is greyed out (presumably for WinMobile projects?).

    If at all possible, I would like to avoid having to go through VeriSign or the like to get a Mobile2Market certificate. If I have to go this route for a final version that's fine, but I need to at least be able to test the app on real devices. Any advice would be most welcome!

  • Typical Hadoop setup for remote job submission

    - by Artii
    So I am still a bit new to Hadoop and am currently in the process of setting up a small test cluster on Amazon AWS. My question relates to some tips on structuring the cluster so that it is possible to submit jobs from remote machines.

    Currently I have 5 machines. 4 of them are basically the Hadoop cluster with the NameNodes, YARN etc. One machine is used as a manager machine (Cloudera Manager). I am going to describe my thinking process on the setup, and if anyone can chime in on the points I am not clear on, that would be great.

    I was thinking about the best setup for a small cluster, so I decided to expose only the manager machine and probably use it to submit all the jobs. The other machines will see each other etc, but not be accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about it, so if anyone could point me in the right direction that would be great.

    Another big point is: I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this setup either. Do I need to have Hadoop installed on that machine in order to use the normal hadoop commands, and to write/submit jobs, say, from Eclipse or something similar?

    So to sum it up, my questions are:

    1. Is this an OK setup for a small test cluster?
    2. How can I go about using one exposed machine to submit/route jobs to the cluster, without having any of the Hadoop nodes on it?
    3. How do I set up a client machine to submit jobs to a remote cluster, and is there an example of how to do it on Windows? Also, are there any reasons not to use Windows as a client machine in this setup?

    Thanks - I would greatly appreciate any advice or help on this.
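
    As a hedged illustration of the client side of question 3: a machine that only submits jobs typically needs just the Hadoop client libraries plus configuration files pointing at the cluster's exposed endpoints. The hostname below is an assumption standing in for the exposed manager machine:

        <!-- sketch: client-side core-site.xml / yarn-site.xml fragment -->
        <configuration>
          <property>
            <name>fs.defaultFS</name>
            <value>hdfs://manager.example.com:8020</value>
          </property>
          <property>
            <name>yarn.resourcemanager.address</name>
            <value>manager.example.com:8032</value>
          </property>
        </configuration>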

  • Loose Coupling of Components

    - by David
    I have created a class library (assembly) that provides messaging: email and SMS. This class library defines an interface IMessenger, which the classes EmailMessage and SmsMessage both implement. I see this as a general library that would be part of my infrastructure layer and would/can be used across any development.

    Now, in my application layer I have a class that requires a messaging component, and I obviously want to use the messaging library that I have created. Additionally, I will be using an IoC container (Spring.NET) to allow me to inject my implementation, i.e. either email or SMS. Therefore, I want to program against an interface in my application layer class:

    - Do I then need to reference my message class library from my application layer class?
    - Is this tightly coupling my application layer class to my message class library?
    - Should I be defining the interface IMessenger in a separate library?
    - Or should I be doing something else?
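
    A hedged sketch of the usual shape of the answer: IMessenger moves into a small contracts assembly that both layers reference, so the application layer never references the concrete messaging implementations, and the container supplies one at runtime. The assembly and type names below are illustrative, not from the post:

        <!-- sketch: Spring.NET object definition mapping the interface
             to one concrete messenger -->
        <objects xmlns="http://www.springframework.net">
          <object id="messenger"
                  type="MyCompany.Messaging.EmailMessage, MyCompany.Messaging" />
        </objects>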

  • Registry entry using VC++, Data corrupting

    - by sijith
    Hi, I want to write the system time to the registry. I did it like the code below, but only some null characters are getting there. When I instead give

        LPCTSTR data = TEXT("24/3/2010\0");
        LONG setRes = RegSetValueEx(hkey, value, 0, REG_SZ, (LPBYTE)data, 100);

    the value is successfully added to the registry. How do I trace the issue? If possible, please check my code:

        #include <windows.h>
        #include <stdio.h>
        #include <tchar.h>

        void Regkey::create_Registry()
        {
            HKEY hkey;
            DWORD dwDisposition, lpData;
            SYSTEMTIME time;

            GetLocalTime(&time);
            int hour = time.wHour;
            if (hour > 12)
                hour -= 12;

            char szData[20];
            sprintf(szData, "%02d/%02d/%04d", time.wDay, time.wMonth, time.wYear);

            if (RegCreateKeyEx(HKEY_LOCAL_MACHINE, TEXT("Software\\Sijith\\Test"), 0, NULL,
                               0, 0, NULL, &hkey, &dwDisposition) == ERROR_SUCCESS)
            {
                LPCTSTR sk = TEXT("Software\\Sijith\\Test");
                LONG openRes = RegOpenKeyEx(HKEY_LOCAL_MACHINE, sk, 0, KEY_ALL_ACCESS, &hkey);

                LPCTSTR value = TEXT("CheckSoftwareKey");
                LONG setRes = RegSetValueEx(hkey, value, 0, REG_SZ, (CONST BYTE *)szData,
                                            sizeof(TCHAR) * (_tcslen(szData) + 1));
                RegCloseKey(hkey);
            }
        }

    Output:

        value name: CheckSoftwareKey
        valueData: ?????
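
    A hedged guess at the root cause: szData is a narrow char buffer, but the TEXT/TCHAR usage suggests a Unicode build, so RegSetValueEx receives narrow bytes with a wide-character length for a REG_SZ value - hence the garbage. A minimal sketch keeping everything in TCHARs:

        // sketch: keep the date string in TCHARs so REG_SZ gets what it expects
        TCHAR szData[20];
        _stprintf(szData, TEXT("%02d/%02d/%04d"), time.wDay, time.wMonth, time.wYear);
        RegSetValueEx(hkey, value, 0, REG_SZ, (CONST BYTE *)szData,
                      (DWORD)(sizeof(TCHAR) * (_tcslen(szData) + 1)));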

  • Implementing a 'Send Feedback' feature in a Java desktop application

    - by William
    I would like to implement a 'Send Feedback' option in a Java desktop application - one which will pop up a box for the user to enter a comment, then send it to us along with a screenshot of the application window. What would be the best way to communicate the data to us? Two obvious solutions spring to mind:

    - Email: I'm thinking that the application would connect to an SMTP server set up by us, with the username/password somehow hidden in the code. SMTP over SSL for security (not of the data being sent, but of the SMTP username/password).
    - Web service: pretty self-explanatory.

    Which of these would be best, or is there a better alternative?
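
    For the web-service option, a hedged sketch of the client side - the endpoint URL is an assumption, and posting to an HTTPS endpoint also avoids shipping SMTP credentials inside the application:

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public final class FeedbackSender {
            // sketch: POST the serialized comment + screenshot to a feedback endpoint
            public static int send(byte[] payload) throws Exception {
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("https://example.com/feedback").openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "application/octet-stream");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(payload);
                }
                return conn.getResponseCode();  // e.g. 200 on success
            }
        }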

  • .Net 4.0 Is There a Business Layer "Technology" ?

    - by Ronny
    Hi, I have a theoretical question about the .NET framework. As I see it, Microsoft gave us a bunch of technologies for different layers: we have ADO.NET - and the much-improved Entity Framework - for data access, ASP.NET for the web UI, and even WCF for facades and SOA. But what about the middle - what do we have for the business layer? Is it just referenced DLLs? How do we deal with application pooling these days? I remember using COM+ 10 years ago because IIS couldn't handle the pressure. Is Spring.NET the best option available for injection?

    Thanks, Ronny

  • Sweave can't see a vector if run from a function ?

    - by PaulHurleyuk
    I have a function that sets a vector to a string, copies a Sweave document with a new name and then runs that Sweave. Inside the Sweave document I want to use the vector I set in the function, but it doesn't seem to see it. (Edit: I changed this function to use tempdir() as suggested by Dirk.)

    I created a Sweave file test_sweave.rnw:

        %
        \documentclass[a4paper]{article}
        \usepackage[OT1]{fontenc}
        \usepackage{Sweave}
        \begin{document}

        \title{Test Sweave Document}
        \author{gb02413}
        \maketitle

        <<>>=
        ls()
        Sys.time()
        print(paste("The chosen study was ",chstud,sep=""))
        @

        \end{document}

    and I have this function:

        onOK <- function(){
            chstud <- "test"
            message(paste("Chosen Study is ",chstud,sep=""))
            newfile <- paste(chstud,"_report",sep="")
            mypath <- paste(tempdir(),"\\",sep="")
            setwd(mypath)
            message(paste("Copying test_sweave.Rnw to ",
                          paste(mypath,newfile,".Rnw",sep=""),sep=""))
            file.copy("c:\\local\\test_sweave.Rnw",
                      paste(mypath,newfile,".Rnw",sep=""), overwrite=TRUE)
            Sweave(paste(mypath,newfile,".Rnw",sep=""))
            require(tools)
            texi2dvi(file = paste(mypath,newfile,".tex",sep=""), pdf = TRUE)
        }

    If I run the code from the function directly, the resulting file has this output for ls():

        > ls()
        [1] "chstud"  "mypath"  "newfile" "onOK"

    However, if I call onOK() I get this output:

        > ls()
        [1] "onOK"

    and the print(...chstud...) call generates an error. I suspect this is an environment problem, but I assumed that because the call to Sweave occurs within the onOK function, it would be in the same environment and would see all the objects created within the function. How can I get the Sweave process to see the chstud vector?

    Thanks, Paul.
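
    A hedged note on the likely cause: Sweave evaluates code chunks in the global environment by default, not in onOK's local environment, so chstud is simply out of scope there. A minimal workaround sketch, placed before the Sweave() call:

        # sketch: make the value visible where Sweave evaluates its chunks
        assign("chstud", chstud, envir = globalenv())
        Sweave(paste(mypath, newfile, ".Rnw", sep = ""))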

  • Are function-local typedefs visible inside C++0x lambdas?

    - by GMan - Save the Unicorns
    I've run into a strange problem. The following simplified code reproduces the problem in MSVC 2010 Beta 2:

        template <typename T>
        struct dummy
        {
            static T foo(void) { return T(); }
        };

        int main(void)
        {
            typedef dummy<bool> dummy_type;
            auto x = [](void){ bool b = dummy_type::foo(); };
            // auto x = [](void){ bool b = dummy<bool>::foo(); }; // works
        }

    The typedef I created locally in the function doesn't seem to be visible in the lambda. If I replace the typedef with the actual type, it works as expected. Here are some other test cases:

        // crashes the compiler, credit to Tarydon
        int main(void)
        {
            struct dummy {};
            auto x = [](void){ dummy d; };
        }

        // works as expected
        int main(void)
        {
            typedef int integer;
            auto x = [](void){ integer i = 0; };
        }

    I don't have g++ 4.5 available to test it right now. Is this some strange rule in C++0x, or just a bug in the compiler? From the results above, I'm leaning towards bug. (Though the crash is definitely a bug.) For now, I have filed two bug reports.

    All code snippets above should compile. The error has to do with using scope resolution on locally defined scopes (spotted by dvide), and the crash bug has to do with... who knows. :)

    Update: According to the bug reports, both have been fixed for the next release of Visual Studio 2010.

  • Assign Multiple Custom User Roles to a Custom Post Type

    - by OUHSD Webmaster
    Okay, here's the situation... I'm working on my business website. There will be a work/portfolio area. "Work" is a custom post type. "Designer" is a custom user role. "Client" is a custom user role.

    In creating a new "Work" post, I would like to be able to select both a "designer" and a "client" to assign to the piece of work, as I would assign an author to a regular ol' post. I tried the method from this answer but it did not work for me. I placed it in my functions.php file:

        add_filter('wp_dropdown_users', 'test');
        function test($output) {
            global $post;
            // Doing it only for the custom post type
            if($post->post_type == 'work') {
                $users = get_users(array('role'=>'designer'));
                // We're forming a new select with our values; you can add an option
                // with value 1 and text 'admin' if you want the admin to be listed
                // as well. Optionally you can use a simple string replace trick to
                // insert your options, if you don't want to override the defaults.
                $output .= "<select id='post_author_override' name='post_author_override' class=''>";
                foreach($users as $user) {
                    $output .= "<option value='".$user->id."'>".$user->user_login."</option>";
                }
                $output .= "</select>";
            }
            return $output;
        }

    Any help would be extremely appreciated!

  • [Ruby] Object assignment and pointers

    - by Jergason
    I am a little confused about object assignment and pointers in Ruby, and coded up this snippet to test my assumptions:

        class Foo
          attr_accessor :one, :two

          def initialize(one, two)
            @one = one
            @two = two
          end
        end

        bar = Foo.new(1, 2)
        beans = bar

        puts bar
        puts beans

        beans.one = 2

        puts bar
        puts beans
        puts beans.one
        puts bar.one

    I had assumed that when I assigned bar to beans, it would create a copy of the object, and modifying one would not affect the other. Alas, the output shows otherwise:

        ^_^[jergason:~]$ ruby test.rb
        #<Foo:0x100155c60>
        #<Foo:0x100155c60>
        #<Foo:0x100155c60>
        #<Foo:0x100155c60>
        2
        2

    I believe that the numbers have something to do with the address of the object, and they are the same for both beans and bar. When I modify beans, bar gets changed as well, which is not what I had expected. It appears that I am only creating a pointer to the object, not a copy of it. What do I need to do to copy the object on assignment, instead of creating a pointer?

    Tests with the Array class show some strange behavior as well:

        foo = [0, 1, 2, 3, 4, 5]
        baz = foo

        puts "foo is #{foo}"
        puts "baz is #{baz}"

        foo.pop

        puts "foo is #{foo}"
        puts "baz is #{baz}"

        foo += ["a hill of beans is a wonderful thing"]

        puts "foo is #{foo}"
        puts "baz is #{baz}"

    This produces the following wonky output:

        foo is 012345
        baz is 012345
        foo is 01234
        baz is 01234
        foo is 01234a hill of beans is a wonderful thing
        baz is 01234

    This blows my mind. Calling pop on foo affects baz as well, so it isn't a copy, but concatenating something onto foo only affects foo, and not baz. So when am I dealing with the original object, and when am I dealing with a copy? In my own classes, how can I make sure that assignment copies and doesn't make pointers? Help this confused guy out.
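
    A hedged summary sketch of what the two snippets show: assignment in Ruby always copies a reference, never the object; dup (or clone) makes a shallow copy; and += rebinds the variable to a brand-new array, while pop mutates the shared one in place:

        beans = bar.dup      # shallow copy: beans and bar are now separate objects
        beans.one = 2
        puts bar.one         # => 1; bar is untouched

        foo = foo + ["x"]    # what += expands to: a *new* array, rebound to foo,
                             # so baz still points at the old array
        foo.pop              # mutates in place: visible through every reference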

  • Testing InlineFormset clean methods

    - by Rory
    I have a Django project with 2 models, Structure and Bracket; a Bracket has a ForeignKey to a Structure (i.e. one-to-many: one Structure has many Brackets). I created a TabularInline for the admin site, so that there would be a table of Brackets on the Structure. I added a custom formset with a custom clean method to do some extra validation: you can't have a Bracket that conflicts with another Bracket on the same Structure, etc. The admin looks like this:

        class BracketInline(admin.TabularInline):
            model = Bracket
            formset = BracketInlineFormset

        class StructureAdmin(admin.ModelAdmin):
            inlines = [ BracketInline ]

        admin.site.register(Structure, StructureAdmin)

    That all works, and the validation works. However, now I want to write some unit tests for my complex formset validation logic. My first attempt to validate known-good values is:

        data = {'form-TOTAL_FORMS': '1', 'form-INITIAL_FORMS': '0',
                'form-MAX_NUM_FORMS': '',
                'form-0-field1': 'good-value', … }
        formset = BracketInlineFormset(data)
        self.assertTrue(formset.is_valid())

    However, that doesn't work and raises this exception:

        ======================================================================
        ERROR: testValid (appname.tests.StructureTestCase)
        ----------------------------------------------------------------------
        Traceback (most recent call last):
          File "/paht/to/project/tests.py", line 494, in testValid
            formset = BracketInlineFormset(data)
          File "/path/to/django/forms/models.py", line 672, in __init__
            self.instance = self.fk.rel.to()
        AttributeError: 'BracketInlineFormset' object has no attribute 'fk'
        ----------------------------------------------------------------------

    The Django documentation (for formset validation) implies one can do this. How come this isn't working? How do I test the custom clean()/validation for my inline formset?
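
    A hedged sketch of the usual fix: an inline formset class only gains its fk attribute when built through the inline factory, and it needs a parent instance to bind to. Note that the management-form keys would then need to use the formset's own prefix (bracket_set-TOTAL_FORMS and so on, under default naming) rather than form-:

        # sketch: build the formset class through the inline factory so it
        # learns the FK, then bind it to a saved parent instance
        from django.forms.models import inlineformset_factory

        BracketFormSet = inlineformset_factory(Structure, Bracket,
                                               formset=BracketInlineFormset)
        structure = Structure.objects.create()   # parent to attach to
        formset = BracketFormSet(data, instance=structure)
        self.assertTrue(formset.is_valid())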

  • problems with async jquery and loops

    - by Seth Vargo
    I am so confused. I am trying to append portals to a page by looping through an array and calling a method I wrote called addModule(). The method gets called the right number of times (checked via an alert statement), in the correct order, but only one or two of the portals actually populate. I have a feeling it's something with the loop and async, but it's easier explained with the code:

        moduleList = [['weather','test'],['test']];

        for(i in moduleList) {
            // the column markup was lost in the original post; reconstructed
            // here as a div whose id matches the column index used below
            $('#content').append('<div id="' + i + '"></div>');
            for(j in moduleList[i]) {
                addModule(i, moduleList[i][j]); // column, name
            }
        }

        function addModule(column, name) {
            alert('adding module ' + name);
            $.get('/modules/' + name.replace(' ','-') + '.php', function(data){
                $('#' + column).append(data);
            });
        }

    For each array in the main array, I append a new column, since that's what each sub-array is - a column of portals. Then I loop through that sub-array and call addModule() on that column and the name of that module (which works correctly). Something buggy happens in my addModule method, so that it only adds the first and last modules, or sometimes a middle one, or sometimes none at all... I'm so confused!
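
    A hedged sketch of how to make the failures visible rather than silent: $.get swallows HTTP errors, and replace with a string argument only replaces the first space, so some module URLs may simply 404. Rewriting addModule() with an error callback and a global regex replace would confirm or rule that out:

        // sketch: surface failed module loads instead of dropping them quietly
        function addModule(column, name) {
            $.ajax({
                url: '/modules/' + name.replace(/ /g, '-') + '.php',
                success: function (data) { $('#' + column).append(data); },
                error: function (xhr) {
                    alert('failed to load ' + name + ': HTTP ' + xhr.status);
                }
            });
        }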

  • wsdl xml parsing , maxlength problem after encoding of text

    - by MichaelD
    We are working together with another firm. Our application communicates with theirs through WCF on our side and a custom Java WSDL handler on their side. They specify the WSDL format, and one of the rules is that a specific string cannot contain more than 15 characters (normally it's 60, but I use 15 here for an easy example).

    When we try to send the following string to them, we get an error that the string is too long according to the WSDL: "example & test". This is a string of 14 characters, so it should be allowed; however, the Microsoft WCF parser translates it to "example &amp; test", and this encoded string is 18 characters long.

    Now, what is the standard behavior when checking a maxlength defined in a message - is it checked against the encoded message or the decoded message? I would think it's the decoded message, but I'm not sure. If it is the encoded message, how should we handle this so that we know how to split the string?
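
    For reference, a hedged sketch of the schema construct in question: an xs:maxLength facet is defined against the parsed value - after &amp; has been resolved back to & - so "example & test" should count as 14 characters, and a validator measuring the escaped wire form is arguably the side that needs fixing:

        <!-- sketch: maxLength applies to the parsed value ("example & test",
             14 chars), not the escaped form ("example &amp; test", 18) -->
        <xs:simpleType name="shortString">
          <xs:restriction base="xs:string">
            <xs:maxLength value="15"/>
          </xs:restriction>
        </xs:simpleType>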
