Search Results

Search found 337 results on 14 pages for 'joseph earl'.

Page 8/14 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Rails: how to check if an instance is in a scope?

    - by Joseph Silvashy
    I'm using Rails 3 and I can't seem to check whether a given instance is in a scope. See here:

      p = Post.find 6
      +----+----------+-------------------------+-------------------------+-------------------------+-----------+
      | id | title    | publish_date            | created_at              | updated_at              | published |
      +----+----------+-------------------------+-------------------------+-------------------------+-----------+
      | 6  | asfdfdsa | 2010-03-28 22:33:00 UTC | 2010-03-28 22:33:46 UTC | 2010-03-28 22:33:46 UTC | true      |
      +----+----------+-------------------------+-------------------------+-------------------------+-----------+

    I have a menu scope which looks like:

      scope :menu, where("published != ?", false).limit(4)

    When I run it I get:

      Post.menu.all
      +----+------------------+------------------+------------------+-------------------+-----------+
      | id | title            | publish_date     | created_at       | updated_at        | published |
      +----+------------------+------------------+------------------+-------------------+-----------+
      | 1  | Lorem ipsum      | 2010-03-23 07... | 2010-03-23 07... | 2010-03-28 21:... | true      |
      | 2  | fdasf            | 2010-03-28 21... | 2010-03-28 21... | 2010-03-28 21:... | true      |
      | 3  | Ruby’s Imple...  | 2010-03-28 21... | 2010-03-28 21... | 2010-03-28 21:... | true      |
      | 4  | dsaD             | 2010-03-28 22... | 2010-03-28 22... | 2010-03-28 22:... | true      |
      +----+------------------+------------------+------------------+-------------------+-----------+

    Which is correct, but if I try to check whether p is in the menu scope using:

      Post.menu.exists?(p)

    I get true when it should be false. What is the proper way to find out if a given instance of something is in a scope?
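
    One possible way to phrase the check in Rails 3 (a sketch only, assuming the Post model and menu scope shown above): restrict the scope to the record's id so the existence query covers just that row.

      # Sketch: ask whether any row of the scope matches this record's primary key.
      Post.menu.where(:id => p.id).exists?
      # exists? also accepts a primary key directly:
      Post.menu.exists?(p.id)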

    Read the article

  • Can you force a crash if a write occurs to a given memory location with finer than page granularity?

    - by Joseph Garvin
    I'm writing a program that for performance reasons uses shared memory (alternatives have been evaluated, and they are not fast enough for my task, so suggestions to not use it will be downvoted). In the shared memory region I am writing many structs of a fixed size. There is one program responsible for writing the structs into shared memory, and many clients that read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read-only to the clients.

    Because clients need to change that one member, they can't map the shared memory region as read-only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed.

    I can try to stop clients from overwriting by declaring the members in the header they use as const, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting them. I can insert canaries, but then I have to constantly pay the cost of checking them. Instead of storing the reference count member directly, I could store a pointer to the actual data in a separately mapped write-only page, while keeping the structs in read-only mapped pages. This will work: the OS will force my application to crash if I try to write to the pointed-to data. But indirect storage can be undesirable when trying to write lock-free algorithms, because needing to follow another level of indirection can change whether something can be done atomically.

    Is there any way to mark smaller areas of memory such that writing to them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only 4 at a time on 32-bit x86, and each one could only cover part of the struct because they're limited to 4 bytes. It'd also make my program painful to debug ;)
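
    For reference, a minimal sketch of the page-granularity baseline this question is trying to beat (POSIX mprotect; the function and variable names here are invented for illustration, not from the question):

      #include <cstddef>
      #include <sys/mman.h>

      // 'records_base' must be page-aligned (e.g. the start of the shm mapping) and
      // 'records_bytes' a multiple of the page size.
      void make_records_read_only(void* records_base, std::size_t records_bytes)
      {
          // Any stray client write into this range now faults immediately (SIGSEGV).
          mprotect(records_base, records_bytes, PROT_READ);
      }
      // The writable reference counts would live in a separate PROT_READ|PROT_WRITE mapping,
      // at the cost of the extra indirection described above; mprotect itself cannot protect
      // anything smaller than a page.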

    Read the article

  • Why does std::cout convert volatile pointers to bool?

    - by Joseph Garvin
    If you try to cout a volatile pointer, even a volatile char pointer where you would normally expect cout to print the string, you will instead simply get '1' (assuming the pointer is not null, I think). I assume the output stream operator<< is template specialized for volatile pointers, but my question is, why? What use case motivates this behavior? Example code:

      #include <iostream>
      #include <cstring>

      int main()
      {
          char x[500];
          std::strcpy(x, "Hello world");
          int y;
          int *z = &y;
          std::cout << x << std::endl;
          std::cout << (char volatile*)x << std::endl;
          std::cout << z << std::endl;
          std::cout << (int volatile*)z << std::endl;
          return 0;
      }

    Output:

      Hello world
      1
      0x8046b6c
      1
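
    For what it's worth, a small sketch of the mechanics behind the '1' and the usual workaround: none of the char*/void* insertion overloads accept a volatile-qualified pointer (volatile cannot be implicitly dropped), so the built-in pointer-to-bool conversion is the only candidate left, and the bool overload prints 1. Casting the qualifier away restores the string output:

      #include <iostream>

      int main()
      {
          volatile char msg[] = "Hello world";
          std::cout << const_cast<const char*>(msg) << std::endl;  // prints the string
          std::cout << msg << std::endl;                           // prints 1 (bool path)
          return 0;
      }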

    Read the article

  • How do you implement Software Transactional Memory?

    - by Joseph Garvin
    In terms of actual low level atomic instructions and memory fences (I assume they're used), how do you implement STM? The part that's mysterious to me is that given some arbitrary chunk of code, you need a way to go back afterward and determine if the values used in each step were valid. How do you do that, and how do you do it efficiently? This would also seem to suggest that just like any other 'locking' solution you want to keep your critical sections as small as possible (to decrease the probability of a conflict), am I right? Also, can STM simply detect "another thread entered this area while the computation was executing, therefore the computation is invalid" or can it actually detect whether clobbered values were used (and thus by luck sometimes two threads may execute the same critical section simultaneously without need for rollback)?
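
    A stripped-down sketch of the core mechanic (this is closer to a seqlock than a real STM, and every name in it is invented, but it shows how "were the values I used still valid?" gets answered): each location carries a version number that readers record and re-check, which is the per-location check that STM read-set validation generalizes.

      #include <atomic>

      struct VersionedCell {
          std::atomic<unsigned> version;   // even = quiescent, odd = a writer is mid-update
          std::atomic<int>      value;
      };

      // Read path: note the version, read the value, re-check the version. If it moved,
      // a writer interfered and the read retries -- clobbered values are detected,
      // not merely "someone else entered this region".
      int transactional_read(VersionedCell& cell)
      {
          for (;;) {
              unsigned before = cell.version.load();
              if (before & 1u) continue;                 // update in progress
              int observed = cell.value.load();
              if (cell.version.load() == before)
                  return observed;                       // snapshot was consistent
          }
      }

      // Write path (single writer assumed for this sketch): make the version odd, update,
      // then publish the next even version. Real STMs do this per written location at commit,
      // after validating the whole read set.
      void transactional_write(VersionedCell& cell, int v)
      {
          unsigned ver = cell.version.load();
          cell.version.store(ver + 1);
          cell.value.store(v);
          cell.version.store(ver + 2);
      }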

    Read the article

  • How to create an internal frame in the NetBeans Platform?

    - by joseph
    I created a class NewProject extends JInternalFrame. Then I created a New...Action named "NEW", placed in the File menu. I put the code

      NewProject p = new NewProject();
      p.setVisible(true);

    into the actionPerformed method of the action. But when I run the module and click "NEW" in the File menu, nothing appears. Where could the problem be?
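
    In the NetBeans Platform, windows are usually TopComponents rather than internal frames, but for plain Swing the piece most often missing in this situation is the JDesktopPane; a sketch with invented names:

      // Sketch: a JInternalFrame is only rendered once it has been added to a JDesktopPane;
      // setVisible(true) alone is not enough.
      import javax.swing.JDesktopPane;
      import javax.swing.JInternalFrame;

      public class InternalFrameDemo {
          public static void showIn(JDesktopPane desktop) {
              JInternalFrame frame = new JInternalFrame("New Project", true, true, true, true);
              frame.setSize(400, 300);
              desktop.add(frame);        // without this, nothing appears
              frame.setVisible(true);
          }
      }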

    Read the article

  • html source encode

    - by Joseph
    When I view source on my PHP page I get &quot; for a quote, but I would like an actual " to be used in the source code. I have no control over manually replacing it, so I'm wondering if there is a function to do such a thing.
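
    A sketch of the standard-library route, assuming the entities are in a string you can run through a function before output:

      <?php
      $html = 'He said &quot;hello&quot;';
      echo html_entity_decode($html, ENT_QUOTES);   // He said "hello"
      // htmlspecialchars_decode() is the narrower counterpart that handles only the
      // five basic HTML entities.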

    Read the article

  • addNotify does not work right

    - by joseph
    Hello. I call the addNotify() method in the class I posted here. The problem is that when I call addNotify() as it is in the code, setKeys(objs) does nothing: nothing appears in the explorer of the running app. But when I call addNotify() without the loop (for int....) and add only one item to the ArrayList, it shows that one item correctly. Does anybody know where the problem could be? See the code:

      class ProjectsNode extends Children.Keys {
          private ArrayList objs = new ArrayList();

          public ProjectsNode() {
          }

          @Override
          protected Node[] createNodes(Object o) {
              MainProject obj = (MainProject) o;
              AbstractNode result = new AbstractNode(new DiagramsNode(), Lookups.singleton(obj));
              result.setDisplayName(obj.getName());
              return new Node[] { result };
          }

          @Override
          protected void addNotify() {
              // this loop causes nothing to appear in my explorer.
              // but when I replace this loop by the single line
              // "objs.add(new MainProject("project1000"));", it shows that one item in the explorer
              for (int i = 0; i == 10; i++) {
                  objs.add(new MainProject("project1000"));
              }
              setKeys(objs);
          }
      }
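
    A sketch of what jumps out in the loop above (an observation on the posted code, not from the thread): the middle part of a for statement must be true for the body to run, and i == 10 is false on the very first pass, so the body never executes and setKeys receives an empty list. A conventional bound would look like this:

      for (int i = 0; i < 10; i++) {
          objs.add(new MainProject("project" + i));   // names here are placeholders
      }
      setKeys(objs);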

    Read the article

  • Save simple data in Magento's DB w/o Model

    - by Joseph Mastey
    I'm looking to save some data in the Magento database without hassling with creating a new EAV object (or even a DB table if I can avoid it). Is there any place that you all know about that Magento will let you store serialized data? If it matters, the data is a serialized set of SKUs that I need to retrieve. I know that I could create a new model, or possibly even create an attribute as a flag on each product, but those are both really overkill for my purposes. Thanks, Joe
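
    One lightweight option sometimes used for this kind of thing (a sketch under the assumption that a config-style key/value slot is acceptable; the config path below is made up): Magento 1.x's core_config_data table is reachable without declaring a new model or table.

      <?php
      // Sketch: stash the serialized SKU list under an invented config path.
      $skus = array('SKU-1', 'SKU-2');
      Mage::getConfig()->saveConfig('mymodule/storage/sku_list', serialize($skus));
      // note: the config cache may need refreshing before reads see the new value

      // Later:
      $skus = unserialize(Mage::getStoreConfig('mymodule/storage/sku_list'));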

    Read the article

  • Tracking upstream svn changes with git-svn and github?

    - by Joseph Turian
    How do I track upstream SVN changes using git-svn and github? I used git-svn to convert an SVN repo to git on github:

      $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
      $ cd osqa
      $ git remote add origin git@github.com:turian/osqa.git
      $ git push origin master

    I then made a few changes in my git repo, committed, and pushed to github. Now, I am on a new machine. I want to take upstream SVN changes, merge them with my github repo, and push them to my github repo. This documentation says: "If you ever lose your local copy, just run the import again with the same settings, and you'll get another working directory with all the necessary SVN metainfo." So I did the following. But none of the commands work as desired. How do I track upstream SVN changes using git-svn and github? What am I doing wrong?

      $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa
      $ cd osqa
      $ git remote add origin git@github.com:turian/osqa.git
      $ git push origin master
      To git@github.com:turian/osqa.git
       ! [rejected]        master -> master (non-fast forward)
      error: failed to push some refs to 'git@github.com:turian/osqa.git'

      $ git pull
      remote: Counting objects: 21, done.
      remote: Compressing objects: 100% (17/17), done.
      remote: Total 17 (delta 7), reused 9 (delta 0)
      Unpacking objects: 100% (17/17), done.
      From git@github.com:turian/osqa
       * [new branch]      master     -> origin/master
      From git@github.com:turian/osqa
       * [new tag]         master     -> master
      You asked me to pull without telling me which branch you want
      to merge with, and 'branch.master.merge' in your configuration file does not
      tell me either. Please name which branch you want to merge on the command line and
      try again (e.g. 'git pull <repository> <refspec>').
      See git-pull(1) for details on the refspec.
      ...

      $ /usr//lib/git-core/git-svn rebase
      warning: refname 'master' is ambiguous.
      First, rewinding head to replay your work on top of it...
      Applying: Added forum/management/commands/dumpsettings.py
      error: Ref refs/heads/master is at 6acd747f95aef6d9bce37f86798a32c14e04b82e but expected a7109d94d813b20c230a029ecd67801e6067a452
      fatal: Cannot lock the ref 'refs/heads/master'.
      Could not move back to refs/heads/master
      rebase refs/remotes/trunk: command returned error: 1
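
    For reference, a sketch of the usual git-svn round trip (standard commands, not a fix for this exact failure; the "refname 'master' is ambiguous" warning above suggests a tag named master also exists and may need to be removed or the branch addressed as refs/heads/master explicitly):

      git svn fetch                # bring down new upstream SVN revisions
      git svn rebase               # replay local commits on top of the SVN trunk
      git pull origin master       # merge whatever was previously pushed to GitHub
      git push origin master       # publish the combined history back to GitHub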

    Read the article

  • Rails "Load more..." instead of pagination.

    - by Joseph Silvashy
    I have a list of elements, and I've been using will_paginate up until now, but I'd like to have something like "load more..." at the bottom of the list. Is there an easy way to accomplish this using will_paginate or do I need to resort to some other method here? From what I know this is a better route anyhow because then I don't need a SQL count of the records. And it really doesn't matter if there are like 9,847 pages, nobody would need the records beyond the first couple pages anyhow.
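
    A sketch of how "load more" is often wired on top of will_paginate (route and model names here are placeholders): the controller keeps the normal paginate call, and the view renders a single link to the next page instead of page numbers.

      # Controller (sketch)
      def index
        @posts = Post.paginate(:page => params[:page], :per_page => 20)
      end

      # View (ERB sketch) -- append the response via JavaScript if the link is made :remote
      # <%= link_to "Load more...", posts_path(:page => @posts.next_page) if @posts.next_page %>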

    Read the article

  • Transpose a Collection

    - by Joseph Melettukunnel
    Hello, I have a list of the different sizes of a T-shirt, e.g. S, M, L. Since this might change per T-shirt (sometimes we just have e.g. M, L), we load this into a List of sizes. Since most DataGrids (xamDataGrid, WPF Toolkit DataGrid) need properties for binding to the columns, I'd like to somehow transpose my data. Does anyone have an idea how to do this? E.g. instead of having a List of Size { string sizeName, int available, int defect, int ordered }, laid out like this:

              Avail.  Defect  Ordered
      [S]     1       2       3
      [M]     1       2       3
      [L]     1       2       3

    I want an object which has the properties S, M, L containing the values like this:

              [S]  [M]  [L]
      Avail.   1    2    3
      Defect   1    2    3
      Ordered  1    2    3

    The problem here is that I don't know how many sizes will be available for the T-shirt; it might be 3, 4, or 10. Thanks for any help. Cheers. PS: Here is a mockup of how the final grid should look: http://img39.imageshack.us/img39/9161/multirowspangridfixedel.png
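
    One common way around the "properties per column" problem is to skip CLR properties entirely and build a DataTable at runtime, one column per size, then bind the grid to its DefaultView. A sketch, assuming Size exposes SizeName/Available/Defect/Ordered properties (names adapted from the question):

      using System.Collections.Generic;
      using System.Data;

      public static DataTable Transpose(IList<Size> sizes)
      {
          var table = new DataTable();
          table.Columns.Add("Metric");                      // row header: Avail./Defect/Ordered
          foreach (var s in sizes)
              table.Columns.Add(s.SizeName, typeof(int));   // one column per size: S, M, L, ...

          var avail = table.NewRow();   avail["Metric"] = "Avail.";
          var defect = table.NewRow();  defect["Metric"] = "Defect";
          var ordered = table.NewRow(); ordered["Metric"] = "Ordered";
          foreach (var s in sizes)
          {
              avail[s.SizeName] = s.Available;
              defect[s.SizeName] = s.Defect;
              ordered[s.SizeName] = s.Ordered;
          }
          table.Rows.Add(avail);
          table.Rows.Add(defect);
          table.Rows.Add(ordered);
          return table;                                     // grid.ItemsSource = table.DefaultView
      }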

    Read the article

  • Of these 3 methods for reading linked lists from shared memory, why is the 3rd fastest?

    - by Joseph Garvin
    I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work, I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops.

    Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: Why is the 3rd method fastest?

    Method 1: Round robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED':

      void method_one()
      {
          std::vector<Data*> channel_cursors;
          for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
          {
              Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
              channel_cursors.push_back(current_item);
          }

          while(true)
          {
              for(std::size_t i = 0; i < channel_list.size(); ++i)
              {
                  Data* current_item = channel_cursors[i];

                  ACQUIRE_MEMORY_BARRIER;
                  if(current_item->state_ == NOT_FILLED_YET) {
                      continue;
                  }

                  log_latency(current_item->tv_sec_, current_item->tv_usec_);

                  channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment));
              }
          }
      }

    Method 1 gave very low latency when the number of channels was small. But when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried...

    Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID on to the update list. Now we just need to monitor the single update list, and check the IDs we get from it.

      void method_two()
      {
          std::vector<Data*> channel_cursors;
          for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
          {
              Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
              channel_cursors.push_back(current_item);
          }

          UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

          while(true)
          {
              if(update_cursor->state_ == NOT_FILLED_YET) {
                  continue;
              }

              ::uint32_t update_id = update_cursor->list_id_;
              Data* current_item = channel_cursors[update_id];

              if(current_item->state_ == NOT_FILLED_YET) {
                  std::cerr << "This should never print." << std::endl; // it doesn't
                  continue;
              }

              log_latency(current_item->tv_sec_, current_item->tv_usec_);

              channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment));
              update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
          }
      }

    Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would often inexplicably give 8ms latency! Using gettimeofday it appears that the change in update_cursor->state_ was very slow to propagate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach...

    Method 3: Keep the update list, but loop over all the channels continuously, and within each iteration check whether the update list has updated. If it has, go with the number pushed onto it. If it hasn't, check the channel we've currently iterated to.

      void method_three()
      {
          std::vector<Data*> channel_cursors;
          for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
          {
              Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
              channel_cursors.push_back(current_item);
          }

          UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

          while(true)
          {
              for(std::size_t i = 0; i < channel_list.size(); ++i)
              {
                  std::size_t idx = i;

                  ACQUIRE_MEMORY_BARRIER;
                  if(update_cursor->state_ != NOT_FILLED_YET) {
                      //std::cerr << "Found via update" << std::endl;
                      i--;
                      idx = update_cursor->list_id_;
                      update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
                  }

                  Data* current_item = channel_cursors[idx];

                  ACQUIRE_MEMORY_BARRIER;
                  if(current_item->state_ == NOT_FILLED_YET) {
                      continue;
                  }

                  found_an_update = true;

                  log_latency(current_item->tv_sec_, current_item->tv_usec_);
                  channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment));
              }
          }
      }

    The latency of this method was as good as Method 1, but scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'found via update' part, it prints between EVERY LATENCY LOG MESSAGE. Which means things are only ever found on the update list! So I don't understand how this method can be faster than Method 2. The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/e3HuL0nr

    Read the article

  • PHP get overridden methods from child class

    - by Joseph Mastey
    Given the following case:

      <?php
      class ParentClass {
          public $attrA;
          public $attrB;
          public $attrC;

          public function methodA() {}
          public function methodB() {}
          public function methodC() {}
      }

      class ChildClass extends ParentClass {
          public $attrB;

          public function methodA() {}
      }

    How can I get a list of methods (and preferably class vars) that are overridden in ChildClass? Thanks, Joe
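
    A sketch using PHP's Reflection API: a method counts as overridden if it is declared on ChildClass and a method of the same name exists on ParentClass.

      <?php
      $parent = new ReflectionClass('ParentClass');
      $child  = new ReflectionClass('ChildClass');

      $overriddenMethods = array();
      foreach ($child->getMethods() as $method) {
          if ($method->getDeclaringClass()->getName() === 'ChildClass'
              && $parent->hasMethod($method->getName())) {
              $overriddenMethods[] = $method->getName();
          }
      }

      // The same idea works for redeclared properties via getProperties()/hasProperty().
      print_r($overriddenMethods);   // Array ( [0] => methodA )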

    Read the article

  • test_case files in rails components

    - by Joseph Misiti
    I noticed there are a bunch of test_case.rb files delivered in the Rails components:

      ./actionmailer-2.3.5/lib/action_mailer/test_case.rb
      ./actionpack-2.3.5/lib/action_controller/test_case.rb
      ./actionpack-2.3.5/lib/action_view/test_case.rb
      ./activerecord-2.3.5/lib/active_record/test_case.rb
      ./activesupport-2.3.5/lib/active_support/test_case.rb

    I am wondering how to execute these files; I can't seem to figure out how to do it.
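
    For context, a sketch of how these files are normally consumed in a Rails 2.3 app (they define base classes your own tests subclass, rather than scripts to run directly):

      # test/unit/post_test.rb -- hypothetical test file in an application
      require 'test_helper'

      class PostTest < ActiveSupport::TestCase   # base class from activesupport's test_case.rb
        test "the truth" do
          assert true
        end
      end

      # Run with:  rake test:units   (or: ruby -Itest test/unit/post_test.rb)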

    Read the article

  • WPF Border DesiredHeight

    - by Joseph Sturtevant
    The following Microsoft example code contains the following:

      <Grid>
        ...
        <Border Name="Content" ... >
          ...
        </Border>
      </Grid>
      <ControlTemplate.Triggers>
        <Trigger Property="IsExpanded" Value="True">
          <Setter TargetName="ContentRow" Property="Height"
                  Value="{Binding ElementName=Content,Path=DesiredHeight}" />
        </Trigger>
        ...
      </ControlTemplate.Triggers>

    When run, however, this code generates the following databinding error:

      System.Windows.Data Error: 39 : BindingExpression path error: 'DesiredHeight' property not found on 'object' ''Border' (Name='Content')'. BindingExpression:Path=DesiredHeight; DataItem='Border' (Name='Content'); target element is 'RowDefinition' (HashCode=2034711); target property is 'Height' (type 'GridLength')

    Despite this error, the code works correctly. I have looked through the documentation and DesiredHeight does not appear to be a member of Border. Can anyone explain where DesiredHeight is coming from? Also, is there any way to resolve/suppress this error so my program output is clean?

    Read the article

  • Do condition variables still need a mutex if you're changing the checked value atomically?

    - by Joseph Garvin
    Here is the typical way to use a condition variable:

      // The reader(s)
      lock(some_mutex);
      if(protected_by_mutex_var != desired_value)
          some_condition.wait(some_mutex);
      unlock(some_mutex);

      // The writer
      lock(some_mutex);
      protected_by_mutex_var = desired_value;
      unlock(some_mutex);
      some_condition.notify_all();

    But if protected_by_mutex_var is set atomically by, say, a compare-and-swap instruction, does the mutex serve any purpose (other than that pthreads and other APIs require you to pass in a mutex)? Is it protecting state used to implement the condition? If not, is it safe then to do this?

      // The writer
      protected_by_mutex_var = desired_value;
      some_condition.notify_all();

    With the writer never directly interacting with the reader's mutex? If so, is it even necessary that different readers use the same mutex?
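
    A sketch of the interleaving the mutex guards against (written with C++11 types for concreteness; the variable names follow the pseudocode above): if the writer changes the value without holding the mutex, the notify can land in the gap between the reader's check and its wait, and the reader then sleeps on a wakeup that has already happened.

      #include <condition_variable>
      #include <mutex>

      std::mutex some_mutex;
      std::condition_variable some_condition;
      int protected_by_mutex_var = 0;
      const int desired_value = 1;

      void reader()
      {
          std::unique_lock<std::mutex> lk(some_mutex);
          // (1) reader sees the old value...
          while (protected_by_mutex_var != desired_value)
              some_condition.wait(lk);         // (3) ...and only then blocks: the notify in (2) is lost
      }

      void careless_writer()                   // the "no mutex" variant from the question
      {
          protected_by_mutex_var = desired_value;
          some_condition.notify_all();         // (2) may fire between the reader's check and its wait
      }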

    Read the article

  • What is the best testing pattern for checking that parameters are being used properly?

    - by Joseph
    I'm using Rhino Mocks to try to verify that when I call a certain method, the method in turn will properly group items and then call another method. Something like this:

      //Arrange
      var bucketsOfFun = new BucketGame();
      var balls = new List<IBall>
      {
          new Ball { Color = Color.Red },
          new Ball { Color = Color.Blue },
          new Ball { Color = Color.Yellow },
          new Ball { Color = Color.Orange },
          new Ball { Color = Color.Orange }
      };

      //Act
      bucketsOfFun.HaveFunWithBucketsAndBalls(balls);

      //Assert ???

    Here is where the trouble begins for me. My method is doing something like this:

      public void HaveFunWithBucketsAndBalls(IList<IBall> balls)
      {
          //group all the balls together according to color
          var blueBalls = GetBlueBalls(balls);
          var redBalls = GetRedBalls(balls);
          // you get the idea

          HaveFunWithABucketOfBalls(blueBalls);
          HaveFunWithABucketOfBalls(redBalls);
          // etc etc with all the different colors
      }

      public void HaveFunWithABucketOfBalls(IList<IBall> colorSpecificBalls)
      {
          //doing some stuff here that i don't care about
          //for the test i'm writing right now
      }

    What I want to assert is that each time I call HaveFunWithABucketOfBalls that I'm calling it with a group of 1 red ball, then 1 blue ball, then 1 yellow ball, then 2 orange balls. If I can assert that behavior then I can verify that the method is doing what I want it to do, which is grouping the balls properly. Any ideas of what the best testing pattern for this would be?
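
    A sketch of one way to phrase that assertion with Rhino Mocks, assuming HaveFunWithABucketOfBalls can be made virtual so a partial mock of the class under test records the calls (the counts in the matchers follow the ball list above; requires using System.Linq):

      // Sketch: partial-mock BucketGame so calls to its own virtual method are recorded.
      var bucketsOfFun = MockRepository.GeneratePartialMock<BucketGame>();

      bucketsOfFun.HaveFunWithBucketsAndBalls(balls);

      // One call per color group, each with the expected number of balls of that color.
      bucketsOfFun.AssertWasCalled(g => g.HaveFunWithABucketOfBalls(
          Arg<IList<IBall>>.Matches(b => b.Count == 1 && b.All(x => x.Color == Color.Red))));
      bucketsOfFun.AssertWasCalled(g => g.HaveFunWithABucketOfBalls(
          Arg<IList<IBall>>.Matches(b => b.Count == 2 && b.All(x => x.Color == Color.Orange))));
      // ...and likewise for Blue and Yellow.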

    Read the article

  • Get recursive list of svn:ignore'd files

    - by Joseph Mastey
    I have an existing project repo which I use for project A, and has some files and directories excluded from it using svn:ignore. I want to start another project (project B), in a new repo, with approximately the same files ignored in it. How can I get a list of all files in the repo with svn:ignore set on them and the value of that property? I am using Ubuntu, so sed and grep away if that helps. Thanks, Joe
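
    A sketch of the usual one-liner for this (standard svn client options):

      # Recursively list every path that has svn:ignore set, together with its value:
      svn propget svn:ignore --recursive .
      # or, to see only which paths carry the property:
      svn proplist --recursive . | grep -B1 'svn:ignore'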

    Read the article

  • Stop an anchor from loading on javascript confirm

    - by Joseph Carrington
    I was under the impression that this was formed correctly, but here it is following the anchor's href (clicking through? what should I call this?) whether the user selects Cancel or OK.

      <script type="text/javascript">
      function myconfirm(my_string)
      {
          var agree = confirm('Are you sure you want to remove ' + my_string + '?');
          if(agree) {
              return true;
          } else {
              return false;
          }
      }
      </script>

    and the anchor:

      <a href="example.com/?remove=yes" onclick="myconfirm('my_string')">My String</a>
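
    A sketch of the standard fix: the onclick attribute has to return false to cancel the navigation, so the confirm result must be returned from the attribute itself.

      <!-- Returning the function's result lets a Cancel click (false) stop the navigation. -->
      <a href="example.com/?remove=yes" onclick="return myconfirm('my_string')">My String</a>

      <script type="text/javascript">
      // The if/else can also collapse to a single return:
      function myconfirm(my_string) {
          return confirm('Are you sure you want to remove ' + my_string + '?');
      }
      </script>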

    Read the article

  • Returning a complex data type from arguments with Rhino Mocks

    - by Joseph
    I'm trying to set up a stub with Rhino Mocks which returns a value based on the argument that is passed in. Example:

      //Arrange
      var car = new Car();
      var provider = MockRepository.GenerateStub<IDataProvider>();

      provider.Stub(x => x.GetWheelsWithSize(Arg<int>.Is.Anything))
              .Return(new List<IWheel>
              {
                  new Wheel { Size = ?, Make = Make.Michelin },
                  new Wheel { Size = ?, Make = Make.Firestone }
              });

      car.Provider = provider;

      //Act
      car.ReplaceTires();

      //Assert that the right tire size was used when replacing the tires

    The problem is that I want Size to be whatever was passed into the method, because I'm actually asserting later that the wheels are the right size. This is not to prove that the data provider works, obviously, since I stubbed it, but rather to prove that the correct size was passed in. How can I do this?
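
    A sketch of one way to do this with Rhino Mocks: a WhenCalled handler sees the actual invocation, so the stub can echo the size argument back into the returned wheels (Return(null) is only there to satisfy the stub's need for a declared return value).

      provider.Stub(x => x.GetWheelsWithSize(Arg<int>.Is.Anything))
              .Return(null)   // placeholder; overwritten below
              .WhenCalled(invocation =>
              {
                  var size = (int)invocation.Arguments[0];
                  invocation.ReturnValue = new List<IWheel>
                  {
                      new Wheel { Size = size, Make = Make.Michelin },
                      new Wheel { Size = size, Make = Make.Firestone }
                  };
              });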

    Read the article

  • Rails Deployment: moving static files to S3

    - by Joseph Silvashy
    Someone posted something similar but it didn't really solve the problem. I want to move all my static files (images, javascript, css) to an Amazon S3 bucket when I deploy my app, as well as rewrite those paths in my app, is there a simple way to accomplish this? or am I in for a huge amount of work here?
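
    One common division of labour for this (a sketch with a placeholder bucket name, not a full recipe): point Rails' asset helpers at the bucket so the paths they emit rewrite themselves, and push the files up as a deploy step (for example a Capistrano task running an S3 sync tool such as s3cmd against public/).

      # config/environments/production.rb -- image_tag / stylesheet_link_tag /
      # javascript_include_tag will then emit URLs on the bucket host instead of relative paths.
      config.action_controller.asset_host = "http://my-bucket.s3.amazonaws.com"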

    Read the article

< Previous Page | 4 5 6 7 8 9 10 11 12 13 14  | Next Page >