Search Results

Search found 11529 results on 462 pages for 'rvalue reference'.

Page 371 of 462

  • C++ operator lookup rules / Koenig lookup

    - by John Bartholomew
    While writing a test suite, I needed to provide an implementation of operator<<(std::ostream&... for Boost unit test to use. This worked:

        namespace theseus { namespace core {
            std::ostream& operator<<(std::ostream& ss, const PixelRGB& p) {
                return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")");
            }
        }}

    This didn't:

        std::ostream& operator<<(std::ostream& ss, const theseus::core::PixelRGB& p) {
            return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")");
        }

    Apparently, the second wasn't included in the candidate matches when g++ tried to resolve the use of the operator. Why (what rule causes this)? The code calling operator<< is deep within the Boost unit test framework, but here's the test code:

        BOOST_AUTO_TEST_SUITE(core_image)

        BOOST_AUTO_TEST_CASE(test_output) {
            using namespace theseus::core;
            BOOST_TEST_MESSAGE(PixelRGB(5,5,5)); // only compiles with operator<< definition inside theseus::core
            std::cout << PixelRGB(5,5,5) << "\n"; // works with either definition
            BOOST_CHECK(true); // prevent no-assertion error
        }

        BOOST_AUTO_TEST_SUITE_END()

    For reference, I'm using g++ 4.4 (though for the moment I'm assuming this behaviour is standards-conformant).
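
    The behaviour is standard: inside the Boost macros the call to operator<< is made from within a Boost namespace, and ordinary unqualified lookup stops at the first enclosing scope that declares any operator<<, so it never reaches the global namespace. Only argument-dependent lookup (ADL, a.k.a. Koenig lookup) can still find the overload, and ADL searches the namespaces associated with the argument types, i.e. theseus::core, not the global namespace. A minimal sketch of the effect (the framework namespace and log function are hypothetical stand-ins for the Boost internals):

        #include <iostream>

        namespace theseus { namespace core {
            struct PixelRGB { unsigned char r, g, b; };

            // Found via ADL because it lives in the same namespace as PixelRGB.
            std::ostream& operator<<(std::ostream& ss, const PixelRGB& p) {
                return ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")";
            }
        }}

        namespace framework {
            struct Dummy {};
            // This declaration hides any operator<< in the global namespace for
            // unqualified calls made inside namespace framework.
            std::ostream& operator<<(std::ostream& ss, const Dummy&) { return ss; }

            template <typename T>
            void log(const T& value) {
                // Unqualified call: ordinary lookup stops at framework::operator<<;
                // ADL then adds the overloads from std and from T's namespace.
                std::cout << value << "\n";
            }
        }

        int main() {
            theseus::core::PixelRGB p = {5, 5, 5};
            framework::log(p); // compiles only because operator<< is declared in theseus::core
        }

    Moving the overload to the global namespace makes framework::log(p) fail to compile, which matches what g++ reports for the Boost case.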

    Read the article

  • jQuery Checkbox Error

    - by Zack Fernandes
    Hello, I am working on a jQuery-based todo list interface, and have hit a bit of a wall. The jQuery I am working with is a bit hacked together from various tutorials I have read, as I'm a bit of a beginner.

        $('#todo input:checkbox').click(function(){
            var id = this.attr("value");
            if(!$(this).is(":checked")) {
                alert("Starting.");
                $.ajax({
                    type: "GET",
                    url: "/todos/check/"+id,
                    success: function(){ alert("It worked.") }
                });
            }
        })

    This is the HTML I am using:

        <div id="todo">
            <input type="checkbox" checked="yes" value="1"> Hello, world.
            <br />
        </div>

    Any help on this would be greatly appreciated. For reference, the reason I have alerts in the jQuery is for debugging. The reason I can tell the code isn't working is because I am not getting these alerts. Thanks.
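
    One likely culprit (offered as a sketch, not a confirmed diagnosis): inside the handler, this is a plain DOM element, so this.attr("value") throws a TypeError before the first alert can fire. Wrapping it in $(this), or reading this.value directly, avoids that:

        $('#todo input:checkbox').click(function () {
            var id = $(this).attr("value"); // or simply: var id = this.value;
            if (!$(this).is(":checked")) {
                alert("Starting.");
                $.ajax({
                    type: "GET",
                    url: "/todos/check/" + id,
                    success: function () { alert("It worked."); }
                });
            }
        });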

    Read the article

  • How to create TestContext for Spring Test?

    - by HDave
    Newcomer to Spring here, so pardon me if this is a stupid question. I have a relatively small Java library that implements a few dozen beans (no database or GUI). I have created a Spring bean configuration file that other Java projects use to inject my beans into their stuff. I am now for the first time trying to use Spring Test to inject some of these beans into my JUnit test classes (rather than simply instantiating them). I am doing this partly to learn Spring Test and partly to force the tests to use the same bean configuration file I provide for others. In the Spring documentation it says I need to create an application context using the "TestContext" class that comes with Spring. I believe this should be done in a Spring XML file that I reference via the @ContextConfiguration annotation on my test class:

        @ContextConfiguration({"/test-applicationContext.xml"})

    However, there is no hint as to what to put in the file! When I go to run my tests from within Eclipse it errors out saying "failed to load Application Context"... of course.
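
    A minimal sketch of one way to fill the gap (file names, bean ids and types below are placeholders, not taken from the question): test-applicationContext.xml usually just imports the production bean definitions, and the test class asks for the Spring JUnit 4 runner so the context actually gets built and injected.

        <!-- src/test/resources/test-applicationContext.xml -->
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans.xsd">
            <!-- reuse the same definitions the library already ships -->
            <import resource="classpath:my-library-beans.xml"/>
        </beans>

        // Test class
        import static org.junit.Assert.assertNotNull;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.test.context.ContextConfiguration;
        import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration({"/test-applicationContext.xml"})
        public class SomeBeanTest {

            @Autowired
            private SomeBean someBean; // injected from the imported configuration

            @Test
            public void beanIsWired() {
                assertNotNull(someBean);
            }
        }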

    Read the article

  • WCF with SSL- not finding localhost

    - by SteveCav
    Hi guys, I'm trying to get WCF to use SSL with ANYTHING for FIVE DAYS now. I've gone through countless walkthroughs, generated more certificates than a mail order diploma company, even tried hotfixes. After working with MS dev tools since VB1, I am now considering flipping burgers as a career option. WCF, as far as I can see, is a complete lemon. Anyway, to get to my actual question: if I run through this walkthrough: http://msdn.microsoft.com/en-us/library/ff648840.aspx I get to step 11 (adding the service reference) and get "There was an error downloading metadata from the address. Please verify that you have entered a valid address". The details of the error give:

        There was an error downloading 'https://localhost/SSL6/Service.svc'.
        Unable to connect to the remote server
        No connection could be made because the target machine actively refused it 127.0.0.1:443

    I'm using VS2008 on Windows 7 with IIS7. I followed the walkthrough exactly (apart from step 5, which was different on IIS7 - I went into "SSL Settings" for the VD), so it shows my config (yes, I've used httpsGetEnabled and mexHttpsBinding). Anyone care to save my sanity and job?
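
    "Actively refused it 127.0.0.1:443" means nothing is listening on port 443 at all, which points at the site missing an HTTPS binding rather than at the WCF configuration itself; that reading is an assumption, but it is cheap to check. A sketch of checking and adding the binding on IIS7 (assuming the service lives under the default web site; adjust the site name as needed):

        rem List current bindings for the site
        %windir%\system32\inetsrv\appcmd list site "Default Web Site"

        rem Add an https binding on port 443 if it is missing
        %windir%\system32\inetsrv\appcmd set site /site.name:"Default Web Site" /+bindings.[protocol='https',bindingInformation='*:443:']

    After adding the binding, a certificate still has to be assigned to it in IIS Manager (Bindings... > https > select the certificate) before https://localhost/SSL6/Service.svc will respond.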

    Read the article

  • Sticky/static variable references in for loops

    - by pthulin
    In this example I create three buttons 'one' 'two' 'three'. When clicked I want them to alert their number:

        <html>
        <head>
        <script type="application/javascript" src="jquery.js"></script>
        <script type="application/javascript">
            $(document).ready(function() {
                var numbers = ['one', 'two', 'three'];
                for (i in numbers) {
                    var nr = numbers[i];
                    var li = $('<li>' + nr + '</li>');
                    li.click(function() {
                        var newVariable = String(nr);
                        alert(i);           // 2
                        alert(nr);          // three
                        alert(newVariable); // three
                        alert(li.html());   // three
                    });
                    $('ul').append(li);
                }
            });
        </script>
        </head>
        <body>
            <ul>
            </ul>
        </body>
        </html>

    The problem is, when any of these are clicked, the last value of the loop's variables is used, i.e. the alert box always says 'three'. In JavaScript, variables inside for-loops seem to be 'static' in the C language sense. Is there some way to create separate variables for each click function, i.e. not using the same reference? Thanks!
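
    JavaScript's var is function-scoped, not block-scoped, so i, nr and li are the same three variables on every pass and every click handler closes over them. Giving each iteration its own function scope fixes it; one sketch uses $.each (any callback-based loop, or an immediately-invoked function, works the same way):

        $(document).ready(function () {
            var numbers = ['one', 'two', 'three'];
            $.each(numbers, function (i, nr) {          // i and nr are locals of this callback
                var li = $('<li>' + nr + '</li>');
                li.click(function () {
                    alert(i);         // 0, 1 or 2, depending on the item clicked
                    alert(nr);        // 'one', 'two' or 'three'
                    alert(li.html()); // same as nr
                });
                $('ul').append(li);
            });
        });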

    Read the article

  • Carriage Return\Line feed in Java

    - by Manu
    Guys, I have created a text file in a Unix environment using Java code. For writing the text file I am using java.io.FileWriter and BufferedWriter, and for a newline after each row I am using bw.newLine() (where bw is the BufferedWriter object). The text file is then sent as a mail attachment from the Unix environment itself (automated using Unix commands). My issue is: after I download the text file from the mail on a Windows system and open it, the data is not properly aligned. The newLine() call does not seem to be working. I want the same text file alignment when I open the file in Windows as I get in the Unix environment. How do I resolve the problem? Please help ASAP. Thanks for your help in advance. Pasting my Java code here for your reference (running in the Unix environment):

        File f = new File(strFileGenLoc);
        BufferedWriter bw = new BufferedWriter(new FileWriter(f, false));
        rs = stmt.executeQuery("select * from jpdata");
        while ( rs.next() ) {
            bw.write(rs.getString(1)==null? "":rs.getString(1));
            bw.newLine();
        }
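
    BufferedWriter.newLine() writes the platform's line separator, which on Unix is just "\n"; Notepad and several other Windows programs only break lines on "\r\n", so the file looks like one long line there. If the file is always destined for Windows readers, one sketch is to write the separator explicitly instead of calling newLine():

        File f = new File(strFileGenLoc);
        BufferedWriter bw = new BufferedWriter(new FileWriter(f, false));
        ResultSet rs = stmt.executeQuery("select * from jpdata");
        while (rs.next()) {
            String value = rs.getString(1);
            bw.write(value == null ? "" : value);
            bw.write("\r\n");   // explicit CR+LF so Windows editors see a line break
        }
        bw.close();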

    Read the article

  • How to deserialize implementation classes in OSGi

    - by Daniel Schneller
    In an eRCP OSGi based application the user can push a button and go to a lock screen similar to that of Windows or Mac OS X. When this happens, the current state of the application is serialized to a file and control is handed over to the lock screen. In this mobile application memory is very tight, so we need to get rid of the original view/controller when the lock screen comes up. This works fine and we end up with a binary serialized file. Once the user logs back in, the file is read in again and the original state of the application restored. This works fine as well, except when the controller that was serialized contained a reference to an object which comes from a different bundle. In my concrete case the original controller (from bundle A) can call a web service and gets a result back. Nothing fancy, just some Strings and Numbers in a simple value holder class. However the controller only sees this as a Result interface; the actual runtime object (ResultImpl) is defined and created in a different bundle (bundle B, the webservice client implementation) and returned via a service call. When the deserialization now tries to thaw the controller from the file, it throws a ClassNotFound exception, complaining about not being able to deserialize the result object, because deserialization is called from bundle A, which cannot see the ResultImpl class from bundle B. Any ideas on how to work around that? The only thing I could come up with is to clone all the individual values into another object, defined in the controller's bundle, but this seems like quite a hassle.
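
    One common workaround (a sketch, not OSGi-specific machinery; the class and variable names are made up) is to subclass ObjectInputStream and resolve classes against a class loader that can see bundle B, for example the loader of the service object that produced the Result in the first place:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.ObjectInputStream;
        import java.io.ObjectStreamClass;

        public class LoaderAwareObjectInputStream extends ObjectInputStream {
            private final ClassLoader loader;

            public LoaderAwareObjectInputStream(InputStream in, ClassLoader loader) throws IOException {
                super(in);
                this.loader = loader;
            }

            @Override
            protected Class<?> resolveClass(ObjectStreamClass desc) throws IOException, ClassNotFoundException {
                try {
                    return Class.forName(desc.getName(), false, loader);
                } catch (ClassNotFoundException e) {
                    return super.resolveClass(desc); // fall back to the default behaviour
                }
            }
        }

        // Usage from bundle A, borrowing the loader of the web service client instance:
        // ObjectInputStream in = new LoaderAwareObjectInputStream(fileStream,
        //         resultService.getClass().getClassLoader());

    This keeps the controller's state intact without cloning every value into bundle-A-only classes, at the cost of bundle A holding a reference to a class loader that can see the implementation class.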

    Read the article

  • Emptying the datastore in GAE

    - by colwilson
    I know what you're thinking, 'O not that again!', but here we are since Google have not yet provided a simpler method. I have been using a queue based solution which worked fine: import datetime from models import * DELETABLE_MODELS = [Alpha, Beta, AlphaBeta] def initiate_purge(): for e in config.DELETABLE_MODELS: deferred.defer(delete_entities, e, 'purging', _queue = 'purging') class NotEmptyException(Exception): pass def delete_entities(e, queue): try: q = e.all(keys_only=True) db.delete(q.fetch(200)) ct = q.count(1) if ct > 0: raise NotEmptyException('there are still entities to be deleted') else: logging.info('processing %s completed' % queue) except Exception, err: deferred.defer(delete_entities, e, then, queue, _queue = queue) logging.info('processing %s deferred: %s' % (queue, err)) All this does is queue a request to delete some data (once for each class) and then if the queued process either fails or knows there is still some stuff to delete, it re-queues itself. This beats the heck out of hitting the refresh on a browser for 10 minutes. However, I'm having trouble deleting AlphaBeta entities, there are always a few left at the end. I think because it contains Reference Properties: class AlphaBeta(db.Model): alpha = db.ReferenceProperty(Alpha, required=True, collection_name='betas') beta = db.ReferenceProperty(Beta, required=True, collection_name='alphas') I have tried deleting the indexes relating to these entity types, but that did not make any difference. Any advice would be appreciated please.

    Read the article

  • JPA 2.0 Provider Hibernate

    - by Rooh
    I have a very strange problem. We are using JPA 2.0 with Hibernate, annotation based; the database is generated through JPA (DDL is true) and MySQL is the database. I will provide some reference classes and then my problem.

        @MappedSuperclass
        public abstract class Common implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "id", updatable = false)
            private Long id;

            @ManyToOne
            @JoinColumn
            private Address address;

            // with all getters and setters
            // as well as equals and hashCode
        }

        @Entity
        public class Parent extends Common {
            private String name;

            @OneToMany(cascade = {CascadeType.MERGE, CascadeType.PERSIST}, mappedBy = "parent")
            private List<Child> child;

            // setters and rest of class
        }

        @Entity
        public class Child extends Common {
            // some properties with getters/setters
        }

        @Entity
        public class Address implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @Column(name = "id", updatable = false)
            private Long id;

            private String street;

            // rest of class with getters/setters
        }

    As you can see in the code, the Parent and Child classes extend the Common class, so both have an address property and an id. The problem occurs when I change the address reference in the parent class: the same change is reflected in all child objects in the list. And if I change the address reference in a child class, then on merge it will change the address reference of the parent as well. I am not able to figure out whether this is a problem with JPA or with Hibernate.

    Read the article

  • How to find Tomcat's PID and kill it in python?

    - by 4herpsand7derpsago
    Normally, one shuts down Apache Tomcat by running its shutdown.sh script (or batch file). In some cases, such as when Tomcat's web container is hosting a web app that does some crazy things with multi-threading, running shutdown.sh gracefully shuts down some parts of Tomcat (as I can see more available memory returning to the system), but the Tomcat process keeps running. I'm trying to write a simple Python script that: Calls shutdown.sh Runs ps -aef | grep tomcat to find any process with Tomcat referenced If applicable, kills the process with kill -9 <PID> Here's what I've got so far (as a prototype - I'm brand new to Python BTW): #!/usr/bin/python # Imports import sys import subprocess # Load from imported module. if __init__ == "__main__": main() # Main entry point. def main(): # Shutdown Tomcat shutdownCmd = "sh ${TOMCAT_HOME}/bin/shutdown.sh" subprocess.call([shutdownCmd], shell=true) # Check for PID grepCmd = "ps -aef | grep tomcat" grepResults = subprocess.call([grepCmd], shell=true) if(grepResult.length > 1): # Get PID and kill it. pid = ??? killPidCmd = "kill -9 $pid" subprocess.call([killPidCmd], shell=true) # Exit. sys.exit() I'm struggling with the middle part - with obtaining the grep results, checking to see if their size is greater than 1 (since grep always returns a reference to itself, at least 1 result will always be returned, methinks), and then parsing that returned PID and passing it into the killPidCmd. Thanks in advance!
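
    Two small things in the prototype will bite first: the guard should read if __name__ == "__main__": and Python's booleans are True/False, not true/false. For the middle part, one sketch that avoids parsing a shell pipeline altogether is to read the ps output directly and send the signal from Python (the paths and the "tomcat" match string are assumptions):

        #!/usr/bin/env python
        import os
        import signal
        import subprocess

        def find_tomcat_pids():
            """Return PIDs of processes whose ps entry mentions 'tomcat'."""
            ps = subprocess.Popen(["ps", "-aef"], stdout=subprocess.PIPE)
            output = ps.communicate()[0].decode("utf-8", "replace")
            pids = []
            for line in output.splitlines():
                if "tomcat" in line and "grep" not in line:
                    pids.append(int(line.split()[1]))   # PID is the second column of ps -aef
            return pids

        def main():
            # Ask Tomcat to shut down politely first.
            subprocess.call(["sh", os.path.expandvars("${TOMCAT_HOME}/bin/shutdown.sh")])
            # Then kill whatever is still hanging around.
            for pid in find_tomcat_pids():
                os.kill(pid, signal.SIGKILL)            # the equivalent of kill -9

        if __name__ == "__main__":
            main()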

    Read the article

  • Error when linking C executable to OpenCV

    - by Ghilas BELHADJ
    I'm compiling OpenCV under Ubuntu 13.10 using CMake. I've already compiled C++ programs and they work well. Now I'm trying to compile a C file using this CMakeLists.txt:

        cmake_minimum_required (VERSION 2.8)
        project (hello)
        find_package (OpenCV REQUIRED)
        add_executable (hello src/test.c)
        target_link_libraries (hello ${OpenCV_LIBS})

    Here is the test.c file:

        #include <stdio.h>
        #include <stdlib.h>
        #include <opencv/highgui.h>

        int main (int argc, char* argv[])
        {
            IplImage* img = NULL;
            const char* window_title = "Hello, OpenCV!";
            if (argc < 2) {
                fprintf (stderr, "usage: %s IMAGE\n", argv[0]);
                return EXIT_FAILURE;
            }
            img = cvLoadImage(argv[1], CV_LOAD_IMAGE_UNCHANGED);
            if (img == NULL) {
                fprintf (stderr, "couldn't open image file: %s\n", argv[1]);
                return EXIT_FAILURE;
            }
            cvNamedWindow (window_title, CV_WINDOW_AUTOSIZE);
            cvShowImage (window_title, img);
            cvWaitKey(0);
            cvDestroyAllWindows();
            cvReleaseImage(&img);
            return EXIT_SUCCESS;
        }

    It returns this error when I run cmake . and then make on the project:

        Linking C executable hello
        /usr/bin/ld: CMakeFiles/hello.dir/src/test.c.o: undefined reference to symbol «lrint@@GLIBC_2.1»
        /lib/i386-linux-gnu/libm.so.6: error adding symbols: DSO missing from command line
        collect2: error: ld returned 1 exit status
        make[2]: *** [hello] Error 1
        make[1]: *** [CMakeFiles/hello.dir/all] Error 2
        make: *** [all] Error 2
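
    The "DSO missing from command line" message means the program uses a symbol from libm (lrint) but libm is not named on the link line; newer ld versions no longer resolve symbols through libraries that are only pulled in indirectly. A sketch of the usual fix is to link the math library explicitly:

        cmake_minimum_required (VERSION 2.8)
        project (hello)
        find_package (OpenCV REQUIRED)
        add_executable (hello src/test.c)
        target_link_libraries (hello ${OpenCV_LIBS} m)   # 'm' adds -lm to the link line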

    Read the article

  • Class initialization and synchronized class method

    - by nybon
    Hi there, in my application there is a class like below:

        public class Client {
            public synchronized static void print() {
                System.out.println("hello");
            }

            static {
                doSomething(); // which will take some time to complete
            }
        }

    This class will be used in a multi-threaded environment; many threads may call the Client.print() method simultaneously. I wonder if there is any chance that thread-1 triggers the class initialization, and before the class initialization completes, thread-2 enters the print method and prints the "hello" string? I see this behavior in a production system (64-bit JVM + Windows 2008 R2); however, I cannot reproduce this behavior with a simple program in any environment. In the Java language spec, section 12.4.1 (http://java.sun.com/docs/books/jls/second_edition/html/execution.doc.html), it says:

        A class or interface type T will be initialized immediately before the first occurrence of any one of the following:
        - T is a class and an instance of T is created.
        - T is a class and a static method declared by T is invoked.
        - A static field declared by T is assigned.
        - A static field declared by T is used and the reference to the field is not a compile-time constant (§15.28). References to compile-time constants must be resolved at compile time to a copy of the compile-time constant value, so uses of such a field never cause initialization.

    According to this paragraph, the class initialization will take place before the invocation of the static method; however, it is not clear whether the class initialization needs to be completed before the invocation of the static method. My intuition is that the JVM should mandate the completion of class initialization before entering its static method, and some of my experiments support my guess. However, I did see the opposite behavior in another environment. Can someone shed some light on this? Any help is appreciated, thanks.
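
    One documented way for print() to run before initialization has finished involves no second thread at all: JLS §12.4.2 makes class initialization reentrant, so the thread performing <clinit> may call back into the class's static methods while initialization is still in progress (other threads, by contrast, must block until it completes). A small sketch of that effect; doSomething() here is a stand-in for anything the initializer calls that ends up invoking print(), directly or through a callback:

        public class Client {
            public static synchronized void print() {
                System.out.println("hello");
            }

            static {
                doSomething();
            }

            private static void doSomething() {
                // Imagine this is buried inside library code invoked by the initializer.
                print(); // prints "hello" while Client's initialization is still running
            }

            public static void main(String[] args) {
                print(); // triggers initialization; "hello" appears twice in total
            }
        }

    If the production system hands work to other threads from inside the static initializer (thread pools, listeners, etc.), those threads calling print() would normally block on the initialization lock, so a reentrant call on the initializing thread itself is the more likely explanation for what was observed.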

    Read the article

  • Can a page opt out of IIS 7 compression?

    - by Glen Little
    My pages are automatically being compressed by IIS7 with GZIP. That is great... but, for one particular page, I need to stream it to the user, using Response.Flush() when needed. But when the output is being compressed, the IIS server seems to collect all my output until the page is done before compressing and sending it to the client. That nullifies my attempt to Flush the content out to the user. Is there a way that I can have this one page opt out of the compression? One possible option I've determined that if I manually set the content type to one that does not match the IIS configuration at c:\windows\system32\inetsrv\config\applicationhost.config, then IIS will not compress it. Eg. Response.ContentType = "x-text/html". This works okay with IE8, as it falls back to display the HTML. But Firefox will ask the user what to do with the unknown file type. This could work, if there was another Mime Type I could use that browsers would accept as HTML, that is not matched in the applicationhost.config. For reference, these are the mime types that will be compressed: <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/atom+xml" enabled="true" /> <add mimeType="application/xaml+xml" enabled="true" /> Others options? Are there other options to opt out of compression?
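
    IIS 7 also lets compression be switched off for a single URL via a <location> element, which avoids the content-type trickery entirely. A sketch (the path is a placeholder for the streaming page; dynamic compression is what buffers the Response.Flush() output):

        <location path="StreamingPage.aspx">
          <system.webServer>
            <urlCompression doStaticCompression="false" doDynamicCompression="false" />
          </system.webServer>
        </location>

    This goes in the site's web.config rather than applicationHost.config, so it can ship with the application.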

    Read the article

  • Accessing Web.config directly in ASP.NET MVC 1

    - by Neil T.
    I'm trying to implement integration testing in my ASP.NET MVC 1.0 solution. The technologies in use are LINQ-to-SQL, NUnit and WatiN. I recently discovered a pattern that will allow me to create a testing version of the database on the fly without modifying the development version of the database. I needed this behavior in order to run my user interface tests in WatiN that may modify the database. The plan is to modify the connection string in the Web.config file, and pass that new connection string to the DataContext constructor. This way, I don't have to add routes or modify my URLs in order to perform the integration testing. I've set up the project so that the test setup can modify the connection string to point to the test database when the tests are running. The connection string is stored in web.config. The problem I'm having is that when I try to run the tests, I get a NullReferenceException when trying to access the HTTPContext. From everything that I have read so far, the HTTPContext is only available within the context of a controller. Here is the code for the property that is supposed to give me the reference to the Web.config file: private System.Configuration.Configuration WebConfig { get { ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap(); // NullReferenceException occurs on this line. fileMap.ExeConfigFilename = HttpContext.Current.Server.MapPath("~\\web.config"); System.Configuration.Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None); return config; } } Is there something that I am missing in order to make this work? Is there a better way to accomplish what I'm trying to achieve?
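
    Since the tests run in the NUnit process rather than inside a web request, HttpContext.Current is null by design, so Server.MapPath can't be used to locate the file. One sketch that sidesteps it is to map the Web.config by a physical path computed from the test assembly's location (the relative "..\..\..\MyMvcProject" segment is a placeholder for however the test project sits next to the web project):

        private static System.Configuration.Configuration OpenWebConfig()
        {
            string webConfigPath = Path.GetFullPath(Path.Combine(
                AppDomain.CurrentDomain.BaseDirectory,
                @"..\..\..\MyMvcProject\Web.config"));

            var fileMap = new ExeConfigurationFileMap { ExeConfigFilename = webConfigPath };
            return ConfigurationManager.OpenMappedExeConfiguration(
                fileMap, ConfigurationUserLevel.None);
        }

    The WatiN side of the tests still exercises the real site, but the setup code can now read and rewrite the connection string without any dependency on HttpContext.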

    Read the article

  • MVC View Model Intellesense / Compile error

    - by Marty Trenouth
    I have one library with my ORM and am working with an MVC application. I have a problem where the pages won't compile because the views can't see the model's properties (which are inherited from lower-level base classes). The system throws a compile error saying that 'object' does not contain a definition for 'ID' and no extension method 'ID' accepting a first argument of type 'object' could be found (are you missing a using directive or an assembly reference?), implying that the view is not seeing the model. In the controller I have full access to the model, and I have checked the Inherits portion of the view to validate that the correct type is being passed.

    Controller:

        return View(new TeraViral_Blog());

    View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<com.models.TeraViral_Blog>" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Index2
        </asp:Content>

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2>Index2</h2>
            <fieldset>
                <legend>Fields</legend>
                <p>
                    ID: <%= Html.Encode(Model.ID) %>
                </p>
            </fieldset>
        </asp:Content>

    Read the article

  • Creating a Ruby method that pads an Array

    - by CJ Johnson
    I'm working on creating a method that pads an array, and accepts 1. a desired size and 2. an optional string/integer value. desired_size is the desired number of elements in the array. If a string/integer is passed in as the second value, this value is used to pad the array with extra elements. I understand there is a 'fill' method that can shortcut this - but that would be cheating for the homework I'm doing. The issue: no matter what I do, only the original array is returned. I started here:

        class Array
          def pad(desired_size, value = nil)
            desired_size >= self.length ? return self : (desired_size - self.length).times.do { |x| self << value }
          end
        end

        test_array = [1, 2, 3]
        test_array.pad(5)

    From what I researched the issue seemed to be around trying to alter self's array, so I learned about .inject and gave that a whirl:

        class Array
          def pad(desired_size, value = nil)
            if desired_size >= self.length
              return self
            else
              (desired_size - self.length).times.inject { |array, x| array << value }
              return array
            end
          end
        end

        test_array = [1, 2, 3]
        test_array.pad(5)

    The interwebs tell me the problem might be with any reference to self so I wiped that out altogether:

        class Array
          def pad(desired_size, value = nil)
            array = []
            self.each { |x| array << x }
            if desired_size >= array.length
              return array
            else
              (desired_size - array.length).times.inject { |array, x| array << value }
              return array
            end
          end
        end

        test_array = [1, 2, 3]
        test_array.pad(5)

    I'm very new to classes and still trying to learn about them. Maybe I'm not even testing them the right way with my test_array? Otherwise, I think the issue is I can't get the method to recognize the desired_size value that's being passed in. I don't know where to go next. Any advice would be appreciated. Thanks in advance for your time.
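
    Two things trip up all three attempts: the comparison is inverted (when desired_size exceeds the current length - exactly the case that needs padding - desired_size >= self.length is false and only then would padding run, yet pad(5) on a 3-element array takes the "return early" branch), and desired_size >= self.length ? return self : ... is a syntax error because return cannot appear inside a ternary like that. A minimal working sketch that mutates the receiver and always returns it:

        class Array
          # Pad the array up to desired_size elements, appending value as needed.
          def pad(desired_size, value = nil)
            (desired_size - length).times { self << value } if desired_size > length
            self
          end
        end

        [1, 2, 3].pad(5)       # => [1, 2, 3, nil, nil]
        [1, 2, 3].pad(5, "x")  # => [1, 2, 3, "x", "x"]
        [1, 2, 3].pad(2)       # => [1, 2, 3]  (already long enough, unchanged)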

    Read the article

  • Cocoa framework development: sharing between projects

    - by e.James
    I am currently developing a handful of similar Cocoa desktop apps. In an effort to share code between them, I have identified a set of core classes and functions that can be common across all of these applications. I would like to bundle this common code into a framework which all of my current applications (and any future ones) can link against. Now, here's the hard part: I'm going to be developing this framework as I go, so I need each of my desktop apps to have a reference to it, but I want to be able to edit the framework source code from within each of the app projects and have the framework automatically rebuilt as required. For example, let's say I have the Xcode project for DesktopAppNumberOne open, and I decide that one of my framework classes needs to be changed. I would like to: Open and edit the source file for that framework class without having to open the framework project in Xcode. Hit "build" on DesktopAppNumberOne, and see the framework rebuilt first (because one of its sources has changed), then see parts of DesktopAppNumberOne rebuilt (because one of the frameworks it links against has changed). I can see how to do this with only one app and one framework, but I'm having trouble figuring out how to do it with multiple apps that share a single framework. Has anyone had success with this approach? Am I perhaps going about this the wrong way? Any help would be appreciated.

    Read the article

  • Closures in Ruby

    - by Isaac Cambron
    I'm kind of new to Ruby and some of the closure logic has me confused. Consider this code:

        array = []
        for i in (1..5)
          array << lambda { i }
        end
        array.map { |f| f.call }  # => [5, 5, 5, 5, 5]

    This makes sense to me because i is bound outside the loop, so the same variable is captured by each trip through the loop. It also makes sense to me that using an each block can fix this:

        array = []
        (1..5).each { |i| array << lambda { i } }
        array.map { |f| f.call }  # => [1, 2, 3, 4, 5]

    ...because i is now being declared separately for each time through. But now I get lost: why can't I also fix it by introducing an intermediate variable?

        array = []
        for i in 1..5
          j = i
          array << lambda { j }
        end
        array.map { |f| f.call }  # => [5, 5, 5, 5, 5]

    Because j is new each time through the loop, I'd think a different variable would be captured on each pass. For example, this is definitely how C# works, and how -- I think -- Lisp behaves with a let. But in Ruby not so much. It almost looks like = is aliasing the variable instead of copying the reference, but that's just speculation on my part. What's really happening?
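
    The missing piece is that for (like while and until) does not open a new scope, so j is created once, before the first iteration, and every lambda closes over that single variable; the final assignment j = 5 is what all of them see. Blocks do open a new scope per call, which is why the each version works, and the same trick rescues the for loop if the closure is built inside a method call or a block. A small sketch of both:

        # j is local to each run of the block, so each lambda gets its own copy
        array = []
        (1..5).each do |i|
          j = i
          array << lambda { j }
        end
        array.map(&:call)   # => [1, 2, 3, 4, 5]

        # or route the capture through a method: the parameter is a fresh
        # variable per call, even though the loop itself is a plain for
        def capture(value)
          lambda { value }
        end

        array2 = []
        for i in 1..5
          array2 << capture(i)
        end
        array2.map(&:call)  # => [1, 2, 3, 4, 5]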

    Read the article

  • How might I wrap the FindXFile-style APIs to the STL-style Iterator Pattern in C++?

    - by BillyONeal
    Hello everyone :) I'm working on wrapping up the ugly innards of the FindFirstFile/FindNextFile loop (though my question applies to other similar APIs, such as RegEnumKeyEx or RegEnumValue, etc.) inside iterators that work in a manner similar to the Standard Template Library's istream_iterators. I have two problems here. The first is with the termination condition of most "foreach" style loops. STL style iterators typically use operator!= inside the exit condition of the for, i.e.

        std::vector<int> test;
        for(std::vector<int>::iterator it = test.begin(); it != test.end(); it++)
        {
            //Do stuff
        }

    My problem is I'm unsure how to implement operator!= with such a directory enumeration, because I do not know when the enumeration is complete until I've actually finished with it. I have sort of a hacked-together solution in place now that enumerates the entire directory at once, where each iterator simply tracks a reference-counted vector, but this seems like a kludge which can be done a better way. The second problem I have is that there are multiple pieces of data returned by the FindXFile APIs. For that reason, there's no obvious way to overload operator* as required for iterator semantics. When I overload that item, do I return the file name? The size? The modified date? How might I convey the multiple pieces of data to which such an iterator must refer later in an idiomatic way? I've tried ripping off the C# style MoveNext design but I'm concerned about not following the standard idioms here.

        class SomeIterator
        {
        public:
            bool next(); //Advances the iterator and returns true if successful, false if the iterator is at the end.
            std::wstring fileName() const;
            //other kinds of data....
        };

    EDIT: And the caller would look like:

        SomeIterator x = ??; //Construct somehow
        while(x.next())
        {
            //Do stuff
        }

    Thanks! Billy3
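
    Both problems have a conventional input-iterator answer: the "end" iterator is a default-constructed sentinel that every exhausted iterator compares equal to, and operator* returns the whole record so the caller picks out the name, size or timestamps. A minimal sketch along those lines (the names DirectoryIterator and the usage pattern are hypothetical; copying is deliberately left out):

        #include <windows.h>
        #include <string>

        class DirectoryIterator {
        public:
            DirectoryIterator() : handle_(INVALID_HANDLE_VALUE) {}         // the "end" sentinel

            explicit DirectoryIterator(const std::wstring& pattern)
                : handle_(FindFirstFileW(pattern.c_str(), &data_)) {}      // "begin"

            ~DirectoryIterator() {
                if (handle_ != INVALID_HANDLE_VALUE) FindClose(handle_);
            }

            const WIN32_FIND_DATAW& operator*() const { return data_; }   // whole record
            const WIN32_FIND_DATAW* operator->() const { return &data_; }

            DirectoryIterator& operator++() {
                if (!FindNextFileW(handle_, &data_)) {                     // enumeration done
                    FindClose(handle_);
                    handle_ = INVALID_HANDLE_VALUE;                        // now equal to "end"
                }
                return *this;
            }

            bool operator==(const DirectoryIterator& other) const {
                // The only meaningful comparison is "am I at the end?"
                return handle_ == other.handle_;
            }
            bool operator!=(const DirectoryIterator& other) const { return !(*this == other); }

        private:
            // Copying is omitted from this sketch; a real version should either share
            // the handle (e.g. via a reference-counted holder) or forbid copies.
            HANDLE handle_;
            WIN32_FIND_DATAW data_;
        };

        // Hypothetical usage:
        // for (DirectoryIterator it(L"C:\\temp\\*"), end; it != end; ++it)
        //     std::wcout << it->cFileName << L"\n";

    Lazily advancing in operator++ also removes the need to enumerate the whole directory up front, and the same shape carries over to RegEnumKeyEx-style APIs by swapping the record type and the advance call.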

    Read the article

  • Trying to calculate large numbers in Python with gmpy. Python keeps crashing?

    - by Ryan Peschel
    I was recommended to use gmpy to assist with calculating large numbers efficiently. Before, I was just using Python and my script ran for a day or two and then ran out of memory (not sure how that happened because my program's memory usage should basically be constant throughout.. maybe a memory leak?). Anyways, I keep getting this weird error after running my program for a couple of seconds:

        mp_allocate< 545275904->545275904 >
        Fatal Python error: mp_allocate failure

        This application has requested the Runtime to terminate it in an unusual way.
        Please contact the application's support team for more information.

    Also, Python crashes and Windows 7 gives me the generic "python.exe has stopped working" dialog. This wasn't happening when using standard Python integers. Now that I've switched to gmpy I am getting this error just seconds into running my script. I thought gmpy was specialized in dealing with large number arithmetic? For reference, here is a sample program that produces the error:

        import gmpy2

        p = gmpy2.xmpz(3000000000)
        s = gmpy2.xmpz(2)
        M = s**p
        for x in range(p):
            s = (s * s) % M

    I have 10 gigs of RAM and without gmpy this script ran for days without running out of memory (still not sure how that happened, considering s never really gets larger). Anyone have any ideas?

    EDIT: Forgot to mention I am using Python 3.2
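
    A rough size check on the sample program (just arithmetic, not a confirmed diagnosis) shows how big the operands are; if the interpreter or gmpy build is 32-bit, single allocations of this size can fail long before 10 GB of RAM comes into play:

        bits = 3000000000                  # M = 2**3000000000 has about 3e9 bits
        operand_bytes = bits // 8          # 375,000,000 bytes, roughly 358 MiB, for M (or any s < M)
        product_bytes = 2 * operand_bytes  # s * s before the % M can need about 715 MiB

        print(operand_bytes, product_bytes)

    On a 64-bit build the same numbers fit comfortably, but each squaring still shuffles hundreds of megabytes, so a loop of three billion iterations is impractical either way.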

    Read the article

  • Core Data confusion: fetch without tableview.

    - by Mr. McPepperNuts
    I have completed and reproduced Core Data tutorials using a tableview to display contents. However, I want to access an Entity through a fetch on a view without a tableview. I used the following fetch code, but the count returned is always 0. The data exists when the database is opened using SQLite tools. NSManagedObject *entryObj; XYZDelegate *appDelegate = [[UIApplication sharedApplication] delegate]; NSManagedObjectContext *managedObjectContext = appDelegate.managedObjectContext; NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Quote" inManagedObjectContext:managedObjectContext]; NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"id" ascending:YES]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [request setSortDescriptors:sortDescriptors]; [request setEntity: entity]; NSArray *results = [managedObjectContext executeFetchRequest:request error:nil]; if (results == nil) { NSLog(@"No results found"); entryObj = nil; }else { NSLog(@"results %d", [results count]); } [request release]; [sortDescriptors release]; count returned is always 0; it should be 5. Can anyone point me to a reference or tutorial regarding creating a controller not to be used with a tableview.
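
    One thing the snippet hides is the fetch error: passing error:nil throws the reason away, and a plain count check cannot distinguish "fetch failed" from "no matching objects". A small sketch that surfaces it (everything else unchanged):

        NSError *error = nil;
        NSArray *results = [managedObjectContext executeFetchRequest:request error:&error];
        if (results == nil) {
            NSLog(@"Fetch failed: %@, %@", error, [error userInfo]);
        } else {
            NSLog(@"Fetched %lu objects", (unsigned long)[results count]);
        }

    If the fetch succeeds but still returns 0 objects, the usual suspects are the app pointing at a freshly created (empty) store rather than the pre-populated SQLite file, or an entity name mismatch; a table view controller is not required - executeFetchRequest: works from any object that can reach the managed object context.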

    Read the article

  • Using jQuery with Windows 8 Metro JavaScript App causes security error

    - by patridge
    Since it sounded like jQuery was an option for Metro JavaScript apps, I was starting to look forward to Windows 8 dev. I installed Visual Studio 2012 Express RC and started a new project (both empty and grid templates have the same problem). I made a local copy of jQuery 1.7.2 and added it as a script reference. <!-- SomeTestApp references --> <link href="/css/default.css" rel="stylesheet" /> <script src="/js/jquery-1.7.2.js"></script> <script src="/js/default.js"></script> Unfortunately, as soon as I ran the resulting app it tosses out a console error: HTML1701: Unable to add dynamic content ' a' A script attempted to inject dynamic content, or elements previously modified dynamically, that might be unsafe. For example, using the innerHTML property to add script or malformed HTML will generate this exception. Use the toStaticHTML method to filter dynamic content, or explicitly create elements and attributes with a method such as createElement. For more information, see http://go.microsoft.com/fwlink/?LinkID=247104. I slapped a breakpoint in a non-minified version of jQuery and found the offending line: div.innerHTML = " <link/><table></table><a href='/a' style='top:1px;float:left;opacity:.55;'>a</a><input type='checkbox'/>"; Apparently, the security model for Metro apps forbids creating elements this way. This error doesn't cause any immediate issues for the user, but given its location, I am worried it will cause capability-discovery tests in jQuery to fail that shouldn't. I definitely want jQuery $.Deferred for making just about everything easier. I would prefer to be able to use the selector engine and event handling systems, but I would live without them if I had to. How does one get the latest jQuery to play nicely with Metro development?
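
    Metro/WinJS pages run in the local context, where injecting markup that the platform considers unsafe (like that feature-detection fragment with an <a href> and an <input>) is blocked. For code you trust, WinJS exposes an explicit escape hatch; a sketch of wrapping an injection site (the variable names are hypothetical, shown purely as an illustration):

        // Allow a trusted block of code to inject dynamic content without the
        // automatic sanitization check.
        MSApp.execUnsafeLocalFunction(function () {
            someElement.innerHTML = markupThatIncludesAttributes;
        });

    That silences the HTML1701 error for content under the app's control, but it has to wrap every injection site, so in practice many projects at the time used jQuery builds patched for the WinRT security model instead of stock 1.7.2.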

    Read the article

  • Rtti data manipulation and consistency in Delphi 2010

    - by Coco
    Does anyone have an idea how I can make TValue use a reference to the original data? In my serialization project, I use (as suggested in XML-Serialization) a generic serializer which stores TValues in an internal tree structure (similar to the MemberMap in the example). This member tree should also be used to create a dynamic setup form and manipulate the data. My idea was to define a property for the data:

        TDataModel <T> = class
        {...}
        private
          FData : TValue;
          function GetData : T;
          procedure SetData (Value : T);
        public
          property Data : T read GetData write SetData;
        end;

    The implementation of the GetData and SetData methods:

        procedure TDataModel <T>.SetData (Value : T);
        begin
          FData := TValue.From <T> (Value);
        end;

        function TDataModel <T>.GetData : T;
        begin
          Result := FData.AsType <T>;
        end;

    Unfortunately, the TValue.From method always makes a copy of the original data. So whenever the application makes changes to the data, the DataModel is not updated, and vice versa: if I change my DataModel in a dynamic form, the original data is not affected. Sure, I could always use the Data property before and after changing anything, but as I use a lot of Rtti inside my DataModel, I do not really want to do this every time. Perhaps someone has a better suggestion?

    Read the article

  • changing WCF endpoint does not persist data.

    - by Vinay Pandey
    Hi All, I have an application that has a reference to a WCF service on machine A; in certain situations I want to use a similar service hosted on machine B. I changed the endpoint using the following:

        EndpointAddress endpoint = new EndpointAddress(new Uri(ConfigurationManager.AppSettings["ServiceURLForMachineB"]));
        BasicHttpBinding binding = new BasicHttpBinding();
        binding.SendTimeout = TimeSpan.FromMinutes(1);
        binding.OpenTimeout = TimeSpan.FromMinutes(1);
        binding.CloseTimeout = TimeSpan.FromMinutes(1);
        binding.ReceiveTimeout = TimeSpan.FromMinutes(10);
        binding.AllowCookies = false;
        binding.BypassProxyOnLocal = false;
        binding.HostNameComparisonMode = HostNameComparisonMode.StrongWildcard;
        binding.MessageEncoding = WSMessageEncoding.Mtom;
        binding.TextEncoding = System.Text.Encoding.UTF8;
        binding.TransferMode = TransferMode.Buffered;
        binding.UseDefaultWebProxy = true;
        repositoryService = new WorkflowRepositoryServiceClient(binding, endpoint);

    When I call the login method, although the method is called on machine B, the username and password in Login(string username, string password) are coming through as null on machine B. Any idea what I am doing wrong here?

    Read the article

  • c++ Design pattern for CoW, inherited classes, and variable shared data?

    - by krunk
    I've designed a copy-on-write base class. The class holds the default set of data needed by all children in a shared-data/CoW model. The derived classes also have data that only pertains to them, but that should be CoW-shared between other instances of that derived class. I'm looking for a clean way to implement this. Say I had a base class FooInterface with shared data FooDataPrivate and a derived object FooDerived. I could create a FooDerivedDataPrivate. The underlying data structure would not affect the exposed getter/setter API, so it's not about how a user interfaces with the objects. I'm just wondering if this is a typical MO for such cases or if there's a better/cleaner way? What piques my interest is that I see the potential of inheritance between the private data classes, e.g. FooDerivedDataPrivate : public FooDataPrivate, but I'm not seeing a way to take advantage of that polymorphism in my derived classes.

        class FooDataPrivate {
        public:
            Ref ref; // atomic reference counting object
            int a;
            int b;
            int c;
        };

        class FooInterface {
        public:
            // constructors and such
            // ....

            // methods are implemented to be copy on write.
            void setA(int val);
            void setB(int val);
            void setC(int val);

            // copy constructors, destructors, etc. all CoW friendly
        private:
            FooDataPrivate *data;
        };

        class FooDerived : public FooInterface {
        public:
            FooDerived() : FooInterface() {}
        private:
            // need more shared data for FooDerived
            // this is the ???, how is this best done cleanly?
        };
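
    One common shape for this (a sketch, not the only option; std::shared_ptr stands in for the original Ref counter) keeps a single data pointer in the base, gives each private-data class a virtual clone(), and lets the derived class downcast through a typed accessor. The copy-on-write "detach" then works for any level of the hierarchy because clone() copies the most-derived data:

        #include <memory>

        class FooDataPrivate {
        public:
            int a = 0, b = 0, c = 0;
            virtual ~FooDataPrivate() = default;
            virtual FooDataPrivate* clone() const { return new FooDataPrivate(*this); }
        };

        class FooDerivedDataPrivate : public FooDataPrivate {
        public:
            int extra = 0; // FooDerived-only shared data
            FooDataPrivate* clone() const override { return new FooDerivedDataPrivate(*this); }
        };

        class FooInterface {
        public:
            void setA(int val) { detach(); data->a = val; }
            int a() const { return data->a; }
        protected:
            explicit FooInterface(FooDataPrivate* d) : data(d) {}
            // Copy-on-write: take a private copy of the most-derived data before writing.
            void detach() {
                if (data.use_count() > 1)
                    data.reset(data->clone());
            }
            std::shared_ptr<FooDataPrivate> data;
        };

        class FooDerived : public FooInterface {
        public:
            FooDerived() : FooInterface(new FooDerivedDataPrivate) {}
            void setExtra(int val) { detach(); d()->extra = val; }
            int extra() const { return static_cast<const FooDerivedDataPrivate*>(data.get())->extra; }
        private:
            FooDerivedDataPrivate* d() { return static_cast<FooDerivedDataPrivate*>(data.get()); }
        };

    The static_cast is safe because FooDerived only ever installs FooDerivedDataPrivate instances and detach() preserves the dynamic type via clone(); this is essentially how the public/private d-pointer hierarchies in toolkits like Qt handle the same problem.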

    Read the article
