Search Results

Search found 14142 results on 566 pages for 'missing symbols'.


  • Mapping relationships from multiple databases in NHibernate

    - by mannish
    I have a multi-database application configured with NHibernate. The entities that correspond to tables from each database are in their own separate assemblies (an assembly per database, if you will). I have a need/desire to relate an entity from one database to an entity of another database. Everything up to this point works as I want it to (the application handles multiple session factories, etc.). The relationship I want is many-to-one, but in reality my application only cares about one side of the relationship (for reasons that aren't relevant). The relevant entities are Project and PMProject, where a Project HAS A PMProject. When I map the many-to-one, I get the following error:

        NHibernate.MappingException: An association from the table PROJECTS refers to an unmapped class: SDMS.PPRM.PMProject

    The Project mapping itself reads (ignore the funky column naming; it's an Oracle db):

        <many-to-one name="PMProject" class="SDMS.PPRM.PMProject" column="PM_PROJECT_ID" cascade="none" />

    In the class attribute I'm referencing the appropriate assembly, but I get that error, which seems to tell me it simply can't find the mapping file for PMProject. But that file exists (it's set as an embedded resource), and the session factory instantiation works without fail, so I'm at a loss on how to tell the Project mapping how/where to look for the appropriate mapping. Is there something I'm missing? A better way to go about this? Thanks in advance.
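
    A minimal sketch of one thing worth ruling out: NHibernate resolves a bare class name against the assembly declared by the current mapping document, so a reference that crosses assemblies generally needs the assembly-qualified form (the assembly name below is an assumption for illustration):

        <many-to-one name="PMProject"
                     class="SDMS.PPRM.PMProject, SDMS.PPRM"
                     column="PM_PROJECT_ID"
                     cascade="none" />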

  • Ruby Challenge - efficiently change the last character of every word in a sentence to a capital

    - by emson
    Hi All. I was recently challenged to write some Ruby code to change the last character of every word in a sentence into a capital, such that the string:

        "script to convert the last letter of every word to a capital"

    becomes:

        "scripT tO converT thE lasT letteR oF everY worD tO A capitaL"

    This was my optimal solution, however I'm sure you wizards have much better solutions and I would be really interested to hear them.

        "script to convert the last letter of every word to a capital".split.map{|w| w << w.slice!(-1).chr.upcase}.join ' '

    For those interested, here is an explanation of what is going on:

    - split breaks the sentence up into an array; the default delimiter is a space, and with Ruby you don't need to use brackets here.
    - map takes the array from split and opens a block that processes each word (w) in the array.
    - the block slice!s off the last character of the word, converts it to a chr (a character, not an ASCII code) and then capitalises it with upcase. This character is then appended (<<) to the word, which is missing the sliced last letter.
    - finally the array of words is joined together with ' ' to reform the sentence.

    Enjoy
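
    A shorter variant under the same rules, using a regex instead of mutation (a sketch: \w\b matches the final word character of each word, and the block upcases it):

        "script to convert the last letter of every word to a capital".gsub(/\w\b/) { |c| c.upcase }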

  • Lock-Free Data Structures in C++ Compare and Swap Routine

    - by slf
    In this paper: Lock-Free Data Structures (pdf) the following "Compare and Swap" fundamental is shown:

        template <class T>
        bool CAS(T* addr, T exp, T val)
        {
            if (*addr == exp)
            {
                *addr = val;
                return true;
            }
            return false;
        }

    And then it says:

        The entire procedure is atomic

    But how is that so? Is it not possible that some other actor could change the value of addr between the if and the assignment? In which case, assuming all code is using this CAS fundamental, it would be found the next time something "expected" it to be a certain way and it wasn't. However, that doesn't change the fact that it could happen; in which case, is it still atomic? What about the other actor returning true, even when its changes were overwritten by this actor? If that can't possibly happen, then why? I want to believe the author, so what am I missing here? I am thinking it must be obvious. My apologies in advance if this seems trivial.
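
    For context, snippets like this are usually read as a specification of what the hardware instruction does, not as code that is itself atomic; in modern C++ the real thing is exposed as an atomic primitive. A minimal sketch:

        #include <atomic>

        std::atomic<int> addr{0};

        bool cas(int expected, int desired)
        {
            // Executes as a single atomic hardware operation (e.g. CMPXCHG on x86);
            // no other thread can interleave between the compare and the swap.
            return addr.compare_exchange_strong(expected, desired);
        }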

  • Memory management of objects returned by methods (iOS / Objective-C)

    - by iOSNewb
    I am learning Objective-C and iOS programming through the terrific iTunes U course posted by Stanford (http://www.stanford.edu/class/cs193p/cgi-bin/drupal/). Assignment 2 is to create a calculator with variable buttons. The chain of commands (e.g. 3+x-y) is stored in an NSMutableArray as "anExpression", and then we sub in random values for x and y based on an NSDictionary to get a solution. This part of the assignment is tripping me up:

        The final two [methods] "convert" anExpression to/from a property list:

            + (id)propertyListForExpression:(id)anExpression;
            + (id)expressionForPropertyList:(id)propertyList;

        You'll remember from lecture that a property list is just any combination of NSArray, NSDictionary, NSString, NSNumber, etc., so why do we even need this method since anExpression is already a property list? (Since the expressions we build are NSMutableArrays that contain only NSString and NSNumber objects, they are, indeed, already property lists.) Well, because the caller of our API has no idea that anExpression is a property list. That's an internal implementation detail we have chosen not to expose to callers. Even so, you may think, the implementation of these two methods is easy because anExpression is already a property list so we can just return the argument right back, right? Well, yes and no. The memory management on this one is a bit tricky. We'll leave it up to you to figure out. Give it your best shot.

    Obviously, I am missing something with respect to memory management because I don't see why I can't just return the passed arguments right back. Thanks in advance for any answers!
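
    A sketch of the usual reasoning under manual reference counting (one common answer to this assignment, not the only one): the caller should receive an object it can keep without sharing ownership of the calculator's internal array, so the methods hand back an autoreleased copy rather than the internal pointer itself:

        + (id)propertyListForExpression:(id)anExpression
        {
            // A copy decouples the returned plist from the private expression
            // array; autorelease keeps the "caller does not own it" convention.
            return [[anExpression copy] autorelease];
        }

        + (id)expressionForPropertyList:(id)propertyList
        {
            // mutableCopy because the expression is built up as an NSMutableArray.
            return [[propertyList mutableCopy] autorelease];
        }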

  • PayPal sandbox anomalies

    - by Christian
    When testing some donations on my local machine, I set various key=value pairs to do various things (return to a specific thank-you page, get POST data from PayPal and not GET data, and others). I also built my code around the response from the PayPal sandbox. BUT, when my code goes to the production server and we switch on live payments and test with real accounts and money, a few strange things happen:

    - We get a GET response from PayPal - the URL is filled with crap.
    - We get no transaction details. This is the biggie: no name, no txn_id, no dates, nothing. We get a handful of keys etc.; it's not totally empty, and the payment has gone through, but nowhere near the verbosity of the sandbox.

    Curious about why this might be? It doesn't really make sense to have a sandbox (or dev environment) that is substantially different from the production environment. Or am I missing something?

    EDIT: Still no response to my question in the PayPal Developer Forums. I don't even get a donation amount back from PayPal. Is this a setting maybe?

    EDIT #2: Two of you have suggested checking PDT and Auto-Return. The data analytics guy for the project suggested the same only 2 hrs ago. I have asked the client to confirm this. I can't see a setting for it in the Sandbox, so can I assume that it is enabled by default?
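
    For reference, the contrast described (GET with almost nothing vs. POST with full variables) is what the rm ("return method") button variable and the account's Auto-Return/PDT profile settings control in Website Payments Standard. A minimal sketch of the relevant hidden fields, with placeholder values:

        <input type="hidden" name="cmd" value="_donations">
        <input type="hidden" name="business" value="you@example.com">
        <input type="hidden" name="return" value="https://www.example.com/thanks">
        <!-- rm=2 asks PayPal to POST the transaction variables back to the
             return URL; it only takes effect when Auto-Return is enabled in
             the (live) account profile -->
        <input type="hidden" name="rm" value="2">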

  • How do you work around memcached's key/value limitations?

    - by mjy
    Memcached has length limitations for keys (250?) and values (roughly 1MB), as well as some (to my knowledge) not very well defined character restrictions for keys. What is the best way to work around those, in your opinion? I use the Perl API Cache::Memcached.

    What I do currently is store a special string for the main key's value if the original value was too big ("parts:<number>"), and in that case I store <number> parts with keys named 1+<main key>, 2+<main key>, etc. This seems "OK" (but messy) for some cases, not so good for others, and it has the intrinsic problem that some of the parts might be missing at any time (so space is wasted keeping the others and time is wasted reading them).

    As for the key limitations, one could probably implement hashing and store the full key (to work around collisions) in the value, but I haven't needed to do this yet. Has anyone come up with a more elegant way, or even a Perl API that handles arbitrary data sizes (and key values) transparently? Has anyone hacked the memcached server to support arbitrary keys/values?
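
    A minimal sketch of the hashing idea for the key side (Digest::SHA ships with modern Perls; the "h:" prefix and the exact character test are assumptions based on memcached's documented key rules):

        use Digest::SHA qw(sha256_hex);

        sub safe_key {
            my ($key) = @_;
            # memcached keys must fit in 250 bytes and contain no whitespace
            # or control characters; hash anything that breaks those rules.
            return $key if length($key) <= 250 && $key !~ /[\x00-\x20]/;
            return 'h:' . sha256_hex($key);
        }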

  • Cannot use Java 7 installation if Java 8 is installed

    - by Sebastien Diot
    I normally still use Java 7 for all my coding projects (it's a company "politics" issue), but I installed Java 8 for one third-party project I am contributing to. Now, it seems I cannot have Java 8 installed on Windows 7 x64 and still use Java 7 by default:

        C:\>"%JAVA_HOME%\bin\java.exe" -version
        java version "1.7.0_55"
        Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
        Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)

        C:\>java.exe -version
        java version "1.8.0_05"
        Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
        Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)

    As you can see, JAVA_HOME is completely ignored. I also have Java in the path, using "%JAVA_HOME%\bin", which resolves correctly to Java 7 when I check the path in a DOS box, but it still makes no difference. I checked in the "Java Control Panel" (not sure if this affects the default command-line Java version): under the "Java" tab, via the "View..." button, you get to see "registered" Java versions. I can add all the versions under the "User" tab, but under "System" there is only Java 8, and no way to change it.

    Am I missing something, or did Oracle just make it impossible to use Java 7 unless I de-install Java 8? I don't want to have to specify the "source" and "target" everywhere, and I don't even know if it is possible for me to specify them everywhere Java is used.

    EDIT: What I did is de-install all Java, then install the latest Java 7 (both x86 and x64), and then the latest Java 8 (both x86 and x64). After I did that, I noticed that the x64 JDK was gone. It seems Java 8 killed it. So I re-installed the JDK 7 x64 after the JDK 8 x64. Still, JDK 7 x64 did not seem to "replace" the "java.exe" which is copied into the "Windows" directory itself (I assume THAT is the problem).
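
    A sketch of the usual workaround, building on the poster's own observation: the Java 8 installer drops a java.exe into the Windows system directory, which is searched before the user's PATH entries, so the fix is to put %JAVA_HOME%\bin explicitly at the front of PATH (shown per-session here; the same edit can be made to the system PATH):

        C:\>set "PATH=%JAVA_HOME%\bin;%PATH%"

        C:\>java -version
        java version "1.7.0_55"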

  • Castle Windsor: Reuse resolved component in OnCreate, UsingFactoryMethod or DynamicParameters

    - by shovavnik
    I'm trying to execute an action on a resolved component before it is returned as a dependency to the application. For example, with this graph:

        public class Foo : IFoo { }

        public class Bar
        {
            IFoo _foo;
            IBaz _baz;

            public Bar(IFoo foo, IBaz baz)
            {
                _foo = foo;
                _baz = baz;
            }
        }

    When I create an instance of IFoo, I want the container to instantiate Bar and pass the already-resolved IFoo to it, along with any other dependencies it requires. So when I call:

        var foo = container.Resolve<IFoo>();

    the container should automatically call:

        container.Resolve<Bar>(); // should pass foo and instantiate IBaz

    I've tried using OnCreate, DynamicParameters and UsingFactoryMethod, but the problem they all share is that they don't hold an explicit reference to the component:

    - DynamicParameters is called before IFoo is instantiated.
    - OnCreate is called after, but the delegate doesn't pass the instance.
    - UsingFactoryMethod doesn't help because I need to register these components with TService and TComponent.

    Ideally, I'd like a registration to look something like this:

        container.Register<IFoo, Foo>((kernel, foo) => kernel.Resolve<Bar>(new { foo }));

    Note that IFoo and Bar are registered with the transient lifestyle, which means that the already-resolved instance has to be passed to Bar - it can't be "re-resolved". Is this possible? Am I missing something?
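
    A sketch worth trying, assuming a Windsor version from 2.5 on, where OnCreate gained an overload whose delegate receives both the kernel and the freshly created instance:

        container.Register(
            Component.For<IFoo>().ImplementedBy<Foo>()
                .LifeStyle.Transient
                .OnCreate((kernel, instance) =>
                    // instance is the already-resolved IFoo; IBaz is resolved
                    // by the container as usual.
                    kernel.Resolve<Bar>(new { foo = instance })));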

  • dynamic LinkButton OnClick event not working on ASP.Net

    - by user1004472
    I want to create a dynamic LinkButton for an image; an <img> tag is not working dynamically, so I am using a LinkButton with an image. I don't want to provide an ID to the LinkButton because I want to generate more LinkButtons dynamically. I am using the following code in Default.aspx:

        <%@ Page Language="C#"%>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <script runat="server">
            protected void Page_Load(object sender, EventArgs e)
            {
                Response.Write(@"<asp:LinkButton runat=""server"" OnClick=""btn_click""><img src=""close-icon (1).png"" /></asp:LinkButton>");
            }

            public void btn_click(object sender, EventArgs e)
            {
                Response.Write("HELLO");
            }
        </script>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
            </div>
            </form>
        </body>
        </html>

    I also tried to write the tag code in the Default.aspx.cs file, but it does not work. It's showing me the following error:

        Error 1 'ASP.default_aspx' does not contain a definition for 'img_Click' and no extension method 'img_Click' accepting a first argument of type 'ASP.default_aspx' could be found (are you missing a using directive or an assembly reference?)

    Please help me to solve this problem.
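
    For reference, a minimal sketch of the usual approach: server tags written with Response.Write are never parsed by ASP.NET, so the control has to be built in code and added to the control tree, and re-created on every request so its click event can fire (the loop count and IDs here are illustrative):

        protected void Page_Load(object sender, EventArgs e)
        {
            for (int i = 0; i < 3; i++)   // however many buttons are needed
            {
                var btn = new LinkButton { ID = "close" + i };
                btn.Click += btn_click;
                // render the image inside the link
                btn.Controls.Add(new Image { ImageUrl = "close-icon (1).png" });
                form1.Controls.Add(btn);
            }
        }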

  • FragmentActivity cannot be resolved to a type

    - by Pratik
    I am starting to learn Android tablet programming and I have updated my Eclipse as well as all the new SDKs for Android. I started a test application from a blog, but when extending FragmentActivity I get this error. I have googled it but am not finding any solution. My project was created against version 3.0 (API level 11); all the other packages are successfully used in the code, but only this FragmentActivity cannot be resolved. Am I missing a library or anything?

    My code:

        public class Testing_newActivity extends FragmentActivity {
            // here FragmentActivity gets the error: package not found for the import

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                //setContentView(R.layout.main);

                if (getResources().getConfiguration().orientation
                        == Configuration.ORIENTATION_LANDSCAPE) {
                    // If the screen is now in landscape mode, we can show the
                    // dialog in-line so we don't need this activity.
                    finish();
                    return;
                }

                if (savedInstanceState == null) {
                    // During initial setup, plug in the details fragment.
                    DetailsFragment details = new DetailsFragment();
                    details.setArguments(getIntent().getExtras());
                    getSupportFragmentManager().beginTransaction().add(
                            android.R.id.content, details).commit();
                }
            }
        }

    The Android manifest file:

        <?xml version="1.0" encoding="utf-8"?>
        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.searce.testingnew"
            android:versionCode="1"
            android:versionName="1.0" >

            <uses-sdk android:minSdkVersion="11" />

            <application
                android:icon="@drawable/ic_launcher"
                android:label="@string/app_name" >
                <activity
                    android:label="@string/app_name"
                    android:name=".Testing_newActivity" >
                    <intent-filter >
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>

        </manifest>
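
    A sketch of the usual fix: FragmentActivity (together with getSupportFragmentManager) lives in the Android Support Library, not in the android-11 platform JAR, so the project needs android-support-v4.jar on its build path and the matching import:

        // requires android-support-v4.jar in the project's libs/build path
        import android.support.v4.app.FragmentActivity;

        public class Testing_newActivity extends FragmentActivity {
            // ...
        }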

  • What is the purpose of unit testing an interface repository

    - by ahsteele
    I am unit testing an ICustomerRepository interface used for retrieving objects of type Customer.

    - As a unit test, what value am I gaining by testing the ICustomerRepository in this manner?
    - Under what conditions would the below test fail?
    - For tests of this nature, is it advisable to write tests that I know should fail? i.e. look for id 4 when I know I've only placed 5 in the repository.

    I am probably missing something obvious, but it seems the integration tests of the class that implements ICustomerRepository will be of more value.

        [TestClass]
        public class CustomerTests : TestClassBase
        {
            private Customer SetUpCustomerForRepository()
            {
                return new Customer()
                {
                    CustId = 5,
                    DifId = "55",
                    CustLookupName = "The Dude",
                    LoginList = new[]
                    {
                        new Login { LoginCustId = 5, LoginName = "tdude" },
                        new Login { LoginCustId = 5, LoginName = "tdude2" }
                    }
                };
            }

            [TestMethod]
            public void CanGetCustomerById()
            {
                // arrange
                var customer = SetUpCustomerForRepository();
                var repository = Stub<ICustomerRepository>();

                // act
                repository.Stub(rep => rep.GetById(5)).Return(customer);

                // assert
                Assert.AreEqual(customer, repository.GetById(5));
            }
        }

    Test base class:

        public class TestClassBase
        {
            protected T Stub<T>() where T : class
            {
                return MockRepository.GenerateStub<T>();
            }
        }

    ICustomerRepository and IRepository:

        public interface ICustomerRepository : IRepository<Customer>
        {
            IList<Customer> FindCustomers(string q);
            Customer GetCustomerByDifID(string difId);
            Customer GetCustomerByLogin(string loginName);
        }

        public interface IRepository<T>
        {
            void Save(T entity);
            void Save(List<T> entity);
            bool Save(T entity, out string message);
            void Delete(T entity);
            T GetById(int id);
            ICollection<T> FindAll();
        }
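
    For contrast, a sketch of where such a stub usually earns its keep: not asserting against the stub itself (which only proves the mocking framework works), but feeding it into a real class under test. CustomerService and GetLookupName below are hypothetical stand-ins for whatever consumes the repository:

        [TestMethod]
        public void ServiceReturnsLookupNameForCustomerId()
        {
            var customer = SetUpCustomerForRepository();
            var repository = Stub<ICustomerRepository>();
            repository.Stub(rep => rep.GetById(5)).Return(customer);

            // the stub isolates the service under test from the database
            var service = new CustomerService(repository);

            Assert.AreEqual("The Dude", service.GetLookupName(5));
        }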

  • Is it possible to create an Android Service that listens for hardware key presses?

    - by VoteBrian
    I'd like to run an Android background service that will act as a key listener from the home screen or when the phone is asleep. Is this possible? From semi-related examples online, I put together the following service, but get the error "onKeyDown is undefined for the type Service". Does this mean it can't be done without rewriting Launcher, or is there something obvious I'm missing?

        public class ServiceName extends Service {
            @Override
            public void onCreate() {
                //Stuff
            }

            public IBinder onBind(Intent intent) {
                //Stuff
                return null;
            }

            @Override
            public boolean onKeyDown(int keyCode, KeyEvent event) {
                if (event.getAction() == KeyEvent.ACTION_DOWN) {
                    switch (keyCode) {
                        case KeyEvent.KEYCODE_A:
                            //Stuff
                            return true;
                        case KeyEvent.KEYCODE_B:
                            //Stuff
                            return true;
                        //etc.
                    }
                }
                return super.onKeyDown(keyCode, event);
            }
        }

    I realize Android defaults to the search bar when you type from the home screen, but this really is just for a very particular use. I don't really expect anyone but me to want this. I just think it'd be nice, for example, to use the camera button to wake the phone.
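
    For what it's worth, a sketch of the closest supported route for the camera-button case specifically: key events are only delivered to focused activities and views, never to services, but the camera key is also published as a broadcast that a manifest-registered receiver can watch (an assumption worth noting: this fires while the device is awake, and ordinary keys such as A/B have no broadcast equivalent):

        public class CameraButtonReceiver extends BroadcastReceiver {
            // registered in the manifest with an intent-filter for
            // android.intent.action.CAMERA_BUTTON
            @Override
            public void onReceive(Context context, Intent intent) {
                KeyEvent event = (KeyEvent) intent.getParcelableExtra(Intent.EXTRA_KEY_EVENT);
                if (event != null && event.getAction() == KeyEvent.ACTION_DOWN) {
                    // react to the camera key press
                }
            }
        }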

  • Multi-module Maven build

    - by Don
    Hi,

    My project has a fairly deep graph of Maven modules. The root POM has the following plugin configured:

        <plugins>
          <plugin>
            <groupId>org.jvnet</groupId>
            <artifactId>animal-sniffer</artifactId>
            <version>1.2</version>
            <configuration>
              <signature>
                <groupId>org.jvnet.animal-sniffer</groupId>
                <artifactId>java1.4</artifactId>
                <version>1.0</version>
              </signature>
            </configuration>
          </plugin>
        </plugins>

    If I invoke this target from the command line in the root directory via:

        mvn animal-sniffer:check

    then it works fine, as long as the current module extends (either directly or indirectly) from the root POM. However there are many children (or grandchildren) of the root module which do not inherit from that module's POM. In this case the goal fails, because it cannot find the necessary configuration:

        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] One or more required plugin parameters are invalid/missing for 'animal-sniffer:check'

        [0] Inside the definition for plugin 'animal-sniffer' specify the following:

        <configuration>
          ...
          <signature>VALUE</signature>
        </configuration>.

    When configuring this plugin in the root module, is there any way to exclude a list of sub-modules, either by name or by packaging type?

    Thanks,
    Donal
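
    A sketch of the underlying constraint and the usual convention: plugin configuration only travels through POM inheritance, so a module that does not extend the root POM can never see this block; each independent parent tree needs its own copy, typically under pluginManagement so children inherit the configuration without the goal being bound automatically:

        <build>
          <pluginManagement>
            <plugins>
              <plugin>
                <groupId>org.jvnet</groupId>
                <artifactId>animal-sniffer</artifactId>
                <version>1.2</version>
                <configuration>
                  <signature>
                    <groupId>org.jvnet.animal-sniffer</groupId>
                    <artifactId>java1.4</artifactId>
                    <version>1.0</version>
                  </signature>
                </configuration>
              </plugin>
            </plugins>
          </pluginManagement>
        </build>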

  • F# Higher-order property accessors

    - by Nathan Sanders
    I just upgraded my prototyping tuple to a record. Someday it may become a real class. In the meantime, I want to translate code like this:

        type Example = int * int

        let examples = [(1,2); (3,4); (5,6)]
        let field1s = Seq.map (fst >> printfn "%d") examples

    to this:

        type Example = {
            Field1 : int
            Field2 : int
            Description : string
        }

        let examples = [{Field1 = 1; Field2 = 2; Description = "foo"}
                        {Field1 = 3; Field2 = 4; Description = "bar"}
                        {Field1 = 5; Field2 = 6; Description = "baz"}]

        let field1s = Seq.map Description examples

    The problem is that I expected to get a function Description : Example -> string when I declared the Example record, but I don't. I've poked around a little and tried properties on classes, but that doesn't work either. Am I just missing something in the documentation, or will I have to write higher-order accessors manually? (That's the workaround I'm using now.)
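
    For reference, a minimal sketch of the idiomatic bridge: F# records expose their fields as properties, not as standalone functions, so the usual move is a lambda, or writing the accessor once by hand (as the poster is already doing):

        // inline lambda in place of the wished-for first-class accessor
        let descriptions = examples |> Seq.map (fun ex -> ex.Description)

        // or define a reusable accessor function once
        let description (ex: Example) = ex.Description
        let descriptions2 = Seq.map description examples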

  • Finding the Column Index for a Specific Value

    - by Btibert3
    Hi All,

    I am having a brain cramp. Below is a toy dataset:

        df <- data.frame(
          id = 1:6,
          v1 = c("a", "a", "c", NA, "g", "h"),
          v2 = c("z", "y", "a", NA, "a", "g"),
          stringsAsFactors = FALSE)

    I have a specific value that I want to find across a set of defined columns, and I want to identify the position it is located in. The fields I am searching are characters, and the trick is that the value I am looking for might not exist. In addition, null strings are also present in the dataset. Assuming I knew how to do this, the variable position indicates the values I would like returned:

        > df
          id   v1   v2 position
        1  1    a    z        1
        2  2    a    y        1
        3  3    c    a        2
        4  4 <NA> <NA>       99
        5  5    g    a        2
        6  6    h    g       99

    The general rule is that I want to find the position of the value "a", and if it is not located, or if v1 is missing, then I want 99 returned. In this instance I am searching across v1 and v2, but in reality I have 10 different variables. It is also worth noting that the value I am searching for can only exist once across the 10 variables. What is the best way to generate this recode? Many thanks in advance.
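
    A minimal sketch of one way to do it with match over the searched columns (it generalises to 10 columns by extending search_cols; the rule that a missing match returns 99 is taken from the expected output above):

        search_cols <- c("v1", "v2")
        df$position <- apply(df[search_cols], 1, function(row) {
          pos <- match("a", row)        # index of the first "a", NA if absent
          if (is.na(pos)) 99 else pos
        })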

  • C# Dataset Dynamically Add DataColumn

    - by Wesley
    I am trying to add an extra column to a dataset after a query has completed. I have a database relationship of the following:

            Employees
             /      \
        Groups    EmployeeGroups

    Employees holds all the data for each individual; I'll name the unique key UserID. Groups holds all the groups that an employee can be a part of, i.e. Super User, Admin, User, etc.; I will name the unique key GroupID. EmployeeGroups holds all the associations of which groups each employee belongs to (UserID | GroupID).

    What I am trying to accomplish: after querying for all users, I want to loop through each user and add the groups that user is a part of, by adding a new column to the dataset named 'Groups' (a string) to hold the values of the next query, which gets all the groups that user is a part of. Then, by use of databinding, populate a ListView with all employees and their group associations.

    My code is as follows; position 5 is the new column I am trying to add to the dataset:

        string theQuery = "select UserID, FirstName, LastName, EmployeeID, Active from Employees";
        DataSet theEmployeeSet = itsDatabase.runQuery(theQuery);

        DataColumn theCol = new DataColumn("Groups", typeof(string));
        theEmployeeSet.Tables[0].Columns.Add(theCol);

        foreach (DataRow theRow in theEmployeeSet.Tables[0].Rows)
        {
            theRow.ItemArray[5] = "1234";
        }

    At the moment, the code will create the new column, but when I assign the data to that column nothing gets assigned. What am I missing? If there is any further explanation or information I can provide, please let me know. Thank you all.
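
    For reference, a sketch of the usual explanation: DataRow.ItemArray returns a copy of the row's values, so writing into that copy never touches the row itself. Assigning through the row's indexer (or building a full array and assigning it back to ItemArray) does write through:

        foreach (DataRow theRow in theEmployeeSet.Tables[0].Rows)
        {
            // the column indexer writes directly into the underlying row
            theRow["Groups"] = "1234";
        }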

  • Issue with Multiple Text Fields and SharedObject Storing

    - by user1662660
    I'm currently working on an AIR for iOS application in Flash CS6. I'm trying to store multiple pieces of data from various text inputs, i.e. "name_txt", "number_txt", etc. I have the following code working for a local save file:

        import flash.events.Event;
        import flash.desktop.NativeApplication;

        var so:SharedObject = SharedObject.getLocal("TravelPal");
        var n1:String = so.data.Number1;

        emerg1.text = n1;
        emerg1.addEventListener(Event.CHANGE, updateEmerg1);

        function updateEmerg1(e:Event):void {
            so.data.Number1 = emerg1.text;
            so.flush();
        }

        NativeApplication.nativeApplication.addEventListener(Event.EXITING, onExit);

        function onExit(e:Event):void {
            so.flush();
        }

    Now, as soon as I create multiple text inputs and attempt to store them in my SharedObject, the whole system just falls apart. None of the text gets saved, not even the previously working ones. I'm pretty new to SharedObject usage. What am I missing here? Is this a good way to go about storing multiple text inputs?
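
    A minimal sketch of one way to scale this to several fields with a single handler (the instance names come from the question; the so.data property names are illustrative - each TextField simply gets its own slot on so.data):

        var so:SharedObject = SharedObject.getLocal("TravelPal");

        // restore any previously saved values
        name_txt.text   = so.data.Name    ? so.data.Name    : "";
        number_txt.text = so.data.Number1 ? so.data.Number1 : "";

        name_txt.addEventListener(Event.CHANGE, saveAll);
        number_txt.addEventListener(Event.CHANGE, saveAll);

        function saveAll(e:Event):void {
            so.data.Name    = name_txt.text;
            so.data.Number1 = number_txt.text;
            so.flush();
        }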

  • Setting Nullable Integer to String Containing Nothing yields 0

    - by Brian MacKay
    I've been pulling my hair out over some unexpected behavior from nullable integers. If I set an Integer? to Nothing, it becomes Nothing as expected. If I set an Integer? to a String that is Nothing, it becomes 0! Of course I get this whether I explicitly cast the String to Integer? or not. I realize I could work around this pretty easily, but I want to know what I'm missing.

        Dim NullString As String = Nothing

        Dim NullableInt As Integer? = CType(NullString, Integer?)
        'Expected NullableInt to be Nothing, but it's 0!

        NullableInt = Nothing
        'This works -- NullableInt now contains Nothing.

    How is this?

    EDIT: Previously I had my code up here without the explicit conversion to Integer?, and everyone seemed to be fixated on that. I want to be clear that this is not an issue that would have been caught by Option Strict On -- check out the accepted answer. This is a quirk of the string-to-integer conversion rules, which predate nullable types but still impact them.
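
    A sketch of the usual workaround under those rules: test the string for Nothing before converting, so the value never passes through VB's string-to-number conversion (which maps a Nothing string to 0, as the question demonstrates):

        Dim NullableInt As Integer? = If(NullString Is Nothing, Nothing, CType(NullString, Integer?))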

  • Find the number of congruent triangles?

    - by avd
    Say I have a square from (0,0) to (z,z). Given a triangle within this square which has integer coordinates for all its vertices, find the number of triangles within this square which are congruent to this triangle and have integer coordinates.

    My algorithm is as follows:

    1) Find the minimum bounding rectangle (MBR) for the given triangle.
    2) Find the number of congruent triangles, y, within that MBR, obtained after reflection or rotation of the given triangle. y can be either 2, 4 or 8.
    3) Now find how many such MBRs can be drawn within the given big square, say x (this is similar to finding the number of squares on a chess board).
    4) x*y is the required answer.

    Am I counting some triangles more than once, or am I missing something with this algorithm? It is a problem on an online judge, and it gives me a wrong answer. I have thought a lot about it, but I am not able to figure it out.

  • Orientation issue while presenting Modal ViewController

    - by Jacky Boy
    Current scenario: right now I am showing a UIViewController using a segue with the style Modal and presentation Sheet. This Modal gets its superview bounds changed in order to have the dimensions I want, like this:

        - (void)viewWillLayoutSubviews
        {
            [super viewWillLayoutSubviews];
            self.view.superview.bounds = WHBoundsRect;
        }

    The only allowed orientations are UIInterfaceOrientationLandscapeLeft and UIInterfaceOrientationLandscapeRight. Since the Modal has some TextFields and the keyboard would be over the Modal itself, I am changing its center so it moves a bit to the top.

    The problem: what I am noticing right now is that I am unable to work with the Y coordinate. In order for it to move vertically (remember, it's in landscape) I need to work with the X. The problem is that when it's UIInterfaceOrientationLandscapeLeft I need to come up with a negative X, and when it's UIInterfaceOrientationLandscapeRight I need to come up with a positive X. So it seems that the X/Y coordinate system is "glued" to the top left corner while in Portrait, and when an orientation change occurs, it's still there.

    What I have done:

        UIInterfaceOrientation orientation = [[UIApplication sharedApplication] statusBarOrientation];

        NSInteger newX = 0.0f;
        if (orientation == UIInterfaceOrientationLandscapeLeft)
        {
            // Logic for calculating the negative X.
        }
        else
        {
            // Logic for calculating the positive X.
        }

    It works exactly like I want, but it seems a very fragile implementation. Am I missing something? Is this the expected behaviour?

  • NSProgressIndicator woes - perhaps my NSView subclass?

    - by mootymoots
    Hi All

    I have created a basic application, created a subclassed NSView, and added it as a custom view in Interface Builder. All works OK. However certain things do not work correctly, which makes me wonder if my NSView is subclassed correctly.

    Specifically, when using an NSProgressIndicator, I can use startAnimation:/stopAnimation: on an indeterminate one, but if I try to do anything with a determinate one via incrementBy: it does nothing. Even if I set the default value of the determinate NSProgressIndicator to 50.0, it appears at 0.0 when the app is launched, despite looking good in IB. My NSProgressIndicator is hooked up correctly as an IBOutlet; I can tell it to hide etc., I just can't get it to animate at all.

    However, I also have other issues that make me think this problem is actually my NSView subclass (such as Quick Look not firing). In my subclass I've simply overridden the initWithFrame: and drawRect: methods, calling their [super]. As I said, I've then placed this as a custom view in Interface Builder and changed it to MyCustomView. All works fine, mostly.

    Am I subclassing this incorrectly, or not doing something in Interface Builder correctly? I seem to be missing some small thing?!
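
    For reference, a minimal sketch of determinate setup worth double-checking before suspecting the NSView subclass (standard NSProgressIndicator calls; also note that messages to a nil outlet are silently ignored in Objective-C, which produces exactly this kind of "nothing happens" symptom if the connection points at the wrong instance):

        [progressIndicator setIndeterminate:NO];
        [progressIndicator setMinValue:0.0];
        [progressIndicator setMaxValue:100.0];
        [progressIndicator setDoubleValue:50.0];   // or: [progressIndicator incrementBy:10.0];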

  • AppFabric caching's local cache isn't working for us... What are we doing wrong?

    - by Olly
    We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when admin updates something, the customer-facing site is updated. It seems to be working OK - we have a cache cluster on a separate server and all is well - but we want to enable the local cache to get better performance. However, it doesn't seem to be working. We have enabled it like this:

        bool UseLocalCache =
        int LocalCacheObjectCount = int.MaxValue;
        TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3);
        DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy = DataCacheLocalCacheInvalidationPolicy.TimeoutBased;

        if (UseLocalCache)
        {
            configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
                LocalCacheObjectCount,
                LocalCacheDefaultTimeout,
                LocalCacheInvalidationPolicy
            );
            // configuration.NotificationProperties = new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300));
        }

    Initially we tried using a timeout invalidation policy (3 mins) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but is always missing).

    cache.GetType().Name returns "LocalCache" - so the factory has made a local cache. Running "Get-Cache-Statistics MyCache" in PS on my dev environment (ASP.NET app running locally from VS2008, cache cluster running on a separate W2K8 machine) shows a handful of Request Counts. However, on the production environment, the Request Count increases dramatically.

    We tried following the method here to see the cache client-server traffic:

        http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx

    but the log file had nothing but the initial header in it - i.e. no logging either. I can't find anything on SO or Google. Have we done something wrong? Have we got a screwy install of AppFabric? We installed it via Web Platform Installer, I think. (Note: the IIS box running ASP.NET isn't in the cluster - it is just the client.)

    Any insights gratefully received!

  • Ordering by Sum from a separate controller part 2

    - by bgadoci
    Ok, so I had this working just fine before making a few controller additions and relocating some code. I feel like I am missing something really simple here, but have spent hours trying to figure out what is going on. Here is the situation:

        class Question < ActiveRecord::Base
          has_many :sites
        end

    and

        class Sites < ActiveRecord::Base
          belongs_to :questions
        end

    I am trying to display my Sites in order of the sum of the 'like' column in the Sites table. From my previous StackOverflow question I had this working when the partial was being called in the /views/sites/index.html.erb file. I then moved the partial to being called in the /views/questions/show.html.erb file, and it successfully displays the Sites but fails to order them as it did when being called from the Sites view.

    I am calling the partial from the /views/questions/show.html.erb file as follows:

        <%= render :partial => @question.sites %>

    and here is the SitesController#index code:

        class SitesController < ApplicationController
          def index
            @sites = @question.sites.all(
              :select => "sites.*, SUM(likes.like) as like_total",
              :joins  => "LEFT JOIN likes AS likes ON likes.site_id = sites.id",
              :group  => "sites.id",
              :order  => "like_total DESC")

            respond_to do |format|
              format.html # index.html.erb
              format.xml  { render :xml => @sites }
            end
          end
        end
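
    A sketch of the likely explanation: rendering @question.sites in questions/show never goes through SitesController#index at all - it renders the raw, unordered association. One fix under that assumption is to build the ordered collection in QuestionsController#show and render that instead:

        class QuestionsController < ApplicationController
          def show
            @question = Question.find(params[:id])
            # same ordered query as SitesController#index, scoped to this question
            @sites = @question.sites.all(
              :select => "sites.*, SUM(likes.like) AS like_total",
              :joins  => "LEFT JOIN likes AS likes ON likes.site_id = sites.id",
              :group  => "sites.id",
              :order  => "like_total DESC")
          end
        end

        # /views/questions/show.html.erb
        # <%= render :partial => @sites %>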

  • Android XML Preference issue. Can't make it persistent

    - by Budius
    I have a very simple activity just to show the preference fragment:

        public class PreferencesActivity extends Activity {
            Fragment frag = null;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);

                FragmentTransaction ft = getFragmentManager().beginTransaction();
                if (frag == null) {
                    // If not, instantiate and add it to the activity
                    frag = new PrefsFragment();
                    ft.add(android.R.id.content, frag, frag.getClass().getName());
                } else {
                    // If it exists, simply attach it in order to show it
                    ft.attach(frag);
                }
                ft.commit();
            }

            private static class PrefsFragment extends PreferenceFragment {
                @Override
                public void onCreate(Bundle savedInstanceState) {
                    super.onCreate(savedInstanceState);
                    addPreferencesFromResource(R.xml.preferences);
                }
            }
        }

    and preferences.xml with persistent set to true:

        <?xml version="1.0" encoding="utf-8"?>
        <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android"
            android:enabled="true"
            android:persistent="true"
            android:title="@string/settings" >

            <EditTextPreference
                android:dialogTitle="@string/dialog_ip"
                android:negativeButtonText="@android:string/cancel"
                android:persistent="true"
                android:positiveButtonText="@android:string/ok"
                android:title="@string/ip" />

        </PreferenceScreen>

    If I open the EditTextPreference, write something, close the dialog and open it again, the value is still there. But that's it... if I click the Back button and enter the preferences screen again, what was written is already lost. Exiting the application doesn't save it either. Am I missing something here?

    Running on: Android 4.0.3, Asus TF300
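
    One thing that stands out: a preference is only written to the default SharedPreferences under its android:key, and the EditTextPreference above has none. A minimal sketch of the fix (the key name is illustrative):

        <EditTextPreference
            android:key="server_ip"
            android:dialogTitle="@string/dialog_ip"
            android:negativeButtonText="@android:string/cancel"
            android:persistent="true"
            android:positiveButtonText="@android:string/ok"
            android:title="@string/ip" />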

  • Solved: Help with this compile error

    - by Scott
    I just picked up an old project and I'm not sure what the following error could mean:

        g++ -o BufferedReader.o -c -g -Wall -std=c++0x -I/usr/include/xmms2 -Ijsoncpp/include/json/ -fopenmp -I/usr/include/ImageMagick -I/usr/include/xmms2 -I/usr/include/libvisual-0.4 -D_GNU_SOURCE=1 -D_REENTRANT -I/usr/include/SDL -DQT_CORE_LIB -DQT_GUI_LIB -DQT_SCRIPT_LIB -DQT_SHARED -I/usr/include/QtCore -I/usr/include/QtGui -I/usr/include/QtScript BufferedReader.cpp
        In file included from BufferedReader.cpp:23:
        /usr/include/string.h:36:42: error: missing binary operator before token "("
        In file included from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/cwchar:47,
                         from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/bits/postypes.h:42,
                         from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/iosfwd:42,
                         from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/ios:39,
                         from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/istream:40,
                         from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/sstream:39,
                         from BufferedReader.cpp:24:

    At line 24 of BufferedReader.cpp is #include <string.h>. I've tried it with just <string> but get the same thing. Any clue? Here's the snippet of code from string.h:

        /* Tell the caller that we provide correct C++ prototypes.  */
        #if defined __cplusplus && __GNUC_PREREQ (4, 4)   /* line 36 */
        # define __CORRECT_ISO_CPP_STRING_H_PROTO
        #endif

    Does that mean __GNUC_PREREQ isn't defined?

    EDIT: Changing -Ijsoncpp/include/json/ to -Ijsoncpp/include stopped the errors. I noticed I was including <json/json.h>. I'm about to switch to JsonGlib though, which is the reason I pulled the project up again. So it's all good. :)
