Search Results

Search found 6745 results on 270 pages for 'objective c'.

Page 264/270

  • Static Analyzer says I have a leak....why?

    - by Walter
    I think this code should be fine, but Static Analyzer doesn't like it. I can't figure out why and was hoping that someone could help me understand. The code works fine; the analyzer result just bugs me.

        Coin *tempCoin = [[Coin alloc] initalize];
        self.myCoin = tempCoin;
        [tempCoin release];

    Coin is a generic NSObject subclass and it has an initalize method. myCoin is a property of the current view and is of type Coin, declared in my view's .h with nonatomic, retain. I assume the analyzer is telling me I am leaking tempCoin. I've tried autorelease as well as this normal release, but Static Analyzer continues to say:

    1. Method returns an Objective-C object with a +1 retain count (owning reference)
    2. Object allocated on line 97 is no longer referenced after this point and has a retain count of +1 (object leaked)

    Line 97 is the first line that I show.
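
    One possible explanation (an assumption on my part, based on Cocoa's method-family naming rules): "initalize" does not qualify as an init-family method, because after "init" the next character must not be a lowercase letter. The analyzer therefore treats it as an ordinary message, so the +1 object returned by alloc appears to be handed to a random method and abandoned. A sketch of a conforming rename (the name initCoin is hypothetical):

        // Renaming the initializer so the analyzer recognizes it (sketch):
        - (id)initCoin {
            self = [super init];
            if (self) {
                // set up the coin here
            }
            return self;
        }

        // call site:
        Coin *tempCoin = [[Coin alloc] initCoin];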

    Read the article

  • Determining where in the code an error came from - iPhone

    - by Robert Eisinger
    I'm used to Java programming, where a thrown error tells you the line and the file it came from. But with Objective-C in Xcode, I can never tell where an error comes from. How can I figure out where the error came from? Here is an example of a crash error:

        2011-01-04 10:36:31.645 TestGA[69958:207] *** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[NSMutableArray objectAtIndex:]: index 0 beyond bounds for empty array'
        *** Call stack at first throw:
        (
        0   CoreFoundation      0x01121be9 __exceptionPreprocess + 185
        1   libobjc.A.dylib     0x012765c2 objc_exception_throw + 47
        2   CoreFoundation      0x011176e5 -[__NSArrayM objectAtIndex:] + 261
        3   TestGA              0x000548d8 -[S7GraphView drawRect:] + 5763
        4   UIKit               0x003e16eb -[UIView(CALayerDelegate) drawLayer:inContext:] + 426
        5   QuartzCore          0x00ec89e9 -[CALayer drawInContext:] + 143
        6   QuartzCore          0x00ec85ef _ZL16backing_callbackP9CGContextPv + 85
        7   QuartzCore          0x00ec7dea CABackingStoreUpdate + 2246
        8   QuartzCore          0x00ec7134 -[CALayer _display] + 1085
        9   QuartzCore          0x00ec6be4 CALayerDisplayIfNeeded + 231
        10  QuartzCore          0x00eb938b _ZN2CA7Context18commit_transactionEPNS_11TransactionE + 325
        11  QuartzCore          0x00eb90d0 _ZN2CA11Transaction6commitEv + 292
        12  QuartzCore          0x00ee97d5 _ZN2CA11Transaction17observer_callbackEP19__CFRunLoopObservermPv + 99
        13  CoreFoundation      0x01102fbb __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 27
        14  CoreFoundation      0x010980e7 __CFRunLoopDoObservers + 295
        15  CoreFoundation      0x01060bd7 __CFRunLoopRun + 1575
        16  CoreFoundation      0x01060240 CFRunLoopRunSpecific + 208
        17  CoreFoundation      0x01060161 CFRunLoopRunInMode + 97
        18  GraphicsServices    0x01932268 GSEventRunModal + 217
        19  GraphicsServices    0x0193232d GSEventRun + 115
        20  UIKit               0x003b842e UIApplicationMain + 1160
        21  TestGA              0x00001cd8 main + 102
        22  TestGA              0x00001c69 start + 53
        23  ???                 0x00000001 0x0 + 1
        )

    So from looking at this, where is the error coming from, and from which class?
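
    Reading the trace top-down, the first frame inside the app's own binary (TestGA) is frame 3, -[S7GraphView drawRect:], so that is where index 0 of an empty array was requested. A defensive guard along these lines would avoid the NSRangeException ('values' is a hypothetical stand-in for whatever array drawRect: indexes):

        // Sketch only -- check the bounds before dereferencing:
        if ([values count] > 0) {
            id firstValue = [values objectAtIndex:0];
            // ... draw using firstValue ...
        }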

    Read the article

  • Cross-platform HTML application options

    - by Charles
    I'd like to develop a stand-alone desktop application targeting Windows (XP through 7) and Mac (Tiger through Snow Leopard), and if possible iPhone and Android. In order to share as much common code as possible (and because it's the only thing I'm good at), I'd like to handle the main logic with HTML and JS. Using Adobe AIR is a possibility. And I think I can do this with various application wrappers, using .NET for Windows XP, Objective-C for iPhone, Java for Android, and native "widget" platform support for Mac and Windows Vista & 7 (though I'd like to keep the widget in the foreground, so the Mac dashboard isn't ideal). The two sticking points are:

    1. I'll certainly need some form of persistent storage (cookies perhaps) to keep state between sessions.
    2. I'll also probably need access to remote data files, so if I use AJAX and the hosting HTML file resides on the device, it will need to be able to do cross-domain requests. I've done this on the iPhone without any problems, but I'd be surprised if this were possible on other platforms.

    For me, Android and iPhone will be the easiest to handle, and it looks like I can use Adobe AIR for the rest. But I wanted to know if there are any other alternatives. Does anyone have any suggestions on where to start?
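
    For the first sticking point, one option worth checking per platform is the webview's localStorage, which persists between sessions where the embedded engine supports it (it may be missing from the older WebKit builds on Tiger); the key names here are made up:

        // Persist state between sessions inside the host webview (sketch):
        localStorage.setItem("appState", JSON.stringify({ lastView: "home" }));

        // On the next launch:
        var saved = JSON.parse(localStorage.getItem("appState") || "{}");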

    Read the article

  • Exited event of Process is not raised?

    - by Kanags.Net
    In my application, I am opening an Excel sheet to show one of my Excel documents to the user. Before showing the Excel file I save it to a folder on my local machine, which in fact will be used for the showing. When the user closes the application I wish to close the opened Excel files and delete all the Excel files in my local folder. For this, in the logout event I have written code to close all the opened files, like shown below:

        Process[] processes = Process.GetProcessesByName(fileType);
        foreach (Process p in processes)
        {
            IntPtr pFoundWindow = p.MainWindowHandle;
            if (p.MainWindowTitle.Contains(documentName))
            {
                p.CloseMainWindow();
                p.Exited += new EventHandler(p_Exited);
            }
        }

    And in the process Exited event I wish to delete the Excel file whose process has exited, like shown below:

        void p_Exited(object sender, EventArgs e)
        {
            string file = strOriginalPath;
            if (File.Exists(file))
            {
                // Pdf issue fix
                FileStream fs = new FileStream(file, FileMode.Open, FileAccess.Read);
                fs.Flush();
                fs.Close();
                fs.Dispose();
                File.Delete(file);
            }
        }

    But the problem is this Exited event is not called at all. On the other hand, if I delete the file right after closing the main window of the process, I get the exception "File already used by another process". Could anyone help me achieve my objective, or give me a reason why the process Exited event is not being called?
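
    One thing worth checking, sketched below rather than guaranteed to be the whole story: Process.Exited is only raised when EnableRaisingEvents is set to true, and the handler is best attached before the close is requested:

        foreach (Process p in Process.GetProcessesByName(fileType))
        {
            if (p.MainWindowTitle.Contains(documentName))
            {
                p.EnableRaisingEvents = true;            // required for Exited to fire
                p.Exited += new EventHandler(p_Exited);  // subscribe first
                p.CloseMainWindow();                     // then request the close
            }
        }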

    Read the article

  • Capacity Allocation

    - by user1708730
    I am new to VB in Excel. I have a unique requirement for capacity allocation which I want to automate using Excel VB, and I am having a hard time doing so; hope you can help. The objective is to maximize profit by allocating maximum capacity to the products with the highest profit potential first. Every month I get demand along with the backlog of the previous month. I need to allocate capacity to the previous month's backlog first, and only the remaining capacity to fresh demand. There are two primary constraints:

    1. The number of working days in a month (variable).
    2. Not all products can be made on every production line, and the output of the same product may be different for each production line.

    Also, there will be losses whenever there is a changeover from one SKU to another, depending on the variant type and size of the next product. If there is a variant change, then 8 hours of production loss needs to be accounted for, and 4 hours in case of a size change (8 hours in case of both). I have attached sample data (the actual data has 10 production lines and 50 products): https://rapidshare.com/files/1822719405/Sample%20Data.xlsx?bin=1

    Thanks in advance for help!

    Read the article

  • Porting Perl to C++ `print "\x{2501}" x 12;`

    - by jippie
    I am porting a program from Perl to C++ as a learning objective. I arrived at a routine that draws a table with commands like the following in Perl:

        print "\x{2501}" x 12;

    This draws the character '━' ("box drawings heavy horizontal") 12 times. Now I have figured out part of the problem already: Perl's \x{}, \x00 hexadecimal escape sequences correspond to C++'s \unnnn. To print a single Unicode character:

        printf( "\u250f\n" );

    But does C++ have a smart equivalent for Perl's 'x' operator, or would it come down to a for loop?

    UPDATE: Let me include the full source code I am trying to compile with the proposed solution. The compiler throws errors:

        g++ -Wall -Werror project.cpp -o project
        project.cpp: In function ‘int main(int, char**)’:
        project.cpp:38:3: error: ‘string’ is not a member of ‘std’
        project.cpp:38:15: error: expected ‘;’ before ‘s’
        project.cpp:39:3: error: ‘cout’ is not a member of ‘std’
        project.cpp:39:16: error: ‘s’ was not declared in this scope

        #include <stdlib.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        int main ( int argc, char *argv[] )
        {
            if ( argc != 2 ) {
                fprintf( stderr , "usage: %s matrix\n", argv[0] );
                exit( 2 );
            } else {
                //std::string s(12, "\u250f" );
                std::string s(12, "u" );
                std::cout << s;
            }
        }
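
    Two notes, offered as a sketch rather than a definitive fix: the errors come from including <string.h> (the C string functions) instead of <string> and <iostream>; and std::string's fill constructor takes a single char, so a multi-byte UTF-8 sequence such as "\u2501" cannot be repeated that way. One alternative, assuming a UTF-8 terminal:

        #include <iostream>
        #include <string>

        int main()
        {
            std::string bar;
            for ( int i = 0; i < 12; ++i )
                bar += "\u2501";        // append the UTF-8 bytes for '━' 12 times
            std::cout << bar << '\n';
        }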

    Read the article

  • Financial Market Developer dilemma...

    - by Sahat
    ...In the future I am planning to work in the financial sector as a programmer. I have a couple of options right now:

    1. Learn and master .NET, since presumably that's widely used in that industry, OR
    2. Learn the programming concepts, learn algorithms, learn a little bit of C, C++, C#, Java, Objective-C, SQL, Oracle, COBOL - in other words, learn the fundamental principles that tie all programming languages together without going too deep into any particular language.

    Someone has told me that most of the time as a programmer you won't be writing any code, but instead maintaining existing code that people before you have built. Does that mean I don't really need to master any specific language, and as long as I have the general concepts it'll be good enough? If you have worked, or know someone who has worked, in the financial industry as a software developer, could you please share the experience and what the daily routine consists of? Also, what should I be learning right now while I am still young and in college? Do I have to thoroughly understand the market and the current economy? What about Oracle or SQL databases - do I need to know them inside out as a programmer? If you have anything else to add that I have not mentioned, then please do so. Thanks in advance!

    Read the article

  • How do I slow down a flash game?

    - by dsaccount1
    Basically the objective is to click on certain targets; doing so destroys the target and garners you points. I've written a macro to help me, up until the point where it's impossible to even see the target as more than a mere flicker (maybe even less than that; I can't see it with my eyes). But it must be possible, because I believe others have done it. (Maybe on slower computers?) Anyway, the question is: how would it be possible to slow down the flash game? I've thought of a couple of ways that could work, but I'm not sure how to implement them:

    1. Slow down the CPU speed? (Something like that? How?)
    2. As the game progresses, the time the targets appear and stay up is reduced. Maybe there's a variable controlling all of this; is it possible to modify this variable at its address? Freeze it or something?

    Any ideas, suggestions and especially advice would really be appreciated, thanks!

    Read the article

  • MySQL Query Where Column Like Column

    - by shmeeps
    I'm working on a small project that involves grabbing a list of contacts which are stored for each group. Essentially, the database is set up so that each group has a primary and a secondary contact stored as, unsurprisingly, Group.Primary and Group.Secondary. The objective is to pull every primary and secondary contact for each group and display them in a sortable table. I have the sortable table all worked out, but I have come across a small problem: each Primary and Secondary field can hold more than one contact, separated by a comma. For instance, if Primary contained 123,256, it would need to pull both contacts with IDs 123 and 256. I had intended to use a query formatted like this, so that I could just skip the comma part:

        SELECT *
        FROM Group G, Contacts C
        WHERE G.Primary LIKE %C.ID%
           OR G.Secondary LIKE %C.ID%

    but I can't seem to find a working query for this. My question to you is: am I just overlooking something here? Is there a simple query that would let me do this? Or am I better off getting the groups and contacts separately and combining the two later? I think the former is a little easier to understand when read, which is a plus as this is a shared project, but if that is not possible I will do the latter. This code is simplified, but it gets the point across.
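
    For what it's worth, a sketch of one MySQL-specific option: FIND_IN_SET() matches a value against a comma-separated list exactly, avoiding the false matches a plain LIKE '%...%' can produce (e.g. ID 12 matching '123,256'). It assumes no spaces after the commas; GROUP and PRIMARY are reserved words, hence the backticks:

        SELECT *
        FROM `Group` G
        JOIN Contacts C
          ON FIND_IN_SET(C.ID, G.`Primary`) > 0
          OR FIND_IN_SET(C.ID, G.Secondary) > 0;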

    Read the article

  • Suitable web framework for the following scenario

    - by Paralife
    I have the following scenario: I have a view in an Oracle server, and all I want is to show that view in a web browser, along with an input field or two for basic filtering. No users, no authentication, just this view, maybe with a column or two linking to a second page for master-detail viewing. The children are just string descriptions of the columns of the master that contain IDs. No inserts or updates. The question is which Java-based web framework is the framework of choice to accomplish the above with the minimum amount of:

    1. code lines
    2. code time (subjective, but also kind of objective if someone has experience with more than one or two frameworks)
    3. configuration effort
    4. deployment effort and requirements
    5. dependencies and memory footprint

    Also: 6. Oracle APEX is not an option.

    3, 4 and 5 are maybe the same, in the sense that they are everything except the functionality coding. I want something that I can compile, deploy by just FTPing to the database host, run, and forget. (For the deployment aspect, the Hudson way comes to mind: java -jar hudson.war and that's all.) Also: 3 and 4 have priority over 1 and 2. (Explanation with a rant: I don't mind coding a lot as long as it is application code and not "why the fuck do we still use javascript over http for everything" code.) Thanks.

    Read the article

  • How to know the type of an object in a list?

    - by nacho4d
    Hi, I want to know the type of object (or type) I have in my list, so I wrote this:

        void **list; // list of references
        list = new void * [2];

        Foo foo = Foo();
        const char *not_table[] = { "tf", "ft", 0 };

        list[0] = &foo;
        list[1] = not_table;

        if (dynamic_cast<LogicProcessor*>(list[0])) { // ERROR here ;(
            printf("Foo was found\n");
        }
        if (dynamic_cast<char*>(list[0])) { // ERROR here ;(
            printf("char was found\n");
        }

    but I get:

        error: cannot dynamic_cast '* list' (of type 'void*') to type 'class Foo*' (source is not a pointer to class)
        error: cannot dynamic_cast '* list' (of type 'void*') to type 'char*' (target is not pointer or reference to class)

    Why is this? What am I doing wrong here? Is dynamic_cast what I should use here? Thanks in advance.

    EDIT: I know the above code is much like plain C and surely sucks from the C++ point of view, but I have the following situation and I was trying something out before really implementing it: I have two arrays of length n, but both arrays will never have an object at the same index. Hence, either array1[i] != NULL or array2[i] != NULL. This is obviously a waste of memory, so I thought everything would be solved if I could have both kinds of objects in a single array of length n. I am looking for something like Cocoa's (Objective-C) NSArray, where you don't care about the type of the object to be put in. Not knowing the type of the object is not a problem, since you can use another method to get the class of a certain object later. Is there something like it in C++ (preferably not third-party C++ libraries)? Thanks in advance ;)
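
    dynamic_cast works on pointers and references to polymorphic class types (types with at least one virtual function), never on void*. A sketch of the usual C++ arrangement, assuming both entry kinds can be given a common base class (all names here are hypothetical):

        #include <cstdio>

        struct Entry { virtual ~Entry() {} };             // polymorphic base
        struct FooEntry   : Entry { /* Foo payload */ };
        struct TableEntry : Entry { const char **table; };

        int main()
        {
            Entry *list[2];
            FooEntry foo;
            TableEntry table;
            list[0] = &foo;
            list[1] = &table;

            for (int i = 0; i < 2; ++i) {
                if (dynamic_cast<FooEntry*>(list[i]))
                    std::printf("Foo was found\n");
                else if (dynamic_cast<TableEntry*>(list[i]))
                    std::printf("table was found\n");
            }
            return 0;
        }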

    Read the article

  • Can somebody please explain this recursive function for me?

    - by capncoolio
        #include <stdio.h>
        #include <stdlib.h>

        void reprint(char *a[])
        {
            if (*a) {
                printf("%d ", a);
                reprint(a + 1);
                printf("%s ", *a);
            }
        }

        int main()
        {
            char *coll[] = { "C", "Objective", "like", "don't", "I", NULL };
            reprint(coll);
            printf("\n");
            return EXIT_SUCCESS;
        }

    As the more experienced will know, this prints the array in reverse. I don't quite understand how! I need help understanding what reprint(char *a[]) does. I understand pointer arithmetic to a degree, and from inserting printf's here and there I've determined that the function increments up to the array's end, and then back down to the start, only printing on the way down. However, I do not understand how it does this; all I've managed to understand by looking at the actual code is that if *a isn't NULL, it calls reprint again at the next index. Thanks guys!
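
    A rough trace may help; this is my annotation, not part of the original post. Each call prints the pointer on the way in (the "%d" printf), recurses, and only then prints its string, so the strings appear while the stack unwinds, last element first:

        /* reprint(coll)           -> prints pointer, recurses
             reprint(coll + 1)     -> prints pointer, recurses
               ...
                 reprint(coll + 5) -> *a is NULL: returns without printing
               printf("%s ", "I")       <- unwinding begins here
             printf("%s ", "don't")
           printf("%s ", "like")    ... and so on, ending with:
           I don't like Objective C                                   */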

    Read the article

  • What is the best way to properly test object equality against an array of objects?

    - by radesix
    My objective is to abort the NSXMLParser when I parse an item that already exists in cache. The basic flow of the program works like this:

    1. The program starts and downloads an XML feed. Each item in the feed is represented by a custom object (FeedItem), and each FeedItem gets added to an array.
    2. When the parsing is complete, the contents of the array (all FeedItem objects) are archived to disk. The next time the program is executed, or the feed is refreshed by the user, I begin parsing again; however, since a cache (array) now exists, as each item is parsed I want to see if the object exists in the cache. If it does, then I know I have downloaded all the new items and no longer need to continue parsing.

    What I am learning, I think, is that I can't use indexOfObject: or indexOfObjectIdenticalTo:, because these really seem to check whether the objects are using the same memory address (and thus are identical). What I want to do is see whether the contents of the objects are equal (or at least some of the contents). I've done some research and found that I can override the isEqual: method; however, I really don't want to iterate/enumerate through the entire cache contents table for every newly parsed XML FeedItem. Is iterating through the collection and testing each one for equality the only way to do this, or is there a better technique I am not aware of? Currently I am using the following code, though I know it needs to change:

        NSUInteger index = [self.feedListCache.feedList indexOfObject:self.currentFeedItem];
        if (index == NSNotFound) {
        }
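
    For reference, a sketch of the isEqual:/hash override mentioned above, assuming FeedItem carries some uniquely identifying string (the guid property here is hypothetical). With this in place, indexOfObject: compares with isEqual: rather than pointer identity; storing the cached items in an NSSet would additionally make the membership test roughly constant-time instead of a linear scan:

        - (BOOL)isEqual:(id)object {
            if (self == object) return YES;
            if (![object isKindOfClass:[FeedItem class]]) return NO;
            return [self.guid isEqualToString:[(FeedItem *)object guid]];
        }

        - (NSUInteger)hash {
            return [self.guid hash];   // objects that compare equal must hash equally
        }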

    Read the article

  • How to emulate a BEFORE DELETE trigger in SQL Server 2005

    - by Mark
    Let's say I have three tables, [ONE], [ONE_TWO], and [TWO]. [ONE_TWO] is a many-to-many join table with only [ONE_ID] and [TWO_ID] columns. There are foreign keys set up to link [ONE] to [ONE_TWO] and [TWO] to [ONE_TWO]. The FKs use the ON DELETE CASCADE option, so that if either a [ONE] or [TWO] record is deleted, the associated [ONE_TWO] records are automatically deleted as well. I want to have a trigger on the [TWO] table such that when a [TWO] record is deleted, it executes a stored procedure that takes a [ONE_ID] as a parameter, passing the [ONE_ID] values that were linked to the [TWO_ID] before the delete occurred:

        DECLARE @Statement NVARCHAR(max)
        SET @Statement = ''

        SELECT @Statement = @Statement + N'EXEC [MyProc] ''' + CAST([one_two].[one_id] AS VARCHAR(36)) + '''; '
        FROM deleted
        JOIN [one_two] ON deleted.[two_id] = [one_two].[two_id]

        EXEC (@Statement)

    Clearly, I need a BEFORE DELETE trigger, but there is no such thing in SQL Server 2005. I can't use an INSTEAD OF trigger because of the cascading FK. I get the impression that if I use a FOR DELETE trigger, by the time I join [deleted] to [ONE_TWO] to find the list of [ONE_ID] values, the FK cascade will have already deleted the associated [ONE_TWO] records, so I will never find any [ONE_ID] values. Is this true? If so, how can I achieve my objective? I'm thinking that I'd need to change the FK joining [TWO] to [ONE_TWO] to not use cascades, and to delete from [ONE_TWO] manually in the trigger just before I manually delete the [TWO] records. But I'd rather not go through all that if there is a simpler way.
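
    In case it is useful, here is an untested sketch of the manual-cascade variant described in the last paragraph: drop the cascading FK between [TWO] and [ONE_TWO] (per the note above that the INSTEAD OF trigger is blocked while it exists), then capture the [ONE_ID] values while the join rows still exist:

        CREATE TRIGGER trg_two_instead_of_delete ON [TWO]
        INSTEAD OF DELETE
        AS
        BEGIN
            DECLARE @Statement NVARCHAR(MAX)
            SET @Statement = N''

            -- the [one_two] rows still exist at this point
            SELECT @Statement = @Statement
                 + N'EXEC [MyProc] ''' + CAST(ot.[one_id] AS VARCHAR(36)) + N'''; '
            FROM deleted d
            JOIN [one_two] ot ON d.[two_id] = ot.[two_id]

            -- do the cascade by hand, then the actual delete
            DELETE FROM [one_two] WHERE [two_id] IN (SELECT [two_id] FROM deleted)
            DELETE FROM [TWO]     WHERE [two_id] IN (SELECT [two_id] FROM deleted)

            EXEC (@Statement)
        END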

    Read the article

  • Sqlite3 INSERT INTO Question

    - by user316717
    Hi, my 1st post. I am creating an exercise app that will record the weight used and the number of "reps" the user did in 4 "sets" per day, over a period of 7 days, so the user may view their progress. I have built a database table named FIELDS with 2 columns, ROW and FIELD_DATA, and I can use the code below to load the data into the db. But the code has a SQL statement that says:

        INSERT OR REPLACE INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@');

    When I change the statement to:

        INSERT INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@');

    nothing happens. That is, no data is recorded in the db. Below is the code:

        #define kFilname @"StData.sqlite3"

        - (NSString *)dataFilePath {
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            return [documentsDirectory stringByAppendingPathComponent:kFilname];
        }

        - (IBAction)saveData:(id)sender;
        {
            for (int i = 1; i <= 8; i++) {
                NSString *fieldName = [[NSString alloc] initWithFormat:@"field%d", i];
                UITextField *field = [self valueForKey:fieldName];
                [fieldName release];
                NSString *insert = [[NSString alloc] initWithFormat:
                    @"INSERT OR REPLACE INTO FIELDS (ROW, FIELD_DATA) VALUES (%d, '%@');", i, field.text];
                // sqlite3_stmt *stmt;
                char *errorMsg;
                if (sqlite3_exec(database, [insert UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) {
                    // NSAssert1(0, @"Error updating table: %s", errorMsg);
                    sqlite3_free(errorMsg);
                }
            }
            sqlite3_close(database);
        }

    So how do I modify the code to do what I want? It seemed like a simple SQL statement change at first, but obviously there must be more to it. I am new to Objective-C and iPhone programming. I am not new to using SQL statements, as I have been creating web apps in ASP for a number of years. Any help will be greatly appreciated; this is driving me nuts! Thanks in advance, Dave
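
    One hedged observation: if ROW is the table's primary key (an assumption; the CREATE TABLE isn't shown), a plain INSERT of an existing ROW value fails with a constraint violation, and because the NSAssert1 is commented out, that error is silently discarded. Logging it would at least make the failure visible:

        if (sqlite3_exec(database, [insert UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) {
            NSLog(@"Error updating table: %s", errorMsg);   // surface the SQLite error text
            sqlite3_free(errorMsg);
        }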

    Read the article

  • Android - How to decide whether to run a Service in a separate Process?

    - by pableu
    I am working on an Android application that collects sensor data over the course of multiple hours. For that, we have a Service that collects the sensor data (e.g. acceleration, GPS, ...), does some processing, and stores it remotely on a server. Currently, this Service runs in a separate process (using android:process=":background" in the manifest). This complicates the communication between the Activities and the Service, but my predecessors created the application this way because they thought that separating the Service from the Activities would make it more stable. I would like some more factual reasons for the effort of running a separate process:

    - What are the advantages?
    - Does it really run more stably?
    - Is the Service less likely to be killed by the OS (to free up resources) if it's in a separate process?

    Our application uses startForeground() and friends to minimize the chance of getting killed by the OS. The Android docs are not very specific about this; they mostly state that it depends on the application's purpose ;-)

    TL;DR: What are objective reasons to put a long-running Service in a separate process (in Android)?
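
    For reference, a manifest sketch of the attribute in question (the service class name is hypothetical); the leading colon makes the extra process private to the application:

        <service
            android:name=".SensorCollectionService"
            android:process=":background" />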

    Read the article

  • Associate two sets of values

    - by PJW
    I have the following code:

        public static int GetViewLevel(string viewLevelDesc)
        {
            try
            {
                switch (viewLevelDesc)
                {
                    case "All": return 0;
                    case "Office": return 10;
                    case "Manager": return 50;
                    default: throw new Exception("Invalid View Level Description");
                }
            }
            catch (Exception eX)
            {
                throw new Exception("Action: GetViewLevel()" + Environment.NewLine + eX.Message);
            }
        }

        public static string GetViewLevelDescription(int viewLevel)
        {
            try
            {
                switch (viewLevel)
                {
                    case 0: return "All";
                    case 10: return "Office";
                    case 50: return "Manager";
                    default: throw new Exception("Invalid View Level Description");
                }
            }
            catch (Exception eX)
            {
                throw new Exception("Action: GetViewLevelDescription()" + Environment.NewLine + eX.Message);
            }
        }

    The two static methods enable me to get an int view level from a string view-level description, or vice versa. I'm sure the way I have done this is far more cumbersome than it needs to be, and I'm looking for some advice on how to achieve the same objective more concisely. The list of int/string pairs will increase significantly; the ones in the above code are just the first three I intend to use.
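
    One common alternative, sketched under the assumption that the mapping really is one-to-one: keep the pairs in a single dictionary and derive the reverse lookup from it, so a growing list only ever needs one new entry:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class ViewLevels
        {
            private static readonly Dictionary<string, int> LevelByDesc =
                new Dictionary<string, int>
                {
                    { "All", 0 },
                    { "Office", 10 },
                    { "Manager", 50 },
                };

            // Reverse map, built once from the forward map.
            private static readonly Dictionary<int, string> DescByLevel =
                LevelByDesc.ToDictionary(p => p.Value, p => p.Key);

            public static int GetViewLevel(string desc)
            {
                int level;
                if (!LevelByDesc.TryGetValue(desc, out level))
                    throw new Exception("Invalid View Level Description");
                return level;
            }

            public static string GetViewLevelDescription(int level)
            {
                string desc;
                if (!DescByLevel.TryGetValue(level, out desc))
                    throw new Exception("Invalid View Level Description");
                return desc;
            }
        }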

    Read the article

  • Is real-time or synchronous replication possible over WAN link?

    - by johnnyb10
    The company I work for is looking to implement truly real-time file replication with file locking over a WAN link that spans over 2000 miles. We currently have a 16-drive SAN setup in our east coast office. We also have an office out in Colorado that will have the same exact SAN setup. The idea is to have those two SANs contain the same exact data at all times, which will allow us to work with the same data pool, and which will also provide us with an offsite backup solution should a failure occur on either end. We're running Server 2008. The objective is to enable users in the east coast office to work on files and have those changes instantly updated on the Colorado SAN as well. We also need file locking, so that there will be no conflicts or overwritten changes if users attempt to work on the same file.

    Is this scenario even possible, at speeds that would make the files usable? And if so, what software would we need to pull this off? As I understand it, DFS-R does not provide file locking, so if we used that we would need to go with a third-party product like PeerLock. But I don't even know if DFS-R is an option: can it replicate quickly enough over a WAN link? Can any product? It seems that if we were to use synchronous replication, the programs would be unacceptably slow, as every write would have to wait for confirmation from the other end of the link. But if we used asynchronous replication, what kind of latency would we be looking at?

    There is a product from GlobalScape called WAFS that claims to provide "File coherence with real-time file locking, file release, and synchronization" and says that "As files are modified, changes are mirrored instantly using intelligent byte-level differencing to minimize the impact on network bandwidth". So this sounds like synchronous replication, but that doesn't even seem possible, given physical limitations such as the speed of light.

    If anyone has any experience with this kind of setup, or knows whether it's even possible, I'd appreciate your input and suggestions, including recommendations for software that we should check out.
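
    As a rough sanity check on the physics (my arithmetic, not from the original post): 2,000 miles is about 3,200 km, and light in fiber travels at roughly two-thirds of c, around 200,000 km/s, so propagation alone costs about 16 ms one way and about 32 ms per round trip, before any routing or equipment delay. Synchronous replication pays at least that round trip on every acknowledged write, which is why it is generally confined to metro-scale distances.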

    Read the article

  • What are the pros and cons of AWS Elastic Beanstalk compared with other deployment strategies?

    - by James van Dyke
    I'm pretty new to the whole Netflix OSS stack and deployments in general. As background for my current level of knowledge ops-wise, my main role is as a front-end application engineer. However, I enjoy the operations side of things, so I'm attempting to set up a new deployment strategy and the tooling for a new project.

    Our goals:

    - Super easy deploys (we want to push a button to update production)
    - Automated deploys to test environments (using Jenkins)
    - Ease of maintenance (we have an app to write; we don't want to spend our time fiddling with production issues)
    - Ability to handle a service-oriented architecture (many small apps, various languages and data stores)
    - Enough flexibility to ensure we won't have to change strategies any time soon (we're already trying to get away from RightScale)

    We're OK with a little more initial setup time if doing so will save us some headaches in the future. So, along these lines, I've been listening to podcasts, watching ops talks, and reading tons of blog posts, and based on our goals and what I've taken to be some forming best practices, we've started forming a plan using Asgard, rolling our package into a jar and rolling that into an AMI. We had this all planned out and liked the advantages of that process versus using a Chef server and converging instances on the fly (we felt this was error-prone given our limited timeline and lack of understanding of a Chef server workflow). However, a coworker did a little looking around on his own and felt like Elastic Beanstalk met our needs. I've looked into it and spun up a test environment with a WAR file and an attached RDS database. Things seem to work, and I believe that we can automate deploys to a testing environment using Jenkins via the AWS API. Seems simple enough... perhaps too simple.

    What I'm wondering is, what's the catch? If Elastic Beanstalk is so simple and effective, why isn't it talked about more? I'm having a hard time finding enough objective opinions and facts about the two different deployment strategies, so I thought I'd ask around:

    - Do you use Elastic Beanstalk? If so, why, and what factors led to that decision? What do you like and dislike?
    - If you don't use Elastic Beanstalk but considered it, what do you use, and why didn't you use Elastic Beanstalk?
    - What are the advantages and disadvantages of an Elastic Beanstalk-based deployment strategy for an SOA? That is, will Elastic Beanstalk work well with many small applications that rely on each other to work?

    Read the article

  • Any reason not to disable the Windows pagefile given enough physical RAM?

    - by Evgeny
    The question of disabling the Windows pagefile has already been discussed quite a bit, for example here and here and here. People continue to upvote answers that say "you should not disable your pagefile even if you have plenty of RAM", but I have yet to see any concrete, verifiable reasons given for this advice. As far as I can see, if you never need to read from the pagefile (because you have enough RAM), then performance could only be worse with it enabled, due to Windows pre-emptively writing to it. At best, performance would be the same. I can't see how it could possibly be improved by writing data you never need to read.

    So my question is: assuming that I have enough physical RAM for everything I do, is there any reason I should not disable the pagefile? Let's say the version of Windows is Windows XP x64 SP2 or Windows Server 2003 x64 SP2 (same thing). If it's different for Windows Server 2008 x64, I'd be interested to hear an answer for that as well. I'm looking for specific, objective reasons from good sources, not just opinions. Something like "here are the benchmarks done with and without a pagefile and the results were better with a pagefile, even with enough RAM", or "according to this MS KB article, problem X occurs if you disable the pagefile". So far the only reasons I've seen mentioned are:

    - Even if you think you have enough RAM, you might run out. OK, but for the purposes of this question, let's just take it as a given that I have enough. Maybe I only ever read my email and I have 16GB RAM. Or 128GB. Or 1TB. Or whatever - but it's enough for 100% of what I do, 100% of the time. Another way to think of it is: if I have x MB physical RAM and y MB pagefile and I never run out of RAM in that configuration, would I not be better off, performance-wise, with x+y MB physical RAM and no pagefile?
    - Windows is "used to" having a paging file and it might not function as reliably (from Understanding the Impact of RAM on Overall System Performance). That's rather vague and I find it hard to believe, given that MS has provided the option to disable the pagefile.
    - Windows knows what it's doing better than you. No - it doesn't know that I won't run more programs or load more data, but I do.

    Read the article

  • Do glue records in non-circular dns-lookups speed up domain resolution or not?

    - by Joe Hopfgartner
    Doing a lookup for my domain on http://www.intodns.com/ I noticed these two messages. In the Parent section:

        DNS Parent sent Glue
        The parent nameserver g.gtld-servers.net is not sending out GLUE for every nameservers listed, meaning he is sending out your nameservers host names without sending the A records of those nameservers. It's ok but you have to know that this will require an extra A lookup that can delay a little the connections to your site. This happens a lot if you have nameservers on different TLD (domain.com for example with nameserver ns.domain.org.)

    and in the NS section:

        Glue for NS records
        INFO: GLUE was not sent when I asked your nameservers for your NS records. This is ok but you should know that in this case an extra A record lookup is required in order to get the IPs of your NS records. The nameservers without glue are:
        109.230.225.96
        84.201.40.52
        You can fix this for example by adding A records to your nameservers for the zones listed above.

    I perfectly understand that the primary objective of glue records is to resolve circular dependencies. The classic use case: my domain is example.com and I want to have the nameserver ns1.example.com. This will never work, because I cannot know the IP of ns1.example.com unless I fetch example.com, and in order to do that I need to fetch it from ns1.example.com. To resolve this deadlock I add a glue record for ns1.example.com containing the IP address of the nameserver, so this can work out.

    So this problem does not occur if the nameservers are in a different TLD than the domain I want to look up. But to fetch the zone information from the nameservers, I still need to know their IP addresses, right? And in order to know those, I need to fetch the zone the nameservers are in from their respective nameservers, right? (Or rather, my ISP needs to do that in the background.) So, an extra lookup that takes time? If I now have glue records, I know the IP addresses right away, without the need to look them up; so this should speed up the resolution of my domain, shouldn't it?

    However, my DNS zone provider (tecserver.at) replied that this would make no sense because "we are not running ns1.ourdomain.com and ns2.ourdomain.com as authoritative NS for ourdomain.com. This would be the only sense for glue records. Tecserver has a glue record because the NS for tecserver.at are ns1.tecserver.at and ns2.tecserver.at. Therefore a glue record is needed for resolution."
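
    As an aside, you can see whether the parent hands out glue by querying a gTLD server directly without recursion; glue appears as A records in the additional section of the referral. A sketch with a hypothetical domain (output abridged):

        $ dig +norecurse NS yourdomain.com @g.gtld-servers.net

        ;; AUTHORITY SECTION:
        yourdomain.com.    172800  IN  NS  ns1.example.net.

        ;; ADDITIONAL SECTION:
        ns1.example.net.   172800  IN  A   203.0.113.10    ; <- this is glue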

    Read the article

  • How to remove RAID flag on unstriped drive without losing data?

    - by Alex Folland
    I have a Gigabyte Z68X-UD4-B3 motherboard. It advertises this new thing called "XHD", which is like RAID but makes an SSD and a traditional-style drive work together to enable high speed with high capacity. I don't want to use this feature, and I already have Windows 7 64 installed without using this feature.

    When I first installed my 2 hard drives (1 SSD and 1 traditional-style drive) in my machine and booted it up for the first time, it ran a program from the mobo that asked me if I wanted to set up XHD. Thinking it would go to some config screen, I said yes. It immediately started doing something with my drives and finished. I considered that strange, but figured it wouldn't matter when I simply installed Windows onto my SSD only. I now have my BIOS and Windows running in AHCI mode with no RAID arrays and separate drives.

    My SSD is one of those new Corsair Force GT drives which loses power every so often, causing Windows to BSOD. I've figured everything out about this problem, including installing the latest firmware from Corsair, and the only way to fix it at this point is by installing Intel Rapid Storage Technology to control AHCI instead of Windows, since the Windows AHCI driver disables the drive's power every once in a while and can't be configured not to do so.

    I've tried installing Intel Rapid Storage Technology. When I reboot my machine after doing so, it BSODs just after the Windows logo. I've figured out this is because my SSD and my traditional drive are flagged as RAID, as seen in the "Intel Matrix Storage Manager" program found by switching the BIOS hard drive handling to "RAID" mode. This is due to the XHD auto-config program I mentioned earlier.

    Normally, the BIOS is set to AHCI, and when the drives boot in AHCI mode, they work perfectly. So, I've concluded the data is stored in AHCI mode but the drives' flags are set to RAID. I've figured out that I can accomplish my objective by using the "Intel Matrix Storage Manager" program on the mobo (with "Reset disks to non-RAID"), but doing so would cause it to completely wipe the drives I select. I want to simply toggle these flags from RAID to AHCI so Intel Rapid Storage Technology doesn't fail and cause a BSOD upon booting, but without wiping the drives.

    Read the article

  • Oracle ADF Coverage at OOW

    - by Frank Nimphius
    Below is the schedule for all ADF-related sessions at a glance. Note the Meet and Greet session added for Wednesday, October 3rd, from 4:30 pm to 5:30 pm.

    Oracle ADF and Fusion Development General Sessions

    Mon 1 Oct, 2012 (Time | Title | Location)
    10:45 AM - 11:45 AM | General Session: The Future of Development for Oracle Fusion—From Desktop to Mobile to Cloud | Marriott Marquis - Salon 8
    12:15 PM - 1:15 PM | General Session: Extend Oracle Fusion Apps to Tablets/Smartphones with Oracle Mobile Technology | Moscone West - 3014
    1:45 PM - 2:45 PM | General Session: Extend Oracle Applications to Mobile Devices with Oracle’s Mobile Technologies | Moscone West - 3002/3004
    4:45 PM - 5:45 PM | General Session: Building Mobile Applications with Oracle Cloud | Moscone West - 2002/2004

    Conference Sessions

    Mon 1 Oct, 2012
    12:15 PM - 1:15 PM | Understanding Oracle ADF and Its Role in Oracle Fusion | Moscone South - 306
    1:45 PM - 2:45 PM | Building Performant Oracle ADF Business Components to Meet Tomorrow’s Needs | Marriott Marquis - Golden Gate C3
    3:15 PM - 4:15 PM | End-to-End Oracle ADF Development in Eclipse | Marriott Marquis - Golden Gate C3
    4:45 PM - 5:45 PM | Classic Mistakes with Oracle Application Development Framework | Marriott Marquis - Salon 7

    Tues 2 Oct, 2012
    10:15 AM - 11:15 AM | One Size Doesn’t Fit All: Oracle ADF Architecture Fundamentals | Marriott Marquis - Golden Gate C2
    10:15 AM - 11:15 AM | Oracle Business Process Management/Oracle ADF Integration Best Practices | Marriott Marquis - Golden Gate C3
    11:45 AM - 12:45 PM | Mobile-Enable Oracle Fusion Middleware and Enterprise Applications with Oracle ADF | Moscone South - 306
    11:45 AM - 12:45 PM | Secrets of Successful Projects with Oracle Application Development Framework | Marriott Marquis - Golden Gate C2
    1:15 PM - 2:15 PM | Develop On-Device iPhone and iPad Apps Without Writing Any Objective-C Code | Marriott Marquis - Golden Gate C2
    1:15 PM - 2:15 PM | BPM, SOA, and Oracle ADF Combined: Patterns Learned from Oracle Fusion Applications | Moscone West - 3003
    1:15 PM - 2:15 PM | The Future of Forms Is … Oracle Forms (and Friends) | Moscone South - 306
    5:00 PM - 6:00 PM | Best Practices for Integrating SOAP and REST Service into Oracle ADF | Marriott Marquis - Golden Gate C2

    Wed 3 Oct, 2012
    10:15 AM - 11:15 AM | Mobile Apps for Oracle E-Business Suite with Oracle ADF Mobile and Oracle SOA Suite | Moscone West - 3001
    10:15 AM - 11:15 AM | Visualize This! Best Practices for Data Visualization in Desktop and Mobile Apps | Marriott Marquis - Golden Gate C3
    10:15 AM - 11:15 AM | Set Up Your Oracle ADF Project and Development Team for Productivity: Seven Essential Tips | Marriott Marquis - Golden Gate C2
    11:45 AM - 12:45 PM | How to Migrate an Oracle Forms Application to Oracle ADF | Marriott Marquis - Golden Gate C2
    1:15 PM - 2:15 PM | Oracle ADF: Lessons Learned in Real-World Implementations | Moscone South - 309
    3:30 PM - 4:30 PM | Oracle ADF Implementations Around the Globe: Best Practices | Marriott Marquis - Golden Gate C2
    3:30 PM - 4:30 PM | Oracle Developer Cloud Services | Marriott Marquis - Salon 7
    4:30 PM - 5:30 PM | Oracle JDeveloper and Oracle ADF: What’s New | Hilton San Francisco - Continental Ballroom 5
    5:00 PM - 6:00 PM | Mobile Solutions for Oracle E-Business Suite Applications: Technical Insight | Moscone West - 2020
    5:00 PM - 6:00 PM | Extending Social into Enterprise Applications and Business Processes | Marriott Marquis - Golden Gate C3
    5:00 PM - 6:00 PM | The Tie That Binds: An Introduction to Oracle ADF Bindings | Marriott Marquis - Golden Gate C2

    Thur 4 Oct, 2012
    11:15 AM - 12:15 PM | Using Oracle ADF with Oracle E-Business Suite: The Full Integration View | Moscone West - 3003
    11:15 AM - 12:15 PM | Deep Dive into Oracle ADF: Advanced Techniques | Marriott Marquis - Golden Gate C2
    12:45 PM - 1:45 PM | Monitor, Analyze, and Troubleshoot Your Oracle ADF Application | Marriott Marquis - Golden Gate C2
    2:15 PM - 3:15 PM | Oracle WebCenter Portal: Creating and Using Content Presenter Templates | Marriott Marquis - Golden Gate C2

    HOL (Hands-on Lab)

    Mon 1 Oct, 2012
    10:45 AM - 11:45 AM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    1:45 PM - 2:45 PM | Build Mobile Applications for Oracle E-Business Suite | Marriott Marquis - Salon 10A
    3:15 PM - 4:15 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    3:15 PM - 4:15 PM | Introduction to Oracle ADF: Hands-on Lab | Marriott Marquis - Salon 3/4
    4:45 PM - 5:45 PM | Application Lifecycle Management with Oracle JDeveloper: Hands-on Lab | Marriott Marquis - Salon 3/4

    Tues 2 Oct, 2012
    10:15 AM - 11:15 AM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    5:00 PM - 6:00 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A

    Wed 3 Oct, 2012
    10:15 AM - 11:15 AM | Introduction to Oracle ADF: Hands-on Lab | Marriott Marquis - Salon 3/4
    11:45 AM - 12:45 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    1:15 PM - 2:15 PM | Build Mobile Applications for Oracle E-Business Suite | Marriott Marquis - Salon 10A
    3:30 PM - 4:30 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    5:00 PM - 6:00 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A

    Thur 4 Oct, 2012
    11:15 AM - 12:15 PM | Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile: Hands-on Lab | Marriott Marquis - Salon 10A
    11:15 AM - 12:15 PM | Introduction to Oracle ADF: Hands-on Lab | Marriott Marquis - Salon 3/4
    12:45 PM - 1:45 PM | Oracle ADF for Java EE Developers with Oracle Enterprise Pack for Eclipse | Marriott Marquis - Salon 3/4

    BOF (Birds-of-a-Feather)

    Mon 1 Oct, 2012
    6:15 PM - 7:00 PM | How to Get Started with Oracle ADF | Marriott Marquis - Club Room
    7:15 PM - 8:00 PM | Building Next-Generation Applications with Oracle ADF and Oracle BPM | Marriott Marquis - Golden Gate C3
    7:15 PM - 8:00 PM | The Future of Oracle Forms: Upgrade, Modernize, or Migrate? | Marriott Marquis - Golden Gate C2
    7:15 PM - 8:00 PM | Oracle ADF Faces: One Site for Many Devices | Marriott Marquis - Golden Gate C1

    User Group Forum (Sunday Only)

    Sun 30 Sept, 2012
    9:00 AM - 10:00 AM | Oracle ADF Immersion: How an Oracle Forms Developer Immersed Himself in the Oracle ADF World | Moscone South - 305
    10:15 AM - 11:15 AM | Deploy with Joy: Using Hudson to Build and Deploy Your Oracle ADF Applications | Moscone South - 305
    11:30 AM - 12:30 PM | ADF EMG User Group: A Peek into the Oracle ADF Architecture of Oracle Fusion Applications | Moscone South - 305
    12:45 PM - 3:45 PM | ADF EMG User Group: Oracle Fusion Middleware Live Application Development Demo | Moscone South - 305
    3:15 PM - 4:15 PM | Mobile Development with Oracle JDeveloper and Oracle ADF | Moscone West - 2010

    Demos (Demo | Location)
    Developer | Moscone North, Upper Lobby - N-002
    Oracle ADF Mobile Development | Moscone North, Upper Lobby - N-001
    Oracle Eclipse Projects | Hilton San Francisco, Grand Ballroom - HHJ-008
    Oracle Enterprise Pack for Eclipse | Moscone South, Right - S-208
    Oracle JDeveloper and Oracle ADF | Moscone South, Right - S-207

    Exhibits (Exhibitor | Location)
    Accenture | Moscone South - 1813 and Moscone South - 2221
    Infosys | Moscone South - 1701 and Moscone South - SMR-005
    Innowave Technology | Moscone South - 2309
    ODTUG | Moscone West, Level 2 Lobby - Kiosk in the User Groups Pavilion

    Oracle ADF Developers Meet Up

    Wednesday, Oct 03
    4:30 PM - 5:30 PM | Stop by the OTN Lounge and meet other Oracle ADF & Fusion developers as well as product managers and engineers who work on Oracle ADF, ADF Mobile and ADF Essentials. Feedback and questions welcome, or simply stop by and say ‘hi!’ and enjoy free beer. | OTN Lounge

    Read the article

  • Form, function and complexity in rule processing

    - by Charles Young
    Tim Bass posted on ‘Orwellian Event Processing’. I was involved in a heated exchange in the comments, and he has more recently published a post entitled ‘Disadvantages of Rule-Based Systems (Part 1)’. Whatever the rights and wrongs of our exchange, it clearly failed to generate any agreement or understanding of our different positions. I don't particularly want to promote further argument of that kind, but I do want to take the opportunity of offering a different perspective on rule-processing and an explanation of my comments.

    For me, the ‘red rag’ lay in Tim’s claim that “...rules alone are highly inefficient for most classes of (not simple) problems” and a later paragraph that appears to equate the simplicity of form (‘IF-THEN-ELSE’) with simplicity of function. It is not the first time Tim has expressed these views and not the first time I have responded to his assertions. Indeed, Tim has a long history of commenting on the subject of complex event processing (CEP) and, less often, rule processing in ‘robust’ terms, often asserting that very many other people’s opinions on this subject are mistaken. In turn, I am of the opinion that, certainly in terms of rule processing, which is an area in which I have a specific interest and knowledge, he is often mistaken.

    There is no simple answer to the fundamental question ‘what is a rule?’ We use the word in a very fluid fashion in English. Likewise, the term ‘rule processing’, as used widely in IT, is equally difficult to define simplistically. The best way to envisage the term is as a ‘centre of gravity’ within a wider domain. That domain contains many other ‘centres of gravity’, including CEP, statistical analytics, neural networks, natural language processing and so much more. Whole communities tend to gravitate towards and build themselves around some of these centres. The term ‘rule processing’ is associated with many different technology types, various software products, different architectural patterns, the functional capability of many applications and services, etc.

    There is considerable variation amongst these different technologies, techniques and products. Very broadly, a common theme is their ability to manage certain types of processing and problem solving through declarative, or semi-declarative, statements of propositional logic bound to action-based consequences. It is generally important to be able to decouple these statements from other parts of an overall system or architecture so that they can be managed and deployed independently.

    As a centre of gravity, ‘rule processing’ is no island. It exists in the context of a domain of discourse that is, itself, highly interconnected and continuous. Rule processing does not, for example, exist in splendid isolation to natural language processing. On the contrary, an on-going theme of rule processing is to find better ways to express rules in natural language and map these to executable forms. Rule processing does not exist in splendid isolation to CEP. On the contrary, an event processing agent can reasonably be considered as a rule engine (a theme in ‘Power of Events’ by David Luckham). Rule processing does not live in splendid isolation to statistical approaches such as Bayesian analytics. On the contrary, rule processing and statistical analytics are highly synergistic. Rule processing does not even live in splendid isolation to neural networks. For example, significant research has centred on finding ways to translate trained nets into explicit rule sets in order to support forms of validation and facilitate insight into the knowledge stored in those nets.

    What about simplicity of form? Many rule processing technologies do indeed use a very simple form (‘If...Then’, ‘When...Do’, etc.). However, it is a fundamental mistake to equate simplicity of form with simplicity of function. It is absolutely mistaken to suggest that simplicity of form is a barrier to the efficient handling of complexity. There are countless real-world examples which serve to disprove that notion. Indeed, simplicity of form is often the key to handling complexity.

    Does rule processing offer a ‘one size fits all’? No, of course not. No serious commentator suggests it does. Does the design and management of large knowledge bases, expressed as rules, become difficult? Yes, it can do, but that is true of any large knowledge base, regardless of the form in which knowledge is expressed. The measure of complexity is not a function of rule set size or rule form. It tends to be correlated more strongly with the size of the ‘problem space’ (‘search space’), which is something quite different. Analysis of the problem space and the algorithms we use to search through that space are, of course, the very things we use to derive objective measures of the complexity of a given problem. This is basic computer science and common practice.

    Sailing a Dreadnaught through the sea of information technology and lobbing shells at some of the islands we encounter along the way does no one any good. Building bridges and causeways between islands so that the inhabitants can collaborate in open discourse offers hope of real progress.

    Read the article

  • Troubleshooting High-CPU Utilization for SQL Server

    - by Susantha Bathige
    The objective of this FAQ is to outline the basic steps in troubleshooting high CPU utilization on a server hosting a SQL Server instance.

    The first and most common step, if you suspect high CPU utilization (or are alerted to it), is to log in to the physical server and check the Windows Task Manager: the Performance tab will show the overall utilization. Next, we need to determine which process is responsible for the high CPU consumption; the Processes tab of the Task Manager will show this information. Note that to see all processes you should select Show processes from all users. In this case, SQL Server (sqlservr.exe) was consuming 99% of the CPU (a normal benchmark for maximum CPU utilization is about 50-60%).

    Next we examine the scheduler data. The scheduler is a component of SQLOS which evenly distributes load amongst CPUs. The query below returns the important columns for CPU troubleshooting. (Note: if your server is under severe stress and you are unable to log in to SSMS, you can use another machine's SSMS to log in to the server through DAC, the Dedicated Administrator Connection; see http://msdn.microsoft.com/en-us/library/ms189595.aspx for details on using DAC.)

        SELECT scheduler_id
              ,cpu_id
              ,status
              ,runnable_tasks_count
              ,active_workers_count
              ,load_factor
              ,yield_count
        FROM sys.dm_os_schedulers
        WHERE scheduler_id

    See below for the BOL definitions for the above columns.

    scheduler_id – ID of the scheduler. All schedulers that are used to run regular queries have ID numbers less than 1048576. Schedulers with IDs greater than or equal to 1048576 are used internally by SQL Server, such as the dedicated administrator connection scheduler.
    cpu_id – ID of the CPU with which this scheduler is associated.
    status – Indicates the status of the scheduler.
    runnable_tasks_count – Number of workers, with tasks assigned to them, that are waiting to be scheduled on the runnable queue.
    active_workers_count – Number of workers that are active. An active worker is never preemptive, must have an associated task, and is either running, runnable, or suspended.
    current_tasks_count – Number of current tasks that are associated with this scheduler.
    load_factor – Internal value that indicates the perceived load on this scheduler.
    yield_count – Internal value that is used to indicate progress on this scheduler.

    Now to interpret the above data. There are four schedulers, each assigned to a different CPU, and all the CPUs are ready to accept user queries as they are all ONLINE. There are 294 active tasks in the output, as per the current_tasks_count column. This count indicates how many activities are currently associated with the schedulers; when a task completes, the number is decremented. 294 is quite a high figure and indicates all four schedulers are extremely busy. When a task is enqueued, the load_factor value is incremented. This value is used to determine whether a new task should be put on this scheduler or another scheduler; the new task will be allocated to the less loaded scheduler by SQLOS. The very high value of this column indicates all the schedulers have a high load. There are 268 runnable tasks, which means all these tasks have been assigned a worker and are waiting to be scheduled on the runnable queue.

    The next step is to identify which queries are demanding a lot of CPU time. The below query is useful for this purpose (note: in its current form, it only shows the top 10 records):

        SELECT TOP 10
             st.text
            ,st.dbid
            ,st.objectid
            ,qs.total_worker_time
            ,qs.last_worker_time
            ,qp.query_plan
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
        ORDER BY qs.total_worker_time DESC

    This query uses total_worker_time as the measure of CPU load and sorts in descending order of total_worker_time, to show the most expensive queries and their plans at the top. Note the BOL definitions for the important columns:

    total_worker_time – Total amount of CPU time, in microseconds, that was consumed by executions of this plan since it was compiled.
    last_worker_time – CPU time, in microseconds, that was consumed the last time the plan was executed.

    I re-ran the same query after a few seconds; the SP dbo.TestProc1 was then shown in fourth place, and once again its last_worker_time was the highest. This means the procedure TestProc1 consumes CPU time continuously each time it executes.

    In this case, the primary cause of the high CPU utilization was a stored procedure. You can view the execution plan by clicking on the query_plan column to investigate why it is causing a high CPU load. I have used SQL Server 2008 (SP1) to test all the queries used in this article.
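
    A small follow-on sketch (my addition, using the same DMVs): dividing total_worker_time by execution_count ranks queries by cost per execution rather than by sheer frequency, which is sometimes the more useful view:

        SELECT TOP 10
             qs.execution_count
            ,qs.total_worker_time / qs.execution_count AS avg_worker_time  -- microseconds per run
            ,st.text
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        ORDER BY avg_worker_time DESC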

    Read the article
