Search Results

Search found 5908 results on 237 pages for 'cody short'.


  • My Android tests don't get internet access!

    - by Malachii
    The subject says it all. My application gets internet access thanks to the android.permission.INTERNET permission, but my test cases don't while using the instrumentation test runner. This means I can't test my server IO routines in my test cases. What's up? Here's my manifest in case it helps you. Thanks!

        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                  package="com.example.helloandroid"
                  android:versionCode="1"
                  android:versionName="1.0">
            <uses-permission android:name="android.permission.INTERNET"></uses-permission>
            <application android:icon="@drawable/icon" android:label="@string/app_name">
                <uses-library android:name="android.test.runner" />
                <activity android:name=".HelloAndroid" android:label="@string/app_name">
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
            </application>
            <uses-sdk android:minSdkVersion="2" />
            <instrumentation android:name="android.test.InstrumentationTestRunner"
                             android:targetPackage="qnext.mobile.redirect"
                             android:label="Qnext Redirect Tests" />
        </manifest>
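
    Two things stand out in the posted manifest (observations, not confirmed fixes): the instrumentation's android:targetPackage ("qnext.mobile.redirect") does not match the manifest's package ("com.example.helloandroid"), and if the tests live in a separate test project, that test APK installs separately and declares its own permissions in its own manifest. A minimal sketch of such a test manifest, with an assumed test package name:

        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                  package="com.example.helloandroid.tests">
            <!-- The test APK is installed separately, so it needs its own permission grant. -->
            <uses-permission android:name="android.permission.INTERNET" />
            <application>
                <uses-library android:name="android.test.runner" />
            </application>
            <instrumentation android:name="android.test.InstrumentationTestRunner"
                             android:targetPackage="com.example.helloandroid"
                             android:label="HelloAndroid Tests" />
        </manifest>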

    Read the article

  • Finding Common Byte Sequences in MS SQL TEXT Column

    - by regex
    Hello All,

    Short description: I'm curious to see if I can use SQL Server Analysis Services or some other MS SQL service to mine some data for me and show commonalities between SQL TEXT fields in a dataset.

    Long description: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in issue-tracking (ticketing) software. I would like to use something out of the box (without having to build anything) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two- to three-word phrases, so 9-20 character sections of the TEXT blob). This will help me better determine whether associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow.

    Closing note: I'd really rather not build an application to do this, as my method would probably not be the most efficient way to do it. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification. Thanks in advance for your help.
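
    For reference, the phrase counting itself can be written directly in T-SQL; a minimal sketch, assuming a hypothetical table dbo.Tickets(Id, Notes) and SQL Server 2022's ordinal-aware STRING_SPLIT (neither is given in the question):

        -- Count two-word phrases across all notes (illustrative names only).
        WITH words AS (
            SELECT t.Id, s.value AS word, s.ordinal AS pos
            FROM dbo.Tickets AS t
            CROSS APPLY STRING_SPLIT(CAST(t.Notes AS VARCHAR(MAX)), ' ', 1) AS s
        )
        SELECT w1.word + ' ' + w2.word AS phrase, COUNT(*) AS occurrences
        FROM words AS w1
        JOIN words AS w2
          ON w2.Id = w1.Id AND w2.pos = w1.pos + 1
        GROUP BY w1.word + ' ' + w2.word
        HAVING COUNT(*) >= 10
        ORDER BY occurrences DESC;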

    Read the article

  • document.onclick settimeout function javascript help

    - by Jamex
    Hi, I have a document.onclick function that I would like to delay. I can't seem to get the syntax right. My original code is:

        <script type="text/javascript">
        document.onclick = check;
        function check(e) { do something }

    I tried the below, but that code is incorrect; the function did not execute and nothing happened:

        <script type="text/javascript">
        document.onclick = setTimeout("check", 1000);
        function check(e) { do something }

    I tried the next set; the function got executed, but with no delay:

        <script type="text/javascript">
        setTimeout(document.onclick = check, 1000);
        function check(e) { do something }

    What is the correct syntax for this code? TIA

    Edit: The solutions were all good. My problem was that I use the function check to obtain the id of the element being clicked on, but after the delay there is no "memory" of what was clicked on, so the rest of the function does not get executed. Jimr wrote the short code to preserve the clicked event. The code that is working is:

        // Delay execution of event handler function "f" by "time" ms.
        document.onclick = makeDelayedHandler(check, 250);

        function makeDelayedHandler(f, time) {
            return function (e) {
                setTimeout(function () { f(e); }, time);
            };
        }

        function check(e) {
            var click = (e && e.target) || (event && event.srcElement);
            . . .

    Thank you all.

    Read the article

  • Deterministic and non-uniform long string generation from seed

    - by Limonup
    I had this weird idea for an encryption that I wanted to try out; it may be bad, and it may have been done before, but I'm just doing it for fun. The short version of the question is: is it possible to generate a long, deterministic and non-uniformly distributed string/sequence of numbers from a small seed?

    Long(er) version: I was thinking to encrypt a text by changing its encoding. The new encoding would be generated via the Huffman algorithm. To work well, the Huffman algorithm needs a fairly long text with a non-uniform distribution. Then characters can have different bit lengths, which would be the primary strength of this encryption. The problem is that it's impractical to enter in/remember a long text each time you want to decrypt the text. So I was wondering if it is possible to generate a text from a password seed? It doesn't matter what the text is, as long as it has a non-uniform distribution of characters and the exact same sequence can be recreated each time you give it the same seed. Preferably, are there any functions/extensions in Python that can do this?

    EDIT: To expand on the "strength" of varying bit lengths: if I have the string "test", the ASCII values are 116, 101, 115, 116, which gives the bit values

        1110100 1100101 1110011 1110100

    Then, say my Huffman algorithm generates an encoding like

        t = 101
        e = 1100111
        s = 10001

    The final string is 101 1100111 10001 101; if we encode this back to ASCII, we get 1011100 1111000 1101000, which is 3 entirely different characters. Obviously it's impossible to perform any kind of frequency analysis or something like that on this.
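
    On the Python part, the standard library's seedable PRNG covers this; a minimal sketch (the alphabet and the skewed weights are illustrative assumptions, and random.choices needs Python 3.6+):

        import random

        def pseudo_text(seed, length=10000):
            rng = random.Random(seed)              # same seed -> same text every time
            alphabet = "abcdefghijklmnopqrstuvwxyz "
            weights = range(len(alphabet), 0, -1)  # deliberately non-uniform distribution
            return "".join(rng.choices(alphabet, weights=weights, k=length))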

    Read the article

  • Two column layout, navigation div on the right, solution from previous thread didn't seem to work

    - by Tom
    I tried the solution from this thread, but I must be missing something because it doesn't work:

        <div style="float:left;margin-right:200px">
            <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
        </div>
        <div style="float:right;width:200px">
            <p>navigation</p>
        </div>

    It works when the text in the content div (the left one) is short, but when it's long the div takes up the whole width of the browser; the margin is there, but the right div is pushed below the first one nevertheless. What am I missing?

    Edit: The goal is to have a fixed-size navigation column on the right of the browser window, and the left div should get all the space left by the right navigation column (liquid layout).
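
    One common pattern for this layout (a sketch of the technique, not necessarily the fix the original thread intended): put the fixed-width float first in the source order and leave the content div unfloated, reserving room for the float with a margin:

        <div style="float:right;width:200px">
            <p>navigation</p>
        </div>
        <div style="margin-right:210px">
            <p>Long content now takes all the remaining width instead of wrapping below the float.</p>
        </div>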

    Read the article

  • symfony doctrine build-sql error

    - by user313571
    I have some big problems with symfony and Doctrine at the beginning of a new project. I created the database diagram with MySQL Workbench, inserted the SQL into phpMyAdmin, and then tried symfony doctrine:build-schema to generate the YAML schema. It generates a wrong schema (relations don't have on delete/on update), and after this I tried symfony doctrine:build --sql and symfony doctrine:insert-sql. The insert-sql statement generates an error (can't create table ... failing query: alter table add constraint ...), so I decided to take a look over the generated SQL, and I found some differences between the SQL generated by MySQL Workbench (which works perfectly, including relations) and the SQL generated by Doctrine.

    I'll be short from now on: I have two tables, EVENT and FORM, and a 1-to-n relation (each event may have multiple forms), so the correct constraint (generated with Workbench) is

        ALTER TABLE `form` ADD CONSTRAINT `fk_form_event1`
            FOREIGN KEY (`event_id`) REFERENCES `event` (`id`)
            ON DELETE CASCADE ON UPDATE CASCADE;

    The Doctrine-generated statement is:

        ALTER TABLE event ADD CONSTRAINT event_id_form_event_id
            FOREIGN KEY (id) REFERENCES form(event_id);

    It's totally reversed, and I am sure the error is here. What should I do? Is it even correct like this?
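
    If the generated schema has the relation inverted, declaring it explicitly on the owning side is one way around it; a hedged sketch of Doctrine 1 YAML (table and column names assumed from the question):

        Form:
          columns:
            event_id: integer
          relations:
            Event:
              local: event_id
              foreign: id
              onDelete: CASCADE
              onUpdate: CASCADE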

    Read the article

  • IE7 contentEditable word wrapping

    - by Iker Jimenez
    I have the following code:

        <html>
        <style type="text/css">
        DIV { display:inline; border: solid red 1px; }
        .editable { background:yellow; }
        </style>
        <div class="editable" contentEditable="true">
        This is test text. This is test text.This is test text.This is test text.This is test text.Thihis is test text.This is test text.</div>
        <div class="editable" contentEditable="true">
        short
        </div>
        <div class="editable" contentEditable="true">
        This is test text.This is test text.This is test text.his is test text.Thihis is test text.Thihis is test text.Thihis is test text.Thi
        </div>

    And I need IE7 (IE6 not needed, and FF3.x works fine) to wrap the text correctly, which it does if I remove the contentEditable="true" from the divs. Just try this code with and without contentEditable and you'll see what I mean. Make the browser window small enough so you see how the text wraps. Thanks.

    Read the article

  • Content Management Systems for Adaptive Content [closed]

    - by andrewap
    Content management systems (CMS) allow us to easily maintain blogs, news sites, general websites, and so on. Many of them are designed to manage pages of content and provide tools to organize and customize how that content is displayed on the web. However, as explained by Mark Boulton in his Adaptive Content Management article, and by Karen McGrane in her talk on Adapting Ourselves to Adaptive Content, we are increasingly delivering content not just to the web, but also to other platforms and channels. We need tools to manage pieces of content with meaningful metadata attached. Create once, publish everywhere. The main idea is to store content cleanly, without intertwining it with presentation markup specific to the web. Because pieces of content are compartmentalized semantically, they can easily adapt to fit different platforms and channels. Hence, it's called adaptive content. Let's look at a quick example to compare:

    Say I manage news articles and events. To create a news article, I would tell the CMS the type of content I'm creating and be asked to fill in a form with individual fields tailored to news articles (e.g. headline, subtitle, full text, short snippet, and images) — i.e. pieces of content.

    With a traditional web publishing tool, I would probably have had to create a new page under News, and then type in and format the news article in a blank WYSIWYG text editor — i.e. pages of content.

    As you can see, the first design allows me to specify content individually in its smallest semantic units. When I want to display or consume it, the system can easily provide the pieces I need. So here's my question: is there a CMS that is designed specifically with adaptive content in mind, and that is decoupled from the presentation layer?

    Note: This is not a discussion about the best CMS, or which CMS I should use. I am asking whether a very specific type of tool — a CMS designed for adaptive content — exists for developers to use.

    Read the article

  • Mutex example / tutorial ?

    - by Nav
    I've noticed that asking questions for the sake of creating a reference list etc. is encouraged on SO. This is one such question, so that anyone Googling for a mutex tutorial will find a good one here. I'm new to multithreading and was trying to understand how mutexes work. Did a lot of Googling, and this is the only decent tutorial I found, but it still left some doubts about how it works, because I created my own program and the locking didn't work.

    One absolutely non-intuitive piece of mutex syntax is pthread_mutex_lock(&mutex1);, where it looks like the mutex itself is being locked, when what I really want to lock is some other variable. Does this syntax mean that locking a mutex locks a region of code until the mutex is unlocked? Then how do threads know that the region is locked? And isn't such a phenomenon supposed to be called a critical section?

    In short, could you please help with the simplest possible mutex example program and the simplest possible explanation of the logic of how it works? I'm sure this will help plenty of other newbies.
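
    For reference while reading the question: a mutex doesn't lock a region of code or a variable by itself; it's a token only one thread can hold at a time, and data is protected only because every thread that touches that data agrees to lock the same mutex first. A minimal pthreads sketch of that convention (illustrative, not from the question):

        /* Two threads increment a shared counter; the mutex makes the
           read-modify-write in the loop atomic. Build with -pthread. */
        #include <pthread.h>
        #include <stdio.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static long counter = 0;

        static void *worker(void *arg)
        {
            for (int i = 0; i < 1000000; ++i) {
                pthread_mutex_lock(&lock);   /* blocks while another thread holds it */
                ++counter;                   /* the critical section */
                pthread_mutex_unlock(&lock);
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t t1, t2;
            pthread_create(&t1, NULL, worker, NULL);
            pthread_create(&t2, NULL, worker, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            printf("counter = %ld\n", counter); /* always 2000000 with the mutex */
            return 0;
        }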

    Read the article

  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some Dell workstations (running WinXP Pro SP2 & DeepFreeze) for development, but something was recently loaded onto these machines that prevents any OpenGL call from completing (the call locks). I know the code works, as I have tested it on 'clean' machines; I also tested with simple OpenGL apps generated by Dev-C++, which also lock on the Dell machines. I have tried to debug my own apps to see where exactly the GL calls freeze, but there is a global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread (used by ExitThread), preventing me from debugging at all (it causes the debugger, OllyDbg, to go into an access-violation reporting loop, or the program to crash if the exception is passed along).

    The hook:

        ntdll.ZwQueryInformationProcess
        7C90D7E0   B8 9A000000     MOV EAX,9A
        7C90D7E5   BA 0003FE7F     MOV EDX,7FFE0300
        7C90D7EA   FF12            CALL DWORD PTR DS:[EDX]
        7C90D7EC - E9 0F28448D     JMP 09D50000
        7C90D7F1   9B              WAIT
        7C90D7F2   0000            ADD BYTE PTR DS:[EAX],AL
        7C90D7F4   00BA 0003FE7F   ADD BYTE PTR DS:[EDX+7FFE0300],BH
        7C90D7FA   FF12            CALL DWORD PTR DS:[EDX]
        7C90D7FC   C2 1400         RETN 14
        7C90D7FF   90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000     MOV EAX,9C

    The messed-up function + call:

        ntdll.ZwQueryInformationThread
        7C90D7F0   8D9B 000000BA   LEA EBX,DWORD PTR DS:[EBX+BA000000]
        7C90D7F6   0003            ADD BYTE PTR DS:[EBX],AL
        7C90D7F8   FE              ???             ; Unknown command
        7C90D7F9   7F FF           JG SHORT ntdll.7C90D7FA
        7C90D7FB   12C2            ADC AL,DL
        7C90D7FD   14 00           ADC AL,0
        7C90D7FF   90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000     MOV EAX,9C

    So firstly, does anyone know what, if anything, would lead to OpenGL calls causing an infinite lock, and whether there are any ways around it? And what would be creating such a hook in kernel memory?

    Update: After some more fiddling, I have discovered a few more kernel hooks; a lot of them are used to nullify data returned by system-information calls (such as the remote debugging port). I also managed to find out that whatever is doing this is using madchook.dll (by madshi), and this DLL is also injected into every running process (these seem to be anti-debugging measures). Also, on the OpenGL side, DirectX seems fine/unaffected (I ran one of the DX9 demos without problems), so could one of these kernel hooks somehow affect OpenGL?

    Read the article

  • What is the preferred way in C++ for converting a builtin type (int) to bool?

    - by Martin
    When programming with Visual C++, I think every developer is used to seeing the warning

        warning C4800: 'BOOL' : forcing value to bool 'true' or 'false'

    from time to time. The reason, obviously, is that BOOL is defined as int, and directly assigning any of the built-in numerical types to bool is considered a bad idea. So my question is: given any built-in numerical type (int, short, ...) that is to be interpreted as a boolean value, what is the/your preferred way of actually storing that value into a variable of type bool?

    Note: While mixing BOOL and bool is probably a bad idea, I think the problem will inevitably pop up, whether on Windows or somewhere else, so I think this question is neither Visual C++ nor Windows specific.

    Given

        int nBoolean;

    I prefer this style:

        bool b = nBoolean ? true : false;

    The following might be alternatives:

        bool b = !!nBoolean;
        bool b = (nBoolean != 0);

    Is there a generally preferred way? Rationale?

    I should add: Since I only work with Visual C++, I cannot really say whether this is a VC++-specific question or whether the same problem pops up with other compilers. So it would be interesting to specifically hear from g++ or other compilers' users how they handle the int-bool case.

    Regarding Standard C++: As David Thornley notes in a comment, the C++ Standard does not require this behavior. In fact it seems to explicitly allow it, so one might consider this a VC++ weirdness. To quote the N3029 draft (which is what I have around atm):

        4.12 Boolean conversions [conv.bool]
        A prvalue of arithmetic, unscoped enumeration, pointer, or pointer to member type can be
        converted to a prvalue of type bool. A zero value, null pointer value, or null member
        pointer value is converted to false; any other value is converted to true. (...)

    Read the article

  • What if you used the wrong language?

    - by HS
    A reply to another question made me remember a project from some years ago when it turned out that Java was not the right tool to use.

        I typically only learn a new language when I have a problem that it solves better than the ones I already know. [...] Then I write whatever program I wanted to learn that language for in the first place. [...] By the time I've gotten my target program written, I've usually got a decent handle on the language, not to mention any other features it has, and I've got other ideas to use it for.

    I did just that back then with Java, because the client thought it was the right language to use (platform independent), and the initial evaluation confirmed that. However, much later in the project there were some issues (I can't really remember all the details by now). So, the project that started as a nice learning experience turned into a nightmare toward the end. I was at the brink of switching over to my trusted C++ and doing a complete rewrite. The client was not so much of a problem to convince back then, but my supervisor was strongly opposed because of all the work that had already gone into the Java version. In hindsight, he was right, and the project was completed more or less with the intended features kind of working, but it is the project that I am least proud of by now.

    Long story short: what do you think, when is it too much, and when is the switch to another technology worthwhile? I personally would estimate the point of no return to be around 50% of the planned effort, but I really want to know if anyone has real experience with such a switch. And to answer the inevitable question: I do not really care if the technology switched to is proven or another new thing. The latter would basically need more initial scrutiny, based on the past experiences in the problematic project.

    Read the article

  • Using the contents of an array to set individual pixels in a Quartz bitmap context

    - by Magic Bullet Dave
    I have an array that contains the RGB colour values for each pixel in a 320 x 180 display. I would like to be able to set individual pixel values in a bitmap context of the same size offscreen, then display the bitmap context in a view. It appears that I have to create 1x1 rects and either put a stroke on them or draw a line of length 1 at the point in question. Is that correct? I'm looking for a very efficient way of getting the array data onto the graphics context; as you can imagine, this is going to be an image buffer that cycles at 25 frames per second, and drawing in this way seems inefficient. I guess the other question is: should I use OpenGL ES instead? Thoughts/best practice would be much appreciated. Regards, Dave

    OK, I have come a short way, but I can't make the final hurdle, and I am not sure why this isn't working:

        - (void)displayContentsOfArray1UsingBitmap:(CGContextRef)context
        {
            long bitmapData[WIDTH * HEIGHT];

            // Build bitmap
            int i, j, h;
            for (i = 0; i < WIDTH; i++) {
                for (j = 0; j < HEIGHT; j++) {
                    h = frameBuffer01[i][j];
                    bitmapData[i * j] = h;
                }
            }

            // Blit the bitmap to the context
            CGDataProviderRef providerRef = CGDataProviderCreateWithData(NULL, bitmapData,
                                                                         4 * WIDTH * HEIGHT, NULL);
            CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            CGImageRef imageRef = CGImageCreate(WIDTH, HEIGHT, 8, 32, WIDTH * 4,
                                                colorSpaceRef, kCGImageAlphaFirst,
                                                providerRef, NULL, YES,
                                                kCGRenderingIntentDefault);
            CGContextDrawImage(context, CGRectMake(0.0, HEIGHT, WIDTH, HEIGHT), imageRef);
            CGImageRelease(imageRef);
            CGColorSpaceRelease(colorSpaceRef);
            CGDataProviderRelease(providerRef);
        }
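
    One visible problem in the loop above (an observation on the posted code, not a complete fix): bitmapData[i * j] is not a valid 2D-to-1D index. Every pixel where i or j is 0 lands in slot 0, and many other (i, j) pairs collide. A row-major index would be:

        // Hypothetical fix, assuming one element per pixel and row-major layout:
        bitmapData[j * WIDTH + i] = h;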

    Read the article

  • Getting Depth Value on Kinect SDK 1.6

    - by AlexanderPD
    This is my first try with Kinect and the Kinect SDK, so I'm having a lot of "newbie issues". :) My goal is to point my mouse at the Kinect standard video output and get the depth value. I already have both the normal video and depth video outputs by using the two "Color Basic-WPF" and "Depth Basic-WPF" samples, and handling mouse events or position is not a problem. In fact I already get a depth value, but this value is always HIGHLY imprecise: it jumps from 500 to 4000 by just moving to the next pixel on a flat surface. So... I'm pretty sure I'm reading the depth value in the wrong way. This is how I read it:

        short debugValue = depthPixels[x*y].Depth;
        debug.Text = "X = " + x + ", Y = " + y + ", value = " + debugValue.ToString();

    I know it's pretty out of context; this little piece of code is inside the same SensorDepthFrameReady function as in "Depth Basic-WPF"! "x" and "y" are the mouse coordinates, and depthPixels is of type DepthImagePixel[], a temporary array filled by the depthFrame.CopyDepthImagePixelDataTo(this.depthPixels) instruction. The depth frame is filled here:

        DepthImageFrame depthFrame = e.OpenDepthImageFrame()

    The "e" comes from here:

        private void SensorDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)

    and this last one is called here:

        this.sensor.DepthFrameReady += this.SensorDepthFrameReady;

    How must I handle the depth value I get? I know the value must be between 800 and 4000, but I get values between about 500 and about 8000. I have already googled a lot (here on SO too), and I still can't understand whether the depth value is 11 or 13 bits. The SDK examples shrink this value to 8 bits, and this is making even more confusion in my head :(
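
    The indexing looks like the first thing to check (a hedged sketch, assuming a row-major frame): depthPixels[x*y] collapses whole rows and columns onto the same element, which would explain wild jumps between neighbouring pixels.

        // Hypothetical fix: index row-major using the frame's width.
        int index = y * depthFrame.Width + x;
        short depth = depthPixels[index].Depth;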

    Read the article

  • Good PHP / MYSQL hashing solution for large number of text values

    - by Dave
    Short description: I need a hashing-algorithm solution in PHP for a large number of text values.

    Long description:

        PRODUCT_OWNER_TABLE: serial_number (auto_inc), product_name, owner_id
        OWNER_TABLE: owner_id (auto_inc), owner_name

    I need to maintain a database of 200,000 unique products and their owners (AND all subsequent changes to ownership). Each product has one owner, but an owner may have MANY different products. Owner names are "Adam Smith", "John Reeves", etc., just text values (quite likely to be unicode as well). I want to optimize the database design, so what I was thinking was: every week when I run this script, it fetches the owner of a product, then checks against a table (I suppose similar to PRODUCT_OWNER_TABLE), fetching the owner_id. It then looks up owner_id in OWNER_TABLE. If it matches, then it's the same, so it moves on. The problem is when it's different...

    To optimize the database, I think I should check against the other owner_name entries in OWNER_TABLE to see if that value exists there. If it does, I should use that owner_id. If it doesn't, I should add another entry. Note that there is nothing special about the "name"; as long as I maintain the correct linkages AND make OWNER_TABLE a "read-only, append-new" type table, I should be able to create a historical archive of ownership. I need to do this check for 200,000 entries, with I don't know how many unique owner names (~50,000?). I think I need a hashing solution; OWNER_TABLE won't be sorted, so search algorithms won't be optimal. The programming language is PHP; the database is MySQL.
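
    For what it's worth, MySQL can act as the hash table itself if owner_name is indexable; a minimal sketch (it assumes owner_name is a VARCHAR short enough for a unique key, which the question doesn't state):

        ALTER TABLE OWNER_TABLE ADD UNIQUE KEY uk_owner_name (owner_name);

        -- Insert-or-fetch in one round trip: LAST_INSERT_ID(expr) hands back
        -- the existing owner_id when the name is already present.
        INSERT INTO OWNER_TABLE (owner_name) VALUES ('Adam Smith')
            ON DUPLICATE KEY UPDATE owner_id = LAST_INSERT_ID(owner_id);
        SELECT LAST_INSERT_ID();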

    Read the article

  • What is the best approach to 2D collision detection on the iPhone?

    - by Magic Bullet Dave
    Been working on this problem of collision detection, and there appear to be three main approaches I could take (a quick sketch of approach 2 follows below):

    1. Sprite and mask approach. (AND the overlapping sprite pixels together and check for a non-zero number in the resulting pixel data.)
    2. Bounding circles, rectangles or polygons. (Create one or more shapes that enclose the sprites and do the basic maths to check for overlaps.)
    3. Use an existing sprite library.

    The first approach would have been the way I did it in the old days of 16x16 sprite blocks, but it appears that there just isn't an easy way of getting at the individual image pixel data and/or alpha channel within Quartz (or OpenGL, for that matter). Detecting the overlap of the bounding boxes is easy, but then creating a third image from the overlap and testing it for pixels is complicated, and my gut feel is that even if we could get it to work, it would be slow. Am I missing something neat here?

    The second approach involves dividing up our sprites into several polygons and testing them for overlaps. The more polygons, the more accurate the collision detection. The benefit is that it is fast and can be accurate. The downside is that it makes sprite creation more complicated; i.e., we have to create the polygons for each sprite. For speed the best approach is to create a tree of polygons.

    The third approach I'm not sure about, as it involves buying code (or using an open-source licence). I am not sure what the best library to use is, or whether this would make life easier or give us a problem integrating it into our app.

    So, in short, I am favouring the polygon-and-tree approach and would appreciate your views on this before I go and write lots of code. Best regards, Dave
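
    As a reference point for approach 2, the cheapest bounding test; a minimal sketch in C (names are illustrative, not from the question):

        /* Two circles overlap when the distance between their centres is less
           than the sum of their radii; comparing squared values avoids sqrt. */
        typedef struct { float x, y, r; } Circle;

        static int circlesOverlap(Circle a, Circle b)
        {
            float dx = a.x - b.x;
            float dy = a.y - b.y;
            float rsum = a.r + b.r;
            return (dx * dx + dy * dy) < (rsum * rsum);
        }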

    Read the article

  • Form (or Formset?) to handle multiple table rows in Django

    - by Ben
    Hi, I'm working on my first Django application. In short, what it needs to do is display a list of film titles and allow users to give a rating (out of 10) to each film. I've been able to use the {{ form }} and {{ formset }} syntax in a template to produce a form which lets you rate one film at a time, which corresponds to one row in a MySQL table, but how do I produce a form that iterates over all the movie titles in the database and lets you rate lots of them at once? At first, I thought this was what formsets were for, but I can't see any way to automatically iterate over the contents of a database table to produce items to go in the form, if you see what I mean. Currently, my views.py has this code:

        def survey(request):
            ScoreFormSet = formset_factory(ScoreForm)
            if request.method == 'POST':
                formset = ScoreFormSet(request.POST, request.FILES)
                if formset.is_valid():
                    return HttpResponseRedirect('/')
            else:
                formset = ScoreFormSet()
            return render_to_response('cf/survey.html', { 'formset': formset, })

    And my survey.html has this:

        <form action="/survey/" method="POST">
        <table>
        {{ formset }}
        </table>
        <input type="submit" value="Submit">
        </form>

    Oh, and the definitions of ScoreForm and Score from models.py are:

        class Score(models.Model):
            movie = models.ForeignKey(Movie)
            score = models.IntegerField()
            user = models.ForeignKey(User)

        class ScoreForm(ModelForm):
            class Meta:
                model = Score

    So, in case the above is not clear, what I'm aiming to produce is a form which has one row per movie, each row showing a title and a box to allow the user to enter their score. If anyone can point me at the right sort of approach to this, I'd be most grateful. Thanks, Ben
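
    Model formsets are the usual tool for one-form-per-existing-row; a hedged sketch building on the question's names (it assumes a Score row already exists per movie for this user, which the question doesn't guarantee):

        from django.forms.models import modelformset_factory

        # One bound form per existing Score row; extra=0 suppresses blank forms.
        ScoreFormSet = modelformset_factory(Score, form=ScoreForm, extra=0)
        formset = ScoreFormSet(queryset=Score.objects.filter(user=request.user))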

    Read the article

  • CM and Agile validation process of merging to the Trunk?

    - by LoneCM
    Hello All, we are a new Agile shop, and we are encountering an issue that I hope others have seen. In our process, the Trunk is considered an integration branch; it does not have to be releasable, but it does have to be stable and functional for others to branch off of. We create feature branches of the Trunk for new development. All work and testing occurs in these branches. An individual branch pulls up as needed to stay integrated with the Trunk as other features are accepted and committed. But now we have numerous feature branches. Each is focused, has a short life cycle, and is pushed to the Trunk as it is completed, so we are not debating the need for the branches and are trying very much to be Agile.

    My issue comes in here: I require that the branches pull up from the Trunk at the end of their life cycle, complete the validation and regression testing, and handle all configuration issues before pushing to the Trunk. Once it is reintegrated into the Trunk, I ask for at least a build and an automated smoke test. However, I am now getting push-back on the Trunk validation. The argument is that the developers can merge the code and do not need the QA validation steps, because they already completed that work in the feature branch; therefore, the extra testing is not needed. I have attempted to remind management of the numerous times "brainless" merges have failed. Their solution is, instead of the build and regression testing, to have the developer diff the feature branch and the newly merged Trunk. That process, in their mind, would replace the regression testing I asked for.

    So what do you require when you reintegrate back to the Trunk? What are the issues we will encounter if we remove this step and replace it with the diff? Is the cost of staying Agile the additional work of the integration of the branches? Thanks for any input. LoneCM

    Read the article

  • Predicate crashing iPhone App!

    - by DVG
    To preface, this is a follow-up to an inquiry made a few days ago: http://stackoverflow.com/questions/2981803/iphone-app-crashes-when-merging-managed-object-contexts

    Short version: EXC_BAD_ACCESS is crashing my app, and zombie mode revealed the culprit to be my predicate, embedded within the fetch request embedded in my fetched results controller. How does an object within an object get released without an explicit command to do so?

    Long version:

    Application structure: Platforms View Controller - Games View Controller (predicated upon platform selection) - Add Game View Controller

    When a row gets clicked on the Platforms view, it sets an instance variable in Games View for that platform, then the Games fetched results controller builds a fetch request in the normal way:

        - (NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController != nil) {
                return fetchedResultsController;
            }

            // Build the fetch request for Games
            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Game"
                                                      inManagedObjectContext:context];
            [request setEntity:entity];

            // Predicate
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"platform == %@", selectedPlatform];
            [request setPredicate:predicate];

            // Sort based on name
            NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
            NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
            [request setSortDescriptors:sortDescriptors];

            // Fetch and build fetched results controller
            NSFetchedResultsController *aFetchedResultsController =
                [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                                    managedObjectContext:context
                                                      sectionNameKeyPath:nil
                                                               cacheName:@"Root"];
            aFetchedResultsController.delegate = self;
            self.fetchedResultsController = aFetchedResultsController;

            [sortDescriptor release];
            [sortDescriptors release];
            [predicate release];
            [request release];
            [aFetchedResultsController release];

            return fetchedResultsController;
        }

    At the end of this method, the fetchedResultsController's _fetch_request - _predicate member is set to an NSComparisonPredicate object. All is well in the world. By the time - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section gets called, the _predicate is now a zombie, which will eventually crash the application when the table attempts to update itself. I'm more or less flummoxed. I'm not releasing the fetched results controller or any of its parts, and the only part getting dealloc'd is the predicate. Any ideas?
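
    Reading the posted getter closely (an observation, not a verified diagnosis): +predicateWithFormat: is a factory method that returns an autoreleased object, so the [predicate release] near the end is one release too many. When the autorelease pool later drains, the fetch request is left holding a deallocated predicate, which matches the zombie symptom; only the alloc/init'd objects need releasing there.

        // predicateWithFormat: returns an autoreleased object: don't release it.
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"platform == %@", selectedPlatform];
        [request setPredicate:predicate];
        // ...
        // [predicate release];   // <-- over-release; removing this line stops the zombie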

    Read the article

  • Is there a safe / standard way to manage unstructured memory in C++?

    - by andand
    I'm building a toy VM that requires a block of memory for storing and accessing data elements of different types and of different sizes. I've done this by writing a wrapper class around a uint8_t[] data block of the needed size. That class has some template methods to write/read typed data elements to/from arbitrary locations in the memory block, both of which check to make certain the bounds aren't violated. These methods use memmove in what I hope is a more or less safe manner. That said, while I am willing to press on in this direction, I've got to believe that others with more expertise have been here before and might be willing to share their wisdom. In particular:

    1) Is there a class in one of the C++ standards (past, present, future) that has been defined to perform a function similar to what I have outlined above?

    2) If not, is there a (preferably free as in beer) library out there that does?

    3) Short of that, besides bounds checking and the inevitable issue of writing one type to a memory location and reading a different one from that location, are there other issues I should be aware of?

    Thanks.-&&
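
    For concreteness, a minimal sketch of the kind of wrapper described above (names are assumed; memmove is kept as in the question, and this is only safe for trivially copyable types):

        #include <cstdint>
        #include <cstring>
        #include <stdexcept>
        #include <vector>

        class MemoryBlock {
        public:
            explicit MemoryBlock(std::size_t size) : data_(size) {}

            template <typename T>
            void write(std::size_t offset, const T& value) {
                check(offset, sizeof(T));
                std::memmove(&data_[offset], &value, sizeof(T));
            }

            template <typename T>
            T read(std::size_t offset) const {
                check(offset, sizeof(T));
                T value;
                std::memmove(&value, &data_[offset], sizeof(T));
                return value;
            }

        private:
            // Overflow-safe bounds check: offset + n must fit inside the block.
            void check(std::size_t offset, std::size_t n) const {
                if (offset > data_.size() || n > data_.size() - offset)
                    throw std::out_of_range("MemoryBlock: access out of bounds");
            }
            std::vector<std::uint8_t> data_;
        };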

    Read the article

  • Is there any way to filter certain things in pages served by IIS?

    - by Ruslan
    Hello, this is my first time posting here, so please keep that in mind... I'll try to be short and get right to defining the problem.

    We have an ASP.NET 2 application (an eCommerce package) running on IIS (Windows Server 2003). The main site's pages use plain HTTP (no SSL), but the whole checkout process and the shopping cart page use SSL (HTTPS). Now, the problem is that the site's header is located in a template file, and inside it there is a plain HTML 'img' tag calling an image with the "http://" portion hard-coded into it... This header appears on absolutely every page (including the HTTPS pages), and due to its insecure image tag, a warning box pops up in IE at every stage of the checkout process...

    Now, the problem: the live application cannot be touched in any way (no changes can be made to the template, so simply changing "http://" to "//" is not an option; IIS cannot be restarted; and the website/app pool cannot be restarted). Is there any way in the world (maybe a plugin for IIS or a setting somewhere) that I can filter the pages right before they are served, to replace '<img src="http://example.com/image.jpg">' with '<img src="//example.com/image.jpg">' in the final HTML? Possibly via a regular expression or something? Thanks to everybody in advance.

    Read the article

  • java GC periodically enters into several full GC cycles

    - by Peter
    Environment: Sun JDK 1.6.0_16

    VM settings:

        -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -Xms1024 -Xmx1024M
        -XX:MaxNewSize=448m -XX:NewSize=448m -XX:SurvivorRatio=4 (6 also checked)
        -XX:MaxPermSize=128M

    OS: Windows Server 2003. Processor: 4 cores of Intel Xeon 5130, 2000 MHz.

    My application description: high intensity of concurrent operations (java 5 concurrency used), each completed by a commit to Oracle. About 20-30 threads run non-stop, doing tasks. The application runs in the JBoss web container.

    My GC starts out working normally: I see a lot of small GCs, and all that time the CPU shows a good load, with all 4 cores loaded to 40-50% and a stable CPU graph. Then, after 1 minute of good work, the CPU drops to 0% on 2 of the 4 cores, and its graph becomes unstable, going up and down ("teeth"). I see that my threads work slower (I have monitoring), and that the GC starts to produce a lot of FULL GCs during that time; for the next 4-5 minutes this situation remains as is, then for a short period, like 1 minute, it gets back to normal, but shortly after that the whole bad pattern repeats.

    Question: Why do I have such frequent full GCs? How do I prevent that? I played with SurvivorRatio; it does not help. I noticed that the application behaves normally until the first FULL GC occurs, while I have enough memory. Then it runs badly.

    My GC log: starts well, then a long period of FULL GCs (many of them):

        1027.861: [GC 942200K->623526K(991232K), 0.0887588 secs]
        1029.333: [GC 803279K(991232K), 0.0927470 secs]
        1030.551: [GC 967485K->625549K(991232K), 0.0823024 secs]
        1030.634: [GC 625957K(991232K), 0.0763656 secs]
        1033.126: [GC 969613K->632963K(991232K), 0.0850611 secs]
        1033.281: [GC 649899K(991232K), 0.0378358 secs]
        1035.910: [GC 813948K(991232K), 0.3540375 secs]
        1037.994: [GC 967729K->637198K(991232K), 0.0826042 secs]
        1038.435: [GC 710309K(991232K), 0.1370703 secs]
        1039.665: [GC 980494K->972462K(991232K), 0.6398589 secs]
        1040.306: [Full GC 972462K->619643K(991232K), 3.7780597 secs]
        1044.093: [GC 620103K(991232K), 0.0695221 secs]
        1047.870: [Full GC 991231K->626514K(991232K), 3.8732457 secs]
        1053.739: [GC 942140K(991232K), 0.5410483 secs]
        1056.343: [Full GC 991232K->634157K(991232K), 3.9071443 secs]
        1061.257: [GC 786274K(991232K), 0.3106603 secs]
        1065.229: [Full GC 991232K->641617K(991232K), 3.9565638 secs]
        1071.192: [GC 945999K(991232K), 0.5401515 secs]
        1073.793: [Full GC 991231K->648045K(991232K), 3.9627814 secs]
        1079.754: [GC 936641K(991232K), 0.5321197 secs]

    Read the article

  • Correct Delphi compiler switches to stop in the user's code, not my component's

    - by Jeremy Mullin
    I'm modifying our VCL components so the end user's application links to our DCU files, instead of building our source code each time. We have everything working, but I want the debugger to stop in the user's code when an exception is raised. At first it would stop in our DCU and open the CPU window. I was able to prevent that by removing debug info from the DCU files. But now it still doesn't stop in the user's code (like the DevExpress libraries and others do). The following screencast is a short example. The first time, I cause an exception in the DevExpress code, and the debugger correctly stops in my button event. The second time, I cause an exception in my components, but the debugger doesn't have my button event on the call stack and doesn't show me where the problem was. Any ideas why? http://screencast.com/t/NjhlOTRk

    Currently building the DCUs with these options:

        -$W+ -$D- -h -w -q

    Update: The TDataSet methods in between my component and the button event seem to cause this behavior. If I instead call a direct method of my table, I get the expected behavior. I'm guessing there isn't anything I can do about this, but I'm still curious why it happens.

    Read the article

  • Where does the process' memory space start, and where does it end?

    - by nhaa123
    Hi, I'm trying to dump memory from my application where the variables lie. Here's the function:

        void MyDump(const void *m, unsigned int n)
        {
            const unsigned char *p = reinterpret_cast<const unsigned char *>(m);
            char buffer[16];
            unsigned int mod = 0;

            for (unsigned int i = 0; i < n; ++i, ++mod) {
                if (mod % 16 == 0) {
                    mod = 0;
                    std::cout << " | ";
                    for (unsigned short j = 0; j < 16; ++j) {
                        switch (buffer[j]) {
                            case 0xa:
                            case 0xb:
                            case 0xd:
                            case 0xe:
                            case 0xf:
                                std::cout << " ";
                                break;
                            default:
                                std::cout << buffer[j];
                        }
                    }
                    std::cout << "\n0x" << std::setfill('0') << std::setw(8) << std::hex << (long)i << " | ";
                }
                buffer[i % 16] = p[i];
                std::cout << std::setw(2) << std::hex << static_cast<unsigned int>(p[i]) << " ";
                if (i % 4 == 0 && i != 1)
                    std::cout << " ";
            }
        }

    Now, how can I know from which address my process' memory space starts, where all the variables are stored? And how do I know how long the area is? For instance:

        MyDump(0x0000 /* <-- Starts from here? */, 0x1000 /* <-- This much? */);

    Best regards, nhaa123
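
    Worth noting while reading the question: a process doesn't get one contiguous range; the address space is a patchwork of mapped regions (module images, heaps, stacks). On Windows the committed, readable regions can be enumerated instead of guessed; a minimal sketch using VirtualQuery (dumping every region wholesale is the assumption here):

        #include <windows.h>
        #include <cstdio>

        int main()
        {
            MEMORY_BASIC_INFORMATION mbi;
            const char *p = 0;

            // Walk the user-mode address space region by region.
            while (VirtualQuery(p, &mbi, sizeof(mbi)) == sizeof(mbi)) {
                if (mbi.State == MEM_COMMIT && !(mbi.Protect & (PAGE_NOACCESS | PAGE_GUARD)))
                    std::printf("readable region at %p, %lu bytes\n",
                                mbi.BaseAddress, (unsigned long)mbi.RegionSize);
                p = static_cast<const char *>(mbi.BaseAddress) + mbi.RegionSize;
            }
            return 0;
        }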

    Read the article

  • Design of std::ifstream class

    - by Nawaz
    Those of us who have seen the beauty of the STL try to use it as much as possible, and also encourage others to use it wherever we see them using raw pointers and arrays. Scott Meyers has written a whole book on the STL, with the title Effective STL. Yet what happened to the developers of ifstream that they preferred char* over std::string? I wonder why the first parameter of ifstream::open() is of type const char* instead of const std::string&. Please have a look at its signature:

        void open(const char * filename, ios_base::openmode mode = ios_base::in);

    Why this? Why not this:

        void open(const string & filename, ios_base::openmode mode = ios_base::in);

    Is this a serious mistake in the design? Or is this design deliberate? What could be the reason? I don't see any reason why they preferred char* over std::string. Note that we could still pass char* to the latter function that takes std::string. That's not a problem!

    By the way, I'm aware that ifstream is a typedef, so no comment on my title. :P It looks short; that is why I used it. The actual class template is:

        template<class _Elem, class _Traits>
        class basic_ifstream;

    Read the article
