Search Results

Search found 16731 results on 670 pages for 'memory limit'.


  • Passing C++ object to C++ code through Python?

    - by cornail
    Hi all, I have written some physics simulation code in C++, and parsing the input text files is one of its bottlenecks. As one of the input parameters, the user has to specify a math function which will be evaluated many times at run time. The C++ code has some predefined function classes for this (they are actually quite complex on the math side) and some limited parsing capability, but I am not satisfied with this construction at all. What I need is for both the algorithm and the function evaluation to remain speedy, so it is advantageous to keep them both as compiled code (and preferably the math functions as C++ function objects). However, I thought of gluing the whole simulation together with Python: the user could specify the input parameters in a Python script, while also implementing storage, visualization of the results (matplotlib) and the GUI in Python. I know that exposing C++ classes can usually be done, e.g. with SWIG, but I still have a question concerning the parsing of the user-defined math function in Python: is it possible to somehow construct a C++ function object in Python and pass it to the C++ algorithm? E.g. when I call

        f = WrappedCPPGaussianFunctionClass(sigma=0.5)
        WrappedCPPAlgorithm(f)

    in Python, it would return a pointer to a C++ object which would then be passed to a C++ routine requiring such a pointer, or something similar (don't ask me about memory management in this case, though :S). The point is that no callback should be made to Python code in the algorithm. Later I would like to extend this example to do some simple expression parsing on the Python side as well, such as sums or products of functions, returning a compound, parse-tree-like C++ object, but let's stay with the basics for now. Sorry for the long post, and thanks in advance for any suggestions.
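    A binding sketch of exactly this shape, using pybind11 rather than the SWIG the asker mentions (every class and function name below is a hypothetical stand-in for the real simulation code):

        // simcore.cpp -- compile as a Python extension module
        #include <pybind11/pybind11.h>
        #include <cmath>

        namespace py = pybind11;

        struct GaussianFunction {
            double sigma;
            explicit GaussianFunction(double s) : sigma(s) {}
            double operator()(double x) const {          // heavy math stays compiled
                return std::exp(-x * x / (2.0 * sigma * sigma));
            }
        };

        // Existing-style C++ routine: it receives the function object directly,
        // so no callback into Python ever happens inside the hot loop.
        double run_algorithm(const GaussianFunction& f) {
            double sum = 0.0;
            for (int i = 0; i < 1000000; ++i)
                sum += f(i * 1e-4);
            return sum;
        }

        PYBIND11_MODULE(simcore, m) {
            py::class_<GaussianFunction>(m, "WrappedCPPGaussianFunctionClass")
                .def(py::init<double>(), py::arg("sigma"));
            m.def("WrappedCPPAlgorithm", &run_algorithm);
        }

    From Python this reads exactly like the pseudo-code in the question: f = simcore.WrappedCPPGaussianFunctionClass(sigma=0.5), then simcore.WrappedCPPAlgorithm(f). The wrapped object lives on the C++ heap and is passed by reference, so the interpreter is never re-entered during the run; the later parse-tree extension reduces to composing such objects in C++ and binding the combinators the same way.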


  • Pointer and malloc issue

    - by Andy
    I am fairly new to C and am getting stuck with arrays and pointers when they refer to strings. I can ask for input of 2 numbers (ints) and then return the one I want (first number or second number) without any issues. But when I request names and try to return them, the program crashes after I enter the first name and I am not sure why. In theory I am looking to reserve memory for the first name, and then expand it to include a second name. Can anyone explain why this breaks? Thanks!

        #include <stdio.h>
        #include <stdlib.h>

        void main ()
        {
            int NumItems = 0;

            NumItems += 1;
            char* NameList = malloc(sizeof(char[10]) * NumItems);
            printf("Please enter name #1: \n");
            scanf("%9s", NameList[0]);
            fpurge(stdin);

            NumItems += 1;
            NameList = realloc(NameList, sizeof(char[10]) * NumItems);
            printf("Please enter name #2: \n");
            scanf("%9s", NameList[1]);
            fpurge(stdin);

            printf("The first name is: %s", NameList[0]);
            printf("The second name is: %s", NameList[1]);
            return 0;
        }
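    One likely reading of the crash: NameList is a char*, so NameList[0] is a single char, and scanf("%9s", ...) then treats that uninitialized byte value as an address to write through. A corrected sketch that keeps the "array of 10-char names" idea (the realloc is dropped for brevity; it works the same way with sizeof *NameList):

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            int NumItems = 2;
            /* Each element is itself a 10-char array, so NameList[i] is a
               writable char buffer, which is what scanf("%9s", ...) needs. */
            char (*NameList)[10] = malloc(NumItems * sizeof *NameList);
            if (NameList == NULL)
                return 1;

            printf("Please enter name #1:\n");
            scanf("%9s", NameList[0]);
            printf("Please enter name #2:\n");
            scanf("%9s", NameList[1]);

            printf("The first name is: %s\n", NameList[0]);
            printf("The second name is: %s\n", NameList[1]);
            free(NameList);
            return 0;
        }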


  • Gtk: How can I get a part of a file in a textview with scrollbars relating to the full file

    - by badgerman1
    I'm trying to make a very large file editor (where the editor only keeps part of the buffer in memory at a time), but I'm stuck while building my textview object. Basically, I know that I have to be able to update the text view buffer dynamically, and I don't know how to get the scrollbars to relate to the full file while the textview contains only a small buffer of it. I've played with Gtk.Adjustment on a Gtk.ScrolledWindow and scrollbars, but though I can extend the range of the scrollbars, they still apply to the range of the buffer and not to the file size (which I try to set via the Gtk.Adjustment parameters) when I load into the textview. I need a widget that "knows" it is looking at part of a file and can load/unload buffers as necessary to view different parts of it. So far, I believe I'll respond to the "change_view" signal to work out when I'm off, or about to be off, the current buffer and need to load the next one, but I don't know how to make the top of the scrollbar relate to the beginning of the file and the bottom relate to the end of the file, rather than to the loaded buffer in the textview. Any help would be greatly appreciated, thanks!
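    A sketch of one workable structure, using GTK 3 / PyGObject (chunk size, layout, and the byte-offset scrolling model are all arbitrary choices, and snapping reads to line boundaries is left out): keep the TextView out of any ScrolledWindow and drive it from an external Gtk.Scrollbar whose Gtk.Adjustment spans the file size, reloading the buffer whenever the value changes.

        import os
        from gi.repository import Gtk

        CHUNK = 64 * 1024  # bytes kept in the buffer at once; an arbitrary choice

        class BigFileView(Gtk.Window):
            def __init__(self, path):
                Gtk.Window.__init__(self, title=path)
                self.path = path
                size = os.path.getsize(path)

                self.textview = Gtk.TextView()  # deliberately NOT in a ScrolledWindow

                # External scrollbar whose adjustment spans the file, not the buffer.
                self.adj = Gtk.Adjustment(value=0, lower=0, upper=size,
                                          step_increment=1024,
                                          page_increment=CHUNK // 2, page_size=CHUNK)
                self.adj.connect("value-changed", self.on_scroll)

                box = Gtk.Box()
                box.pack_start(self.textview, True, True, 0)
                box.pack_start(Gtk.Scrollbar(orientation=Gtk.Orientation.VERTICAL,
                                             adjustment=self.adj),
                               False, False, 0)
                self.add(box)
                self.on_scroll(self.adj)

            def on_scroll(self, adj):
                # Load only the slice of the file the scrollbar points at.
                with open(self.path) as f:
                    f.seek(int(adj.get_value()))
                    self.textview.get_buffer().set_text(f.read(CHUNK))

    Because the adjustment's upper bound is the file size rather than the buffer length, the scrollbar thumb tracks the whole file, and the value-changed handler plays the role of the "change_view" logic described above.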


  • Outlook VSTO AddIn for Meetings

    - by BigDubb
    We have created a VSTO add-in for Outlook meetings. As part of this, we trap the Send event of the message in the FormRegionShowing event:

        _apptEvents.Send += new Microsoft.Office.Interop.Outlook.ItemEvents_SendEventHandler(_apptEvents_Send);

    The method _apptEvents_Send then tests a couple of properties and exits where appropriate:

        private void _apptEvents_Send(ref bool Cancel)
        {
            if (!_Qualified)
            {
                MessageBox.Show("Meeting has not been qualified", "Not Qualified Meeting",
                                MessageBoxButtons.OK, MessageBoxIcon.Information);
                chkQualified.Focus();
                Cancel = true;
            }
        }

    The problem we're having is that some users' messages get sent twice: once when the meeting is sent, and a second time when the user re-opens Outlook. I've looked for memory leaks, thinking that something might not be getting disposed of properly, and have added explicit object disposal in all finally blocks to try to make sure resources are managed, but we are still seeing the behavior inconsistently across the organization; i.e., I never encountered the problem during development, nor did other developers during testing. All users are up to date on the framework (3.5 SP1) and hotfixes for Outlook. Does anyone have any ideas on what might be causing this? Any ideas anyone might have would be greatly appreciated.
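    Nothing in the posted code looks obviously wrong, so this is only a precaution worth trying, not a diagnosis: make double registration impossible in case FormRegionShowing runs more than once for the same appointment item. Unsubscribing a handler that was never attached is a harmless no-op in C#:

        // In FormRegionShowing: detach before attaching, so the Send handler
        // can never be wired twice even if the region is shown repeatedly.
        _apptEvents.Send -= _apptEvents_Send;
        _apptEvents.Send += _apptEvents_Send;

    It is also worth confirming that _apptEvents stays referenced in a field for the item's lifetime, so the COM event sink is not collected and re-created behind the scenes.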


  • What is the most efficient way to handle points / small vectors in JavaScript?

    - by Chris
    Currently I'm creating a web-based (= JavaScript) application that uses a lot of "points" (= small, fixed-size vectors). There are basically two obvious ways of representing them:

        var pointA = [ xValue, yValue ];

    and

        var pointB = { x: xValue, y: yValue };

    So translating my point a bit would look like:

        var pointAtrans = [ pointA[0] + 3, pointA[1] + 4 ];
        var pointBtrans = { x: pointB.x + 3, y: pointB.y + 4 };

    Both are easy to handle from a programmer's point of view (the object variant is a bit more readable, especially as I'm mostly dealing with 2D data, seldom with 3D and hardly ever with 4D, but never more; it'll always fit into x, y, z and w). But my question is now: what is the most efficient way from the language perspective, theoretically and in real implementations? What are the memory requirements? What are the setup costs of an array vs. an object? My target browsers are Firefox and the WebKit-based ones (Chromium, Safari), but it wouldn't hurt to have a great (= fast) experience under IE and Opera as well...
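    Engines differ enough that measuring beats theorizing; modern JITs give object literals with a stable shape ("hidden classes") very cheap property access, so the readable form is not automatically the slow one. A crude harness for both representations (the iteration count and console.log are illustrative choices):

        // Times one million constructions + translations per representation.
        function bench(label, fn) {
            var t0 = Date.now();
            fn();
            console.log(label + ": " + (Date.now() - t0) + " ms");
        }

        var N = 1000000;

        bench("array points", function () {
            var pts = [], i, p;
            for (i = 0; i < N; i++) pts.push([i, i + 1]);
            for (i = 0; i < N; i++) {
                p = pts[i];
                pts[i] = [p[0] + 3, p[1] + 4];
            }
        });

        bench("object points", function () {
            var pts = [], i, p;
            for (i = 0; i < N; i++) pts.push({ x: i, y: i + 1 });
            for (i = 0; i < N; i++) {
                p = pts[i];
                pts[i] = { x: p.x + 3, y: p.y + 4 };
            }
        });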


  • Extra NotifyIcon shown in system tray

    - by Kettch19
    I'm having an issue with an app where my NotifyIcon displays an extra icon. The steps to reproduce it are easy, but the problem is that the extra icon shows up after any of the actual code-behind we've added fires. Put simply, clicking a button triggers execution of a method FooBar() which runs all the way through fine, but its primary duty is to fire a BackgroundWorker to log into another of our apps. The extra icon only appears if this particular button is clicked. Strangely enough, we have a WndProc method override, and if I step through until the extra NotifyIcon appears, it always appears during this method, so something beyond the code-behind must be triggering the behavior. Our WndProc method is currently (although I don't think it's caused by the WndProc):

        Protected Overrides Sub WndProc(ByRef m As System.Windows.Forms.Message)
            'Check for WM_COPYDATA message from other app or drag/drop action and handle message
            If m.Msg = NativeMethods.WM_COPYDATA Then
                ' get the standard message structure from lparam
                Dim CD As NativeMethods.COPYDATASTRUCT = m.GetLParam(GetType(NativeMethods.COPYDATASTRUCT))
                'setup byte array
                Dim B(CD.cbData) As Byte
                'copy data from memory into array
                Runtime.InteropServices.Marshal.Copy(New IntPtr(CD.lpData), B, 0, CD.cbData)
                'Get message as string and process
                ProcessWMCopyData(System.Text.Encoding.Default.GetString(B))
                'empty array
                Erase B
                'set message result to 'true', meaning message handled
                m.Result = New IntPtr(1)
            End If
            'pass on result and all messages not handled by this app
            MyBase.WndProc(m)
        End Sub

    The only place in the code where the NotifyIcon in question is manipulated at all is in the following event handler (again, I don't think this is the culprit, but just for more info):

        Private Sub TrayIcon_MouseDoubleClick(ByVal sender As System.Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles TrayIcon.MouseDoubleClick
            If Me.Visible Then
                Me.Hide()
            Else
                PositionBottomRight()
                Me.Show()
            End If
        End Sub

    The BackgroundWorker's DoWork is as follows (just a class call to log in to our other app, but again just for info):

        Private Sub LoginBackgroundWorker_DoWork(ByVal sender As Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles LoginBackgroundWorker.DoWork
            Settings.IsLoggedIn = _wdService.LogOn(Settings.UserName, Settings.Password)
        End Sub

    Does anyone else have ideas on what might be causing this or how to debug it further? I've been banging my head on this without seeing a pattern, so another set of eyes would be extremely appreciated. :) I've posted this on the MSDN WinForms forums as well and have had no luck there so far either.
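    A theory to rule out, offered only as a guess: in VB.NET, touching a form's default instance (or any of its controls) from a worker thread can silently create a second instance of the form on that thread, complete with its own NotifyIcon, and a window being created during message processing would be consistent with the icon surfacing inside WndProc. Keeping DoWork free of writes to shared state and moving the result assignment to RunWorkerCompleted (which runs on the UI thread) rules that scenario out:

        Private Sub LoginBackgroundWorker_DoWork(ByVal sender As Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles LoginBackgroundWorker.DoWork
            ' Worker thread: only the service call, no writes to shared state or controls.
            e.Result = _wdService.LogOn(Settings.UserName, Settings.Password)
        End Sub

        Private Sub LoginBackgroundWorker_RunWorkerCompleted(ByVal sender As Object, ByVal e As System.ComponentModel.RunWorkerCompletedEventArgs) Handles LoginBackgroundWorker.RunWorkerCompleted
            ' UI thread: safe to touch shared state and controls here.
            Settings.IsLoggedIn = CBool(e.Result)
        End Sub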


  • Why does an MFC application behave mysteriously in an encrypted hard drive environment?

    - by MauriceL
    I'm working on a bug where an MFC application does weird things when installed in an environment where Sophos SafeGuard hard drive encryption is present. I'm sorry to be so vague here, but I'm writing this away from the office, so this is all from my (poor) memory. Three things I've noticed:

    1. AfxGetResourceHandle() doesn't return a consistent resource handle. There is a single case where we try to load a string resource, and for some reason the resource handle that we get from this method is different from the one used for all the other strings.
    2. We can't construct a CDocumentTemplate. There is a trace error which I can't seem to recall; I will edit and post it when I'm in tomorrow.
    3. This behavior appears to manifest in a Visual Studio 2005 version of the project, but not in a Visual Studio 2008 version. Unfortunately, moving to the 2008 version is not an option.

    The bug is not always reproducible if I step through with a debugger. Also, bringing up debug message boxes changes the behavior, which leads me to think that there is some kind of race condition going on with the way MFC events are being handled, but I'm not sure how I'd ever know for sure, or even what I could do about it if I did. I think there's some underlying reason the app is behaving weirdly; what I've posted are more the symptoms. Can anyone think of what I should check for? I've run Windows Update on the test environment to ensure everything was up to date, and I've examined the process in Procmon to see if the disk encryption was getting in the way and conflicting with files. It didn't appear to be, but it does appear to be involved in some way, as our app accesses Sophos-related paths in the temp directory.
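    One concrete thing to check for the inconsistent AfxGetResourceHandle(): if the misbehaving string lives in a DLL, the MFC module state at the call site may not be the one that owns the resource. The canonical guard, sketched with hypothetical names, pins the module state for the scope of the call:

        // In the DLL that owns IDS_EXAMPLE (the class, function and ID are hypothetical).
        void CStatusHelper::ShowStatus()
        {
            AFX_MANAGE_STATE(AfxGetStaticModuleState());  // pin this module's state
            CString msg;
            msg.LoadString(IDS_EXAMPLE);  // now resolved against this module's resources
            AfxMessageBox(msg);
        }

    If the 2005 and 2008 builds link MFC differently (static vs. shared), that alone could explain why only one of them misbehaves, independently of the disk encryption.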


  • Optimized 2D Tile Scrolling in OpenGL

    - by silicus
    Hello, I'm developing a 2D side-scrolling game and I need to optimize my tiling code to get a better frame rate. As of right now I'm using a texture atlas and 16x16 tiles at 480x320 screen resolution. The level scrolls in both directions and is significantly larger than one screen (thousands of pixels). I use glTranslate for the actual scrolling. So far I've tried:

    - Drawing only the on-screen tiles using glTriangles, 2 per square tile (too much overhead)
    - Drawing the entire map as a display list (great on a small level, way too slow on a large one)
    - Partitioning the map into display lists half the size of the screen, then culling display lists (still slows down for 2-directional scrolling; the overdraw is not efficient)

    Any advice is appreciated, but in particular I'm wondering:

    - I've seen vertex arrays/VBOs suggested for this because they're dynamic. What's the best way to take advantage of this? If I simply keep one screen of vertices plus a bit of overdraw, I'd have to recopy the array every few frames to account for the change in relative coordinates (shift everything over and add the new rows/columns). If I use more overdraw, this doesn't seem like a big win; it's like the half-screen display list idea. (A sketch of this approach follows below.)
    - Does glScissor give any gain if used on a bunch of small tiles like this, be it a display list or a vertex array/VBO?
    - Would it be better just to build the level out of large textures and then use glScissor? Would losing the memory savings of tiling be an issue for mobile development if I do this (just curious, I'm currently on a PC)? This approach was mentioned here.

    Thanks :)
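    For reference, here is the sketch promised above: client-side vertex arrays holding one screen of tiles plus a one-tile border, rebuilt only when the scroll crosses a 16-pixel boundary, so the per-frame cost is one glDrawArrays call and the rebuild cost is bounded by the visible tile count (the level and atlas lookups are hypothetical):

        /* Client-side vertex arrays; map_tile_at() and atlas_uv() are
         * hypothetical level/atlas lookups. */
        #include <GL/gl.h>
        #include <string.h>

        #define TILE 16
        #define COLS (480 / TILE + 2)
        #define ROWS (320 / TILE + 2)

        extern int  map_tile_at(int col, int row);       /* hypothetical */
        extern void atlas_uv(int tile, GLfloat out[8]);  /* hypothetical */

        static GLfloat verts[COLS * ROWS * 8];  /* 4 corners x 2 floats per tile */
        static GLfloat uvs[COLS * ROWS * 8];

        static void rebuild(int firstCol, int firstRow)
        {
            int r, c, n = 0;
            for (r = 0; r < ROWS; r++)
                for (c = 0; c < COLS; c++) {
                    GLfloat x = (GLfloat)(c * TILE), y = (GLfloat)(r * TILE);
                    GLfloat quad[8] = { x, y, x + TILE, y, x + TILE, y + TILE, x, y + TILE };
                    memcpy(&verts[n], quad, sizeof quad);
                    atlas_uv(map_tile_at(firstCol + c, firstRow + r), &uvs[n]);
                    n += 8;
                }
        }

        void draw_tiles(float scrollX, float scrollY)
        {
            static int lastC = -1, lastR = -1;
            int c = (int)(scrollX / TILE), r = (int)(scrollY / TILE);
            if (c != lastC || r != lastR) { rebuild(c, r); lastC = c; lastR = r; }

            glPushMatrix();
            /* Sub-tile smoothness comes from the translate, not from rebuilding. */
            glTranslatef(-(scrollX - c * TILE), -(scrollY - r * TILE), 0.0f);
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glVertexPointer(2, GL_FLOAT, 0, verts);
            glTexCoordPointer(2, GL_FLOAT, 0, uvs);
            glDrawArrays(GL_QUADS, 0, COLS * ROWS * 4);
            glPopMatrix();
        }

    Promoting the arrays to a VBO with glBufferSubData is a mechanical change once this layout works; the rebuild-on-boundary logic stays the same.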


  • Efficient and accurate way to compact and compare Python lists?

    - by daveslab
    Hi folks, I'm trying to do a somewhat sophisticated diff between individual rows in two CSV files. I need to ensure that a row from one file does not appear in the other file, but I am given no guarantee of the order of the rows in either file. As a starting point, I've been trying to compare the hashes of the string representations of the rows (i.e. Python lists). For example:

        import csv

        hashes = []
        for row in csv.reader(open('old.csv', 'rb')):
            hashes.append(hash(str(row)))

        for row in csv.reader(open('new.csv', 'rb')):
            if hash(str(row)) not in hashes:
                print 'Not found'

    But this is failing miserably. I am constrained by artificially imposed memory limits that I cannot change, so I went with the hashes instead of storing and comparing the lists directly. Some of the files I am comparing can be hundreds of megabytes in size. Any ideas for a way to accurately compress Python lists so that they can be compared in terms of simple equality to other lists? I.e. a hashing system that actually works? Bonus points: why didn't the above method work?
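    Two things stand out in the snippet, offered as a guess rather than a diagnosis: hashes is a list, so every not in test is a linear scan (quadratic over the whole file), and a built-in hash() collision can make a genuinely new row look like a match. A sketch using a set of real digests keeps memory small without either problem (the '\x1f' join separator is an assumption, chosen so ['a', 'bc'] and ['ab', 'c'] hash differently):

        import csv
        import hashlib

        def digest(row):
            # 20-byte SHA-1 digest of a canonical row representation.
            return hashlib.sha1('\x1f'.join(row)).digest()

        seen = set(digest(row) for row in csv.reader(open('old.csv', 'rb')))

        for row in csv.reader(open('new.csv', 'rb')):
            if digest(row) not in seen:
                print 'Not found'

    At 20 bytes per row plus set overhead, even a few million rows fit in tens of megabytes, and each membership test is O(1).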


  • Is programming overrated?

    - by aengine
    [Subjective and intended to be a community wiki] I am sorry for such an offensive question, but here are my arguments:

    - Most of the progress in "computing" has come from non-programming sources, i.e. people invented faster microprocessors, better routers, and novel memory devices. I don't think that, on average, people are writing more efficient programs than those written 10 years ago, and the newer, popular languages are in fact slower than C (though speed is one of the lesser criteria).
    - Most of the progress came from novel paradigms. The web, the Internet, cloud computing, and social networking are novel paradigms and did not involve progress in programming as such. Heck, even Facebook was written in PHP and not some extreme language. Though it did face scalability issues (same with Twitter), I believe money and better programmers (who came in much later) took care of that. Thus ideating capability trumped programming capability.
    - Even things like MapReduce, column-oriented databases, and probabilistic algorithms (e.g. Bloom filters) came from hardcore algorithms research rather than some programming convention.

    Thus my final point: why is programming skill so overstressed? To point at a recent example of how only 10% of programmers can "write code" (binary search) without debugging: isn't it a bit hypocritical, considering that your real success lies in coming up with a better algorithm or a novel feature rather than getting it right the first time?


  • Drawing line graphics leads Flash to spiral out of control!

    - by drpepper
    Hi, I'm having problems with some AS3 code that simply draws on a Sprite's Graphics object. The drawing happens as part of a larger procedure called on every ENTER_FRAME event of the stage. Flash neither crashes nor returns an error. Instead, it starts running at 100% CPU and grabs all the memory it can, until I kill the process manually or my computer buckles under the pressure when it gets up to around 2-3 GB. This will happen at a random time, without any noticeable slowdown beforehand. WTF? Has anyone seen anything like this? PS: I used to do the drawing within a MOUSE_MOVE event handler, which brought this problem on even faster. PPS: I'm developing on Linux, but reproduced the same problem on Windows. UPDATE: You asked for some code, so here we are. The drawing function looks like this:

        public static function drawDashedLine(i_graphics : Graphics, i_from : Point, i_to : Point,
                                              i_on : Number, i_off : Number) : void
        {
            const vecLength : Number = Point.distance(i_from, i_to);
            i_graphics.moveTo(i_from.x, i_from.y);
            var dist : Number = 0;
            var lineIsOn : Boolean = true;
            while (dist < vecLength)
            {
                dist = Math.min(vecLength, dist + (lineIsOn ? i_on : i_off));
                const p : Point = Point.interpolate(i_from, i_to, 1 - dist / vecLength);
                if (lineIsOn)
                    i_graphics.lineTo(p.x, p.y);
                else
                    i_graphics.moveTo(p.x, p.y);
                lineIsOn = !lineIsOn;
            }
        }

    and is called like this (m_graphicsLayer is a Sprite):

        m_graphicsLayer.graphics.clear();
        if (m_destinationPoint)
        {
            m_graphicsLayer.graphics.lineStyle(2, m_fixedAim ? 0xff0000 : 0x333333, 1);
            drawDashedLine(m_graphicsLayer.graphics, m_initialPos, m_destinationPoint, 10, 10);
        }
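    A defensive guard worth adding at the top of drawDashedLine, offered only as a guess at the failure mode: if i_on or i_off ever arrives as 0, negative, or NaN, dist stops advancing, the while loop never exits, and the accumulated lineTo/moveTo commands grow without bound, which would match the sudden 100% CPU and runaway memory:

        // Bail out on inputs that would stall the loop.
        if (!(i_on > 0) || !(i_off > 0))
            return;
        if (i_from.equals(i_to))   // zero-length line: nothing to draw
            return;

    The !(x > 0) form deliberately catches NaN as well as zero and negative values, since any comparison against NaN is false.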


  • Is it not possible to make a C++ application "Crash Proof"?

    - by Enno Shioji
    Let's say we have an SDK in C++ that accepts some binary data (like a picture) and does something with it. Is it not possible to make this SDK "crash-proof"? By crash I primarily mean forceful termination by the OS upon a memory access violation, due to invalid input passed by the user (like abnormally short junk data). I have no experience with C++, but when I googled, I found several means that sounded like a solution (use a vector instead of an array, configure the compiler so that an automatic bounds check is performed, etc.). When I presented this to the developer, he said it is still not possible. Not that I don't believe him, but if so, how does a language like Java handle this? I thought the JVM performs a bounds check every time. If so, why can't one do the same thing in C++ manually? UPDATE: By "crash-proof" I don't mean that the application never terminates. I mean it should not abruptly terminate without information about what happened (I mean it will dump core etc., but is it not possible to display a message like "Argument x was not valid" etc.?)
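    It cannot be made literally crash-proof, because C++ exceptions do not catch wild-pointer access violations once they happen; but the failure described here can usually be prevented by validating at the SDK boundary and using checked accessors, so that malformed input becomes a catchable exception with a message instead of a segfault. A sketch (the header layout and minimum size are made up):

        #include <cstdio>
        #include <stdexcept>
        #include <vector>

        // Validate at the boundary so bad input never reaches raw pointer math.
        bool load_picture(const std::vector<unsigned char>& data)
        {
            try {
                if (data.size() < 8)   // hypothetical minimum header size
                    throw std::invalid_argument("input shorter than header");
                unsigned w = data.at(4), h = data.at(5);  // at() bounds-checks, [] does not
                if (w == 0 || h == 0)
                    throw std::invalid_argument("zero image dimension");
                // ... decode ...
                return true;
            } catch (const std::exception& e) {
                std::fprintf(stderr, "Argument was not valid: %s\n", e.what());
                return false;
            }
        }

    This is essentially what the JVM does for you: every unchecked C++ indexing operation corresponds to a checked one in Java. The cost in C++ is that the discipline is manual, and one missed check anywhere restores the possibility of a hard crash.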


  • DB optimization to use it as a queue

    - by anony
    We have a table called worktable which has some columns (key (primary key), ptime, aname, status, content). We have a producer which puts rows into this table, and a consumer which does an ORDER BY on the key column and fetches the first row with status 'pending'. The consumer does some processing on this row:

    1. updates status to "processing"
    2. does some processing using content
    3. deletes the row

    We are facing contention issues when we try to run multiple consumers (probably due to the ORDER BY, which does a full table scan). Using Advanced Queues would be our next step, but before we go there we want to check what maximum throughput we can achieve with multiple consumers and a producer on the table. What optimizations can we do to get the best numbers possible? Can we do in-memory processing where a consumer fetches 1,000 rows at a time, processes them, and deletes them; will that improve things? What are the other possibilities? Partitioning of the table? Parallelization? Index-organized tables?
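    A sketch of the pattern Advanced Queuing itself relies on, assuming Oracle and an index on (status, key) so that pending rows are located without a full scan; each consumer opens the cursor and takes the first row it manages to lock, and SKIP LOCKED makes concurrent consumers pass over rows already claimed instead of blocking behind them:

        -- Each consumer: claim the oldest unclaimed pending row.
        SELECT key, content
          FROM worktable
         WHERE status = 'pending'
         ORDER BY key
           FOR UPDATE SKIP LOCKED;

    The consumer then updates the row to 'processing', commits, works on the content, and deletes it. Batching (claiming, say, 1,000 rows per cursor open, then deleting in one statement) mainly saves round trips and commit overhead; the contention itself is what SKIP LOCKED removes.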


  • Better way to download a binary file?

    - by geoff
    I have a site where a user can download a file. Some files are extremely large (the largest being 323 MB). When I test it by trying to download this file, I get an out-of-memory exception. The only way I know to download the file is below. The reason I'm using the code below is that the URL is encoded and I can't let the user link directly to the file. Is there another way to download this file without having to read the whole thing into a byte array?

        FileStream fs = new FileStream(context.Server.MapPath(url), FileMode.Open, FileAccess.Read);
        BinaryReader br = new BinaryReader(fs);
        long numBytes = new FileInfo(context.Server.MapPath(url)).Length;
        byte[] bytes = br.ReadBytes((int) numBytes);
        string filename = Path.GetFileName(url);

        context.Response.Buffer = true;
        context.Response.Charset = "";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.BinaryWrite(bytes);
        context.Response.Flush();
        context.Response.End();
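    A streaming sketch that never materializes the file in memory; the 64 KB buffer size is an arbitrary choice, and the user still cannot see the real path because the server resolves the encoded URL exactly as before:

        string path = context.Server.MapPath(url);
        string filename = Path.GetFileName(url);

        context.Response.Buffer = false;   // stream, don't accumulate
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.AddHeader("Content-Length", new FileInfo(path).Length.ToString());

        using (FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            byte[] buffer = new byte[64 * 1024];
            int read;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0
                   && context.Response.IsClientConnected)
            {
                context.Response.OutputStream.Write(buffer, 0, read);
                context.Response.Flush();
            }
        }
        context.Response.End();

    Peak memory per download is now one buffer rather than the file size, and the IsClientConnected check stops the loop early when the user cancels.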


  • Caching vector addition over changing collections

    - by DRMacIver
    I have the following setup: a largish number of uuids (currently about 10k but expected to grow unboundedly; they're user IDs) and a function f : id -> sparse vector with 32-bit integer values (no need to worry about precision). The function is reasonably expensive (not outrageously so, but probably on the order of a few 100 ms for a given id). The dimension of the sparse vectors should be assumed to be infinite, as new dimensions can appear over time, but in practice it is unlikely to ever exceed about 20k (and individual results of f are unlikely to have more than a few hundred non-zero values). I want to support the following operations efficiently:

    - add a new ID to the collection
    - invalidate an existing ID
    - retrieve the sum of f(id) over all valid IDs, in O(changes since last retrieval)

    i.e. I want to cache the sum of the vectors in a way that's reasonable to maintain incrementally. One option would be to support a remove-ID operation and treat invalidation as a remove followed by an add. The problem with this is that it requires us to keep track of all the old values of f, which is expensive in space. I potentially need to use many instances of this sort of cached structure, so I would like to avoid that. The likely usage pattern is that new IDs are added at a fairly continuous rate and are frequently invalidated at first. IDs which have been invalidated recently are much more likely to be invalidated again than ones which have remained valid for a long time, but in principle an old ID can still be invalidated. Ideally I don't want to do this in memory (or at least I want a way that lets me save the result to disk efficiently), so an idea which lets me piggyback off an existing DB implementation of some sort would be especially appreciated.
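    For what it's worth, a minimal in-memory sketch of the subtract-on-invalidate idea, with Python 2.7's collections.Counter standing in for the sparse vector type. It keeps f(id) only for currently valid ids rather than the full history, which may or may not fit the stated space budget, and retrieval is O(1) because the sum is maintained as changes arrive:

        from collections import Counter

        class CachedSum(object):
            """Caches the sum of f(id) over valid ids; one stored vector per id."""

            def __init__(self, f):
                self.f = f
                self.current = {}         # id -> Counter (last computed f(id))
                self.total = Counter()    # running sum over self.current

            def add(self, uid):
                v = Counter(self.f(uid))
                self.current[uid] = v
                self.total.update(v)      # O(non-zeros in v)

            def invalidate(self, uid):
                # Subtract the stale vector, then recompute and re-add.
                self.total.subtract(self.current[uid])
                self.add(uid)

            def retrieve(self):
                return self.total         # zero-count entries accumulate;
                                          # prune periodically if that matters

    Persisting this reduces to storing the per-id vectors and the running total as rows in any key-value or relational store and applying the same update/subtract discipline transactionally.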


  • The best way to predict performance without actually porting the code?

    - by ardiyu07
    I believe there are people with the same experience as me, where one must give an (estimated) performance report for porting a program from sequential to parallel on some designated multicore hardware, with very little time given. For instance, if a 10K LoC sequential program was given and executes on an Intel i7-3770K (not vectorized) in 100 ms, how long would it take to run if one parallelized the code for a Tesla C2075 with NVIDIA CUDA, given that all kinds of parallelizing optimization techniques were applied? (But you're only given 2-4 days to report the performance, and assume that you don't know the algorithm at all. Or perhaps it'd be safer to just assume that it's an impossible situation to finish the job.) Therefore, I'm wondering: what would most likely be the fastest way to produce such a performance report? Is it safe to calculate solely from the hardware's capability, such as peak GFLOPS and memory bandwidth rate? Is there a mathematical way to calculate it? If there is, please prove your method with the corresponding problem description and algorithm, and also the target hardware's specifications. Or perhaps there already exists a tool to (roughly) estimate code porting? (Please don't answer: 'kill yourself is the fastest way.')
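    There is a standard back-of-envelope method, the roofline model: bound the kernel's runtime from below by its compute time at peak throughput and by its memory traffic at peak bandwidth, and report the larger of the two. A sketch (the operation and byte counts must be estimated from the code; the Tesla C2075 peaks quoted here are from memory and should be checked against the datasheet):

        # Roofline-style lower bound; every number here is an illustrative estimate.
        flops = 2.0e9          # floating-point ops the kernel performs
        bytes_moved = 8.0e9    # DRAM traffic in bytes
        peak_flops = 1.03e12   # C2075 single-precision peak, FLOP/s
        peak_bw = 144e9        # C2075 memory bandwidth, bytes/s

        t_compute = flops / peak_flops
        t_memory = bytes_moved / peak_bw
        print "lower bound: %.2f ms" % (max(t_compute, t_memory) * 1e3)

    Real kernels rarely beat this bound and are often several times slower, which is why, within a 2-4 day budget, porting and measuring one small representative kernel usually produces a more defensible number than any purely analytical estimate.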


  • Question on passing a pointer to a structure in C to a function?

    - by worlds-apart89
    Below, I wrote a primitive singly linked list in C. The function addEditNode MUST receive the pointer by value, which, I am guessing, means we can edit the data the pointer points to but cannot point it at something else. If I allocate memory using malloc in addEditNode, when the function returns, can I see the contents of first->next? The second question is: do I have to free first->next, or is it only first that I should free? I am running into segmentation faults on Linux.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct list_node list_node_t;

        struct list_node {
            int value;
            list_node_t *next;
        };

        void addEditNode(list_node_t *node)
        {
            node->value = 10;
            node->next = (list_node_t*) malloc(sizeof(list_node_t));
            node->next->value = 1;
            node->next->next = NULL;
        }

        int main()
        {
            list_node_t *first = (list_node_t*) malloc(sizeof(list_node_t));
            first->value = 1;
            first->next = NULL;
            addEditNode(first);
            free(first);
            return 0;
        }
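    To both questions: yes, the node allocated inside addEditNode remains valid after the function returns (the caller sees it through first->next), and every node must be freed individually; free(first) releases only the head and leaks the second node. A sketch of the usual cleanup loop, plus the malloc check whose absence is a more likely segfault source than the list logic itself:

        /* Frees the whole chain. */
        void free_list(list_node_t *node)
        {
            while (node != NULL) {
                list_node_t *next = node->next;  /* save before freeing */
                free(node);
                node = next;
            }
        }

        /* And at each allocation site, guard against malloc failing:
           if ((node->next = malloc(sizeof *node->next)) == NULL) { handle it }
           Writing through an unchecked NULL result is a classic segfault. */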


  • Find messages from a certain key up to a certain key while being able to remove stale keys

    - by Alfred
    My problem: let's say I add messages to some sort of data structure:

        1. "dude"
        2. "where"
        3. "is"
        4. "my"
        5. "car"

    Asking for messages from index [4,5] should return: "my", "car". Next, let's assume that after a while I would like to purge old messages, because they aren't useful anymore and I want to save memory. Let's say at time x messages [1-3] became stale. I assume it would be most efficient to just do the deletion once every x seconds. My data structure should then contain:

        4. "my"
        5. "car"

    My solution? I was thinking of using a ConcurrentSkipListSet or ConcurrentSkipListMap. I was also thinking of deleting the old messages from a task scheduled on a newSingleThreadScheduledExecutor. I would like to know how you would implement this (efficiently/thread-safely), or whether I should use a library?
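    ConcurrentSkipListMap fits this almost exactly: subMap gives the [from, to] range view and headMap(cutoff).clear() is the periodic purge, both safe to run concurrently with readers. A minimal sketch:

        import java.util.concurrent.ConcurrentNavigableMap;
        import java.util.concurrent.ConcurrentSkipListMap;

        public class MessageLog {
            private final ConcurrentSkipListMap<Long, String> messages =
                    new ConcurrentSkipListMap<Long, String>();

            public void add(long index, String text) {
                messages.put(index, text);
            }

            /** Messages with keys in [from, to], e.g. [4,5] -> "my", "car". */
            public ConcurrentNavigableMap<Long, String> range(long from, long to) {
                return messages.subMap(from, true, to, true);
            }

            /** Purge everything below cutoff; safe to call from a scheduled
                task while readers and writers are active. */
            public void purgeBelow(long cutoff) {
                messages.headMap(cutoff).clear();
            }
        }

    The headMap view writes through to the backing map, so clear() is the whole purge; scheduling purgeBelow on the newSingleThreadScheduledExecutor mentioned above completes the design.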


  • MATLAB: What is an appropriate Data Structure for a Matrix with Random Variable Entries?

    - by user568249
    I'm currently working in an area related to simulation and trying to design a data structure that can include random variables within matrices. To motivate this, let me say I have the following matrix:

        [a b; c d]

    I want to find a data structure that will allow a, b, c, d to be either real numbers or random variables. As an example, let's say that a = 1, b = -1, c = 2, but let d be a normally distributed random variable with mean 0 and SD 1. The data structure that I have in mind will give no value to d. However, I also want to be able to design a function that can take in the structure, simulate a uniform(0,1), obtain a value for d using an inverse CDF, and then spit out an actual matrix. I have several ideas for doing this (all related to the MATLAB icdf function) but would like to know how more experienced programmers would do it. In this application, it's important that the structure is as "lean" as possible, since I will be working with very, very large matrices and memory will be an issue.
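    One hedged idea (it trades some of the leanness requirement for flexibility, since cell arrays carry per-element overhead): store fixed entries as numbers and random entries as zero-argument sampler handles, then realize the matrix on demand. icdf here is the Statistics Toolbox function the asker mentions, and realize would live in its own .m file:

        % A cell matrix whose entries are numbers or sampler handles.
        M = {1, -1; 2, @() icdf('Normal', rand(), 0, 1)};

        % realize.m -- draw one concrete matrix from M
        function A = realize(M)
            A = zeros(size(M));
            for k = 1:numel(M)
                if isa(M{k}, 'function_handle')
                    A(k) = M{k}();   % sample the random entry now
                else
                    A(k) = M{k};     % copy the fixed entry
                end
            end
        end

    For genuinely huge matrices with few random entries, a leaner variant is a plain numeric matrix plus a short list of (index, sampler) pairs, filled in by the same loop over just those indices.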


  • Python: How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application; it works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then slows down due to lack of CPU.

    Problem: profiling the application gives me the time taken by functions. This time increases under high load. The time consumed may be due to complex calculation or due to waiting for CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and need to add more CPU power, or I might have some stupid code taking the CPU and causing the slowdown. Any help is appreciated!

    Update: I am using JMeter to load-test my webapp; it gives me a throughput of 2 requests/sec with 100 users. I get an average time of 36 seconds over 100 requests vs. 1.25 sec for 1 request.

    More info on the configuration:

    - Nginx + uWSGI with 4 workers
    - No database used; responses are built from a REST API
    - On the 1st hit the response of the REST API gets cached, so it doesn't make a difference
    - Using ujson for JSON parsing

    Curious to know: Python/Django is used by so many orgs for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All those I found were casual snippets of code that perform profiling.
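    One way to separate "burning CPU" from "waiting for CPU" on Unix, sketched with the standard-library resource module (do_work is a hypothetical stand-in for a request handler):

        import resource
        import time

        def cpu_seconds():
            ru = resource.getrusage(resource.RUSAGE_SELF)
            return ru.ru_utime + ru.ru_stime   # user + system CPU time

        start_wall, start_cpu = time.time(), cpu_seconds()
        do_work()                              # hypothetical request handler
        wall = time.time() - start_wall
        cpu = cpu_seconds() - start_cpu
        print "wall %.3fs, cpu %.3fs" % (wall, cpu)
        # wall >> cpu  -> the code is waiting (I/O, locks, CPU starvation)
        # wall ~= cpu  -> the code is genuinely burning processor time

    Run under load, a growing gap between wall and CPU time per request points at contention for the 4 workers rather than at inefficient code.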


  • Need help getting parent reference to child view controller

    - by Andy
    I've got the following code in one of my view controllers: - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { switch (indexPath.section) { case 0: // "days" section tapped { DayPicker *dayPicker = [[DayPicker alloc] initWithStyle:UITableViewStylePlain]; dayPicker.rowLabel = self.activeDaysLabel; dayPicker.selectedDays = self.newRule.activeDays; [self.navigationController pushViewController:dayPicker animated:YES]; [dayPicker release]; break; ... Then, in the DayPicker controller, I do some stuff to the dayPicker.rowLabel property. Now, when the dayPicker is dismissed, I want the value in dayPicker.rowLabel to be used as the cell.textLabel.text property in the cell that called the controller in the first place (i.e., the cell label becomes the option that was selected within the DayPicker controller). I thought that by using the assignment operator to set dayPicker.rowLabel = self.activeDaysLabel, the two pointed to the same object in memory, and that upon dismissing the DayPicker, my first view controller, which uses self.activeDaysLabel as the cell.textLabel.text property for the cell in question, would automagically pick up the new value of the object. But no such luck. Have I missed something basic here, or am I going about this the wrong way? I originally passed a reference to the calling view controller to the child view controller, but several here told me that was likely to cause problems, being a circular reference. That setup worked, though; now I'm not sure how to accomplish the same thing "the right way." As usual, thanks in advance for your help.


  • Python 2.6 does not like appending to existing archives in zip files

    - by user313661
    Hello, in some Python unit tests of a program I'm working on, we use in-memory zip files for end-to-end tests. In setUp() we create a simple zip file, but in some tests we want to overwrite some archives. For this we do zip.writestr(archive_name, zip.read(archive_name) + new_content). Something like:

        import zipfile
        from StringIO import StringIO

        def Foo():
            zfile = StringIO()
            zip = zipfile.ZipFile(zfile, 'a')
            zip.writestr("foo", "foo content")
            zip.writestr("bar", "bar content")
            zip.writestr("foo", zip.read("foo") + "some more foo content")
            print zip.read("bar")

        Foo()

    The problem is that this works fine in Python 2.4 and 2.5, but not in 2.6. In Python 2.6 it fails on the print line with "BadZipfile: File name in directory "bar" and header "foo" differ." It seems that it is reading the correct file bar, but that it thinks it should be reading foo instead. I'm at a loss. What am I doing wrong? Is this not supported? I tried searching the web but could find no mention of similar problems. I read the zipfile documentation but could not find anything (that I thought was) relevant, especially since I'm calling read() with the filename string. Any ideas? Thank you in advance!
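    Appending with mode 'a' never replaces an entry: writestr with an existing name writes a second record under the same name, and Python 2.6's stricter consistency check between the central directory and the local file headers then trips over the duplicate (2.4/2.5 simply didn't verify). A workaround sketch that rebuilds the archive instead of appending:

        import zipfile
        from StringIO import StringIO

        def replace_entry(zfile, name, new_data):
            """Return a new in-memory zip with `name` replaced by new_data."""
            src = zipfile.ZipFile(zfile, 'r')
            out_buf = StringIO()
            out = zipfile.ZipFile(out_buf, 'w')
            for item in src.infolist():
                if item.filename != name:
                    out.writestr(item, src.read(item.filename))
            out.writestr(name, new_data)
            out.close()
            src.close()
            return out_buf

    For the test above, the overwrite becomes zfile = replace_entry(zfile, "foo", old_foo + "some more foo content"), with old_foo read out before the rebuild.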


  • How can I use io.StringIO() with the csv module?

    - by Tim Pietzcker
    I tried to backport a Python 3 program to 2.7, and I'm stuck with a strange problem:

        >>> import io
        >>> import csv
        >>> output = io.StringIO()
        >>> output.write("Hello!")  # Fail: io.StringIO expects Unicode
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'
        >>> output.write(u"Hello!")  # This works as expected.
        6L
        >>> writer = csv.writer(output)  # Now let's try this with the csv module:
        >>> csvdata = [u"Hello", u"Goodbye"]  # Look ma, all Unicode! (?)
        >>> writer.writerow(csvdata)  # Sadly, no.
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'

    According to the docs, io.StringIO() returns an in-memory stream for Unicode text. It works correctly when I try and feed it a Unicode string manually. Why does it fail in conjunction with the csv module, even if all the strings being written are Unicode strings? Where does the str come from that causes the Exception? (I do know that I can use StringIO.StringIO() instead, but I'm wondering what's wrong with io.StringIO() in this scenario.)
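    The str comes from the csv module itself: in Python 2, csv serializes each row and hands byte strings (str) to the underlying file object, and io.StringIO refuses anything but unicode. A sketch of the byte-oriented equivalent:

        import csv
        import io

        output = io.BytesIO()        # byte-oriented twin of io.StringIO
        writer = csv.writer(output)
        writer.writerow([u"Hello", u"Goodbye"])   # ASCII-only values coerce cleanly
        text = output.getvalue().decode("utf-8")  # back to unicode if needed
        print repr(text)

    Non-ASCII fields would need an explicit field.encode('utf-8') before writerow, since the Python 2 csv module does not understand unicode at all; that is also why StringIO.StringIO "works" here, as it silently accepts both str and unicode.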


  • C# cross thread dialogue co-operation

    - by John Attridge
    OK, I am looking at a primarily single-threaded Windows Forms application in 3.0. Recently my boss had a progress dialog added on a separate thread, so the user would see some activity when the main thread went away to do some heavy-duty work and locked up the GUI.

    The above works fine unless the user switches applications or minimizes, as the progress form sits topmost and will not disappear with the main application. This is not so bad if there are lots of little operations, as the event loop of the main form catches up with its events when it gets time, so the minimized and active flags can be checked and the dialog thread can hide or show itself accordingly. But if a long-running SQL operation kicks off, then no events fire. I have tried intercepting the WndProc messages, but these also appear queued while a long-running SQL operation is executing. I have also tried enumerating the processes, finding the current app and checking various values, IsIconic and the like, inside the progress thread, but until the SQL operation finishes none of these get updated. Removing the topmost flag causes the dialog to disappear when another app activates, but if the main app is then brought back, it does not appear again.

    So I need a way to find out whether the other thread is minimized or no longer active that does not involve querying the actual thread, as that locks until the SQL operation finishes. Now I know that this is not the best way to write this; it would be better to have all the heavy processing on separate threads, leaving the GUI free. But as this is a huge, ancient legacy app, the time to rewrite it in that fashion will not be provided, so I have to work with what I have got. Any help is appreciated.
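    A sketch of one way out, assuming the main form's handle is captured on the UI thread before the long operation starts: IsIconic and GetForegroundWindow read window state directly from user32 without messaging the blocked owning thread, so the progress thread can poll them on a timer (requires using System.Runtime.InteropServices):

        // Inside the progress form's class.
        [DllImport("user32.dll")]
        private static extern bool IsIconic(IntPtr hWnd);

        [DllImport("user32.dll")]
        private static extern IntPtr GetForegroundWindow();

        // Poll from a timer on the progress thread; mainHandle must be
        // captured (Form.Handle) on the UI thread before the work starts.
        private void PollOwnerState(IntPtr mainHandle)
        {
            bool minimized = IsIconic(mainHandle);
            this.Visible = !minimized;
            // Stay topmost only while our app owns the foreground window;
            // the progress form itself may be the foreground window.
            IntPtr fg = GetForegroundWindow();
            this.TopMost = !minimized && (fg == mainHandle || fg == this.Handle);
        }

    Both calls read window styles and global state rather than sending messages, which is what makes them safe against the frozen UI thread.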


  • Is it possible to cache all the data in a SQL Server CE database using LinqToSql?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually? Note: my database in its initial state is about 250K, and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view. Update: here are the table definitions for the example I used in my question:

        create table Order
        (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer
        (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )
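    Short of a true in-memory database, the snapshot-and-join idea can stay in ordinary query syntax, which keeps most of the LinqToSql feel; db and the table property names below are assumptions about the generated DataContext:

        // Snapshot both tables once per unit of work, then join in memory
        // with LINQ to Objects instead of issuing per-row CE queries.
        var customers = db.Customers.ToList();
        var orders = db.Orders.ToList();

        var stopwatchCustomers =
            from c in customers
            join o in orders on c.OrderId equals o.Id
            where o.ProductName == "Stopwatch"
            select c;

    Since the database tops out at 1-2 MB, both lists fit trivially in RAM; writes still go through the DataContext as before, so "occasionally save it" is just the normal SubmitChanges path.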

