Search Results

Search found 8268 results on 331 pages for 'difference'.


  • PHP - Database schema: version control, branching, migrations.

    - by Billiam
    I'm trying to come up with (or find) a reusable system for database schema versioning in PHP projects. There are a number of Rails-style migration projects available for PHP; http://code.google.com/p/mysql-php-migrations/ is a good example. It uses timestamps for migration files, which helps with conflicts between branches.

    General problem with this kind of system: when development branch A is checked out and you want to check out branch B instead, B may have new migration files. This is fine; migrating to newer content is straightforward. But if branch A has newer migration files, you would need to migrate downwards to the nearest shared patch, and if branches A and B have significantly different code bases, you may have to migrate down even further. This may mean: check out B, determine the shared patch number, check out A, migrate downwards to this patch (this must be done from A, since the actual applied patches are not available in B), then check out branch B and migrate to the newest B patch. Reverse the process again when going from B to A.

    Proposed system: when migrating upwards, instead of just storing the patch version, serialize the whole patch in the database for later use, though I'd probably only need the down() method. When changing branches, compare the patches that have been run against the patches available in the destination branch. Determine the nearest shared patch (or oldest difference, maybe) between the db table of run patches and the patches in the destination branch, by ID or hash. This could also look for new or missing patches that are buried under a number of shared patches between the two branches. Automatically migrate down to the nearest shared patch using the down() methods stored in the db table, and then migrate up to the branch's latest patch, as in the sketch below.

    My question is: is this system too crazy and/or fraught with consequences to bother developing? My experience with database schema versioning is limited to PHP autopatch, which is an up()-only system requiring filenames with sequential IDs.
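
    A minimal sketch of the proposed branch-switch step. Everything here is hypothetical: the schema_migrations table, its down_sql column, and the function names are invented for illustration, and sqlite3 stands in for MySQL purely to keep the sketch self-contained.

        import sqlite3

        def applied_patches(conn):
            # Patches recorded at migrate-up time, each with its serialized down() SQL.
            return conn.execute(
                "SELECT id, down_sql FROM schema_migrations ORDER BY id").fetchall()

        def switch_branch(conn, destination_patch_ids):
            applied = applied_patches(conn)
            dest = set(destination_patch_ids)
            # Nearest shared patch: the newest applied patch the destination branch also has.
            shared = [pid for pid, _ in applied if pid in dest]
            boundary = shared[-1] if shared else None
            # Roll back everything newer than the boundary, newest first, using the
            # stored down() SQL -- so branch B never needs branch A's migration files.
            for pid, down_sql in reversed(applied):
                if pid == boundary:
                    break
                conn.executescript(down_sql)
                conn.execute("DELETE FROM schema_migrations WHERE id = ?", (pid,))
            conn.commit()
            # Migrating up to the destination branch's latest patch then runs as usual.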


  • Readability and IF-block brackets: best practice

    - by MasterPeter
    I am preparing a short tutorial for level 1 uni students learning JavaScript basics. The task is to validate a phone number: the number must not contain non-digits and must be 14 digits long or less. The following code excerpt is what I came up with, and I would like to make it as readable as possible.

        if ( //set of rules for invalid phone number
            phoneNumber.length == 0 //empty
            || phoneNumber.length > 14 //too long
            || /\D/.test(phoneNumber) //contains non-digits
        ) {
            setMessageText(invalid);
        } else {
            setMessageText(valid);
        }

    A simple question I cannot quite answer myself and would like to hear your opinions on: how should the surrounding (outermost) brackets be positioned? It's hard to see the difference between a normal and a curly bracket. Do you usually put the last ) on the same line as the last condition? Do you keep the first opening ( on a line by itself? Do you wrap each individual sub-condition in brackets too? Do you horizontally align the first ( with the last ), or do you place the last ) in the same column as the if? Do you keep ) { on a separate line, or do you place the last ) on the same line as the last sub-condition and then place the opening { on a new line? Or do you just put the ) { on the same line as the last sub-condition?

    Community wiki.

    EDIT: Please only post opinions regarding the usage and placement of brackets. The code need not be refactored; this is for people who were only introduced to JavaScript a couple of weeks ago. I am not asking for opinions on how to write the code so it's shorter or performs better. I would only like to know how you place brackets around IF-conditions.


  • Getting svn: E170000: Unrecognized URL scheme for my custom Svn Gradle plugin

    - by Ip Doh
    I wrote a custom Gradle plugin in Groovy to do basic svn tasks like checkout, clean, tag, etc. The Groovy class calls the svn command line client to do these operations. It works fine when I run it on my Windows system, but the same plugin gives the following error when I run it on a Linux system (CentOS):

        svn: E170000: Unrecognized URL scheme for '%22https://source.mycompany.net/svn/MyProject/trunk%22'

    I am able to make the same calls to the command line client through the command prompt or a shell script without any issues, so what is the difference? Here is my code sample:

        String command = String.format(
            "svn co -r %d --non-interactive --trust-server-cert --username %s --password %s --depth infinity \"%s\" \"%s\"",
            getRevision(), getUserName(), getUserPassword(), getSrcUrl(), getDir());
        Process svnProcess = Runtime.getRuntime().exec(command);
        BufferedReader stdInput = new BufferedReader(new InputStreamReader(svnProcess.getInputStream()));
        BufferedReader stdError = new BufferedReader(new InputStreamReader(svnProcess.getErrorStream()));
        String statusOutputLine = ""
        while ((statusOutputLine = stdInput.readLine()) != null) {
            logger.quiet(" " + statusOutputLine);
        }
        while ((statusOutputLine = stdError.readLine()) != null) {
            logger.error(statusOutputLine)
            throw new Exception(statusOutputLine)
        }
        logger.quiet("Successfully checked out the work space")

    I do have Neon installed on the system:

        -bash-4.1$ svn --version
        svn, version 1.6.11 (r934486)
        compiled Jun 25 2011, 11:30:15
        Copyright (C) 2000-2009 CollabNet.
        Subversion is open source software, see http://subversion.tigris.org/
        This product includes software developed by CollabNet (http://www.Collab.Net/).

        The following repository access (RA) modules are available:
        ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
          - handles 'http' scheme
          - handles 'https' scheme
        ra_svn : Module for accessing a repository using the svn network protocol.
          - with Cyrus SASL authentication
          - handles 'svn' scheme
        ra_local : Module for accessing a repository on local disk.
          - handles 'file' scheme
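
    A likely culprit, offered as an observation rather than a confirmed answer: Runtime.exec(String) tokenizes the command on whitespace with no shell-style quote handling, so on Linux the literal \" characters stay attached to the URL argument; the %22 in the error is exactly that quote, URL-encoded. On Windows the child process re-parses its command line, which hides the problem. Passing each argument separately avoids quoting altogether. A minimal Python illustration of the same pitfall, with a placeholder URL:

        import subprocess

        url = "https://source.example.net/svn/MyProject/trunk"  # placeholder URL

        # Fragile: no shell strips these quotes, so svn receives an argument
        # that literally begins and ends with a '"' character.
        subprocess.run(["svn", "info", '"%s"' % url])

        # Robust: one list element per argument; no quoting is ever needed,
        # even when an argument contains spaces.
        subprocess.run(["svn", "info", url])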


  • applicationWillTerminate Appears to be Inconsistent

    - by Lauren Quantrell
    This one has me batty. In applicationWillTerminate I am doing two things: saving some settings to the app settings plist file and saving any changed data to the SQLite database referenced by the managedObjectContext. The problem is that it works sometimes and not others; the same issue occurs in the simulator and on the device. If I hit the home button while the app is running, I can only sometimes get the data to store in the plist and in the Core Data store. It seems that either both work or neither works, and it makes no difference if I switch the execution order (saveState then managedObjectContext, or managedObjectContext then saveState). I can't figure out how this can happen. Any help is greatly appreciated. lq

    AppDelegate.m:

        @synthesize rootViewController;

        - (void)applicationWillTerminate:(UIApplication *)application {
            [rootViewController saveState];
            NSError *error;
            if (managedObjectContext != nil) {
                if ([managedObjectContext hasChanges] && ![managedObjectContext save:&error]) {
                    // Handle error
                    exit(-1);  // Fail
                }
            }
        }

    RootViewController.m:

        - (void)saveState {
            NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
            [userDefaults setInteger:self.someInteger forKey:kSomeNumber];
            [userDefaults setObject:self.someArray forKey:kSomeArray];
        }


  • PostgreSQL: BYTEA vs OID+Large Object?

    - by mlaverd
    I started an application with Hibernate 3.2 and PostgreSQL 8.4. I have some byte[] fields that were mapped as @Basic (= PG bytea) and others that got mapped as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob.

    Now, those fields are at most 4 kB (the average is 2-3 kB). The PostgreSQL documentation mentions that LOs are good when the fields are big, but I didn't see what 'big' meant.

    I have upgraded to PostgreSQL 9.0 with Hibernate 3.6, and I was forced to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). This bug brought a potential compatibility issue to my attention, and I eventually found out that Large Objects are a pain to deal with compared to a normal field.

    So I am thinking of changing all of it to bytea. But bytea fields are encoded in hex, so I am concerned that the overhead of encoding and decoding would hurt performance. Are there good benchmarks on the performance of both of these? Has anybody made the switch and seen a difference?
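
    On the hex question: the hex form is the server's text representation (the bytea_output default since 9.0); client libraries decode it back to raw bytes, so for 2-4 kB values the cost is usually easiest to just measure. A rough, self-contained benchmark sketch, assuming psycopg2 and a throwaway temp table; the DSN is a placeholder:

        import time
        import psycopg2

        conn = psycopg2.connect("dbname=test user=test")  # placeholder DSN
        cur = conn.cursor()
        cur.execute("CREATE TEMP TABLE blob_test (id serial, payload bytea)")

        payload = b"\x00\x01" * 2048  # ~4 kB, the size range in question

        start = time.time()
        for _ in range(1000):
            cur.execute("INSERT INTO blob_test (payload) VALUES (%s)",
                        (psycopg2.Binary(payload),))
        conn.commit()
        print("insert:", time.time() - start)

        start = time.time()
        cur.execute("SELECT payload FROM blob_test")
        rows = cur.fetchall()  # any hex decoding happens client-side here
        print("select:", time.time() - start, len(rows))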


  • UITableViewCells of different heights place their accessoryViews at different X positions

    - by Jasarien
    Hey guys, my app has some table cells that vary in height. The cells can also have a UIButton set to be a detail disclosure button (round, blue, with arrow) as their accessory view. Depending on the height of the cell, the accessory view is positioned differently.

    At first I thought my cell layout code was causing the problem, so I set up a quick independent test using vanilla UITableViewCells to remove the possibility that it could be my fault. I set up a view in Interface Builder, added a few table cells to the view, set their heights to different values, and then added a detail disclosure button to each. Nothing more, nothing less. I added size guides (thanks to Xscope) so you can see the difference in the accessory view x positions. The heights are:

        top:    37px
        mid:    68px
        bottom: 44px (the default, untouched height)

    If I increase the height beyond 68px, the accessory view doesn't move any further to the left. Is this a bug? Is there any way I can prevent this from happening?


  • iPhone 4.0 on iPhone but still Ad-Hoc compile for 3.1.3?

    - by Mark
    My setup:

        Device: Version: 3.1, Build: 3511
        Device: iPhone, OS: iPhone OS 4.0
        Xcode 3.2.2 (old)
        Xcode 3.2.3 (new; for the iPhone OS 4.0 beta)

    Background: As you can see, I installed 4.0 on my iPhone, since I read on this forum that it's really hard to near impossible to downgrade back to 3.1.3, but it's the only device I have and use for development.

    When I try to continue developing and building with the old Xcode, it tells me that "No provisioned iPhone OS device is connected". When I select Simulator, it does compile and build; however, when I distribute that file it does not work on my testers' devices: they get a Signed error. When I run the new Xcode, it does compile and build on the device, and when I distribute that file it does work on my testers' devices (which are running the current official version, 3.1.3).

    Questions: Why is there a difference between building for Simulator and Device? A Simulator build never seems to work on my testers' devices because of signing issues, while the device build does work. Currently it seems the old Xcode has become useless, yet I have read that you may not use the beta Xcode to build your application for release. Knowing the above, how am I able to pull this off with my current setup, given that the old Xcode won't let me build properly?


  • HTTP 1.0 vs 1.1

    - by Jason Baker
    Could somebody give me a brief overview of the differences between HTTP 1.0 and HTTP 1.1? I've spent some time with both of the RFCs but haven't been able to pull out a lot of difference between them. Wikipedia says this:

        HTTP/1.1 (1997-1999). Current version; persistent connections enabled by default and works well with proxies. Also supports request pipelining, allowing multiple requests to be sent at the same time, allowing the server to prepare for the workload and potentially transfer the requested resources more quickly to the client.

    But that doesn't mean a lot to me. I realize this is a somewhat complicated subject (based on what I know about HTTP as of right now), so I'm not expecting a full answer, but can someone give me a brief overview of the differences at a slightly lower level? By this I mean the information I would need to know to implement either an HTTP server or an application. I'm really looking for a nudge in the right direction so that I can figure it out on my own.
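
    For implementers, the headline difference is connection handling: HTTP/1.0 closes the TCP connection after each response, while HTTP/1.1 keeps it open by default and makes the Host header mandatory (which is what enables name-based virtual hosting). A deliberately naive raw-socket sketch, against a placeholder host, showing two requests sharing one HTTP/1.1 connection:

        import socket

        host = "example.com"  # placeholder host

        s = socket.create_connection((host, 80))
        request = f"GET / HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()

        s.sendall(request)
        first = s.recv(4096)   # naive read; a real client parses Content-Length
        s.sendall(request)     # same socket again -- an HTTP/1.0 server would
        second = s.recv(4096)  # already have closed the connection by now
        s.close()

        print(first.split(b"\r\n")[0], second.split(b"\r\n")[0])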


  • Why This Maintainability Index Increase?

    - by Timothy
    I would be appreciative if someone could explain to me the difference between the following two pieces of code in terms of Visual Studio's Code Metrics rules. Why does the Maintainability Index increase slightly if I don't encapsulate everything within the using ( ) block?

    Sample 1 (MI score of 71):

        public static String Sha1(String plainText)
        {
            using (SHA1Managed sha1 = new SHA1Managed())
            {
                Byte[] text = Encoding.Unicode.GetBytes(plainText);
                Byte[] hashBytes = sha1.ComputeHash(text);
                return Convert.ToBase64String(hashBytes);
            }
        }

    Sample 2 (MI score of 73):

        public static String Sha1(String plainText)
        {
            Byte[] text, hashBytes;
            using (SHA1Managed sha1 = new SHA1Managed())
            {
                text = Encoding.Unicode.GetBytes(plainText);
                hashBytes = sha1.ComputeHash(text);
            }
            return Convert.ToBase64String(hashBytes);
        }

    I understand that metrics are meaningless outside of a broader context and understanding, and that programmers should exercise discretion. While I could boost the score up to 76 with return Convert.ToBase64String(sha1.ComputeHash(Encoding.Unicode.GetBytes(plainText))), I shouldn't; I would clearly just be playing with numbers, and it isn't truly any more readable or maintainable at that point. I am curious, though, as to what the logic might be behind the increase in this case. It's obviously not line count.
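
    For reference, the formula Microsoft has described for this metric is a rescaled classic maintainability index: MAX(0, (171 - 5.2*ln(HalsteadVolume) - 0.23*CyclomaticComplexity - 16.2*ln(LinesOfCode)) * 100 / 171). Line count and complexity are identical between the samples, which leaves Halstead volume (a function of distinct and total operators/operands) as the likely mover. A quick sketch of the formula; the inputs below are illustrative guesses, not values measured from the samples:

        import math

        def maintainability_index(halstead_volume, cyclomatic_complexity, lines_of_code):
            # Visual Studio's rescaled maintainability index, clamped to 0..100.
            raw = (171
                   - 5.2 * math.log(halstead_volume)
                   - 0.23 * cyclomatic_complexity
                   - 16.2 * math.log(lines_of_code))
            return max(0, raw * 100 / 171)

        # A slightly lower Halstead volume nudges the score upward even when
        # line count and cyclomatic complexity stay the same.
        print(maintainability_index(250, 1, 8))
        print(maintainability_index(220, 1, 8))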


  • Python Locking Implementation (with threading module)

    - by Matty
    This is probably a rudimentary question, but I'm new to threaded programming in Python and am not entirely sure what the correct practice is. Should I be creating a single lock object (either global or passed around) and using it everywhere that I need to do locking? Or should I be creating multiple lock instances in each of the classes where I will be employing them? Take these two rudimentary code samples; which direction is best to go? The main difference is that a single lock instance is shared by classes A and B in the second sample, while separate instances are used in the first.

    Sample 1:

        class A():
            def __init__(self, theList):
                self.theList = theList
                self.lock = threading.Lock()

            def poll(self):
                while True:
                    # do some stuff that eventually needs to work with theList
                    self.lock.acquire()
                    try:
                        self.theList.append(something)
                    finally:
                        self.lock.release()

        class B(threading.Thread):
            def __init__(self, theList):
                self.theList = theList
                self.lock = threading.Lock()
                self.start()

            def run(self):
                while True:
                    # do some stuff that eventually needs to work with theList
                    self.lock.acquire()
                    try:
                        self.theList.remove(something)
                    finally:
                        self.lock.release()

        if __name__ == "__main__":
            aList = []
            for x in range(10):
                B(aList)
            A(aList).poll()

    Sample 2:

        class A():
            def __init__(self, theList, lock):
                self.theList = theList
                self.lock = lock

            def poll(self):
                while True:
                    # do some stuff that eventually needs to work with theList
                    self.lock.acquire()
                    try:
                        self.theList.append(something)
                    finally:
                        self.lock.release()

        class B(threading.Thread):
            def __init__(self, theList, lock):
                self.theList = theList
                self.lock = lock
                self.start()

            def run(self):
                while True:
                    # do some stuff that eventually needs to work with theList
                    self.lock.acquire()
                    try:
                        self.theList.remove(something)
                    finally:
                        self.lock.release()

        if __name__ == "__main__":
            lock = threading.Lock()
            aList = []
            for x in range(10):
                B(aList, lock)
            A(aList, lock).poll()
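
    One note on the underlying principle (an editorial aside, not from the question): a lock protects data, not code, so every thread touching the same data must use the same lock object. The per-instance locks in Sample 1 exclude nobody. A compact way to keep the pairing explicit is to bundle the list and its lock together:

        import threading

        class LockedList:
            """A list bundled with the single lock that guards it."""

            def __init__(self):
                self._items = []
                self._lock = threading.Lock()

            def append(self, item):
                with self._lock:  # 'with' replaces acquire/try/finally/release
                    self._items.append(item)

            def remove(self, item):
                with self._lock:
                    self._items.remove(item)

        shared = LockedList()
        threads = [threading.Thread(target=shared.append, args=(n,))
                   for n in range(10)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()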


  • Microsoft OLE DB Provider for SQL Server error '80040e14' Could not find stored procedure

    - by BBlake
    I am migrating a classic ASP web app to new servers. The database back end is migrating from SQL Server 2000 to SQL Server 2008, and the app is moving from Win2000 x86 to Win2003R2 x64. I am getting the above error on every single stored procedure call within the application. I have verified:

    - Yes, the SQL user is set up, using the correct username and password.
    - Yes, the SQL user has execute permissions on the stored procedures in the database.
    - Yes, I have updated the TypeLib references to the new UUID.
    - Yes, I have logged into the database via SSMS with the SQL user id, and it can see and execute the stored procedures just fine in SSMS, but not from the web app.
    - Yes, the SQL user has the database set as its default database.

    The most frustrating thing is that it works fine on the DEV server, but not on the production server. I have gone through every IIS setting 5 or 6 times, and the web app is set up precisely the same in both environments. The only difference is the database server name in the connection string (DEV vs. prod).

    EDIT: I have also tried pointing the prod web box at the dev database server and got the same error, so I'm fairly sure the issue isn't on the database side.


  • MVC design question for forms

    - by kenny99
    Hi, I'm developing an app which has a large amount of related form data to be handled. I'm using an MVC structure, and all of the related data is represented in my models, along with the handling of data validation from form submissions. I'm looking for some advice on a good way to lay out my controllers.

    Basically, I will have a huge form which will be broken down into manageable categories (similar to a credit card application) where the user progresses through each stage/category filling out the answers. All of these form categories are related to the main relation/object, but not to each other.

    Does it make more sense to have each subform/category as a method in the main controller class (which will make that one controller fairly massive), or would it be better to break each category into a subclass of the main controller? It may be just for neatness that the second approach is better, but I'm struggling to see much of a difference between creating a new method for each category (which communicates with the model and outputs errors/success) and creating a new controller to handle the same functionality. Thanks in advance for any guidance!
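
    One hedged way to picture the subclass option (all names here are invented for illustration, including the model's save_section method): a thin base controller owns the shared plumbing once, and each category supplies only its own validation and section name.

        class FormStepController:
            # Base class: shared save/error plumbing lives here exactly once.
            def __init__(self, model):
                self.model = model

            def handle(self, form_data):
                errors = self.validate(form_data)
                if errors:
                    return {"ok": False, "errors": errors}
                # save_section is a hypothetical model method for this sketch.
                self.model.save_section(self.section, form_data)
                return {"ok": True, "next": self.next_step}

            def validate(self, form_data):
                raise NotImplementedError

        class ContactStep(FormStepController):
            section, next_step = "contact", "employment"

            def validate(self, form_data):
                return [] if form_data.get("email") else ["email is required"]

        class EmploymentStep(FormStepController):
            section, next_step = "employment", "review"

            def validate(self, form_data):
                return [] if form_data.get("employer") else ["employer is required"]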


  • What is wrong with this mail header text?

    - by dnagirl
    The following $header is being sent via PHP's mail($to, $subject, $content, $header) command. The mail arrives and appears to have an attachment, but the mail text is empty, as is the file. I think this has something to do with line spacing, but I can't see the problem. I have tried putting the contents (between the boundaries) in $content rather than appending it to $header; it doesn't make a difference. Any thoughts?

        From: [email protected]
        Reply-To: [email protected]
        X-Mailer: PHP 5.3.1
        MIME-Version: 1.0
        Content-Type: multipart/mixed; boundary="7425195ade9d89bc7492cf520bf9f33a"

        --7425195ade9d89bc7492cf520bf9f33a
        Content-type: text/plain; charset=iso-8859-1
        Content-Transfer-Encoding: 7bit

        this is a test message.

        --7425195ade9d89bc7492cf520bf9f33a
        Content-Type: application/pdf; name="test.pdf"
        Content-Transfer-Encoding: base64
        Content-Disposition: attachment; filename="test.pdf"

        JVBERi0xLjMKJcfsj6IKNSAwIG9iago8PC9MZW5ndGggNiAwIFIvRmlsdGVyIC9GbGF0ZURlY29k
        ZT4+CnN0cmVhbQp4nE2PT0vEMBDFadVdO4p/v8AcUyFjMmma5CqIIF5cctt6WnFBqLD1+4Np1nY3
        . ... the rest of the file .
        MDAwMDA2MDYgMDAwMDAgbiAKMDAwMDAwMDcwNyAwMDAwMCBuIAowMDAwMDAxMDY4IDAwMDAwIG4g
        CjAwMDAwMDA2NDcgMDAwMDAgbiAKMDAwMDAwMDY3NyAwMDAwMCBuIAowMDAwMDAxMjg2IDAwMDAw
        IG4gCjAwMDAwMDA5MzIgMDAwMDAgbiAKdHJhaWxlcgo8PCAvU2l6ZSAxNCAvUm9vdCAxIDAgUiAv
        SW5mbyAyIDAgUgovSUQgWzxEMURDN0E2OUUzN0QzNjI1MDUyMEFFMjU0MTMxNTQwQz48RDFEQzdB
        NjlFMzdEMzYyNTA1MjBBRTI1NDEzMTU0MEM+XQo+PgpzdGFydHhyZWYKNDY5MwolJUVPRgo=
        --7425195ade9d89bc7492cf520bf9f33a--

    Note: $header ends without a line break.
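
    The usual traps with hand-built MIME, and plausible matches for the symptom: header lines should be CRLF-terminated, every part's headers must be separated from its body by exactly one blank line, and the multipart body belongs in $content, not appended to $header. For comparison, the standard-library way to get the structure right, sketched in Python with placeholder addresses and a local test.pdf:

        from email.mime.multipart import MIMEMultipart
        from email.mime.text import MIMEText
        from email.mime.application import MIMEApplication

        msg = MIMEMultipart("mixed")
        msg["From"] = "sender@example.com"        # placeholder addresses
        msg["To"] = "recipient@example.com"
        msg["Subject"] = "test"

        msg.attach(MIMEText("this is a test message.", "plain", "iso-8859-1"))

        with open("test.pdf", "rb") as f:
            pdf = MIMEApplication(f.read(), _subtype="pdf")
        pdf.add_header("Content-Disposition", "attachment", filename="test.pdf")
        msg.attach(pdf)

        # as_string() emits the boundaries, the blank separator lines, and the
        # base64 transfer encoding for you.
        print(msg.as_string())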


  • Source of UIView Implicit Animation delay?

    - by iPhoneToucher
    I have a block of UIView animation code that looks like this:

        [UIView beginAnimations:@"pushView" context:nil];
        [UIView setAnimationDelay:0];
        [UIView setAnimationDuration:.5];
        [UIView setAnimationDelegate:self];
        [UIView setAnimationWillStartSelector:@selector(animationWillStart)];
        view.frame = CGRectMake(0, 0, 320, 416);
        [UIView commitAnimations];

    The code basically mimics the animation of a modal view presentation and is tied to a button on my interface. When the button is pressed, I get a long (.5 sec) delay on an iPod touch (it's twice as fast on an iPhone 3GS) before animationWillStart: actually gets called. My app has lots going on besides this, but I've timed various points of my code, and the delay definitely occurs at this block. In other words, a timestamp immediately before this code block and a timestamp when animationWillStart: gets called show a .5 sec difference.

    I'm not too experienced with Core Animation, and I'm just trying to figure out the cause of the delay. Memory use is stable when the animation starts, and Core Animation FPS seems to be fine in Instruments. The view that gets animated does have upwards of 20 total subviews, but if that were the issue, wouldn't it cause choppiness after the animation starts, rather than before? Any ideas?


  • How do I copy or move an NSManagedObject from one context to another?

    - by Aeonaut
    I have what I assume is a fairly standard setup, with one scratchpad MOC which is never saved (containing a bunch of objects downloaded from the web) and another permanent MOC which persists objects. When the user selects an object from scratchMOC to add to her library, I want to either 1) remove the object from scratchMOC and insert it into permanentMOC, or 2) copy the object into permanentMOC. The Core Data FAQ says I can copy an object like this:

        NSManagedObjectID *objectID = [managedObject objectID];
        NSManagedObject *copy = [context2 objectWithID:objectID];

    (In this case, context2 would be permanentMOC.) However, when I do this, the copied object is faulted; the data is initially unresolved. When it does get resolved, later, all of the values are nil; none of the data (attributes or relationships) from the original managedObject is actually copied or referenced. Therefore I can't see any difference between using this objectWithID: method and just inserting an entirely new object into permanentMOC using insertNewObjectForEntityForName:.

    I realize I can create a new object in permanentMOC and manually copy each key-value pair from the old object, but I'm not very happy with that solution. (I have a number of different managed objects for which I have this problem, so I don't want to have to write and update copy: methods for all of them as I continue developing.) Is there a better way?


  • How to improve Java performance on Informix for Windows

    - by Michal Niklas
    I have a problem with the performance of Java UDR functions on Informix on Windows. On this server I already have some functions in C and SPL. I chose one function, wrote it in all three languages, and measured its performance on a test table. The function calculates a kind of checksum, so it does not use any db libraries etc., only string and math operations. I observed performance on 30k records with SQL like:

        select function(txt) from _tmp_perf_test

    changing function to function_c, function_spl or function_java.

    My performance tests showed that the C function is the fastest, the SPL function is about 5 times slower, and Java is 100 (one hundred!) times slower than C. I checked it a few times, and the 1:100 ratio didn't improve. I changed the Java function to simply return the length of the string, but even this did not help, so it looks like there is a general problem with Java function invocation: there was no difference in time between the Java function that calculates the checksum and the Java function that returns the length of the string. I increased JVM_MAX_HEAP_SIZE to 128, and that didn't help either.

    I use IBM Informix Dynamic Server Version 11.50.TC6DE. The same test on a Linux server (IBM Informix Dynamic Server Version 11.50.FC6) shows more "normal" results: Java is slower than C and SPL, but only 2 to 5 times. What can I do to improve Java performance on an Informix server on Windows?

    More info about Java on the servers:

        c:\Informix\extend\krakatoa\jre\bin>java -version
        java version "1.5.0"
        Java(TM) 2 Runtime Environment, Standard Edition (build pwi32dev-20081129a (SR9-0))
        IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Windows Server 2003 x86-32 j9vmwi3223-20081129 (JIT enabled)
        J9VM - 20081126_26240_lHdSMr
        JIT  - 20081112_1511ifx1_r8
        GC   - 200811_07)
        JCL  - 20081129

        [root@informix11 bin]# ./java -version
        java version "1.5.0"
        Java(TM) 2 Runtime Environment, Standard Edition (build pxa64devifx-20071025 (SR6b))
        IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 Linux amd64-64 j9vmxa6423-20071005 (JIT enabled)
        J9VM - 20071004_14218_LHdSMr
        JIT  - 20070820_1846ifx1_r8
        GC   - 200708_10)
        JCL  - 20071025


  • RelativeLayout differences between 1.5 and 2.1

    - by Kilnr
    I've got a ListView with items composed of RelativeLayouts. This is the relevant XML from the list items:

        <?xml version="1.0" encoding="utf-8"?>
        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content">
            <TextView android:id="@+id/xx"
                android:gravity="center_vertical"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_gravity="center_vertical"
                android:layout_centerInParent="true"
                android:layout_alignParentLeft="true"/>
            <TextView android:id="@+id/title"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:layout_toRightOf="@id/xx" />
            <TextView android:id="@+id/tag"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_toRightOf="@id/xx"
                android:layout_below="@id/title" />
            <TextView android:id="@+id/subtitle"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_toRightOf="@id/tag"
                android:layout_below="@id/title" />
        </RelativeLayout>

    On Android 2.1 (tested on a Nexus One), this shows the desired behavior. On Android 1.5, however (tested on an HTC Hero), it shows up differently. [edit] On 1.6 (emulator), it works as expected as well.

    The small grey line on the top left is what shows up in the first screenshot as "xx", so that should be vertically centered. As far as I can see, the XML dictates this, but for some reason 1.5 ignores it. Why is this? I can't find anything about this difference, and I've been brute-forcing every combination of layout_center, center, and alignParent*, but to no avail. Can anyone shed some light on this? Thanks!


  • C# Regex stops after first line matched

    - by JD Guzman
    OK, so I have a regex and I need it to find matches in a multiline string. This is the string I am using:

        Device Identifier:        disk0
        Device Node:              /dev/disk0
        Part of Whole:            disk0
        Device / Media Name:      OCZ-VERTEX2 Media
        Volume Name:              Not applicable (no file system)
        Mounted:                  Not applicable (no file system)
        File System:              None
        Content (IOContent):      GUID_partition_scheme
        OS Can Be Installed:      No
        Media Type:               Generic
        Protocol:                 SATA
        SMART Status:             Verified
        Total Size:               240.1 GB (240057409536 Bytes) (exactly 468862128 512-Byte-Blocks)
        Volume Free Space:        Not applicable (no file system)
        Device Block Size:        512 Bytes
        Read-Only Media:          No
        Read-Only Volume:         Not applicable (no file system)
        Ejectable:                No
        Whole:                    Yes
        Internal:                 Yes
        Solid State:              Yes
        OS 9 Drivers:             No
        Low Level Format:         Not supported

    Basically I need to separate each line into two groups, with the colon as the separator. The regex I am using is:

        @"([A-Za-z0-9\(\) \-\/]+):([A-Za-z0-9\(\) \-\/]+).*"

    It does work, but it only picks up the first line and separates it into the two groups like I want; it stops at that point. I have tried the Multiline option, but it doesn't make any difference. I must admit I am new to the regex world. Any help is appreciated.
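
    One hedged observation: a single Match call returns only the first occurrence regardless of the Multiline option; iterating over all matches (Regex.Matches in C#) or anchoring each line with ^ and $ is what walks the whole string. The same idea in Python, with a trimmed character class and a lazy key group:

        import re

        text = """Device Identifier: disk0
        Device Node: /dev/disk0
        Total Size: 240.1 GB (240057409536 Bytes)"""

        # ^/$ plus re.MULTILINE anchor each line; the lazy key group stops at
        # the first colon, so values containing colons would still need care.
        pattern = re.compile(r"^\s*([A-Za-z0-9()/ -]+?):\s*(.+)$", re.MULTILINE)
        for key, value in pattern.findall(text):
            print(key.strip(), "=>", value)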


  • .NET CF -- Set Form height based on InputPanel state

    - by user354941
    Hi. So, I've got a C# project for Windows Mobile phones, and I'm trying to work with the InputPanel. Specifically, I've got one form with a stack of Labels and TextBoxes that collect user input, and an InputPanel that alerts me when the user opens the SIP. Everything works fine so far. When I get the message that the SIP status has changed, I want to change the height of the form, which doesn't seem to be possible. Here's my event handler for the InputPanel:

        void m_InputPanel_EnabledChanged(object sender, EventArgs e)
        {
            // :( this assignment operation doesn't work
            this.ClientSize = inputPanel1.VisibleDesktop.Size;

            // doesn't work either
            this.Size = inputPanel1.VisibleDesktop.Size;

            // this assignment operation works, but isn't very useful
            this.visibleHeight = inputPanel1.VisibleDesktop.Height;

            this.InitializeUI();
        }

    When I say that the assignment operation doesn't work, I mean that the values don't change in the debugger. I can understand that maybe I can't change the size of a Form, but I can't understand why trying to change it wouldn't throw an exception or give a compiler error. I have my Form's WindowState set to Normal instead of Maximized, but it doesn't make a difference. I have also read this page, which tells me how I'm supposed to do this: http://www.christec.co.nz/blog/archives/42. But I can't easily put all of my controls in a Panel, because I'm using a bunch of custom stuff to do alpha background controls.


  • Django upload failing on request data read error

    - by Jake
    Hi all, I've got a Django app that accepts uploads from jQuery Uploadify, a jQuery plugin that uses Flash to upload files and show a progress bar. Files under about 150k work, but bigger files always fail, almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The exception I get is below.

        exceptions.IOError: request data read error

        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 171, in _get_post
            self._load_post_and_files()
        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 137, in _load_post_and_files
            self._post, self._files = self.parse_file_upload(self.META, self.environ['wsgi.input'])
        File "/usr/lib/python2.4/site-packages/django/http/__init__.py", line 124, in parse_file_upload
            return parser.parse()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 192, in parse
            for chunk in field_stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
            output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 468, in next
            for bytes in stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
            output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 375, in next
            data = self.flo.read(self.chunk_size)
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
            return self._file.read(num_bytes)

    When running locally on the Django development server, big files work. I've tried setting

        FILE_UPLOAD_HANDLERS = ("django.core.files.uploadhandler.TemporaryFileUploadHandler",)

    in case it was the memory upload handler, but it made no difference. Does anyone know how to fix this?


  • Internet Explorer cannot 'fully' load ActiveX Control

    - by K Browne
    Context: I am migrating an installer for an ActiveX control from per-machine to per-user. I did this by making the installer write to HKCU\Software\Classes instead of HKLM\Software\Classes.

    Problem: On my machine (Windows 7 with UAC enabled), the ActiveX control loads successfully. On the other Windows 7 test machines (one with UAC enabled, one with UAC disabled), the control only 'partially' loads.

    What does "partially" mean? When a user visits a page with the ActiveX control, Internet Explorer displays a warning message in a yellow bar at the top of the window. If you click the 'Run add-on' button in the bar, the control becomes visible and begins to run, but JavaScript code that tries to access properties of the control returns the error: Library not registered.

    Differences between machines: On the dev machine, reads from HKCR\CLSID\<GUID> succeed, while on the test machines these reads fail. Reads from HKCU succeed on both dev and test machines. Reads from HKLM fail on both test and dev machines. (I collected the reads using Sysinternals Process Monitor.) Strangely, the keys that Internet Explorer fails to read are clearly visible if I use regedit to view HKCR\CLSID\<GUID> on the test machines.

    Questions: What can I do to get the per-user control to load on the test machines? What could cause this difference between the dev machine and the test machines? Why can I see the key in HKCR with regedit, but Internet Explorer cannot see it? Any help is appreciated. Thank you.


  • iPhone OpenGL and NSTimer issues

    - by Kyle
    I have an NSTimer that runs at 60 Hz. With an OpenGL scene loaded and rendering, my game can get 60fps, solid, all day long. Then, if I go and recompile the app or reload it, it will get 40fps, with the same resources loaded. I've been running into this problem for years, and I just want to know why. It's crazy, and I want to know if I should just abandon this stupid timer. Conditions are not different on my 3GS between loads; obviously the clock rate is not different between loads either, so the performance figures should be constant given a constant scene. Here is a log of my framerates.

    A good load :-) :

        FrameRate: 61
        FrameRate: 61
        FrameRate: 61
        FrameRate: 60
        FrameRate: 60
        FrameRate: 61
        FrameRate: 60
        FrameRate: 60
        FrameRate: 61
        FrameRate: 60
        FrameRate: 61

    Now I'll go ahead and do nothing, recompile, and run:

        FrameRate: 43
        FrameRate: 50
        FrameRate: 45
        FrameRate: 48
        FrameRate: 40
        FrameRate: 45
        FrameRate: 42
        FrameRate: 41
        FrameRate: 42
        FrameRate: 44
        FrameRate: 41
        FrameRate: 46

    ^- A massive difference visually. What the flying heck could cause this? SAME area of the scene, SAME camera setup. No variables are different.


  • How do I refactor this code by using Action&lt;T&gt; or Func&lt;T&gt; delegates?

    - by user330612
    I have a sample program which needs to execute three methods in a particular order and do error handling after executing each method. I did this in a straightforward fashion, without using delegates, like this:

        class Program
        {
            public static void Main()
            {
                MyTest();
            }

            private static bool MyTest()
            {
                bool result = true;
                int m = 2;
                int temp = 0;
                try
                {
                    temp = Function1(m);
                }
                catch (Exception e)
                {
                    Console.WriteLine("Caught exception for function1" + e.Message);
                    result = false;
                }
                try
                {
                    Function2(temp);
                }
                catch (Exception e)
                {
                    Console.WriteLine("Caught exception for function2" + e.Message);
                    result = false;
                }
                try
                {
                    Function3(temp);
                }
                catch (Exception e)
                {
                    Console.WriteLine("Caught exception for function3" + e.Message);
                    result = false;
                }
                return result;
            }

            public static int Function1(int x)
            {
                Console.WriteLine("Sum is calculated");
                return x + x;
            }

            public static int Function2(int x)
            {
                Console.WriteLine("Difference is calculated");
                return (x - x);
            }

            public static int Function3(int x)
            {
                return x * x;
            }
        }

    As you can see, this code looks ugly with so many try/catch blocks all doing the same thing, so I decided to use delegates to refactor it and move all the try/catch handling into one method so it looks neat. I was looking at some examples online and couldn't figure out whether I should use Action or Func delegates for this. They look similar, but I'm unable to get a clear idea of how to implement this. Any help is greatly appreciated. I'm using .NET 4.0, so I can also use anonymous methods and lambda expressions for this. Thanks.
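
    The usual shape of this refactoring, shown as a hedged Python sketch of the same pattern (in C# the step parameter would be a Func&lt;int,int&gt;, and the wrapper would own the single try/catch): pass each step as a callable to one helper that does the error handling.

        def run_step(name, func, arg, state):
            # One try/except for every step; 'state' records whether any step failed.
            try:
                return func(arg)
            except Exception as e:
                print(f"Caught exception for {name}: {e}")
                state["ok"] = False
                return 0

        def my_test():
            state = {"ok": True}
            m = 2
            temp = run_step("function1", lambda x: x + x, m, state)
            run_step("function2", lambda x: x - x, temp, state)
            run_step("function3", lambda x: x * x, temp, state)
            return state["ok"]

        print(my_test())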


  • Version control - stubs and mocks

    - by Tesserex
    For the sake of this question, I don't care about the difference between stubs, mocks, dummies, fakes, etc.

    Let's say I'm working on a project with one other person. I'm working on component A and he is working on component B. They work together, so I stub out B for testing, and he stubs out A. We're working in a DVCS, let's say Git, because that's actually the case here.

    When it comes time to merge our components together, we need to get the "real" files from my A and his B, but throw away all the fake stuff. During development, it's likely (unless I need to learn how to properly stub things) that the fakes have the same file names and class names as the real things. So my question is: what is the proper procedure for keeping the fakes under version control, and how are the components correctly merged, making sure to grab the real thing and not the fake?

    I would guess that one way is to just do the merge, expect it to say CONFLICT, and then manually delete all the fake code out of the half-merged files. But this sounds tedious and inefficient. Should the fakes not go under version control at all? Should they be ripped out just before merging? Sorry if the answer to this should be obvious or trivial; I'm just looking for a "suggested practice" here.


  • Windows console

    - by b-gen-jack-o-neill
    Hello. Well, I have a simple question; at least I hope it's simple. I have been interested in the Win32 console for a while. Our teacher told us that the Windows console is just for DOS and real-mode emulation purposes. I know that is not true, because DOS applications are run by an emulator which only uses the console to display output. Another thing I learned is that the console has been built into Windows since NT.

    But what I could not find out is how console programs are actually written to use the console. I use Visual C++ for programming (well, for learning), so the only thing I need to do to use the console is select a console project. At first I thought that Windows decides whether to run an app in console mode or tries to run it in window mode. So I created a Win32 program and tried printf(); I could not compile it. I know that, by definition, printf() prints text or variables to stdout. I also found that stdout is the console interface for output. But I could not find out what stdout actually is.

    So, basically, what I want to ask is: where is the difference between a console app and a Win32 app? I thought that Windows starts a console when it gets a call from the "console family" of functions, but obviously it does not, so there must be some code that actually commands Windows to create the console interface. And the second question is: when the console is created, how does Windows recognize which console terminal is used for which app? I mean, what actually is stdout? Is it an area in memory, or some Windows routine that is called? Thanks.
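
    A short factual aside on both questions: the console/GUI distinction is a flag in the executable's PE header (the subsystem field, set by the linker via /SUBSYSTEM:CONSOLE or /SUBSYSTEM:WINDOWS). For console-subsystem programs, Windows attaches or creates a console before main() runs, and stdout is a C runtime stream wrapping the process's standard output handle, which points at that console's output buffer. A GUI program can request the same thing explicitly; a small hedged, Windows-only sketch using the real Win32 call via Python's ctypes:

        import ctypes
        import sys

        # AllocConsole is the Win32 call that asks the OS to create a console
        # and attach it to the current process; console-subsystem executables
        # get the equivalent done by the loader before their code runs.
        if ctypes.windll.kernel32.AllocConsole():
            # Rewire this process's stdout to the new console's output device.
            sys.stdout = open("CONOUT$", "w")
            print("Hello from a console this process requested explicitly")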

