Search Results

Search found 11674 results on 467 pages for 'adding'.

Page 426/467 | < Previous Page | 422 423 424 425 426 427 428 429 430 431 432 433  | Next Page >

  • dynamic module creation

    - by intuited
    I'd like to dynamically create a module from a dictionary, and I'm wondering if adding an element to sys.modules is really the best way to do this. EG context = { a: 1, b: 2 } import types test_context_module = types.ModuleType('TestContext', 'Module created to provide a context for tests') test_context_module.__dict__.update(context) import sys sys.modules['TestContext'] = test_context_module My immediate goal in this regard is to be able to provide a context for timing test execution: import timeit timeit.Timer('a + b', 'from TestContext import *') It seems that there are other ways to do this, since the Timer constructor takes objects as well as strings. I'm still interested in learning how to do this though, since a) it has other potential applications; and b) I'm not sure exactly how to use objects with the Timer constructor; doing so may prove to be less appropriate than this approach in some circumstances. EDITS/REVELATIONS/PHOOEYS/EUREKAE: I've realized that the example code relating to running timing tests won't actually work, because import * only works at the module level, and the context in which that statement is executed is that of a function in the testit module. In other words, the globals dictionary used when executing that code is that of main, since that's where I was when I wrote the code in the interactive shell. So that rationale for figuring this out is a bit botched, but it's still a valid question. I've discovered that the code run in the first set of examples has the undesirable effect that the namespace in which the newly created module's code executes is that of the module in which it was declared, not its own module. This is like way weird, and could lead to all sorts of unexpected rattlesnakeic sketchiness. So I'm pretty sure that this is not how this sort of thing is meant to be done, if it is in fact something that the Guido doth shine upon. The similar-but-subtly-different case of dynamically loading a module from a file that is not in python's include path is quite easily accomplished using imp.load_source('NewModuleName', 'path/to/module/module_to_load.py'). This does load the module into sys.modules. However this doesn't really answer my question, because really, what if you're running python on an embedded platform with no filesystem? I'm battling a considerable case of information overload at the moment, so I could be mistaken, but there doesn't seem to be anything in the imp module that's capable of this. But the question, essentially, at this point is how to set the global (ie module) context for an object. Maybe I should ask that more specifically? And at a larger scope, how to get Python to do this while shoehorning objects into a given module?
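    A hedged sketch of the "set the module context" part: executing code with the new module's __dict__ as its globals makes that module's namespace the global scope for the code, and importing explicit names (rather than import *) sidesteps the function-scope restriction mentioned above. The 'a + b' statement and TestContext name come from the question; everything else is illustrative.

        import sys
        import timeit
        import types

        context = {'a': 1, 'b': 2}   # keys quoted so the dict builds without NameErrors

        test_context_module = types.ModuleType(
            'TestContext', 'Module created to provide a context for tests')
        test_context_module.__dict__.update(context)
        sys.modules['TestContext'] = test_context_module

        # Run code "inside" the module: passing its __dict__ as globals means the
        # code's global namespace is the module's own, not the caller's.
        exec(compile('c = a + b', '<TestContext>', 'exec'), test_context_module.__dict__)
        print(test_context_module.c)  # 3

        # timeit works too, as long as the setup imports explicit names
        # (from ... import * is only legal at module level).
        print(timeit.Timer('a + b', 'from TestContext import a, b').timeit())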

    Read the article

  • MouseWheel Event Fire

    - by Rahat
    I have a problem on calling my private method on MouseWheel event. In fact my mouse wheel event gets fired properly when i only increment a variable or display something in Title bar etc. But when i want to call a private method, that method gets called only one time which is not the requirement i want to call that method depending on the speed of scroll i.e. when scroll is done one time slowly call the private method one time but when the scroll is done in high speed call the private method more than one time depending on the scroll speed. For further explanation i am placing the sample code which displays the value of i in Title bar and add it in the Listbox control properly depending on the scroll speed but when i want to call the private method more than one time depending upon the scroll speed, that method gets called only one time. public partial class Form1 : Form { ListBox listBox1 = new ListBox(); int i = 0; public Form1() { InitializeComponent(); // Settnig ListBox control properties this.listBox1.Anchor = ((System.Windows.Forms.AnchorStyles)((((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Bottom) | System.Windows.Forms.AnchorStyles.Left) | System.Windows.Forms.AnchorStyles.Right))); this.listBox1.FormattingEnabled = true; this.listBox1.Location = new System.Drawing.Point(13, 13); this.listBox1.Name = "listBox1"; this.listBox1.Size = new System.Drawing.Size(259, 264); this.listBox1.TabIndex = 0; // Attaching Mouse Wheel Event this.listBox1.MouseWheel += new MouseEventHandler(Form1_MouseWheel); // Adding Control this.Controls.Add(this.listBox1); } void Form1_MouseWheel(object sender, MouseEventArgs e) { i++; this.Text = i.ToString(); this.listBox1.Items.Add(i.ToString()); // Uncomment the following line to call the private method // this method gets called only one time irrelevant of the // mouse wheel scroll speed. // this.LaunchThisEvent(); } private void Form1_Load(object sender, EventArgs e) { this.listBox1.Select(); } private void LaunchThisEvent() { // Display message each time // this method gets called. MessageBox.Show(i.ToString()); } } How to call the private method more than one time depending upon the speed of the mouse wheel scroll?
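    A hedged reading of the behaviour: a fast scroll does not fire more MouseWheel events, it reports a larger MouseEventArgs.Delta per event (a signed multiple of the 120-unit detent on stock Windows mice), so calling the method once per event undercounts. A sketch of a handler that calls the private method once per notch instead:

        void Form1_MouseWheel(object sender, MouseEventArgs e)
        {
            // e.Delta is +/-120 per notch by default; a fast flick arrives as
            // fewer events carrying larger Delta values.
            int notches = Math.Abs(e.Delta) / 120;
            for (int n = 0; n < notches; n++)
            {
                LaunchThisEvent();
            }
        }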

    Read the article

  • Drill down rss reader iphone

    - by bing
    Hi everyone, I have made a simple rss reader. The app loads an xml atom file in an array. Now I have added categories to my atom feed, which are first loaded in the array What is the best way to add drill down functionality programmatically. Now only the categories are loaded into the array and displayed. This is the implementation code ..... loading xml file <snip> ..... - (void)parserDidStartDocument:(NSXMLParser *)parser { NSLog(@"found file and started parsing"); } - (void)parser:(NSXMLParser *)parser parseErrorOccurred:(NSError *)parseError { NSString * errorString = [NSString stringWithFormat:@"Unable to download story feed from web site (Error code %i )", [parseError code]]; NSLog(@"error parsing XML: %@", errorString); UIAlertView * errorAlert = [[UIAlertView alloc] initWithTitle:@"Error loading content" message:errorString delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil]; [errorAlert show]; } - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName attributes:(NSDictionary *)attributeDict{ //NSLog(@"found this element: %@", elementName); currentElement = [elementName copy]; if ([elementName isEqualToString:@"entry"]) { // clear out our story item caches... Categoryentry = [[NSMutableDictionary alloc] init]; currentID = [[NSMutableString alloc] init]; currentTitle = [[NSMutableString alloc] init]; currentSummary = [[NSMutableString alloc] init]; currentContent = [[NSMutableString alloc] init]; } } - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName{ //NSLog(@"ended element: %@", elementName); if ([elementName isEqualToString:@"entry"]) { // save values to an entry, then store that item into the array... [Categoryentry setObject:currentTitle forKey:@"title"]; [Categoryentry setObject:currentID forKey:@"id"]; [Categoryentry setObject:currentSummary forKey:@"summary"]; [Categoryentry setObject:currentContent forKey:@"content"]; [categories addObject:[Categoryentry copy]]; NSLog(@"adding category: %@", currentTitle); } } - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string{ //NSLog(@"found characters: %@", string); // save the characters for the current item... if ([currentElement isEqualToString:@"title"]) { [currentTitle appendString:string]; } else if ([currentElement isEqualToString:@"id"]) { [currentID appendString:string]; } else if ([currentElement isEqualToString:@"summary"]) { [currentSummary appendString:string]; } else if ([currentElement isEqualToString:@"content"]) { [currentContent appendString:string]; } } - (void)parserDidEndDocument:(NSXMLParser *)parser { [activityIndicator stopAnimating]; [activityIndicator removeFromSuperview]; NSLog(@"all done!"); NSLog(@"categories array has %d entries", [categories count]); [newsTable reloadData]; }

    Read the article

  • C#/.NET Project - Am I setting things up correctly?

    - by JustLooking
    1st solution located: \Common\Controls\Controls.sln and its project: \Common\Controls\Common.Controls\Common.Controls.csproj Description: This is a library that contains this class: public abstract class OurUserControl : UserControl { // Variables and other getters/setters common to our UserControls } 2nd solution located: \AControl\AControl.sln and its project: \AControl\AControl\AControl.csproj Description: Of the many forms/classes, it will contain this class: using Common.Controls; namespace AControl { public partial class AControl : OurUserControl { // The implementation } } A note about adding references (not sure if this is relevant): When I add references (for projects I create), using the names above: 1. I add Common.Controls.csproj to AControl.sln 2. In AControl.sln I turn off the build of Common.Controls.csproj 3. I add the reference to Common.Controls (by project) to AControl.csproj. This is the (easiest) way I know how to get Debug versions to match Debug References, and Release versions to match Release References. Now, here is where the issue lies (the 3rd solution/project that actually utilizes the UserControl): 3rd solution located: \MainProj\MainProj.sln and its project: \MainProj\MainProj\MainProj.csproj Description: Here's a sample function in one of the classes: private void TestMethod<T>() where T : Common.Controls.OurUserControl, new() { T TheObject = new T(); TheObject.OneOfTheSetters = something; TheObject.AnotherOfTheSetters = something_else; // Do stuff with the object } We might call this function like so: private void AnotherMethod() { TestMethod<AControl.AControl>(); } This builds, runs, and works. No problem. The odd thing is after I close the project/solution and re-open it, I have red squigglies everywhere. I bring up my error list and I see tons of errors (anything that deals with AControl will be noted as an error). I'll see errors such as: The type 'AControl.AControl' cannot be used as type parameter 'T' in the generic type or method 'MainProj.MainClass.TestMethod()'. There is no implicit reference conversion from 'AControl.AControl' to 'Common.Controls.OurUserControl'. or inside the actual method (the properties located in the abstract class): 'AControl.AControl' does not contain a definition for 'OneOfTheSetters' and no extension method 'OneOfTheSetters' accepting a first argument of type 'AControl.AControl' could be found (are you missing a using directive or an assembly reference?) Meanwhile, I can still build and run the project (then the red squigglies go away until I re-open the project, or close/re-open the file). It seems to me that I might be setting up the projects incorrectly. Thoughts?

    Read the article

  • Is there a way to enforce/preserve order of XML elements in an XML Schema?

    - by MarcoS
    Let's consider the following XML Schema: <?xml version="1.0" encoding="UTF-8"?> <schema targetNamespace="http://www.example.org/library" elementFormDefault="qualified" xmlns="http://www.w3.org/2001/XMLSchema" xmlns:lib="http://www.example.org/library"> <element name="library" type="lib:libraryType"></element> <complexType name="libraryType"> <sequence> <element name="books" type="lib:booksType"></element> </sequence> </complexType> <complexType name="booksType"> <sequence> <element name="book" type="lib:bookType" maxOccurs="unbounded" minOccurs="1"></element> </sequence> </complexType> <complexType name="bookType"> <attribute name="title" type="string"></attribute> </complexType> </schema> and a corresponding XML example: <?xml version="1.0" encoding="UTF-8"?> <lib:library xmlns:lib="http://www.example.org/library" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.example.org/library src/library.xsd "> <lib:books> <lib:book title="t1"/> <lib:book title="t2"/> <lib:book title="t3"/> </lib:books> </lib:library> Is there a way to guarantee that the order of <lib:book .../> elements is preserved? I want to be sure that any parser reading the XML will return books in the specified oder, that is first the book with title="t1", then the book with title="t2", and finally the book with title="t3". As far as I know XML parsers are not required to preserve order. I wonder whether one can enforce this through XML Schema? One quick solution for me would be adding an index attribute to the <lib:book .../> element, and delegate order preservation to the application reading the XML. Comments? Suggestions?

    Read the article

  • App losing db connection

    - by DaveKub
    I'm having a weird issue with an old Delphi app losing its database connection. Actually, I think it's losing something else that then makes the connection either drop or be unusable. The app is written in Delphi 6 and uses the Direct Oracle Access component (v4.0.7.1) to connect to an Oracle 9i database. The app runs as a service and periodically queries the db using a TOracleQuery object (qryAlarmList). The method that is called to do this looks like this:

        procedure TdmMain.RefreshAlarmList;
        begin
          try
            qryAlarmList.Execute;
          except
            on E: Exception do
            begin
              FStatus := ssError;
              EventLog.LogError(-1, 'TdmMain.RefreshAlarmList', 'Message: ' + E.Message);
            end;
          end;
        end;

    It had been running fine for years, until a couple of Perl scripts were added to this machine. These scripts run every 15 minutes and look for datafiles to import into the db, and then they do some calculations and a bunch of reads/writes to/from the db. For some reason, when they are processing large amounts of data, and then the Delphi app tries to query the db, the Delphi app throws an exception at the "qryAlarmList.Execute" line in the above code listing. The exception is always:

        Access violation at address 00000000. read of address 00000000

    HOW can something that the Perl scripts are doing cause this?? There are other Perl scripts on this machine that load data using the same modules and method calls and we didn't have problems. To make it even weirder, there are two other apps that will also suddenly lose their ability to talk to the database at the same time as the Perl stuff is running. Neither of those apps run on this machine, but both are Delphi 6 apps that use the same DOA component to connect to the same database. We have other apps that connect to the same db, written in Java or C#, and they don't seem to have any problems. I've tried adding code before the '.Execute' method is called to:

    - check the session's connection (session.CheckConnection(true); always comes back as 'ccOK').
    - see whether I can access a field of the qryAlarmList object to see if maybe it's become null; can access it fine.
    - check the state of the qryAlarmList; always says it's qsIdle.

    Does anyone have any suggestions of something to try? This is driving me nuts! Dave

    Read the article

  • Google Maps API 3 How to call initialize without putting it in Body onload

    - by Bex
    Hi I am using the google maps API and have copied the examples and have ended up with a function called "initialize" that is called from the body onload. I am using the maps in a few different user controls, which are placed within content place holders, so the body tag is in the master page. Is there a way of calling initialize directly in the usercontrol rather than having to place an onload on the masterpage? Ideally I want my user control to be a stand alone control that I can just slot into pages without trying to access the master page body onload. I have tried calling the Initialize function from my page load of the user control (by adding a start up script), but the map doesn't appear. Any suggestions? My code:

        <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script>
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
        <script type="text/javascript">
            var map;
            var geocoder;

            function initialize() {
                geocoder = new google.maps.Geocoder();
                var latlng = new google.maps.LatLng(51.8052184317649, -4.965819906250006);
                var myOptions = {
                    zoom: 8,
                    center: latlng,
                    mapTypeId: google.maps.MapTypeId.ROADMAP
                };
                map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
                $.ajax({
                    type: "POST",
                    url: "/GoogleMapsService.asmx/GetPointers",
                    contentType: "application/json; charset=utf-8",
                    dataType: "json",
                    beforeSend: function () { $(".loadingData").html("<p>Loading data..</p>"); },
                    complete: function () { $(".loadingData").html(""); },
                    cache: true,
                    success: mapPoints,
                    error: onError
                });
            }

            function onError(xhr, ajaxOptions, thrownError) {
                alert(xhr.status);
                alert(xhr.responseText);
            }

            function mapPoints(response) {
                if (response.d != null) {
                    if (response.d.length > 0) {
                        for (var i = 0; i < response.d.length; i++) {
                            plotOnMap(response.d[i].Id, response.d[i].Name, response.d[i].Lat, response.d[i].Long, response.d[i].ShortDesc);
                        }
                    }
                }
            }
        </script>

    and on my test master page:

        <body onload="initialize()">
            <form runat="server">
                <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePageMethods="true"></asp:ScriptManager>
                <asp:ContentPlaceHolder ID="MainContent" runat="server">
                </asp:ContentPlaceHolder>
            </form>
        </body>
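    One hedged alternative to the master page's onload, sketched below: register the load handler from inside the user control's own script block so the control stays self-contained. google.maps.event.addDomListener is part of the Maps JavaScript API v3 that the question already loads; jQuery's ready event is another option if the map does not need to wait for the full window load.

        <script type="text/javascript">
            // Registered from within the user control; no body onload required.
            google.maps.event.addDomListener(window, 'load', initialize);
        </script>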

    Read the article

  • UCA + Natural Sorting

    - by Alix Axel
    I recently learnt that PHP already supports the Unicode Collation Algorithm via the intl extension: $array = array ( 'al', 'be', 'Alpha', 'Beta', 'Álpha', 'Àlpha', 'Älpha', '????', 'img10.png', 'img12.png', 'img1.png', 'img2.png', ); if (extension_loaded('intl') === true) { collator_asort(collator_create('root'), $array); } Array ( [0] => al [2] => Alpha [4] => Álpha [5] => Àlpha [6] => Älpha [1] => be [3] => Beta [11] => img1.png [9] => img10.png [8] => img12.png [10] => img2.png [7] => ???? ) As you can see this seems to work perfectly, even with mixed case strings! The only drawback I've encountered so far is that there is no support for natural sorting and I'm wondering what would be the best way to work around that, so that I can merge the best of the two worlds. I've tried to specify the Collator::SORT_NUMERIC sort flag but the result is way messier: collator_asort(collator_create('root'), $array, Collator::SORT_NUMERIC); Array ( [8] => img12.png [7] => ???? [9] => img10.png [10] => img2.png [11] => img1.png [6] => Älpha [5] => Àlpha [1] => be [2] => Alpha [3] => Beta [4] => Álpha [0] => al ) However, if I run the same test with only the img*.png values I get the ideal output: Array ( [3] => img1.png [2] => img2.png [1] => img10.png [0] => img12.png ) Can anyone think of a way to preserve the Unicode sorting while adding natural sorting capabilities?
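    A hedged sketch of one way to merge the two worlds: rather than passing the generic SORT_NUMERIC flag, the intl Collator itself can be told to compare runs of digits numerically, which keeps the UCA ordering for the letters while giving img1 < img2 < img10. This assumes an intl build that exposes the NUMERIC_COLLATION attribute.

        $collator = collator_create('root');
        // Compare digit substrings by numeric value instead of code point order.
        $collator->setAttribute(Collator::NUMERIC_COLLATION, Collator::ON);
        $collator->asort($array);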

    Read the article

  • Permanent mutex locking causing deadlock?

    - by Daniel
    I am having a problem with mutexes (pthread_mutex on Linux) where if a thread locks a mutex right again after unlocking it, another thread is not very successful getting a lock. I've attached test code where one mutex is created, along with two threads that in an endless loop lock the mutex, sleep for a while and unlock it again. The output I expect to see is "alive" messages from both threads, one from each (e.g. 121212121212). However what I get is that one thread gets the majority of locks (e.g. 111111222222222111111111 or just 1111111111111...). If I add a usleep(1) after the unlocking, everything works as expected. Apparently when the thread goes to SLEEP the other thread gets its lock - however this is not the way I was expecting it, as the other thread has already called pthread_mutex_lock. I suspect this is the way this is implemented, in that the active thread has priority, however it causes certain problems in this particular testcase. Is there any way to prevent it (short of adding a deliberately large enough delay or some kind of signaling) or where is my error in understanding?

        #include <pthread.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/time.h>
        #include <unistd.h>

        pthread_mutex_t mutex;

        void* threadFunction(void *id)
        {
            int count = 0;
            while (true)
            {
                pthread_mutex_lock(&mutex);
                usleep(50 * 1000);
                pthread_mutex_unlock(&mutex);
                // usleep(1);
                ++count;
                if (count % 10 == 0)
                {
                    printf("Thread %d alive\n", *(int*)id);
                    count = 0;
                }
            }
            return 0;
        }

        int main()
        {
            // create one mutex
            pthread_mutexattr_t attr;
            pthread_mutexattr_init(&attr);
            pthread_mutex_init(&mutex, &attr);

            // create two threads
            pthread_t thread1;
            pthread_t thread2;
            pthread_attr_t attributes;
            pthread_attr_init(&attributes);
            int id1 = 1, id2 = 2;
            pthread_create(&thread1, &attributes, &threadFunction, &id1);
            pthread_create(&thread2, &attributes, &threadFunction, &id2);
            pthread_attr_destroy(&attributes);
            sleep(1000);
            return 0;
        }
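    For what it's worth, POSIX mutexes make no fairness promise: the unlocking thread is still running on the CPU, so it usually wins the immediate re-lock before the sleeping waiter is ever scheduled. If strict alternation is really the goal, one hedged sketch is an explicit hand-off with a condition variable; the variable and function names below are illustrative, not from the question.

        pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
        pthread_cond_t  turn_cv = PTHREAD_COND_INITIALIZER;
        int turn = 1;                      /* id of the thread allowed to work next */

        void do_protected_work(int my_id, int other_id)
        {
            pthread_mutex_lock(&mutex);
            while (turn != my_id)          /* wait until it is this thread's turn */
                pthread_cond_wait(&turn_cv, &mutex);
            usleep(50 * 1000);             /* the protected work from the test code */
            turn = other_id;               /* hand the turn to the other thread */
            pthread_cond_signal(&turn_cv);
            pthread_mutex_unlock(&mutex);
        }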

    Read the article

  • Settings designer file complains when protecting configuration for connectionStrings in App.Config i

    - by Joe
    Hi, I am trying to encrypt Configuration Information Using Protected Configuration in Visual Studio 2010. I have the following info speicifed in the App.Config file: <connectionStrings configProtectionProvider="TheProviderName"> <EncryptedData> <CipherData> <CipherValue>VALUE GOES HERE</CipherValue> </CipherData> </EncryptedData> </connectionStrings> <appSettings configProtectionProvider="TheProviderName"> <EncryptedData> <CipherData> <CipherValue>VALUE GOES HERE</CipherValue> </CipherData> </EncryptedData> </appSettings> However, when I then go to the Settings area of the Projects Properties to view the settings in the Designer, I get prompted with the following error "An error occured while reading the App.config file. The file might be corrupted or contain invalid XML." I understand that my changes are causing the error, however, is there anyway I can bypass that the information is not read into at design view? (Of course the best way would be to make the tags be recognized by the designer, is there any way to do this?) I tried adding <connectionStrings configProtectionProvider="TheProviderName" xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0"> to connectionStrings as well as to the appSettings, but with no luck, the intellisense is bypassed in the config file, but the designer still complains. I would be satisfied if the designer would not complain about this "error", which is not actually an error because Microsoft states here that it should work. ASP.NET 2.0 provides a new feature, called protected configuration, that enables you to encrypt sensitive information in a configuration file. Although primarily designed for ASP.NET, protected configuration can also be used to encrypt configuration file sections in Windows applications. For a detailed description of the new protected configuration capabilities, see Encrypting Configuration Information Using Protected Configuration. And yes, it does work to encrypt it and to decrypt it and use it, it is just very annoying and frustrating that the designer complains about it. Anyone who knows which xsd file that is used (if used) to verify the contents of the App.config file in the design view? Any help appreciated.

    Read the article

  • How can I connect to MSMQ over a workgroup?

    - by cyclotis04
    I'm writing a simple console client-server app using MSMQ. I'm attempting to run it over the workgroup we have set up. They run just fine when run on the same computer, but I can't get them to connect over the network. I've tried adding Direct=, OS:, and a bunch of combinations of other prefaces, but I'm running out of ideas, and obviously don't know the right way to do it. My queue's don't have GUIDs, which is also slightly confusing. Whenever I attempt to connect to a remote machine, I get an invalid queue name message. What do I have to do to make this work? Server: class Program { static string _queue = @"\Private$\qim"; static MessageQueue _mq; static readonly object _mqLock = new object(); static void Main(string[] args) { _queue = Dns.GetHostName() + _queue; lock (_mqLock) { if (!MessageQueue.Exists(_queue)) _mq = MessageQueue.Create(_queue); else _mq = new MessageQueue(_queue); } Console.Write("Starting server at {0}:\n\n", _mq.Path); _mq.Formatter = new BinaryMessageFormatter(); _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive); while (Console.ReadKey().Key != ConsoleKey.Escape) { } _mq.Close(); } static void OnReceive(IAsyncResult result) { Message msg; lock (_mqLock) { try { msg = _mq.EndReceive(result); Console.Write(msg.Body); } catch (Exception ex) { Console.Write("\n" + ex.Message + "\n"); } } _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive); } } Client: class Program { static MessageQueue _mq; static void Main(string[] args) { string queue; while (_mq == null) { Console.Write("Enter the queue name:\n"); queue = Console.ReadLine(); //queue += @"\Private$\qim"; try { if (MessageQueue.Exists(queue)) _mq = new MessageQueue(queue); } catch (Exception ex) { Console.Write("\n" + ex.Message + "\n"); _mq = null; } } Console.Write("Connected. Begin typing.\n\n"); _mq.Formatter = new BinaryMessageFormatter(); ConsoleKeyInfo key = new ConsoleKeyInfo(); while (key.Key != ConsoleKey.Escape) { key = Console.ReadKey(); _mq.Send(key.KeyChar.ToString()); } } }
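    A hedged guess at the naming problem: remote private queues are normally opened with a direct format name, and MessageQueue.Exists/Create cannot be used with format names at all, so the client should build the path directly rather than probing for the queue first. "ServerName" below is a placeholder for the remote machine name.

        // Client side: open the remote private queue by direct format name.
        MessageQueue mq = new MessageQueue(@"FormatName:Direct=OS:ServerName\Private$\qim");
        mq.Formatter = new BinaryMessageFormatter();
        mq.Send("hello");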

    Read the article

  • Upgrading Entity Framework 1.0 to 4.0 to include foreign keys

    - by duthiega
    Currently I've been working with Entity Framework 1.0 which is located under a service façade. Below is one of the save methods I've created to either update or insert the device in question. This currently works but, I can't help feel that its a bit of a hack having to set the referenced properties to null then re-attach them just to get an insert to work. The changedDevice already holds these values, so why do I need to assign them again. So, I thought I'll update the model to EF4. That way I can just directly access the foreign keys. However, on doing this I've found that there doesn't seem to be an easy way to add the foreign keys except by removing the entity from the diagram and re-adding it. I don't want to do this as I've already been through all the entity properties renaming them from the DB column names. Can anyone help? /// <summary> /// Saves the non network device. /// </summary> /// <param name="nonNetworkDeviceDto">The non network device dto.</param> public void SaveNonNetworkDevice(NonNetworkDeviceDto nonNetworkDeviceDto) { using (var context = new AssetNetworkEntities2()) { var changedDevice = TransformationHelper.ConvertNonNetworkDeviceDtoToEntity(nonNetworkDeviceDto); if (!nonNetworkDeviceDto.DeviceId.Equals(-1)) { var originalDevice = context.NonNetworkDevices.Include("Status").Include("NonNetworkType").FirstOrDefault( d => d.DeviceId.Equals(nonNetworkDeviceDto.DeviceId)); context.ApplyAllReferencedPropertyChanges(originalDevice, changedDevice); context.ApplyCurrentValues(originalDevice.EntityKey.EntitySetName, changedDevice); } else { var maxNetworkDevice = context.NonNetworkDevices.OrderBy("it.DeviceId DESC").First(); changedDevice.DeviceId = maxNetworkDevice.DeviceId + 1; var status = changedDevice.Status; var nonNetworkType = changedDevice.NonNetworkType; changedDevice.Status = null; changedDevice.NonNetworkType = null; context.AttachTo("DeviceStatuses", status); if (nonNetworkType != null) { context.AttachTo("NonNetworkTypes", nonNetworkType); } changedDevice.Status = status; changedDevice.NonNetworkType = nonNetworkType; context.AddToNonNetworkDevices(changedDevice); } context.SaveChanges(); } }

    Read the article

  • Programmatically created GridView cells don't scale to fit screen

    - by ChrisAshton84
    I've read a ton of other responses about GridView already but almost all deal with the XML format (which I had working). I wanted to learn the programmatic way of designing Android, though, so I'm trying to build most of this app without XML. All I define in XML are the GridView and the first TextView. After that I add the other LinearLayouts in onCreate(). I would like to have a 2 column GridView containing a title and several (4 for now) LinearLayouts. I realize from documentation that the GridView won't scale cells unless they have a gravity set, but no matter how I try to do this I can't get it to work. After adding two cells, my GridView tree would look like: GridView -> TextView (colspan 2) -> LinearLayout (Vertical) -> TextView -> LinearLayout (Horizontal) -> TextView -> TextView -> LinearLayout (Horizontal) -> TextView -> TextView -> LinearLayout (Vertical) -> TextView -> LinearLayout (Horizontal) -> TextView -> TextView -> LinearLayout (Horizontal) -> TextView -> TextView I've tried about every combination of FILL and FILL_HORIZONTAL I could think of on either the outermost LinearLayouts, or also trying on the TextViews and inner LinearLayouts. No matter what I do, the LinearLayouts I add are always sized as small as possible and pushed to the left of the screen. Meanwhile, the first TextView (the colspan 2 one) with only CENTER_HORIZONTAL set is correctly centered in the screen. Its as if that TextView gets one idea of the column widths and the LinearLayouts get another! (If I add the FILL Gravity for it, it also moves all the way left.) I believe I had this working accidentally with 100% XML, but I would prefer not to switch back unless this is known to not work programatically. Any ideas what I can try to get this working?

    Read the article

  • Backbone.js (model instanceof Model) via Chrome Extension

    - by Leoncelot
    Hey guys, This is my first time ever posting on this site and the problem I'm about to pose is difficult to articulate due to the set of variables required to arrive at it. Let me just quickly explain the framework I'm working with. I'm building a Chrome Extension using jQuery, jQuery-ui, and Backbone The entire JS suite for the extension is written in CoffeeScript and I'm utilizing Rails and the asset pipeline to manage it all. This means that when I want to deploy my extension code I run rake assets:precompile and copy the resulting compressed JS to my extensions Directory. The nice thing about this approach is that I can actually run the extension js from inside my Rails app by including the library. This is basically the same as my extensions background.js file which injects the js as a content script. Anyway, the problem I've recently encountered was when I tried testing my extension on my buddy's site, whiskeynotes.com. What I was noticing is that my backbone models were being mangled upon adding them to their respective collections. So something like this.collection.add(new SomeModel) created some nonsense version of my model. This code eventually runs into Backbone's prepareModel code _prepareModel: function(model, options) { options || (options = {}); if (!(model instanceof Model)) { var attrs = model; options.collection = this; model = new this.model(attrs, options); if (!model._validate(model.attributes, options)) model = false; } else if (!model.collection) { model.collection = this; } return model; }, Now, in most of the sites on which I've tested the extension, the result is normal, however on my buddy's site the !(model instance Model) evaluates to true even though it is actually an instance of the correct class. The consequence is a super messed up version of the model where the model's attributes is a reference to the models collection (strange right?). Needless to say, all kinds of crazy things were happening afterward. Why this is occurring is beyond me. However changing this line (!(model instanceof Model)) to (!(model instanceof Backbone.Model)) seems to fix the problem. I thought maybe it had something to do with the Flot library (jQuery graph library) creating their own version of 'Model' but looking through the source yielded no instances of it. I'm just curious as to why this would happen. And does it make sense to add this little change to the Backbone source? Update: I just realized that the "fix" doesn't actually work. I can also add that my backbone Models are namespaced in a wrapping object so that declaration looks something like class SomeNamespace.SomeModel extends Backbone.Model

    Read the article

  • Spring MVC, REST, and HATEOAS

    - by SingleShot
    I'm struggling with the correct way to implement Spring MVC 3.x RESTful services with HATEOAS. Consider the following constraints: I don't want my domain entities polluted with web/rest constructs. I don't want my controllers polluted with view constructs. I want to support multiple views. Currently I have a nicely put together MVC app without HATEOAS. Domain entities are pure POJOs without any view or web/rest concepts embedded. For example: class User { public String getName() {...} public String setName(String name) {...} ... } My controllers are also simple. They provide routing and status, and delegate to Spring's view resolution framework. Note my application supports JSON, XML, and HTML, yet no domain entities or controllers have embedded view information: @Controller @RequestMapping("/users") class UserController { @RequestMapping public ModelAndView getAllUsers() { List<User> users = userRepository.findAll(); return new ModelAndView("users/index", "users", users); } @RequestMapping("/{id}") public ModelAndView getUser(@PathVariable Long id) { User user = userRepository.findById(id); return new ModelAndView("users/show", "user", user); } } So, now my issue - I'm not sure of a clean way to support HATEOAS. Here's an example. Let's say when the client asks for a User in JSON format, it comes out like this: { firstName: "John", lastName: "Smith" } Let's also say that when I support HATEOAS, I want the JSON to contain a simple "self" link that the client can then use to refresh the object, delete it, or something else. It might also have a "friends" link indicating how to get the user's list of friends: { firstName: "John", lastName: "Smith", links: [ { rel: "self", ref: "http://myserver/users/1" }, { rel: "friends", ref: "http://myserver/users/1/friends" } ] } Somehow I want to attach links to my object. I feel the right place to do this is in the controller layer as the controllers all know the correct URLs. Additionally, since I support multiple views, I feel like the right thing to do is somehow decorate my domain entities in the controller before they are converted to JSON/XML/whatever in Spring's view resolution framework. One way to do this might be to wrap the POJO in question with a generic Resource class that contains a list of links. Some view tweaking would be required to crunch it into the format I want, but its doable. Unfortunately nested resources could not be wrapped in this way. Other things that come to mind include adding links to the ModelAndView, and then customizing each of Spring's out-of-the-box view resolvers to stuff links into the generated JSON/XML/etc. What I don't want is to be constantly hand-crafting JSON/XML/etc. to accommodate various links as they come and go during the course of development. Thoughts?
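    A hedged sketch of the "wrap the POJO in a generic Resource" idea from the question, with illustrative names: the controller decorates the entity with links it builds from the request URL and hands the wrapper to view resolution, so the domain class stays free of web concerns.

        import java.util.ArrayList;
        import java.util.List;

        public class ResourceWrapper<T> {
            private final T content;
            private final List<Link> links = new ArrayList<Link>();

            public ResourceWrapper(T content) { this.content = content; }

            public ResourceWrapper<T> addLink(String rel, String href) {
                links.add(new Link(rel, href));
                return this;
            }

            public T getContent() { return content; }
            public List<Link> getLinks() { return links; }

            public static class Link {
                private final String rel;
                private final String href;
                public Link(String rel, String href) { this.rel = rel; this.href = href; }
                public String getRel() { return rel; }
                public String getHref() { return href; }
            }
        }

    A controller could then return new ResourceWrapper<User>(user).addLink("self", "http://myserver/users/1") and let each view serializer flatten content plus links; nested resources would still need their own wrapping, which is the limitation the question already notes.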

    Read the article

  • Clicking the TableView leads you to another View

    - by lakesh
    I am a newbie to iPhone development. I have already created an UITableView. I have wired everything up and included the delegate and datasource. However, instead of adding a detail view accessory by using UITableViewCellAccessoryDetailClosureButton, I would like to click the UITableViewCell and it should lead to another view with more details about the UITableViewCell. My view controller looks like this: ViewController.h #import <UIKit/UIKit.h> @interface ViewController : UIViewController<UITableViewDataSource,UITableViewDelegate>{ NSArray *tableItems; NSArray *images; } @property (nonatomic,retain) NSArray *tableItems; @property (nonatomic,retain) NSArray *images; @end ViewController.m #import "ViewController.h" #import <QuartzCore/QuartzCore.h> @interface ViewController () @end @implementation ViewController @synthesize tableItems,images; - (void)viewDidLoad { [super viewDidLoad]; tableItems = [[NSArray alloc] initWithObjects:@"Item1",@"Item2",@"Item3",nil]; images = [[NSArray alloc] initWithObjects:[UIImage imageNamed:@"clock.png"],[UIImage imageNamed:@"eye.png"],[UIImage imageNamed:@"target.png"],nil]; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section{ return tableItems.count; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath{ //Step 1:Check whether if we can reuse a cell UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"cell"]; //If there are no new cells to reuse,create a new one if(cell == nil) { cell = [[UITableViewCell alloc] initWithStyle:(UITableViewCellStyleDefault) reuseIdentifier:@"cell"]; UIView *v = [[UIView alloc] init]; v.backgroundColor = [UIColor redColor]; cell.selectedBackgroundView = v; //changing the radius of the corners //cell.layer.cornerRadius = 10; } //Set the image in the row cell.imageView.image = [images objectAtIndex:indexPath.row]; //Step 3: Set the cell text content cell.textLabel.text = [tableItems objectAtIndex:indexPath.row]; //Step 4: Return the row return cell; } - (void)tableView:(UITableView *)tableView willDisplayCell:(UITableViewCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath{ cell.backgroundColor = [ UIColor greenColor]; } @end Need some guidance on this.. Thanks.. Please pardon me if this is a stupid question.
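    A hedged sketch of the usual drill-down pattern: implement the delegate's row-selection callback and push a second view controller. It assumes the app embeds ViewController in a UINavigationController and that a DetailViewController class with an item property exists; both names are illustrative, not from the question.

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            [tableView deselectRowAtIndexPath:indexPath animated:YES];

            DetailViewController *detail = [[DetailViewController alloc] init];
            detail.item = [tableItems objectAtIndex:indexPath.row];   // hand over the tapped row's data
            [self.navigationController pushViewController:detail animated:YES];
        }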

    Read the article

  • Optimize slow ranking query

    - by Juan Pablo Califano
    I need to optimize a query for a ranking that is taking forever (the query itself works, but I know it's awful and I've just tried it with a good number of records and it gives a timeout). I'll briefly explain the model. I have 3 tables: player, team and player_team. I have players, that can belong to a team. Obvious as it sounds, players are stored in the player table and teams in team. In my app, each player can switch teams at any time, and a log has to be maintained. However, a player is considered to belong to only one team at a given time. The current team of a player is the last one he's joined. The structure of player and team is not relevant, I think. I have an id column PK in each. In player_team I have:

    - id (PK)
    - player_id (FK -> player.id)
    - team_id (FK -> team.id)

    Now, each team is assigned a point for each player that has joined. So, now, I want to get a ranking of the first N teams with the biggest number of players. My first idea was to get first the current players from player_team (that is one record top for each player; this record must be the player's current team). I failed to find a simple way to do it (tried GROUP BY player_team.player_id HAVING player_team.id = MAX(player_team.id), but that didn't cut it). I tried a number of queries that didn't work, but managed to get this working:

        SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
        FROM player_team pt
        JOIN player p ON (p.id = pt.player_id)
        JOIN team t ON (t.id = pt.team_id)
        WHERE pt.id IN (
            SELECT MAX(J.id)
            FROM player_team J
            GROUP BY J.player_id
        )
        GROUP BY pt.team_id
        ORDER BY total DESC
        LIMIT 50

    As I said, it works but looks very bad and performs worse, so I'm sure there must be a better way to go. Anyone has any ideas for optimizing this? I'm using mysql, by the way. Thanks in advance. Adding the explain:

        id | select_type        | table | type   | possible_keys                                | key              | key_len | ref          | rows   | Extra
        ---+--------------------+-------+--------+----------------------------------------------+------------------+---------+--------------+--------+---------------------------------
        1  | PRIMARY            | t     | ALL    | PRIMARY                                      | NULL             | NULL    | NULL         | 5000   | Using temporary; Using filesort
        1  | PRIMARY            | pt    | ref    | FKplayer_pt77082,FKplayer_pt265938,new_index | FKplayer_pt77082 | 4       | t.id         | 30     | Using where
        1  | PRIMARY            | p     | eq_ref | PRIMARY                                      | PRIMARY          | 4       | pt.player_id | 1      |
        2  | DEPENDENT SUBQUERY | J     | index  | NULL                                         | new_index        | 8       | NULL         | 150000 | Using index
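    One hedged rewrite, sketched below: resolve each player's current team once in a derived table and join against it, instead of re-running the MAX(id) subquery for every row via IN (...). A composite index on player_team (player_id, id) should let the derived table be read from the index alone; owner_uid and any other per-team detail can be joined back afterwards as in the original query.

        SELECT t.id AS team_id,
               t.color,
               COUNT(*) AS total
        FROM (SELECT player_id, MAX(id) AS last_id
              FROM player_team
              GROUP BY player_id) latest        -- one row per player: their current team record
        JOIN player_team pt ON pt.id = latest.last_id
        JOIN team t         ON t.id  = pt.team_id
        GROUP BY t.id, t.color
        ORDER BY total DESC
        LIMIT 50;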

    Read the article

  • What block is not being tested in my test method? (VS08 Test Framework)

    - by daft
    I have the following code: private void SetControlNumbers() { string controlString = ""; int numberLength = PersonNummer.Length; switch (numberLength) { case (10) : controlString = PersonNummer.Substring(6, 4); break; case (11) : controlString = PersonNummer.Substring(7, 4); break; case (12) : controlString = PersonNummer.Substring(8, 4); break; case (13) : controlString = PersonNummer.Substring(9, 4); break; } ControlNumbers = Convert.ToInt32(controlString); } Which is tested using the following test methods: [TestMethod()] public void SetControlNumbers_Length10() { string pNummer = "9999999999"; Personnummer target = new Personnummer(pNummer); Assert.AreEqual(9999, target.ControlNumbers); } [TestMethod()] public void SetControlNumbers_Length11() { string pNummer = "999999-9999"; Personnummer target = new Personnummer(pNummer); Assert.AreEqual(9999, target.ControlNumbers); } [TestMethod()] public void SetControlNumbers_Length12() { string pNummer = "199999999999"; Personnummer target = new Personnummer(pNummer); Assert.AreEqual(9999, target.ControlNumbers); } [TestMethod()] public void SetControlNumbers_Length13() { string pNummer = "1999999-9999"; Personnummer target = new Personnummer(pNummer); Assert.AreEqual(9999, target.ControlNumbers); } For some reason Visual Studio says that I have 1 block that is not tested despite showing all code in the method under test in blue (ie. the code is covered in my unit tests). Is this because of the fact that I don't have a default value defined in the switch? When the SetControlNumbers() method is called, the string on which it operates have already been validated and checked to see that it conforms to the specification and that the various Substring calls in the switch will generate a string containing 4 chars. I'm just curious as to why it says there is 1 untested block. I'm no unit test guru at all, so I'd love some feedback on this. Also, how can I improve on the conversion after the switch to make it safer other than adding a try-catch block and check for FormatExceptions and OverflowExceptions?
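    A hedged guess at the uncovered block: a switch with no default still has an implicit "no case matched" branch, and none of the four tests can reach it because every input length is 10-13. Making that branch explicit usually satisfies the coverage tool and also keeps Convert.ToInt32 from ever seeing an empty string; the exception message below is illustrative.

        switch (numberLength)
        {
            case 10: controlString = PersonNummer.Substring(6, 4); break;
            case 11: controlString = PersonNummer.Substring(7, 4); break;
            case 12: controlString = PersonNummer.Substring(8, 4); break;
            case 13: controlString = PersonNummer.Substring(9, 4); break;
            default:
                // Unreachable for validated input, but it makes the hidden
                // fall-through branch visible and testable.
                throw new ArgumentException("Unexpected personnummer length: " + numberLength);
        }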

    Read the article

  • How do I Print a dynamically created Flex component/chart that is not being displayed on the screen?

    - by Adam Jones
    I have a several chart components that I have created in Flex. Basically I have set up a special UI that allows the user to select which of these charts they want to print. When they press the print button each of the selected charts is created dynamically then added to a container. Then I send this container off to FlexPrintJob. i.e. private function prePrint():void { var printSelection:Box = new Box(); printSelection.percentHeight = 100; printSelection.percentWidth = 100; printSelection.visible = true; if (this.chkMyChart1.selected) { var rptMyChart1:Chart1Panel = new Chart1Panel(); rptMyChart1.percentHeight = 100; rptMyChart1.percentWidth = 100; rptMyChart1.visible = true; printSelection.addChild(rptMyChart1); } print(printSelection); } private function print(container:Box):void { var job:FlexPrintJob; job = new FlexPrintJob(); if (job.start()) { job.addObject(container, FlexPrintJobScaleType.MATCH_WIDTH); job.send(); } } This code works fine if the chart is actually displayed somewhere on the page but adding it dynamically as shown above does not. The print dialog will appear but nothing happens when I press OK. So I really have two questions: Is it possible to print flex components/charts when they are not visible on the screen? If so, how do I do it / what am I doing wrong? UPDATE: Well, at least one thing wrong is my use of the percentages in the width and height. Using percentages doesn't really make sense when the Box is not contained in another object. Changing the height and width to fixed values actually allows the printing to progress and solves my initial problem. printSelection.height = 100; printSelection.width = 100; But a new problem arises in that instead of seeing my chart, I see a black box instead. I have previously resolved this issue by setting the background colour of the chart to #FFFFFF but this doesn't seem to be working this time. UPDATE 2: I have seen some examples on the adobe site that add the container to the application but don't include it in the layout. This looks like the way to go. i.e. printSelection.includeInLayout = false; addChild(printSelection);

    Read the article

  • How do you unit test the real world?

    - by Kim Sun-wu
    I'm primarily a C++ coder, and thus far, have managed without really writing tests for all of my code. I've decided this is a Bad Idea(tm), after adding new features that subtly broke old features, or, depending on how you wish to look at it, introduced some new "features" of their own. But, unit testing seems to be an extremely brittle mechanism. You can test for something in "perfect" conditions, but you don't get to see how your code performs when stuff breaks. A for instance is a crawler, let's say it crawls a few specific sites, for data X. Do you simply save sample pages, test against those, and hope that the sites never change? This would work fine as regression tests, but, what sort of tests would you write to constantly check those sites live and let you know when the application isn't doing it's job because the site changed something, that now causes your application to crash? Wouldn't you want your test suite to monitor the intent of the code? The above example is a bit contrived, and something I haven't run into (in case you haven't guessed). Let me pick something I have, though. How do you test an application will do its job in the face of a degraded network stack? That is, say you have a moderate amount of packet loss, for one reason or the other, and you have a function DoSomethingOverTheNetwork() which is supposed to degrade gracefully when the stack isn't performing as it's supposed to; but does it? The developer tests it personally by purposely setting up a gateway that drops packets to simulate a bad network when he first writes it. A few months later, someone checks in some code that modifies something subtly, so the degradation isn't detected in time, or, the application doesn't even recognize the degradation, this is never caught, because you can't run real world tests like this using unit tests, can you? Further, how about file corruption? Let's say you're storing a list of servers in a file, and the checksum looks okay, but the data isn't really. You want the code to handle that, you write some code that you think does that. How do you test that it does exactly that for the life of the application? Can you? Hence, brittleness. Unit tests seem to test the code only in perfect conditions(and this is promoted, with mock objects and such), not what they'll face in the wild. Don't get me wrong, I think unit tests are great, but a test suite composed only of them seems to be a smart way to introduce subtle bugs in your code while feeling overconfident about it's reliability. How do I address the above situations? If unit tests aren't the answer, what is? Thanks!

    Read the article

  • ASP.Net / MySQL : Translating content into several languages

    - by philwilks
    I have an ASP.Net website which uses a MySQL database for the back end. The website is an English e-commerce system, and we are looking at the possibility of translating it into about five other languages (French, Spanish etc). We will be getting human translators to perform the translation - we've looked at automated services but these aren't good enough. The static text on the site (e.g. headings, buttons etc) can easily be served up in multiple languages via .Net's built in localization features (resx files etc). The thing that I'm not so sure about is how best to store and retrieve the multi-language content in the database. For example, there is a products table that includes these fields...

        productId (int)
        categoryId (int)
        title (varchar)
        summary (varchar)
        description (text)
        features (text)

    The title, summary, description and features text would need to be available in all the different languages. Here are the two options that I've come up with...

    Create additional field for each language
    For example we could have titleEn, titleFr, titleEs etc for all the languages, and repeat this for all text columns. We would then adapt our code to use the appropriate field depending on the language selected. This feels a bit hacky, and also would lead to some very large tables. Also, if we wanted to add additional languages in the future it would be time consuming to add even more columns.

    Use a lookup table
    We could create a new table with the following format...

        textId | languageId | content
        -------+------------+---------
        10     | EN         | Car
        10     | FR         | Voiture
        10     | ES         | Coche
        11     | EN         | Bike
        11     | FR         | Vélo

    We'd then adapt our products table to reference the appropriate textId for the title, summary, description and features instead of having the text stored in the product table. This seems much more elegant, but I can't think of a simple way of getting this data out of the database and onto the page without using complex SQL statements. Of course adding new languages in the future would be very simple compared to the previous option. I'd be very grateful for any suggestions about the best way to achieve this! Is there any "best practice" guidance out there? Has anyone done this before?
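    For the "complex SQL" worry about the lookup-table option, a hedged sketch of the read side: one join per translated column, filtered on the requested language. The table and column names (translations, titleTextId, ...) are illustrative; a LEFT JOIN plus COALESCE against the English row would give a fallback when a translation is missing.

        SELECT p.productId,
               t1.content AS title,
               t2.content AS summary,
               t3.content AS description
        FROM products p
        JOIN translations t1 ON t1.textId = p.titleTextId       AND t1.languageId = 'FR'
        JOIN translations t2 ON t2.textId = p.summaryTextId     AND t2.languageId = 'FR'
        JOIN translations t3 ON t3.textId = p.descriptionTextId AND t3.languageId = 'FR'
        WHERE p.productId = 42;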

    Read the article

  • MVC EditorFor bind to type in array

    - by BradBrening
    I have a ViewModel that, among other properties, contains an array of 'EmailAddress' objects. EmailAddress itself has properties such as "Address", and "IsPrimary". My view model breakdown is: public class UserDetailsViewModel { public BUser User { get; set; } public string[] Roles { get; set; } public EmailAddress[] EmailAddresses { get; set; } } I am showing a "User Details" page that is pretty standard. For example, I'm displaying data using @Html.DisplayFor(model => model.User.UserName) and @Html.DisplayFor(model => model.User.Comment) I also have a section on the page that lists all of the EmailAddress objects associated with the user: @if(Model.EmailAddresses.Length > 0) { foreach (var address in Model.EmailAddresses) { <div> @Html.DisplayFor(model => address.Address) </div> } } else { <div class="center">User does not have any email addresses.</div> } My problem is that I would like to show an "Add Email Address" form above the list of email addresses. I thought I would take the "normal" approach thusly: @using(Html.BeginForm(new { id=Model.User.UserName, action="AddUserEmailAddress" })) { <text>Address:</text> @Html.EditorFor(model => ** HERE I AM STUCK **) <input type="submit" value="Add Email" class="button" /> } As you may be able to tell, I am stuck here. I've tried model => Model.EmailAddresses[0] and model => Model.EmailAddresses.FirstOrDefault(). Both of these have failed horribly. I am sure that I am going about this all wrong. I've even thought of adding a "dummy" property to my ViewModel of type EmailAddress just so that I can bind to that in my EditorFor - but that seems to be a really bad hack. There has to be something simple that I'm overlooking! I would appreciate any help anyone can offer with this matter. Thank you in advance!
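    One hedged way around the binding problem, sketched below: give the view model a dedicated slot for the address being added instead of pointing EditorFor at an element of the existing array. The NewEmailAddress name is illustrative, not part of the original model.

        public class UserDetailsViewModel
        {
            public BUser User { get; set; }
            public string[] Roles { get; set; }
            public EmailAddress[] EmailAddresses { get; set; }

            // Bound only by the "Add Email Address" form; existing addresses stay display-only.
            public EmailAddress NewEmailAddress { get; set; }
        }

    The form can then use @Html.EditorFor(model => model.NewEmailAddress.Address), and the AddUserEmailAddress action receives the posted value through normal model binding.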

    Read the article

  • Best way to version control a WCF application with Git?

    - by Sam
    Suppose I have the following projects. The format is [ProjectName] : [ProjectDependency1, ProjectDependency2, etc.]

        // Service
        CoolLibrary
        WcfApp.Core
        WcfApp.Contracts
        WcfApp.Services : CoolLibrary, WcfApp.Core, WcfApp.Contracts

        // Clients
        CustomerX.App : WcfApp.Contracts
        CustomerY.App : WcfApp.Contracts
        CustomerZ.App : WcfApp.Contracts

    (On a side note, WcfApp.Contracts should not depend on WcfApp.Core, right? Else CustomerX.App would also depend on and thus be exposed to the service domain model?) (CoolLibrary is shared with other applications, so I can't just put it inside of WcfApp.Services.) All of this code is in-house. I was thinking of having 6 repositories for this. The format is [repository folder name] : [Projects included in repository.]

        1. CoolLibrary.git : CoolLibrary
        2. WcfApp.Contracts.git : WcfApp.Contracts
        3. WcfApp.git : WcfApp.Core, WcfApp.Services
        4. CustomerX.App.git : CustomerX.App
        5. CustomerY.App.git : CustomerY.App
        6. CustomerZ.App.git : CustomerZ.App

    How should I manage my project dependencies? I see three options: I could use binaries which I have to manually copy to each dependent repository. This would be easiest at the start, but my repositories would be a little bloated, and it'd become more tedious as I add more client apps for customers. I could import dependent code as submodules. This is what I will probably end up doing, although I keep reading on the web that submodules are a hassle. I also read that I can use something called the subtree merge strategy, but I am not sure how it is different from just cloning the repo into a subdirectory and adding the subdirectory to .gitignore. Is the difference that the subtree is recorded in the master repository, so (for example) cloning it from a different location will also pull the subtree? I know I asked a lot of questions in this post, but the most important two questions I have are:

    1. Am I using the right number and layout of repositories? Should I use less or more?
    2. Which of the three dependency management strategies would you recommend? Is there another strategy I haven't considered?
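    If the submodule route is chosen, a hedged sketch of the day-to-day commands (paths and URLs are illustrative):

        # Pin the shared contracts into a client repository as a submodule.
        cd CustomerX.App
        git submodule add git@yourserver:WcfApp.Contracts.git lib/WcfApp.Contracts
        git commit -m "Add WcfApp.Contracts as a submodule"

        # Later, move the pinned revision forward after the contracts change.
        cd lib/WcfApp.Contracts
        git pull origin master
        cd ../..
        git add lib/WcfApp.Contracts
        git commit -m "Bump WcfApp.Contracts to the new revision"

    Subtree merging differs in that the dependency's history is merged into the consuming repository itself, with no separate .gitmodules pointer for clones to fetch, at the cost of making it harder to push changes back upstream.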

    Read the article

  • Is there a way to keep track of the ordering of items in a dictionary?

    - by Corpsekicker
    I have a Dictionary<Guid, ElementViewModel>. (ElementViewModel is our own complex type.) I add items to the dictionary with a stock standard items.Add(Guid.NewGuid, new ElementViewModel() { /*setters go here*/ });, At a later stage I remove some or all of these items. A simplistic view of my ElementViewModel is this: class ElementViewModel { Guid Id { get; set; } string Name { get; set; } int SequenceNo { get; set; } } It may be significant to mention that the SequenceNos are compacted within the collection after adding, in case other operations like moving and copying took place. {1, 5, 6} - {1, 2, 3} A simplistic view of my remove operation is: public void RemoveElementViewModel(IEnumerable<ElementViewModel> elementsToDelete) { foreach (var elementViewModel in elementsToDelete) items.Remove(elementViewModel.Id); CompactSequenceNumbers(); } I will illustrate the problem with an example: I add 3 items to the dictionary: var newGuid = Guid.NewGuid(); items.Add(newGuid, new MineLayoutElementViewModel { Id = newGuid, SequenceNo = 1, Name = "Element 1" }); newGuid = Guid.NewGuid(); items.Add(newGuid, new MineLayoutElementViewModel { Id = newGuid, SequenceNo = 2, Name = "Element 2" }); newGuid = Guid.NewGuid(); items.Add(newGuid, new MineLayoutElementViewModel { Id = newGuid, SequenceNo = 3, Name = "Element 3" }); I remove 2 items RemoveElementViewModel(new List<ElementViewModel> { item2, item3 }); //imagine I had them cached somewhere. Now I want to add 2 other items: newGuid = Guid.NewGuid(); items.Add(newGuid, new MineLayoutElementViewModel { Id = newGuid, SequenceNo = 2, Name = "Element 2, Part 2" }); newGuid = Guid.NewGuid(); items.Add(newGuid, new MineLayoutElementViewModel { Id = newGuid, SequenceNo = 3, Name = "Element 3, Part 2" }); On evaluation of the dictionary at this point, I expected the order of items to be "Element 1", "Element 2, Part 2", "Element 3, Part 2" but it is actually in the following order: "Element 1", "Element 3, Part 2", "Element 2, Part 2" I rely on the order of these items to be a certain way. Why is it not as expected and what can I do about it?
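    For what it's worth, Dictionary<TKey, TValue> documents no enumeration order at all; after removals, new entries can land in the freed buckets, which matches the reshuffling described above. Since the view models already carry SequenceNo, one hedged sketch (requires using System.Linq) is to impose the order at enumeration time, or to keep a separate List<Guid> of keys that records insertion order explicitly.

        // Enumerate in a defined order instead of relying on the dictionary's
        // internal bucket layout.
        foreach (var vm in items.Values.OrderBy(v => v.SequenceNo))
        {
            Console.WriteLine("{0}: {1}", vm.SequenceNo, vm.Name);
        }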

    Read the article

  • Minutia on Objective-C Categories and Extensions.

    - by Matt Wilding
    I learned something new while trying to figure out why my readwrite property declared in a private Category wasn't generating a setter. It was because my Category was named: // .m @interface MyClass (private) @property (readwrite, copy) NSArray* myProperty; @end Changing it to: // .m @interface MyClass () @property (readwrite, copy) NSArray* myProperty; @end and my setter is synthesized. I now know that Class Extension is not just another name for an anonymous Category. Leaving a Category unnamed causes it to morph into a different beast: one that now gives compile-time method implementation enforcement and allows you to add ivars. I now understand the general philosophies underlying each of these: Categories are generally used to add methods to any class at runtime, and Class Extensions are generally used to enforce private API implementation and add ivars. I accept this. But there are trifles that confuse me. First, at a hight level: Why differentiate like this? These concepts seem like similar ideas that can't decide if they are the same, or different concepts. If they are the same, I would expect the exact same things to be possible using a Category with no name as is with a named Category (which they are not). If they are different, (which they are) I would expect a greater syntactical disparity between the two. It seems odd to say, "Oh, by the way, to implement a Class Extension, just write a Category, but leave out the name. It magically changes." Second, on the topic of compile time enforcement: If you can't add properties in a named Category, why does doing so convince the compiler that you did just that? To clarify, I'll illustrate with my example. I can declare a readonly property in the header file: // .h @interface MyClass : NSObject @property (readonly, copy) NSString* myString; @end Now, I want to head over to the implementation file and give myself private readwrite access to the property. If I do it correctly: // .m @interface MyClass () @property (readonly, copy) NSString* myString; @end I get a warning when I don't synthesize, and when I do, I can set the property and everything is peachy. But, frustratingly, if I happen to be slightly misguided about the difference between Category and Class Extension and I try: // .m @interface MyClass (private) @property (readonly, copy) NSString* myString; @end The compiler is completely pacified into thinking that the property is readwrite. I get no warning, and not even the nice compile error "Object cannot be set - either readonly property or no setter found" upon setting myString that I would had I not declared the readwrite property in the Category. I just get the "Does not respond to selector" exception at runtime. If adding ivars and properties is not supported by (named) Categories, is it too much to ask that the compiler play by the same rules? Am I missing some grand design philosophy?

    Read the article
