Search Results

Search found 39456 results on 1579 pages for 'why do you'.


  • NSTableView is not showing my data. Why is that happening?

    - by lampShade
    I'm making a "simple" to-do-list project and running into a few bumps in the road. The problem is that my NSTableView is NOT showing the data from the NSMutableArray "myArray", and I don't know why that is. Can someone point out my mistake?

        /*
           IBOutlet NSTextField *textField;
           IBOutlet NSTabView *tableView;
           IBOutlet NSButton *button;
           NSMutableArray *myArray;
        */
        #import "AppController.h"

        @implementation AppController

        - (IBAction)addNewItem:(id)sender
        {
            NSString *myString = [textField stringValue];
            NSLog(@"my string is %@", myString);
            myArray = [[NSMutableArray alloc] initWithCapacity:100];
            [myArray addObject:myString];
        }

        - (int)numberOfRowsInTableView:(NSTableView *)aTableView
        {
            return [myArray count];
        }

        - (id)tableView:(NSTableView *)aTableView objectValueForTableColumn:(NSTableColumn *)aTableColumn row:(int)rowIndex
        {
            return [myArray objectAtIndex:rowIndex];
        }

        - (id)init
        {
            [super init];
            [tableView setDataSource:self];
            [tableView setDelegate:self];
            NSLog(@"init");
            return self;
        }

        @end
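
    Two likely culprits stand out, noted as a hedged reading of the code above rather than a confirmed fix: addNewItem: re-creates myArray on every click and never asks the table to redraw, and init runs before the nib outlets are connected, so the data-source wiring there does nothing (note also that tableView is declared as NSTabView, not NSTableView). A minimal sketch of the usual shape:

        - (void)awakeFromNib
        {
            // outlets are wired by the time this runs; allocate the array once here
            myArray = [[NSMutableArray alloc] initWithCapacity:100];
            [tableView setDataSource:self];
            [tableView setDelegate:self];
        }

        - (IBAction)addNewItem:(id)sender
        {
            [myArray addObject:[textField stringValue]];
            [tableView reloadData];   // the table only redraws when told to
        }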

    Read the article

  • Why is execution-time method resolution faster than compile-time resolution?

    - by Felix
    At school we learned about virtual functions in C++, and how they are resolved (or found, or matched, I don't know what the terminology is; we're not studying in English) at execution time instead of compile time. The teacher also told us that compile-time resolution is much faster than execution-time resolution (and it would make sense for it to be so). However, a quick experiment would suggest otherwise. I've built this small program:

        #include <iostream>
        #include <limits.h>
        using namespace std;

        class A {
        public:
            void f() {
                // do nothing
            }
        };

        class B : public A {
        public:
            void f() {
                // do nothing
            }
        };

        int main() {
            unsigned int i;
            A *a = new B;
            for (i = 0; i < UINT_MAX; i++)
                a->f();
            return 0;
        }

    where I made A::f() once normal, once virtual. Here are my results:

        [felix@the-machine C]$ time ./normal
        real    0m25.834s
        user    0m25.742s
        sys     0m0.000s
        [felix@the-machine C]$ time ./virtual
        real    0m24.630s
        user    0m24.472s
        sys     0m0.003s
        [felix@the-machine C]$ time ./normal
        real    0m25.860s
        user    0m25.735s
        sys     0m0.007s
        [felix@the-machine C]$ time ./virtual
        real    0m24.514s
        user    0m24.475s
        sys     0m0.000s
        [felix@the-machine C]$ time ./normal
        real    0m26.022s
        user    0m25.795s
        sys     0m0.013s
        [felix@the-machine C]$ time ./virtual
        real    0m24.503s
        user    0m24.468s
        sys     0m0.000s

    There seems to be a steady ~1 second difference in favor of the virtual version. Why is this? Relevant or not: dual-core Pentium @ 2.80GHz, no extra applications running between the two tests. Arch Linux with gcc 4.5.0, compiling normally, like:

        $ g++ test.cpp -o normal

    Also, -Wall doesn't spit out any warnings, either.
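
    A hedged first check rather than an answer: without -O flags gcc inlines nothing, so both versions pay full call overhead on every iteration, and a steady ~1s gap can come from code layout, alignment, or branch prediction rather than from the dispatch mechanism itself. With optimization the comparison changes character entirely:

        $ g++ -O2 test.cpp -o normal     # the direct call can be inlined and the
                                         # empty loop optimized away altogether
        $ g++ -O2 virtual.cpp -o virtual # the compiler may even devirtualize a->f(),
                                         # since it can prove *a is a B here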

    Read the article

  • VB .Net - Reflection: Reflected Method from a loaded Assembly executes before calling method. Why?

    - by pu.griffin
    When I load an Assembly dynamically and then call a method from it, the method from the Assembly appears to execute before the code in the method that is calling it. It does not appear to execute in a serial manner as I would expect. Can anyone shed some light on why this might be happening? Below is some code to illustrate what I am seeing; the code from the some.dll assembly calls a method named PerformLookup. For testing I put a similar MessageBox-type output with "PerformLookup Time: " as the text. What I end up seeing is:

        First:  "PerformLookup Time: 40:842"
        Second: "initIndex Time: 45:873"

        Imports System
        Imports System.Data
        Imports System.IO
        Imports Microsoft.VisualBasic.Strings
        Imports System.Reflection

        Public Class Class1
            Public Function initIndex(indexTable As System.Collections.Hashtable) As System.Data.DataSet
                Dim writeCode As String
                MessageBox.Show("initIndex Time: " & Date.Now.Second.ToString() & ":" & Date.Now.Millisecond.ToString())
                System.Threading.Thread.Sleep(5000)
                writeCode = RefreshList()
            End Function

            Public Function RefreshList() As String
                Dim asm As System.Reflection.Assembly
                Dim t As Type()
                Dim ty As Type
                Dim m As MethodInfo()
                Dim mm As MethodInfo
                Dim retString As String
                retString = ""
                Try
                    asm = System.Reflection.Assembly.LoadFrom("C:\Program Files\some.dll")
                    t = asm.GetTypes()
                    ty = asm.GetType(t(28).FullName) 'known class location
                    m = ty.GetMethods()
                    mm = ty.GetMethod("PerformLookup")
                    Dim o As Object
                    o = Activator.CreateInstance(ty)
                    Dim oo As Object()
                    retString = mm.Invoke(o, Nothing).ToString()
                Catch Ex As Exception
                End Try
                Return retString
            End Function
        End Class
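
    Two hedged suggestions for narrowing this down, not a diagnosis: the empty Catch block silently swallows any failure inside RefreshList, and modal MessageBox calls pump messages and block on the user, which makes them weak evidence of execution order. Timestamped trace output avoids both problems:

        ' diagnostic sketch: log instead of showing modal dialogs
        System.Diagnostics.Trace.WriteLine(String.Format("initIndex at {0:HH:mm:ss.fff}", Date.Now))
        System.Diagnostics.Trace.WriteLine("RefreshList failed: " & Ex.ToString()) ' inside the Catch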

    Read the article

  • Why does the same Getopt::Long code work differently in different programs?

    - by ennuikiller
    The following code works in one script, yet in another it only works if I specify the -- end-of-options flag before specifying an option:

        my $opt;
        GetOptions(
            'help|h'   => sub { usage("you want help?? hahaha, hopefully you're not serious!!"); },
            'file|f=s' => \$opt->{FILE},
            'report|r' => \$opt->{REPORT},
        ) or usage("Bad Options");

    In other words, the same code works in good.pl and bad.pl like so:

        good.pl -f
        bad.pl -- -f

    If I try bad.pl -f I get "unknown option: f". Anyone have any clue as to what can cause this behavior? Thanks in advance! I've solved this..... and by the way, it's a VERY clear question (so why the downvotes)? I'll state it again: what would cause the identical GetOptions block to work in these two ways, "good.pl -f" and "bad.pl -- -f"? See how clear? Maybe you guys should think about it as if it were a TEST!
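
    A hedged diagnostic, since the solution isn't shown in the post: Getopt::Long's defaults are partly environmental (it honors $ENV{POSIXLY_CORRECT}) and partly version-dependent, so a reasonable first step is to confirm what each script actually runs and receives before GetOptions fires:

        use Getopt::Long;
        print "Getopt::Long $Getopt::Long::VERSION, args: @ARGV\n";
        print "POSIXLY_CORRECT is set\n" if exists $ENV{POSIXLY_CORRECT};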

    Read the article

  • In .NET, Why Can I Access Private Members of a Class Instance within the Class?

    - by AMissico
    While cleaning some code today written by someone else, I changed the access modifier from Public to Private on a class variable/member/field. I expected a long list of compiler errors that I could use to "refactor/rework/review" the code that used this variable. Imagine my surprise when I didn't get any errors. After reviewing, it turns out that one instance of a class can access the private members of another instance declared within the class. Totally unexpected. Is this normal? I've been coding in .NET since the beginning and never ran into this issue, nor read about it. I may have stumbled onto it before, but only "vaguely noticed" and moved on. Can anyone explain this behavior to me? I would like to know the "why" of it: please explain, don't just tell me the rule. Am I doing something wrong? I found this behavior in both C# and VB.NET. The code seems to take advantage of the ability to access private variables. Sincerely, Totally Confused

        Class Jack
            Private _int As Integer
        End Class

        Class Foo
            Public Property Value() As Integer
                Get
                    Return _int
                End Get
                Set(ByVal value As Integer)
                    _int = value * 2
                End Set
            End Property

            Private _int As Integer
            Private _foo As Foo
            Private _jack As Jack
            Private _fred As Fred

            Public Sub SetPrivate()
                _foo = New Foo
                _foo.Value = 4   'what you would expect to do because _int is private
                _foo._int = 3    'TOTALLY UNEXPECTED
                _jack = New Jack
                '_jack._int = 3  'expected compile error
                _fred = New Fred
                '_fred._int = 3  'expected compile error
            End Sub

            Private Class Fred
                Private _int As Integer
            End Class
        End Class
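
    For context, and stated with confidence because it is how the language specifications define it: access modifiers in .NET (as in C++ and Java) are enforced per type, not per instance. That is precisely what lets methods like Equals compare the private state of two objects directly. A minimal VB illustration with a hypothetical class, not taken from the post:

        Class Money
            Private _cents As Integer
            Public Function SameAmount(ByVal other As Money) As Boolean
                Return _cents = other._cents  'legal: this code lives inside the Money type
            End Function
        End Class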

    Read the article

  • Why don't I get the checked value from the check box?

    - by udaya
    Hi. I have a check box, and when I check it, the id corresponding to the check box is placed in a text box. But when there is only a single value in the database, I can't get the checked value. Why? Here is my code:

        <? if(isset($AcceptFriend)) { ?>
        <form action="<?=site_url()?>friends/Accept_Friend" name="orderform" id="orderform" method="post" style="background:#CCCC99">
          <input type="text" name="chId" id="chId">
          <table border="0" height="50%" id="chkbox" width="50%">
            <tr>
              <? foreach($AcceptFriend as $row) { ?>
              <tr>
                <td>Name</td>
                <td><?=$row['dFrindName'].'</br>';?></td>
                <td>
                  <input type="checkbox" name="checkId" id="checkId" value="<? echo $row['dMemberId']; ?>" onClick="get_check_value()">
                </td>
              </tr>
              <? }} ?>
            </tr>
            <tr>
              <td width="10px"><input type="submit" name="submit" id="submit" class="buttn" value="AcceptFriend"></td>
            </tr>
          </table>
        </form>

    This is the script I am using:

        function get_check_value() {
            var c_value = "";
            for (var i = 0; i < document.orderform.checkId.length; i++) {
                if (document.orderform.checkId[i].checked) {
                    c_value = c_value + document.orderform.checkId[i].value + "\n";
                }
            }
            alert(c_value);
            document.getElementById('chId').value = c_value;
        }
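
    The single-row case has a well-known cause, assuming the markup above is what reaches the browser: when only one element is named checkId, document.orderform.checkId is that element itself rather than a collection, so .length is undefined and the loop never runs. A sketch of the usual guard:

        function get_check_value() {
            var boxes = document.orderform.checkId;
            if (boxes.length === undefined) {
                boxes = [boxes];  // a lone checkbox comes back as one element, not a list
            }
            var c_value = "";
            for (var i = 0; i < boxes.length; i++) {
                if (boxes[i].checked) {
                    c_value += boxes[i].value + "\n";
                }
            }
            document.getElementById('chId').value = c_value;
        }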

    Read the article

  • Why did this work with Visual C++, but not with gcc?

    - by Carlos Nunez
    I've been working on a senior project for the last several months now, and a major sticking point in our team's development process has been dealing with rifts between Visual C++ and gcc. (Yes, I know we all should have had the same development environment.) Things are about finished up at this point, but I ran into a moderate bug just today that had me wondering whether Visual C++ is easier on newbies (like me) by design. In one of my headers, there is a function that relies on strtok to chop up a string, do some comparisons and return a string with a similar format. It works a little something like the following:

        int main() {
            string a, b, c;
            // Do stuff with a and b.
            c = get_string(a, b);
        }

        string get_string(string a, string b) {
            const char *a_ch, *b_ch;
            a_ch = strtok(a.c_str(), ",");
            b_ch = strtok(b.c_str(), ",");
        }

    strtok is infamous for being great at tokenizing, but equally great at destroying the original string to be tokenized. Thus, when I compiled this with gcc and tried to do anything with a or b, I got unexpected behavior, since the separator was completely removed from the string. Here's an example in case I'm unclear: if I set a = "Jim,Bob,Mary" and b = "Grace,Soo,Hyun", they would end up as a = "JimBobMary" and b = "GraceSooHyun" instead of staying the same like I wanted. However, when I compiled this under Visual C++, I got back the original strings and the program executed fine. I tried dynamically allocating memory for the strings and copying them the "standard" way, but the only way that worked was using malloc() and free(), which I hear is discouraged in C++. While I'm curious about that, the real question I have is this: why did the program work when compiled in VC++, but not with gcc? (This is one of many conflicts I experienced while trying to make the code cross-platform.) Thanks in advance! -Carlos Nunez
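
    A plausible explanation, hedged because it depends on the exact library versions involved: strtok writes '\0' over each separator in the buffer it is handed, and c_str() only promises a read-only view of the string's data, so writing through it is undefined behavior under both compilers. gcc's libstdc++ of that era used copy-on-write strings, where scribbling on the shared buffer visibly corrupts the original; VC++'s string layout happened to survive the abuse. The portable pattern is to tokenize a private, writable copy:

        #include <cstring>
        #include <string>
        #include <vector>

        // copy the string into a writable buffer so the std::string is never touched
        std::vector<char> buf(a.begin(), a.end());
        buf.push_back('\0');
        const char *a_ch = std::strtok(&buf[0], ",");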

    Read the article

  • SqlCe odd results why? -- Same SQL, different results in different apps. Issue with

    - by NitroxDM
    When I run this SQL in my mobile app I get zero rows:

        select * from inventory WHERE [ITEMNUM] LIKE 'PUMP%' AND [LOCATION] = 'GARAGE'

    When I run the same SQL in Query Analyzer 3.5 using the same database, I get my expected one row. Why the difference? Here is the code I'm using in the mobile app:

        SqlCeCommand cmd = new SqlCeCommand(Query);
        cmd.Connection = new SqlCeConnection("Data Source=" + filePath + ";Persist Security Info=False;");
        DataTable tmpTable = new DataTable();
        cmd.Connection.Open();
        SqlCeDataReader tmpRdr = cmd.ExecuteReader();
        if (tmpRdr.Read())
            tmpTable.Load(tmpRdr);
        tmpRdr.Close();
        cmd.Connection.Close();
        return tmpTable;

    UPDATE: For the sake of trying, I used the code found in one of the answers here and it works as expected. So my code looks like this:

        SqlCeConnection conn = new SqlCeConnection("Data Source=" + filePath + ";Persist Security Info=False;");
        DataTable tmpTable = new DataTable();
        SqlCeDataAdapter AD = new SqlCeDataAdapter(Query, conn);
        AD.Fill(tmpTable);

    The issue appears to be with the SqlCeDataReader. Hope this helps someone else out!
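
    The zero-row mystery has a simple likely cause, assuming the reader code above is exactly what ran: the if (tmpRdr.Read()) guard consumes the first row before DataTable.Load() ever sees the reader, so a one-row result set reaches Load() already empty. The reader path works fine without the pre-read:

        SqlCeDataReader tmpRdr = cmd.ExecuteReader();
        tmpTable.Load(tmpRdr);   // Load() iterates the entire result set by itself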

    Read the article

  • Why is my test XML failing with a very simple XSD schema?

    - by JSteve
    Hi all, I am a bit of a novice with XML Schema. I would be grateful if somebody could help me understand why my XML is not being validated against the schema. Here is my schema:

        <?xml version="1.0" encoding="UTF-8"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="http://www.example.org/testSchema"
                   xmlns="http://www.example.org/testSchema">
          <xs:element name="Employee">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="Name">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element name="FirstName" />
                      <xs:element name="LastName" />
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    Here is my test XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <Employee xmlns="http://www.example.org/testSchema">
          <Name>
            <FirstName>John</FirstName>
            <LastName>Smith</LastName>
          </Name>
        </Employee>

    I am getting the following error from the Eclipse XML editor/validator:

        cvc-complex-type.2.4.a: Invalid content was found starting with element 'Name'. One of '{Name}' is expected.

    I cannot understand what is wrong with this schema or my XML.
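
    The error text itself points at the likely fix, offered as the standard reading of it: '{Name}' with no namespace prefix means the validator expects a Name element in no namespace, because without elementFormDefault="qualified" locally declared elements are unqualified, while the instance document puts Name in http://www.example.org/testSchema. One attribute on the schema root reconciles the two:

        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="http://www.example.org/testSchema"
                   xmlns="http://www.example.org/testSchema"
                   elementFormDefault="qualified">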

    Read the article

  • Why does the Chrome spacing work in one JS file and not the other?

    - by Matrym
    If you highlight and copy the text in the first paragraph on this page, then paste it into a rich-text editor (Dreamweaver, or Gmail in rich-text mode), you will see that some of the text is automagically linked. Basically, it works:

        http://seox.org/link-building-pro.html -- http://seox.org/lbp/old-pretty.js

    I'm trying to build a second version, but somewhere along the way I broke it. If you follow the same process on this new URL, the spacing before and after the link is removed in Chrome:

        http://seox.org/test.html -- http://seox.org/lbp/lb-core.js

    Why does the spacing work correctly in the first one but not in the second? More importantly, how do I fix the second one so that it doesn't bug out? I asked a variation of this question before and got a helpful and interesting answer, but hopefully I've asked the question with full detail this time around. Thanks in advance for your time!

    Edit: I've added a bounty to this post, and would greatly appreciate precise instructions on how to fix the bug (rather than general suggestions). To better illustrate the bug, I've copied the gray box (from the second page) below. Note how the spacing is removed before and after the a tags:

        Link Building 2 is an amazing tool that helps your website visitors share your content, with proper attribution. It connects to email, social sharing sites, eCommerce sites, and is the<a href="http://seox.org/test.html#seo">SEO</a>'s best friend. Think of it as the sneeze in the viral marketing metaphor.

        <div>
          <p id="credit"><br />
            Read more about<a href="http://seox.org/test.html">Text Citations</a>by<a href="http://seox.org">seox.org</a></p>
        </div>
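
    Without stepping through lb-core.js this is only a guess, but vanishing whitespace around inserted <a> elements in WebKit usually comes from rebuilding the paragraph's text via string concatenation or innerHTML, which drops or merges the surrounding text nodes. Splitting the existing text node in place preserves its whitespace neighbors; a hypothetical sketch (textNode, matchStart, matchLength, and url are placeholders, not names from the script):

        var node = textNode.splitText(matchStart);  // node now begins at the matched text
        node.splitText(matchLength);                // trim node to exactly the match
        var a = document.createElement('a');
        a.href = url;
        node.parentNode.replaceChild(a, node);      // swap the match for the new anchor
        a.appendChild(node);                        // whitespace siblings stay untouched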

    Read the article

  • Why can't the 'NonSerialized' attribute be used at the class level? How to prevent serialization of

    - by ck
    I have a data object that is deep-cloned using binary serialization. This data object supports property-changed events, for example PriceChanged. Let's say I attach a handler to PriceChanged. When the code attempts to serialize PriceChanged, it throws an exception that the handler isn't marked as serializable. My alternatives:

    - I can't easily remove all handlers from the event before serialization.
    - I don't want to mark the handler as serializable, because I'd have to recursively mark all the handler's dependencies as well.
    - I don't want to mark PriceChanged as NonSerialized; there are tens of events like this that could potentially have handlers.

    Ideally, I'd like .NET to just stop going down the object graph at that point and make that a "leaf". So why can't I just mark the handler class as "NonSerialized"? I finally worked around this problem by making the handler implement ISerializable and doing nothing in the serialization constructor / GetObjectData method. But the handler is still serialized, just with all its dependencies set to null, so I had to account for that as well. Is there a better way to prevent serialization of an entire class?
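
    To the title question directly, stated with confidence because it follows from how serialization works: [NonSerialized] is only valid on fields, since the formatter walks an object's fields rather than its types; a class cannot declare itself unreachable, only a field can opt out of being followed. For events, C# accepts the attribute on the compiler-generated backing field, which keeps the per-event markup down to one line even across tens of events:

        [field: NonSerialized]
        public event EventHandler PriceChanged;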

    Read the article

  • Why should the "prime-based" hashcode implementation be used instead of the "naive" one?

    - by Wilhelm
    I have seen a prime-number implementation of the GetHashCode function being recommended, for example here. However, using the following code (in VB, sorry), it seems that that implementation gives the same hash density as a "naive" xor implementation. If the density is the same, I would suppose there is the same probability of collision in both implementations. Am I missing anything about why the prime approach is preferred? I am supposing that if the hash code is a byte, I do not lose generality for the integer case.

        Sub Main()
            Dim XorHashes(255) As Integer
            Dim PrimeHashes(255) As Integer
            For i = 0 To 255
                For j = 0 To 255
                    For k = 0 To 255
                        XorHashes(GetXorHash(i, j, k)) += 1
                        PrimeHashes(GetPrimeHash(i, j, k)) += 1
                    Next
                Next
            Next
            For i = 0 To 255
                Console.WriteLine("{0}: {1}, {2}", i, XorHashes(i), PrimeHashes(i))
            Next
            Console.ReadKey()
        End Sub

        Public Function GetXorHash(ByVal valueOne As Integer, ByVal valueTwo As Integer, ByVal valueThree As Integer) As Byte
            Return CByte((valueOne Xor valueTwo Xor valueThree) Mod 256)
        End Function

        Public Function GetPrimeHash(ByVal valueOne As Integer, ByVal valueTwo As Integer, ByVal valueThree As Integer) As Byte
            Dim TempHash = 17
            TempHash = 31 * TempHash + valueOne
            TempHash = 31 * TempHash + valueTwo
            TempHash = 31 * TempHash + valueThree
            Return CByte(TempHash Mod 256)
        End Function
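
    One thing the exhaustive density test cannot reveal: real keys are not uniform over all triples, and xor is order-insensitive and self-cancelling, so structured inputs collide systematically even when the overall histogram is perfectly flat. Two concrete cases using the functions above:

        GetXorHash(1, 2, 3) = GetXorHash(3, 2, 1)  'any permutation of the fields collides
        GetXorHash(7, 7, 9) = GetXorHash(9, 0, 0)  'equal fields cancel out: 7 Xor 7 = 0
        'the prime version mixes in position, and neither pair collides under GetPrimeHash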

    Read the article

  • Why is the function Ellipse(...) needed twice here to draw an ellipse?

    - by John Son
    MFC: I read this code, which draws an ellipse (not solid inside), but I cannot understand why the function pDC->Ellipse(...) is needed twice here:

        CDC *pDC = GetDC();
        CPen pen;
        CBrush brush;
        getpen(pen, pDC, col, bol);
        if (do_what >= DRAW_LINE && do_what <= DRAW_RRECT) {
            p->p[0] = start;
            p->p[1] = end;
            if (sol == 1) {
                getbrush(brush, pDC, col);
            }
            if (do_what == DRAW_LINE) {
                pDC->MoveTo(start);
                pDC->LineTo(end);
            }
            else if (do_what == DRAW_ELLIPSE || do_what == DRAW_CIRCLE) {
                pDC->SetROP2(R2_NOT);
                assist = start;
                if (do_what == DRAW_CIRCLE) {
                    assist.y = end.y - end.x + start.x;
                }
                pDC->Ellipse(start.x, assist.y, end.x, end.y);
                pDC->SetROP2(R2_COPYPEN);
                if (sol == 0) {
                    pDC->SelectStockObject(NULL_BRUSH);
                }
                if (do_what == DRAW_CIRCLE) {
                    assist.y = point.y - point.x + start.x;
                }
                pDC->Ellipse(start.x, assist.y, point.x, point.y);
                end = point;
            }
        }

    If I remove the first one, the ellipse will be solid black inside. If I remove the second one, the ellipse is never drawn and disappears when the left mouse button comes up. (Screenshots in the original post show the dialog while the mouse is moving and after the mouse button is released.) Besides, what color is the CBrush if I use CBrush brush; pDC->Ellipse(start.x, assist.y, end.x, end.y);?
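
    This looks like the classic rubber-banding idiom, offered as a reading of the code rather than the author's stated intent: in R2_NOT mode every pixel drawn is inverted, so drawing the same outline a second time restores the screen exactly. The first Ellipse call therefore erases the preview outline left by the last mouse-move, and the second draws the final shape in ordinary copy mode:

        pDC->SetROP2(R2_NOT);             // inverting mode: same shape twice = erased
        pDC->Ellipse(start.x, assist.y, end.x, end.y);     // wipe the previous preview
        pDC->SetROP2(R2_COPYPEN);         // back to normal drawing
        pDC->Ellipse(start.x, assist.y, point.x, point.y); // draw the ellipse for real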

    Read the article

  • Why won't VS2010 RC use my existing types when I add a service reference?

    - by Johan Driessen
    I have a huge problem getting service references in VS2010 RC to use existing assemblies. Even though I have a class library with all the data contracts (classes marked with DataContract, and properties with DataMember) that is shared between the service project and the consuming project (which is a class library), when I add a service reference, the data contracts are regenerated within the service reference instead of using the existing types. When I was using VS2010 Beta 2 this worked fine, and I have existing service references using the very same data contracts. But if I add a new service reference, or even update an old one, it won't use the existing types anymore. I have made a mini test solution, with one service, one data contract type and one console app as a consumer (all in the same solution), and there it seems to work, but that's no great comfort to me. Is there any way to see why it can't use the existing types?

    Edit, to clarify: it works to generate the proxy classes with svcutil.exe, pointing at the data contracts DLL, like this:

        svcutil.exe http://localhost/MyService.svc
            /reference:[Path To DataContracts]\DataContracts.dll
            /n:*,MyProject.MyServiceReference
            /ct:System.Collections.Generic.List`1

    The question is, what possible reason could there be for Visual Studio to generate its own data contracts instead of using the existing ones, even though the "reuse" checkbox is checked and the data contracts assembly is referenced?

    Read the article

  • Why is a CoreData forceFetch required after a delete on the iPad but not the iPhone?

    - by alyoshak
    When the following code is run on the iPhone, the count of fetched objects after the delete is one less than before the delete. But on the iPad the count remains the same. This inconsistency was causing a crash on the iPad because elsewhere in the code, soon after the delete, fetchedObjects is called and the calling code, trusting the count, attempts access to the just-deleted object's properties, resulting in an NSObjectInaccessibleException error (see below). A fix has been to use the commented-out call to performFetch, which when executed makes the second call to fetchedObjects yield the same result as on the iPhone without it. My question is: why is the iPad producing different results than the iPhone? This is the second such difference that I've discovered and posted recently.

        - (NSError *)deleteObject:(NSManagedObject *)mo
        {
            NSLog(@"\n\nNum objects in store before delete: %i\n\n",
                  [[self.fetchedResultsController fetchedObjects] count]);

            [self.managedObjectContext deleteObject:mo];

            // Save the context.
            NSError *error = nil;
            if (![self.managedObjectContext save:&error]) {
            }

            // [self.fetchedResultsController performFetch:&error]; // force a fetch
            NSLog(@"\n\nNum objects in store after delete (and save): %i\n\n",
                  [[self.fetchedResultsController fetchedObjects] count]);
            return error;
        }

    (The full NSObjectInaccessibleException is: "Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'CoreData could not fulfill a fault for '0x1dcf90 <x-coredata://DC02B10D-555A-4AB8-8BC4-F419C4982794/Blueprint/p")
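
    One hedged hypothesis that would explain a platform difference like this: NSFetchedResultsController only tracks context changes (and refreshes fetchedObjects on its own) when it has a delegate, so if the iPad code path builds the controller without assigning one while the iPhone path does, the iPad copy keeps returning the stale pre-delete array until performFetch: is forced. Worth checking wherever the controller is created:

        // change tracking is switched on by the mere presence of a delegate
        self.fetchedResultsController.delegate = self;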

    Read the article

  • Why do Firefox and Opera ignore max-width inside of display: table-cell?

    - by brad
    The following code displays correctly in Chrome and IE (the image is 200px wide). In Firefox and Opera the max-width style is ignored completely. Why does this happen, and is there a good workaround? Also, which way is most standards-compliant?

    Note: one possible workaround for this particular situation is to set max-width to 200px. However, this is a rather contrived example; I'm looking for a strategy for a variable-width container.

        <!doctype html>
        <html>
        <head>
          <style>
            div {
              display: table-cell;
              padding: 15px;
              width: 200px;
            }
            div img {
              max-width: 100%;
            }
          </style>
        </head>
        <body>
          <div>
            <img src="http://farm4.static.flickr.com/3352/4644534211_b9c887b979.jpg" />
            <p>
              Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec facilisis ante, facilisis posuere ligula feugiat ut. Fusce hendrerit vehicula congue. at ligula dolor. Lorem ipsum dolor sit amet, consectetur adipiscing elit. leo metus, aliquam eget convallis eget, molestie at massa.
            </p>
          </div>
        </body>
        </html>

    [Update] As stated by mVChr below, the w3.org spec states that max-width does not apply to inline elements. I've tried using div img { max-width: 100%; display: block; }, but it does not seem to correct the issue.
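
    A hedged workaround for the variable-width case: a percentage max-width needs a definite width on its containing block, and the anonymous table wrapped around a lone table-cell has none, which is consistent with Firefox and Opera resolving the percentage to nothing. Giving the table itself a width (and fixed layout) provides something to resolve against; the class names here are hypothetical:

        .wrap {
          display: table;
          table-layout: fixed;  /* column widths come from the table, not the content */
          width: 100%;          /* or any definite width the page can supply */
        }
        .wrap .cell {
          display: table-cell;
          padding: 15px;
        }
        .wrap .cell img {
          display: block;
          max-width: 100%;
        }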

    Read the article

  • Why is this SocketException not caught by a generic catch routine?

    - by Tarnschaf
    Our company provides a network component (DLL) for a GUI application. It uses a Timer that checks for disconnections. If it wants to reconnect, it calls:

        internal void timClock_TimerCallback(object state)
        {
            lock (someLock)
            {
                // ...
                try
                {
                    DoConnect();
                }
                catch (Exception e)
                {
                    // Log of e.Message omitted
                    // Raise event with e as parameter
                    ErrorEvent(this, new ErrorEventArgs(e));
                    DoDisconnect();
                }
                // ...
            }
        }

    So the problem is, inside the DoConnect() routine a SocketException is thrown (and not caught). I would assume that the catch (Exception e) should catch ALL exceptions, but somehow the SocketException was not caught and shows up in the GUI application.

        protected void DoConnect()
        {
            // client = new TcpClient();
            client.NoDelay = true;
            // In the following call the SocketException is thrown
            client.Connect(endPoint.Address.ToString(), endPoint.Port);
            // ... (login stuff)
        }

    The doc confirms that SocketException extends Exception. The stack trace that showed up is:

        TcpClient.Connect() -> DoConnect() -> timClock_TimerCallback

    So the exception is not thrown outside the try/catch block. Any ideas why it doesn't work?
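
    Two hedged suspects, since the catch clause itself is written correctly: first, anything thrown by an ErrorEvent subscriber or by DoDisconnect() runs after the catch has already begun and is not protected by it, so that secondary exception is what the GUI would see; second, a debugger configured to break on first-chance exceptions displays the SocketException even though it is subsequently caught. A defensive sketch of the former:

        catch (Exception e)
        {
            try { ErrorEvent(this, new ErrorEventArgs(e)); }
            catch { /* a throwing subscriber must not escape the reconnect logic */ }
            try { DoDisconnect(); }
            catch { /* likewise for the cleanup path */ }
        }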

    Read the article

  • Why do programmers have to learn for their whole lives and aren't you afraid of that?

    - by serg555
    Programming technologies evolve so fast that programmers constantly have to learn more and more to catch up, whether they want to or not. Often it is not just learning more in the same direction, but starting from scratch. Let's say you were a top-notch programmer in 1999 who quit for 10 years and went to a job interview in 2009 (funny even to imagine): how much of your knowledge is still needed? And if we take a carpenter, engineer, doctor or even mathematician, they are all still good specialists after 10 years. So why is programming so unstable? Is it because it is just relatively new, or because something important is still missing after 50 years and we can't find it to settle in one direction? Do you think the situation will change after some time? Learning something new all the time is exciting and all, but it is starting to worry me that as I become older it will be harder and harder. After all, "you can't teach an old dog new tricks", and I'm afraid that in the end I'll just end up behind college students and become one of those "COBOL dinosaurs" (only it will probably be "Java dinosaurs" by that time).

    Read the article

  • i386 assembly question: why do I need to meddle with the stack pointer?

    - by zneak
    Hello everyone. I decided it would be fun to learn x86 assembly during the summer break, so I started with a very simple hello-world program, borrowing from free examples gcc -S could give me. I ended up with this:

        HELLO:
            .ascii "Hello, world!\12\0"

            .text
        .globl _main
        _main:
            pushl   %ebp            # 1. puts the base stack address on the stack
            movl    %esp, %ebp      # 2. puts the base stack address in the stack address register
            subl    $20, %esp       # 3. ???
            pushl   $HELLO          # 4. push HELLO's address on the stack
            call    _puts           # 5. call puts
            xorl    %eax, %eax      # 6. zero %eax, probably not necessary since we didn't do anything with it
            leave                   # 7. clean up
            ret                     # 8. return
            # PROFIT!

    It compiles and even works! And I think I understand most of it. However, magic happens at step 3. If I removed this line, my program would die between the call to puts and the xor, with a misaligned-stack error. And if I changed $20 to another value, it would crash too. So I came to the conclusion that this value is very important. Problem is, I don't know what it does or why it's needed. Can anyone explain? (I'm on Mac OS, if it matters.)
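
    The arithmetic behind the magic number, assuming the standard 32-bit Mac OS X ABI (which requires %esp to be 16-byte aligned at the moment of every call instruction): add up everything placed on the stack since the aligned call into _main.

        #   4 bytes  return address (pushed by the caller's call instruction)
        # + 4 bytes  pushl %ebp
        # + 20 bytes subl $20, %esp
        # + 4 bytes  pushl $HELLO
        # = 32 bytes, a multiple of 16, so %esp is aligned when call _puts executes

    Any adjustment not congruent to 4 modulo 16 breaks that sum, which is why other values crash inside puts (32-bit Linux has traditionally been more forgiving here, which is one reason such examples often run unchanged there).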

    Read the article

  • Why does setPreferredSize not change the size of the button?

    - by Roman
    Here is the code:

        import javax.swing.*;
        import java.awt.event.*;
        import java.awt.*;

        public class TestGrid {
            public static void main(String[] args) {
                JFrame frame = new JFrame("Colored Trails");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

                JPanel mainPanel = new JPanel();
                mainPanel.setLayout(new BoxLayout(mainPanel, BoxLayout.Y_AXIS));

                JPanel panel = new JPanel();
                panel.setLayout(new GridLayout(4, 9));
                panel.setMaximumSize(new Dimension(9 * 30 - 20, 4 * 30));

                JButton btn;
                for (int i = 1; i <= 4; i++) {
                    for (int j = 1; j <= 4; j++) {
                        btn = new JButton();
                        btn.setPreferredSize(new Dimension(30, 30));
                        panel.add(btn);
                    }
                    btn = new JButton();
                    btn.setPreferredSize(new Dimension(30, 10));
                    panel.add(btn);
                    for (int j = 1; j <= 4; j++) {
                        btn = new JButton();
                        btn.setPreferredSize(new Dimension(30, 30));
                        panel.add(btn);
                    }
                }

                mainPanel.add(panel);
                frame.add(mainPanel);
                frame.setSize(450, 950);
                frame.setVisible(true);
            }
        }

    I am supposed to get a table of buttons with 4 rows and 9 columns, and the middle column should be narrower than the other columns. But the Dimension I pass to setPreferredSize for the middle buttons has no effect on the width of the middle column. Why?
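
    The short answer, stated plainly because it is documented GridLayout behavior: GridLayout divides its container into equal-sized cells and ignores the children's preferred sizes entirely, so no Dimension handed to the buttons can make one column narrower. A layout manager that consults sizes or weights is needed; a minimal sketch with GridBagLayout, where the 0.3 weight is an arbitrary illustrative value:

        JPanel panel = new JPanel(new GridBagLayout());
        GridBagConstraints gbc = new GridBagConstraints();
        gbc.fill = GridBagConstraints.BOTH;
        for (int row = 0; row < 4; row++) {
            for (int col = 0; col < 9; col++) {
                gbc.gridx = col;
                gbc.gridy = row;
                gbc.weightx = (col == 4) ? 0.3 : 1.0;  // middle column gets less width
                panel.add(new JButton(), gbc);
            }
        }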

    Read the article

  • Why are changes to coffeescript files not being compiled when my Rails 3.2.0 app is in development mode?

    - by ben
    Normally, any changes I make to .js.coffee files in my Rails 3.2.0 app in development mode take effect when I refresh the page. All of a sudden, this is not happening. If I do rake assets:precompile, then the changes are shown, but then if I do rake assets:clean they go back to not being shown. What is causing this?

    Edit: Restarting the server makes the changes show. Why isn't this happening automatically, as before?

    Edit: Here is my development.rb:

        Myapp::Application.configure do
          # Settings specified here will take precedence over those in config/application.rb

          # In the development environment your application's code is reloaded on
          # every request. This slows down response time but is perfect for development
          # since you don't have to restart the web server when you make code changes.
          config.cache_classes = false

          # Log error messages when you accidentally call methods on nil.
          config.whiny_nils = true

          # Show full error reports and disable caching
          config.consider_all_requests_local = true
          config.action_controller.perform_caching = false

          # Don't care if the mailer can't send
          config.action_mailer.raise_delivery_errors = false

          # Print deprecation notices to the Rails logger
          config.active_support.deprecation = :log

          # Only use best-standards-support built into browsers
          config.action_dispatch.best_standards_support = :builtin

          # Raise exception on mass assignment protection for Active Record models
          config.active_record.mass_assignment_sanitizer = :strict

          # Log the query plan for queries taking more than this (works
          # with SQLite, MySQL, and PostgreSQL)
          config.active_record.auto_explain_threshold_in_seconds = 0.5

          # Do not compress assets
          config.assets.compress = false

          # Expands the lines which load the assets
          config.assets.debug = true

          config.action_mailer.default_url_options = { :host => 'localhost:3000' }

          config.log_level = :warn
        end
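
    A hedged suggestion, since the development.rb above looks standard: running assets:precompile in development leaves compiled files and a Sprockets cache behind, and a stale cache under tmp/cache can keep serving old output until the server restarts. Clearing both is a cheap first step:

        $ rake assets:clean      # remove precompiled files from public/assets
        $ rake tmp:cache:clear   # drop the Sprockets cache under tmp/cache
        # then restart the development server once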

    Read the article

  • Why would a variable in Scala code mysteriously become null?

    - by Alex R
    I've isolated the problem down to this:

        Predef.println("the value of argv1 here is " + argv(1));
        var n: $ = undef;
        n = argv(1);
        Predef.println("the value of argv1 here is " + argv(1));
        Predef.println("the value of n here is " + n);
        Predef.println("the class of n here is " + n.getClass);

    Here's the definition of $:

        class $ {
          println("constructed a new $ of type: " + this.getClass);
          def value: $ = this;
          def toValue: Value = { new ConstStringValue(this.toString()) };
          def -(sym: Symbol): $ = { println("looked up: " + sym); this }
          def -(sym: $): $ = { println("looked up: " + sym); this }
          def update(sym: Symbol, any: Any) { println("update called: " + sym + "=" + any); }
          def apply(sym: Symbol) = { this }
          def apply(obj: $) = { this }
          def apply() = { this }
          def +(o: $) = this.toValue.div(o.toValue)
          def *(o: $) = this.toValue.mul(o.toValue)
          def >(o: $) = this.toValue.gt(o.toValue)
          def <(o: $) = this.toValue.lt(o.toValue)
          def ++() = { this }
          def -=(o: $) = { this }
        }

    When run, the code prints:

        the value of argv1 here is 10
        the value of argv1 here is 10
        the value of n here is null
        java.lang.NullPointerException
          at test_1_php$.include(_tmp.scala:149)
          at php.script.main(php.scala:57)
          at test_1_php.main(_tmp.scala)
        [...]

    Why would n mysteriously lose its value (or fail to take one on)?

    Read the article

  • Python - Why use anything other than uuid4() for unique strings?

    - by orokusaki
    I see quite a few implementations of unique-string generation for things like uploaded image names, session IDs, et al., and many of them employ hashes like SHA1, or others. I'm not questioning the legitimacy of using custom methods like this, but rather just the reason. If I want a unique string, I just say this:

        >>> import uuid
        >>> uuid.uuid4()
        07033084-5cfd-4812-90a4-e4d24ffb6e3d

    And I'm done with it. I wasn't very trusting before I read up on uuid, so I did this:

        >>> import uuid
        >>> s = set()
        >>> for i in range(5000000):  # That's 5 million!
        ...     s.add(uuid.uuid4())
        ...
        >>> len(s)
        5000000

    Not one repeat (I didn't expect one, considering the odds are something like 1.108e+50, but it's comforting to see it in action). You could even halve the odds by just making your string from a combination of two uuid4()s. So, with that said, why do people spend time on random() and other stuff for unique strings, etc.? Is there an important security issue or other concern regarding uuid?
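
    One common rationale, offered as the typical reason rather than a survey of all of them: hash-based names are deterministic, which a random UUID can never be. Hashing the content itself means the same bytes always map to the same name, so duplicate uploads deduplicate for free and cache keys stay stable across runs:

        >>> import hashlib
        >>> name = hashlib.sha1(data).hexdigest()  # data: hypothetical uploaded bytes
        >>> # same content -> same name; a uuid4() filename can't give you that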

    Read the article

  • Why do I get E_ACCESSDENIED when reading public shortcuts through Shell32?

    - by corvuscorax
    I'm trying to read the targets of all desktop shortcuts in a C# 4 application. The shortcuts on a Windows desktop can come from more than one location, depending on whether the shortcut was created for all users or just the current user. In this specific case I'm trying to read a shortcut from the public desktop, e.g. from C:\Users\Public\Desktop\shortcut.lnk. The code is like this (path is a string containing the path to the .lnk file):

        var shell = new Shell32.ShellClass();
        var folder = shell.NameSpace(Path.GetDirectoryName(path));
        var folderItem = folder.ParseName(Path.GetFileName(path));
        if (folderItem != null)
        {
            var link = (Shell32.ShellLinkObject)folderItem.GetLink;

    The last line throws a System.UnauthorizedAccessException, indicating that it's not allowed to read the shortcut file's contents. I have tried shortcut files on the user's private desktop (C:\Users\username\Desktop) and that works fine. So, my questions are: (1) why is my application not allowed to /read/ the shortcut from code, when I can clearly read the contents as a user? (2) is there a way to get around this? Maybe using a special manifest file for the application? And, by the way, my OS is Windows 7, 64-bit. be well -h-

    Read the article

  • Why does the Java Collections Framework offer two different ways to sort?

    - by dvanaria
    If I have a list of elements I would like to sort, Java offers two ways to go about this. For example, let's say I have a list of Movie objects and I'd like to sort them by title. One way I could do this is by calling the one-argument version of the static java.util.Collections.sort() method with my movie list as the single argument, i.e. Collections.sort(myMovieList). For this to work, the Movie class has to be declared to implement the java.lang.Comparable interface, with the required method compareTo() implemented inside the class.

    The other way to sort is by calling the two-argument version of the static java.util.Collections.sort() method with the movie list and a java.util.Comparator object as its arguments: Collections.sort(myMovieList, titleComparator). In this case, the Movie class wouldn't implement Comparable. Instead, inside the main class that builds and maintains the movie list itself, I would create an inner class that implements the java.util.Comparator interface and its one required method, compare(). Then I'd create an instance of this class and call the two-argument version of sort(). The benefit of this second method is that you can create an unlimited number of these inner-class Comparators, so you can sort a list of objects in different ways; in the example above, you could have another Comparator that sorts by the year a movie was made.

    My question is: why bother to learn both ways to sort in Java, when the two-argument version of Collections.sort() does everything the one-argument version does, but with the added benefit of sorting the list's elements on several different criteria? It would be one less thing to keep in mind while coding. You'd have one basic mechanism for sorting lists in Java to know.
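
    One practical distinction worth adding, stated as the usual justification rather than the only one: the Comparable natural order is what Java's sorted collections use when no Comparator is supplied, so a class with one obvious ordering still gains something from implementing it. A small sketch (Movie, its compareTo on title, and a getYear() accessor are assumed as described above):

        // natural order (compareTo by title) drives sorted collections by default
        TreeSet<Movie> byTitle = new TreeSet<Movie>();
        byTitle.addAll(myMovieList);   // kept sorted with no Comparator in sight

        // a Comparator layers on any extra ordering without touching Movie itself
        Collections.sort(myMovieList, new Comparator<Movie>() {
            public int compare(Movie a, Movie b) {
                return a.getYear() - b.getYear();
            }
        });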

    Read the article
