Search Results

Search found 21960 results on 879 pages for 'program termination'.


  • Sizing issues while adding a .Net UserControl to a TabPage

    - by TJ_Fischer
    I have a complex Windows Forms GUI program that has a lot of automated control generation and manipulation. One thing that I need to be able to do is add a custom UserControl to a newly instantiated TabPage. However, when my code does this I get automatic resizing events that cause the formatting to get ugly. Without detailing all of the different Containers that could possibly be involved, the basic issue is this: At a certain point in the code I create a new tab page: TabPage tempTabPage = new TabPage("A New Tab Page"); Then I set it to a certain size that I want it to maintain: tempTabPage.Width = 1008; tempTabPage.Height = 621; Then I add it to a TabControl: tabControl.TabPages.Add(tempTabPage); Then I create a user control that I want to appear in the newly added TabPage: CustomView customView = new CustomView("A new custom control"); Here is where the problem comes in. At this point both the tempTabPage and the customView are the same size with no padding or margin, and they are the size I want them to be. I now try to add this new custom UserControl to the tab page like this: tempTabPage.Controls.Add(customView); When I make this call, the customView and its child controls get resized to be larger, so parts of the customView are hidden. Can anyone give me any direction on what to look for or what could be causing this kind of issue? Thanks ahead of time.

    Read the article

  • How can I help fellow students struggling in programming classes?

    - by David Barry
    I'm a computer science student finishing up my second semester of programming classes. I've enjoyed them quite a bit, and learned a lot, but it seems other students are struggling with the concepts and assignments more than I am. When an assignment is due, the inevitable group email comes out a day or two before, with people needing some help either with a specific part of the problem, or sometimes people just seem to have a hard time knowing where to start. I'd really like to be able to help out, but I have a hard time thinking of the right way to give them help without giving them the answer. When I'm having trouble understanding a concept, a code snippet can go a long way toward helping me, but at the same time, if it makes a lot of sense, it can be difficult to think of another way to go about it. Plus, the Academic Integrity section of each assignment is always looming overhead, warning against sharing code with others. I've tried using pseudocode to give others an idea of program flow, leaving them to figure out how to implement certain aspects of it, but I didn't get much feedback and don't know how much it actually helped them, or whether it just confused them further. So I'm basically looking to see if anyone has experience with this, or good ways that I can help other students: nudge them in the right direction or help them think about the problem in the right way.

    Read the article

  • MonoTouch or Titanium for rapid application development on the iPhone?

    - by Ronnie
    As a .Net developer I have always dreamed of being able to develop iPhone applications with my existing skills (C#). Both products require a Mac with the iPhone SDK installed. Appcelerator Titanium was the first one I tried; it is based on exposing some of the native iPhone APIs to JavaScript so that they can be called from that language. MonoTouch starts at $399 to be able to deploy to an iPhone rather than only the iPhone simulator, while Titanium is free. MonoTouch (MonoDevelop) has an IDE that Titanium currently lacks (though you can use any editor, such as TextMate or Aptana). I think both products ultimately generate a native precompiled app (although I am not sure about the size of the final app on the iPhone, as I believe the .Net framework calls are prelinked at compile time in MonoTouch). I am also not sure about full coverage of all the iPhone APIs and features. Titanium also has the advantage of enabling Android app development, but as a C# developer I still find the MonoTouch experience closer to Visual Studio. Which one would you choose, and what are your experiences with MonoTouch and Titanium?

    Read the article

  • Choosing a design method for a ladder-like word game

    - by owca
    I'm trying to build a simple application, with the finished program looking like this: I will also have to implement two different GUI layouts for it. Now I'm trying to figure out the best method to perform this task. My professor told me to introduce an Element class with 4 states: - empty - invisible (used in GridLayout) - first letter - other letter I've thought about the following solutions (by List I mean any sort of Collection): 1. Element is a single letter, and each line is an Element[]. The Game class would be an array of Element[] arrays. I guess that's the dumbest way, and the validation might be troublesome. 2. As before, but a Line is a List of Element. The Game is an array of Lines. 3. As before, but the Game is a List of Lines. Which one should I choose? Or do you maybe have better ideas? What collection would be best, if I use one at all?

    Read the article

  • correct way of initializing variables

    - by OVERTONE
    OK, this is just a shot in the dark, but it may be the cause of most of the errors I've gotten. When you're initializing something, let's say a small Swing program, would it go like this? Variables here: private Jlist contactList; String [] contactArray; ArrayList <String> contactArrayList; ResultSet namesList Constructor here: public whatever() { GridLayout aGrid = new GridLayout(2,2,10,10); contact1 = new String(); contact2 = new String(); contact3 = new String(); contactArrayList = new ArrayList<String>(); // is something supposed to go in the () of this JList? contactList = new JList(); contactArray = new String[5]; from1 = new JLabel ("From: " + contactArray[1]); gridlayout.add(components) // there are too many components to write onto SO. } // Methods here: public void fillContactsGui() { createConnection(); ArrayList<String> contactsArrayList = new ArrayList<String>(); while (namesList.next()) { contactArrayList.add(namesList.getString(1)); ContactArray[1] = namesList[1]; } } I know this is probably a huge beginner question, but this is the code I've gotten used to. I'm initializing things three or four times without meaning to, because I'm not sure where they go. Can anyone shed some light on this? P.S. Sorry for the messy sample code; I did my best.

    Read the article

  • iPhone shooter game bullet physics!

    - by user298261
    Hello, I'm making a new shooter game here in the vein of "Galaga" (my favorite shooter game growing up). Here's the code I have for the bullet physics: -(IBAction)shootBullet:(id)sender{ imgBullet.hidden = NO; timer = [NSTimer scheduledTimerWithTimeInterval:0.05 target:self selector:@selector(fireBullet) userInfo:Nil repeats:YES]; } -(void)fireBullet{ imgBullet.center = CGPointMake(imgBullet.center.x + bulletVelocity.x , imgBullet.center.y + bulletVelocity.y); if(imgBullet.center.y <= 0){ imgBullet.hidden = YES; imgBullet.center = self.view.center; [timer invalidate]; } } Anyway, the obvious issue is that once the bullet leaves the screen, its center is reset, so I'm reusing the same bullet for each press of the "fire" button. Ideally, I would like the user to be able to spam the "fire" button without causing the program to crash. How would I rework this existing code so that a bullet object is spawned on each button press, and then despawned after it exits the screen or collides with an enemy? Thank you for any assistance you can offer!

    Read the article

  • returning opengl display callback in D

    - by Max
    I've written a simple hello world opengl program in D, using the converted gl headers here. My code so far: import std.string; import c.gl.glut; Display_callback display() { return Display_callback // line 7 { return; // just display a blank window }; } // line 10 void main(string[] args) { glutInit(args.length, args); glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE); glutInitWindowSize(800,600); glutCreateWindow("Hello World"); glutDisplayFunc(display); glutMainLoop(); } My problem is with the display() function. glutDisplayFunc() expects a function that returns a Display_callback, which is typedef'd as typedef GLvoid function() Display_callback;. When I try to compile, dmd says line 7: found '{' when expecting ';' following return statement line 10: unrecognized declaration How do I properly return the Display_callback here? Also, how do I change D strings and string literals into char*? My calls to glutInit and glutCreateWindow don't like the D strings they're getting. Thanks for your help.
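
    For comparison, here is a minimal C/C++ GLUT sketch of the same setup (not from the original post): glutDisplayFunc expects a pointer to a plain void function, so the display callback simply returns nothing rather than returning a Display_callback value.

        #include <GL/glut.h>

        /* The display callback is an ordinary void function. */
        void display(void)
        {
            glClear(GL_COLOR_BUFFER_BIT);   /* just show a blank window */
            glutSwapBuffers();
        }

        int main(int argc, char **argv)
        {
            glutInit(&argc, argv);          /* takes pointers to the real argc/argv */
            glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE);
            glutInitWindowSize(800, 600);
            glutCreateWindow("Hello World");
            glutDisplayFunc(display);       /* pass the function itself */
            glutMainLoop();
            return 0;
        }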

    Read the article

  • Cannot inherit from generic base class and specific interface using same type with generic constraint

    - by simendsjo
    Sorry about the strange title. I really have no idea how to express it any better... I get an error on the following snippet. I use the class Dummy everywhere. Doesn't the compiler understand the constraint I've added on DummyImplBase? Is this a compiler bug as it works if I use Dummy directly instead of setting it as a constraint? Error 1 'ConsoleApplication53.DummyImplBase' does not implement interface member 'ConsoleApplication53.IRequired.RequiredMethod()'. 'ConsoleApplication53.RequiredBase.RequiredMethod()' cannot implement 'ConsoleApplication53.IRequired.RequiredMethod()' because it does not have the matching return type of 'ConsoleApplication53.Dummy'. C:\Documents and Settings\simen\My Documents\Visual Studio 2008\Projects\ConsoleApplication53\ConsoleApplication53\Program.cs 37 27 ConsoleApplication53 public class Dummy { } public interface IRequired<T> { T RequiredMethod(); } public interface IDummyRequired : IRequired<Dummy> { void OtherMethod(); } public class RequiredBase<T> : IRequired<T> { public T RequiredMethod() { return default(T); } } public abstract class DummyImplBase<T> : RequiredBase<T>, IDummyRequired where T: Dummy { public void OtherMethod() { } }

    Read the article

  • Can Win32_NetworkAdapterConfiguration.EnableStatic() be used to set more than one IP address?

    - by Andrew J. Brehm
    I ran into this problem in a Visual Basic program that uses WMI but could confirm it in PowerShell. Apparently the EnableStatic() method can only be used to set one IP address, despite taking two parameters IP address(es) and subnetmask(s) that are arrays. I.e. $a=get-wmiobject win32_networkadapterconfiguration -computername myserver This gets me an array of all network adapters on "myserver". After selecting a specific one ($a=$a[14] in this case), I can run $a.EnableStatic() which has this signature System.Management.ManagementBaseObject EnableStatic(System.String[] IPAddress, System.String[] SubnetMask) I thought this implies that I could set several IP addresses like this: $ips="192.168.1.42","192.168.1.43" $a.EnableStatic($ips,"255.255.255.0") But this call fails. However, this call works: $a.EnableStatic($ips[0],"255.255.255.0") It looks to me as if EnableStatic() really takes two strings rather than two arrays of strings as parameters. In Visual Basic it's more complicated and arrays must be passed but the method appears to take into account only the first element of each array. Am I confused again or is there some logic here?

    Read the article

  • invalid scalar hex value 0x80000000 and over

    - by kioto
    Hi. I've found a problem reading hex values from a YAML file: it can't read hex values of 0x80000000 and above. The following is a sample C++ program. // ymlparser.cpp #include <cstdio> #include <iostream> #include <fstream> #include "yaml-cpp/yaml.h" int main(void) { try { std::ifstream fin("hex.yaml"); YAML::Parser parser(fin); YAML::Node doc; parser.GetNextDocument(doc); int num1; doc["hex1"] >> num1; printf("num1 = 0x%x\n", num1); int num2; doc["hex2"] >> num2; printf("num2 = 0x%x\n", num2); return 0; } catch(YAML::ParserException& e) { std::cout << e.what() << "\n"; } } hex.yaml hex1: 0x7FFFFFFF hex2: 0x80000000 The error message is: $ ./ymlparser num1 = 0x7fffffff terminate called after throwing an instance of 'YAML::InvalidScalar' what(): yaml-cpp: error at line 2, column 7: invalid scalar Aborted Environment: yaml-cpp from svn (March 22, 2010) or v0.2.5; OS: Ubuntu 9.10 i386. I need to read this hex value with yaml-cpp, but I have no idea how. Please tell me another way to get it. Thanks,
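
    A minimal sketch of one likely cause (an assumption, not a confirmed yaml-cpp behaviour): 0x80000000 does not fit in a signed 32-bit int, so a common workaround is to extract the scalar as a string and convert it with strtoul, which avoids the signed overflow.

        #include <cstdio>
        #include <cstdlib>
        #include <string>

        int main()
        {
            // Pretend this came out of doc["hex2"] as a string instead of an int.
            std::string scalar = "0x80000000";
            unsigned long num2 = std::strtoul(scalar.c_str(), NULL, 16);
            std::printf("num2 = 0x%lx\n", num2);   // prints num2 = 0x80000000
            return 0;
        }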

    Read the article

  • base destructor called twice after derived object?

    - by sil3nt
    hey there, why is the base destructor called twice at the end of this program? #include <iostream> using namespace std; class B{ public: B(){ cout << "BC" << endl; x = 0; } virtual ~B(){ cout << "BD" << endl; } void f(){ cout << "BF" << endl; } virtual void g(){ cout << "BG" << endl; } private: int x; }; class D: public B{ public: D(){ cout << "dc" << endl; y = 0; } virtual ~D(){ cout << "dd" << endl; } void f(){ cout << "df" << endl; } virtual void g(){ cout << "dg" << endl; } private: int y; }; int main(){ B b, * bp = &b; D d, * dp = &d; bp->f(); bp->g(); bp = dp; bp->f(); bp->g(); }
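
    A minimal sketch of one reading of the trailing output (my interpretation, not from the original post): main() holds one B (b) and one D (d); destroying d runs ~D and then ~B for its base sub-object, and destroying b runs ~B again, so two "BD" lines at the end are expected.

        #include <iostream>

        struct Base {
            ~Base() { std::cout << "BD" << std::endl; }
        };

        struct Derived : Base {
            ~Derived() { std::cout << "dd" << std::endl; }
        };

        int main()
        {
            Base b;
            Derived d;
            return 0;   // d is destroyed first: "dd", "BD"; then b: "BD"
        }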

    Read the article

  • Programming Environment for a Motorola 68000 in Linux

    - by Nick Presta
    Greetings all, I am taking a Structure and Application of Microcomputers course this semester and we're programming with the Motorola 68000 series CPU/board. The course syllabus suggests running something like Easy68K or Teesside Motorola 68000 Assembler/Emulator at home to test our programs. I told my prof I run x64 Linux and asked what sort of environment I would need to complete my coursework. He said that the easiest environment to use is a Windows XP 32bit VM with one of the two suggested applications installed, however, he doesn't really care what I use as long as I can test what I write at home. So I'm asking if there exists some sort of emulator or environment for Linux so I can test my code, and what sort of caveats I will run into by writing and testing my code in Linux. Also, I plan to do my editing in Vim, which probably isn't a problem, but I would like any insight into editors for 68000 assembly, if you have any. Thanks! EDIT: Just to clarify - I don't want to install Linux on the board at all - I want to program on my home machine, test the code locally, and then bring it onto the board for grading/running.

    Read the article

  • Why is PLINQ slower than LINQ for this code?

    - by Rob Packwood
    First off, I am running this on a dual core 2.66Ghz processor machine. I am not sure if I have the .AsParallel() call in the correct spot. I tried it directly on the range variable too and that was still slower. I don't understand why... Here are my results: Process non-parallel 1000 took 146 milliseconds Process parallel 1000 took 156 milliseconds Process non-parallel 5000 took 5187 milliseconds Process parallel 5000 took 5300 milliseconds using System; using System.Collections.Generic; using System.Diagnostics; using System.Linq; namespace DemoConsoleApp { internal class Program { private static void Main() { ReportOnTimedProcess( () => GetIntegerCombinations(), "non-parallel 1000"); ReportOnTimedProcess( () => GetIntegerCombinations(runAsParallel: true), "parallel 1000"); ReportOnTimedProcess( () => GetIntegerCombinations(5000), "non-parallel 5000"); ReportOnTimedProcess( () => GetIntegerCombinations(5000, true), "parallel 5000"); Console.Read(); } private static List<Tuple<int, int>> GetIntegerCombinations( int iterationCount = 1000, bool runAsParallel = false) { IEnumerable<int> range = Enumerable.Range(1, iterationCount); IEnumerable<Tuple<int, int>> integerCombinations = from x in range from y in range select new Tuple<int, int>(x, y); return runAsParallel ? integerCombinations.AsParallel().ToList() : integerCombinations.ToList(); } private static void ReportOnTimedProcess( Action process, string processName) { var stopwatch = new Stopwatch(); stopwatch.Start(); process(); stopwatch.Stop(); Console.WriteLine("Process {0} took {1} milliseconds", processName, stopwatch.ElapsedMilliseconds); } } }

    Read the article

  • Managing libraries and imports in a programming language

    - by sub
    I've created an interpreter for a stupid programming language in C++ and the whole core structure is finished (Tokenizer, Parser, Interpreter including Symbol tables, core functions, etc.). Now I have a problem with creating and managing the function libraries for this interpreter (I'll explain what I mean with that later) So currently my core function handler is horrible: // Simplified version myLangResult SystemFunction( name, argc, argv ) { if ( name == "print" ) { if( argc < 1 ) { Error('blah'); } cout << argv[ 0 ]; } else if ( name == "input" ) { if( argc < 1 ) { Error('blah'); } string res; getline( cin, res ); SetVariable( argv[ 0 ], res ); } else if ( name == "exit ) { exit( 0 ); } And now think of each else if being 10 times more complicated and there being 25 more system functions. Unmaintainable, feels horrible, is horrible. So I thought: How to create some sort of libraries that contain all the functions and if they are imported initialize themselves and add their functions to the symbol table of the running interpreter. However this is the point where I don't really know how to go on. What I wanted to achieve is that there is e.g.: an (extern?) string library for my language, e.g.: string, and it is imported from within a program in that language, example: import string myString = "abcde" print string.at( myString, 2 ) # output: c My problems: How to separate the function libs from the core interpreter and load them? How to get all their functions into a list and add it to the symbol table when needed? What I was thinking to do: At the start of the interpreter, as all libraries are compiled with it, every single function calls something like RegisterFunction( string namespace, myLangResult (*functionPtr) ); which adds itself to a list. When import X is then called from within the language, the list built with RegisterFunction is then added to the symbol table. Disadvantages that spring to mind: All libraries are directly in the interpreter core, size grows and it will definitely slow it down.
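
    A minimal sketch of the RegisterFunction idea described above (the names myLangResult and RegisterFunction follow the post; the callback signature, the registry, and string_at are illustrative assumptions):

        #include <cstdlib>
        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        struct myLangResult { int status; };

        // Every native function receives the interpreter's argument list.
        typedef myLangResult (*NativeFn)(int argc, const std::vector<std::string> &argv);

        // One global registry, filled at interpreter start-up.
        static std::map<std::string, NativeFn> g_nativeFunctions;

        void RegisterFunction(const std::string &qualifiedName, NativeFn fn)
        {
            g_nativeFunctions[qualifiedName] = fn;   // e.g. "string.at"
        }

        // Example library function: string.at(str, index)
        static myLangResult string_at(int argc, const std::vector<std::string> &argv)
        {
            if (argc >= 2)
                std::cout << argv[0].at(std::atoi(argv[1].c_str())) << std::endl;
            myLangResult r = { 0 };
            return r;
        }

        int main()
        {
            RegisterFunction("string.at", &string_at);

            // On `import string`, the interpreter would copy every registry entry
            // whose key starts with "string." into the current symbol table.
            std::vector<std::string> args;
            args.push_back("abcde");
            args.push_back("2");
            g_nativeFunctions["string.at"](static_cast<int>(args.size()), args);  // prints: c
            return 0;
        }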

    Read the article

  • Problem when using a compiled .a in simulator mode

    - by Paska
    Hi all, I have a .a library that runs only in device mode. My SDK version is 4.2 with Xcode 3.2.x. In the simulator it compiles correctly with no warnings and no errors, but at run time (in the simulator) it crashes with this error: Detected an attempt to call a symbol in system libraries that is not present on the iPhone: strtod$UNIX2003 called from function pj_init in image MyAPP. If you are encountering this problem running a simulator binary within gdb, make sure you 'set start-with-shell off' first Program received signal: “SIGABRT”. I tried to clean, rebuild and set "set start-with-shell off" from the terminal this way: cd ~ echo '' >> .gdbinit echo 'set start-with-shell 0' >> .gdbinit I restarted everything, but nothing changed; the problem won't go away! Is there any tag or property that I forgot to set in the options? Under Other Linker Flags there is only "-ObjC". It's very important to solve this issue... any ideas, please? Thanks, A EDIT: It's my own lib, compiled in simulator mode! EDIT: It runs only with Simulator 4.1. It doesn't work with iPhone 4.0, 4.2 and iPad 3.2 (all in the simulator).

    Read the article

  • Release edit in QTableWidget Cell

    - by Schomin
    Basically I am trying to give the Enter key the same functionality the Return key has when editing a cell in a QTableWidget. If I'm editing a cell and Enter is pressed, I want it to jump out of editing that cell just like Return does. It feels like I've literally tried everything; I've even tried passing a Return press event to QCoreApplication. It seems that if you're editing a cell and you press a key that should trigger an action, the action doesn't fire. That seems to be the problem, and I'm not sure how to get around it. I've been setting up all of my keyboard shortcuts for this program as actions because they seem easier to set up. Is there another way to do this that would allow the key event to happen while editing a cell? Can anyone help out with this? Thank you in advance. I tried this, but it didn't work for me: http://stackoverflow.com/questions/518447/how-can-i-tell-a-qtablewidget-to-end-editing-a-cell
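
    A minimal, untested sketch of one approach (an assumption, not from the original post): remap the keypad Enter key into a Return key press so both end the edit the same way. Since the cell editor widget is what actually receives the keystroke, the filter likely needs to be installed on that editor (for example from an item delegate's createEditor()) rather than on the table itself.

        #include <QApplication>
        #include <QEvent>
        #include <QKeyEvent>
        #include <QObject>

        class EnterToReturnFilter : public QObject
        {
        public:
            explicit EnterToReturnFilter(QObject *parent = 0) : QObject(parent) {}

        protected:
            bool eventFilter(QObject *watched, QEvent *event)
            {
                if (event->type() == QEvent::KeyPress) {
                    QKeyEvent *key = static_cast<QKeyEvent *>(event);
                    if (key->key() == Qt::Key_Enter) {              // keypad Enter
                        QKeyEvent remapped(QEvent::KeyPress, Qt::Key_Return,
                                           key->modifiers());
                        QApplication::sendEvent(watched, &remapped); // behave like Return
                        return true;                                 // swallow the original
                    }
                }
                return QObject::eventFilter(watched, event);
            }
        };

        // Usage (hypothetical): in a delegate's createEditor(),
        //   editor->installEventFilter(new EnterToReturnFilter(editor));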

    Read the article

  • Why Does My Vector<PEVENTLOGRECORD> Mysteriously Get Cleared?

    - by Eric
    Hello everyone, I am making a program that reads and stores data from Windows EventLog files (.evt) in C++. I am using the calls OpenBackupEventLog(ServerName, FileName) and ReadEventLog(...). Also using this: PEVENTLOGRECORD Anyway, without supplying all of the code, here is the basic idea: 1. I get a handle to the .evt file using OpenBackupEventLog() and passing in a file name. 2. I then use ReadEventLog() to fill up a buffer with an unknown number of EventLog messages. 3. I traverse through the buffer and add each message to a vector 4. I keep filling up buffers (repeat steps 2 and 3) until I reach the end of the file. Here is my code for filling the vector: vector<PEVENTLOGRECORD> allRecords; while(_status == ERROR_SUCCESS) { if(!ReadEventLog(...)) CheckStatus(); else FillVectorFromBuffer(allRecords) } // Function FillVectorFromBuffer FillVectorFromBuffer(vector(PEVENTLOGRECORD) &allRecords) { int bytesExamined = 0; PBYTE pRecord = (PBYTE)_lpBuffer; // This is one of the params in ReadEventLog() while(bytesExamined < _pnBytesRead) // Another param from ReadEventLog { PEVENTLOGRECORD currentRecord = (PEVENTLOGRECORD)(pRecord); allRecords.push_back(currentRecord); pRecord += currentRecord->Length; bytesExamined += currentRecord->Length; } } Anyway, whenever I run this, it will get all the EventLogs in the file, and the vector will have everything I want it to. But as soon as this line: if(!ReadEventLog()) gets called and returns true (aka ReadEventLog() returns false), then every field in my vector gets set to zero. The vector will still contain the correct number of elements, it's just that all of the fields in the PEVENTLOGRECORD struct are now zero. Anyone with better debugging experience have any ideas? Thanks.
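
    A minimal sketch of one explanation and workaround (my guess, not confirmed): the vector holds raw PEVENTLOGRECORD pointers into _lpBuffer, and the next ReadEventLog call overwrites or reuses that buffer, so the memory the pointers refer to changes underneath the vector. Deep-copying each record keeps it valid after the buffer is reused; each copied element then starts with an EVENTLOGRECORD header that can be cast back when read.

        #include <windows.h>
        #include <vector>

        // Copy each record out of the read buffer so it survives later ReadEventLog calls.
        void CopyRecordsFromBuffer(const BYTE *buffer, DWORD bytesRead,
                                   std::vector<std::vector<BYTE> > &allRecords)
        {
            DWORD bytesExamined = 0;
            while (bytesExamined < bytesRead) {
                const EVENTLOGRECORD *rec =
                    reinterpret_cast<const EVENTLOGRECORD *>(buffer + bytesExamined);
                // Length covers the fixed header plus the variable-length strings/data.
                allRecords.push_back(std::vector<BYTE>(
                    buffer + bytesExamined,
                    buffer + bytesExamined + rec->Length));
                bytesExamined += rec->Length;
            }
        }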

    Read the article

  • Using Loops for prompts with If/Else/Elsif

    - by Dante
    I started with: puts "Hello there, and what's your favorite number?" favnum = gets.to_i puts "Your favorite number is #{favnum}?" " A better favorite number is #{favnum + 1}!" puts "Now, what's your favorite number greater than 10?" favnumOverTen = gets.to_i if favnumOverTen < 10 puts "Hey! I said GREATER than 10! Try again buddy." else puts "Your favorite number great than 10 is #{favnumOverTen}?" puts "A bigger and better number over 10 is #{favnumOverTen * 10}!" puts "It's literally 10 times better!" end That worked fine, but if the user entered a number less than 10 the program ended. I want the user to be prompted to try again until they enter a number greater than 10. Am I supposed to do that with a loop? Here's what I took a swing at, but clearly it's wrong: puts "Hello there, and what's your favorite number?" favnum = gets.to_i puts "Your favorite number is #{favnum}?" " A better favorite number is #{favnum + 1}!" puts "Now, what's your favorite number greater than 10?" favnumOverTen = gets.to_i if favnumOverTen < 10 loop.do puts "Hey! I said GREATER than 10! Try again buddy." favnumOverTen = gets.to_i until favnumOverTen > 10 else puts "Your favorite number great than 10 is #{favnumOverTen}?" puts "A bigger and better number over 10 is #{favnumOverTen * 10}!" puts "It's literally 10 times better!" end

    Read the article

  • What is a good approach to preloading data?

    - by Bob Horn
    Are there best practices out there for loading data into a database, to be used with a new installation of an application? For example, for application foo to run, it needs some basic data before it can even be started. I've used a couple options in the past: TSQL for every row that needs to be preloaded: IF NOT EXISTS (SELECT * FROM Master.Site WHERE Name = @SiteName) INSERT INTO [Master].[Site] ([EnterpriseID], [Name], [LastModifiedTime], [LastModifiedUser]) VALUES (@EnterpriseId, @SiteName, GETDATE(), @LastModifiedUser) Another option is a spreadsheet. Each tab represents a table, and data is entered into the spreadsheet as we realize we need it. Then, a program can read this spreadsheet and populate the DB. There are complicating factors, including the relationships between tables. So, it's not as simple as loading tables by themselves. For example, if we create Security.Member rows, then we want to add those members to Security.Role, we need a way of maintaining that relationship. Another factor is that not all databases will be missing this data. Some locations will already have most of the data, and others (that may be new locations around the world), will start from scratch. Any ideas are appreciated.

    Read the article

  • LoadError in Ruby

    - by wilhelmtell
    I'm having issues requiring 'digest/sha1'. ~$ ./configure --prefix=$HOME/usr --program-suffix=19 --enable-shared ~$ make ~$ make install ~$ irb19 irb(main):001:0> require 'digest/sha1' LoadError: dlopen(/Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle, 9): Symbol not found: _rb_Digest_SHA1_Finish Referenced from: /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle Expected in: flat namespace - /Users/matan/usr/lib/ruby19/1.9.1/i386-darwin9.8.0/digest/sha1.bundle from (irb):1:in `require' from (irb):1 from /Users/matan/usr/bin/irb19:12:in `<main>' irb(main):002:0> I know some standard modules require fine, while others don't. If i'd say require 'yaml' or even require 'digest' then that works fine. I am using OS X 10.5.8, with Ruby 1.9.1-p378. The system-wide install of Ruby 1.8.6 works fine. Just last week I uninstalled Ruby and re-installed it. When I first installed Ruby I installed it in a similar manner, from source prefixed at my local $HOME/usr directory. I tried removing each and every file make install installs, then re-installing, but that didn't help. Do you have an idea what the issue is and how to resolve it?

    Read the article

  • Custom Server to communicate with my software?

    - by Zachary Brown
    I am working on a major project that I need for work, and it requires software validation. I would like this to be handled by a custom server I will write in Python; this server will be the "gateway" between the user and product activation. The software will be purchased from other companies under volume licensing. This is what I need the server to do: 1) The user clicks to activate their software. (Easy, all is good so far.) 2) The software gets the distributor's id from another online server. (This is also easy, and already coded.) 3) The software then asks my custom server whether the distributor is allowed to activate any more copies of the software. 4) The server checks (an online encoded text file) to see whether the distributor can or can't. If they can, it tells the software that registration can proceed, at which point the software passes the software serial number to the server. I have done my best to explain what I am trying to accomplish, but if something is not quite clear, please let me know. Thanks to all members of Stackoverflow.com for the help in the past, and to those who will help me now. I am using Python 2.6 on Win. XP Home Edition.

    Read the article

  • How do I restart MySQL, and where is the my.cnf file?

    - by dorelal
    I am using a Snow Leopard Mac. I installed MySQL on my machine using the instructions mentioned here. Everything works great. However, I have two questions. 1) Where is the my.cnf file? I searched the whole file system and found nothing. Is it possible that there is no my.cnf and MySQL works with default values? If so, then I should probably create my.cnf at /etc/mysql. Is that right? 2) How do I restart the server? I know it gets started when I restart my machine, but mysqld_safe does not let me restart the server. Here is what the plist looks like: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>KeepAlive</key> <true/> <key>Label</key> <string>com.mysql.mysqld</string> <key>Program</key> <string>/usr/local/mysql/bin/mysqld_safe</string> <key>RunAtLoad</key> <true/> <key>UserName</key> <string>mysql</string> <key>WorkingDirectory</key> <string>/usr/local/mysql</string> </dict> </plist>

    Read the article

  • Access denied from another thread

    - by Lobuno
    Hello! In a program I spawn a thread ("the working thread"). Here I copy some files, write some data to a database and, eventually, delete some other files or directories. Everything works fine. The problem is that I decided to move the deletion to another thread. So the working thread now copies the files or directories and writes to the database, and, if some other files need to be deleted, this thread spawns another thread and that second thread deletes the needed files or directories. The problem is that the deletion used to work 100% of the time when done in the working thread; now that the same is done in the secondary thread, I sometimes get an "Access denied" error and the files cannot be deleted. And no, the working thread is definitely NOT accessing the files and directories to be deleted at that moment. Sometimes (but not always) the main thread is impersonating some user, so if needed the deleting thread also runs under impersonation to grant the permissions needed to delete the files, so that should not be the problem. Does anybody have a clue why this could be happening?

    Read the article

  • python crc32 woes

    - by lazyr
    I'm writing a Python program to extract data from the middle of a 6 GB bz2 file. A bzip2 file is made up of independently decompressible blocks of data, so I only need to find a block (they are delimited by magic bits), then create a temporary one-block bzip2 file from it in memory, and finally pass that to the bz2.decompress function. Easy, no? The bzip2 format has a crc32 checksum for the file at the end. No problem, binascii.crc32 to the rescue. But wait. The data to be checksummed does not necessarily end on a byte boundary, and the crc32 function operates on a whole number of bytes. My plan: use the binascii.crc32 function on all but the last byte, and then a function of my own to update the computed crc with the last 1-7 bits. But hours of coding and testing have left me bewildered, and my puzzlement can be boiled down to this question: how come crc32("\x00") is not 0x00000000? Shouldn't it be, according to the Wikipedia article? You start with 0b00000000 and pad with 32 0's, then do polynomial division with 0x04C11DB7 until there are no ones left in the first 8 bits, which is immediately. Your last 32 bits is the checksum, and how can that not be all zeroes? I've searched Google for answers and looked at the code of several crc32 implementations without finding any clue as to why this is so.

    Read the article

  • Callback function and function pointer trouble in C++ for a BST

    - by Brendon C.
    I have to create a binary search tree which is templated and can deal with any data types, including abstract data types like objects. Since it is unknown what types of data an object might have and which data is going to be compared, the client side must create a comparison function and also a print function (because it is not known which data has to be printed). I have edited some C code which I was directed to and tried to template it, but I cannot figure out how to configure the client display function. I suspect the 'tree_node' member of the BinarySearchTree class has to be passed in, but I am not sure how to do this. For this program I'm creating an integer binary search tree and reading data from a file. Any help on the code or the problem would be greatly appreciated :) Main.cpp #include "BinarySearchTreeTemplated.h" #include <iostream> #include <fstream> #include <string> using namespace std; /*Comparison function*/ int cmp_fn(void *data1, void *data2) { if (*((int*)data1) > *((int*)data2)) return 1; else if (*((int*)data1) < *((int*)data2)) return -1; else return 0; } static void displayNode() //<--------NEED HELP HERE { if (node) cout << " " << *((int)node->data) } int main() { ifstream infile("rinput.txt"); BinarySearchTree<int> tree; while (true) { int tmp1; infile >> tmp1; if (infile.eof()) break; tree.insertRoot(tmp1); } return 0; } BinarySearchTree.h (a bit too big to format here) http://pastebin.com/4kSVrPhm
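
    A minimal sketch of a print callback written in the same style as cmp_fn (the void* signature is an assumption, since the templated header is not shown here):

        #include <iostream>

        /* Print callback for int data, mirroring the void* convention of cmp_fn. */
        static void displayNode(void *data)
        {
            if (data)
                std::cout << " " << *static_cast<int *>(data);
        }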

    Read the article
