Search Results

  • Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterruptedException, because the persistent data could be in an inconsistent state. My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next url. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this: ^C Interrupted; stopping... // indicates my interrupt handler ran Traceback (most recent call last): File "crawler_test.py", line 154, in <module> main() ... File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline data = recv(1) socket.error: [Errno 4] Interrupted system call and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?
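    A common workaround, sketched below in Python (the helper name and module-level flag are illustrative, not from the question), is to keep the flag-setting handler and then either ask the OS to restart interrupted system calls via signal.siginterrupt, or catch EINTR around the blocking call and retry:

        import errno
        import signal
        import socket

        stop_requested = False  # flag set by the handler, checked by the crawler's main loop

        def handle_sigint(signum, frame):
            global stop_requested
            stop_requested = True

        signal.signal(signal.SIGINT, handle_sigint)
        # Option 1: have interrupted system calls restart instead of failing with EINTR.
        signal.siginterrupt(signal.SIGINT, False)

        # Option 2: retry the blocking call when the handler interrupts it.
        def interruptible_recv(sock, size):
            while True:
                try:
                    return sock.recv(size)
                except socket.error as e:
                    if e.args[0] == errno.EINTR and not stop_requested:
                        continue  # signal was handled; keep waiting for data
                    raise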

    Read the article

  • Problem with relative file path

    - by Richard Knop
    So here is my program, which works ok: import java.io.FileReader; import java.io.BufferedReader; import java.io.IOException; import java.util.Scanner; import java.util.Locale; public class ScanSum { public static void main(String[] args) throws IOException { Scanner s = null; double sum = 0; try { s = new Scanner(new BufferedReader(new FileReader("D:/java-projects/HelloWorld/bin/usnumbers.txt"))); s.useLocale(Locale.US); while (s.hasNext()) { if (s.hasNextDouble()) { sum += s.nextDouble(); } else { s.next(); } } } finally { s.close(); } System.out.println(sum); } } As you can see, I am using absolute path to the file I am reading from: s = new Scanner(new BufferedReader(new FileReader("D:/java-projects/HelloWorld/bin/usnumbers.txt"))); The problem arises when I try to use the relative path: s = new Scanner(new BufferedReader(new FileReader("usnumbers.txt"))); I get an error: Exception in thread "main" java.lang.NullPointerException at ScanSum.main(ScanSum.java:24) The file usnumbers.txt is in the same directory as the ScanSum.class file: D:/java-projects/HelloWorld/bin/ScanSum.class D:/java-projects/HelloWorld/bin/usnumbers.txt How could I solve this?

    Read the article

  • Downloading a file from a PHP page in C#

    - by FoxyShadoww
    Okay, we have a PHP script that creates an download link from a file and we want to download that file via C#. This works fine with progress etc but when the PHP page gives an error the program downloads the error page and saves it as the requested file. Here is the code we have atm: PHP Code: <?php $path = 'upload/test.rar'; if (file_exists($path)) { $mm_type="application/octet-stream"; header("Pragma: public"); header("Expires: 0"); header("Cache-Control: must-revalidate, post-check=0, pre-check=0"); header("Cache-Control: public"); header("Content-Description: File Transfer"); header("Content-Type: " . $mm_type); header("Content-Length: " .(string)(filesize($path)) ); header('Content-Disposition: attachment; filename="'.basename($path).'"'); header("Content-Transfer-Encoding: binary\n"); readfile($path); exit(); } else { print 'Sorry, we could not find requested download file.'; } ?> C# Code: private void btnDownload_Click(object sender, EventArgs e) { string url = "http://***.com/download.php"; WebClient client = new WebClient(); client.DownloadFileCompleted += new AsyncCompletedEventHandler(client_DownloadFileCompleted); client.DownloadProgressChanged += new DownloadProgressChangedEventHandler(ProgressChanged); client.DownloadFileAsync(new Uri(url), @"c:\temp\test.rar"); } private void ProgressChanged(object sender, DownloadProgressChangedEventArgs e) { progressBar.Value = e.ProgressPercentage; } void client_DownloadFileCompleted(object sender, AsyncCompletedEventArgs e) { MessageBox.Show(print); }

    Read the article

  • .NET multithreading, volatile and memory model

    - by fedor-serdukov
    Assume that we have the following code: class Program { static volatile bool flag1; static volatile bool flag2; static volatile int val; static void Main(string[] args) { for (int i = 0; i < 10000 * 10000; i++) { if (i % 500000 == 0) { Console.WriteLine("{0:#,0}",i); } flag1 = false; flag2 = false; val = 0; Parallel.Invoke(A1, A2); if (val == 0) throw new Exception(string.Format("{0:#,0}: {1}, {2}", i, flag1, flag2)); } } static void A1() { flag2 = true; if (flag1) val = 1; } static void A2() { flag1 = true; if (flag2) val = 2; } } } It fails! The main question is why... I suppose the CPU reorders the flag1 = true; write and the if (flag2) read, but the flag1 and flag2 variables are marked as volatile fields...

    Read the article

  • Installer downloads wrong version of SQL Server Compact Edition

    - by MartinStettner
    I have a Visual Studio setup project which defines SQL Server 3.5 as a prerequisite. I also set it up to download the required files from the source (i.e. Microsoft). My project uses SQL Server 3.5 SP1. If I create the installer on my development machine, everything works fine. When the installer is built on the build machine, it downloads the "old" 3.5 version (without SP1), which makes my application crash. (If I install SP1 on the test system manually, the application just works fine...) I installed both Visual Studio 2008 SP1 and SQL Server 3.5 SP1 on the build machine. This seems to be some issue with the prerequisite packages, but they look the same on the development and the build machines. Any idea what goes wrong here? EDIT: Ok, I rechecked it and the package descriptions aren't the same. Oddly enough, the .msi file is the right one on the build machine but the old one on the development machine, so I guess if I tried to build an installer which includes the prerequisites, I'd get the wrong result on the dev machine and the right one on the build machine. Extended question: How are the prerequisite packages (under C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bootstrapper\Packages\SQL Server Compact Edition) meant to be updated to SP1? Is the SQL Server CE SP1 installer responsible for this?

    Read the article

  • The rules to connect to a web service through SSL and certificates

    - by blgnklc
    There is a web service running on Tomcat on a server. It is built on Java Servlets and listens for callers on an SSL-enabled HTTP port, so its web service address looks like: https://172.29.12.12/axis/services/XYZClient?wsdl I want to connect to this web service from a Windows application built on the .NET Framework. When I try to connect from my computer, I get some specific errors. First I get a proxy authentication error, so I added some new lines to my code: Dim cr As System.Net.NetworkCredential = New System.Net.NetworkCredential("xname", "xsurname", "xdomainname") Dim myProxy As New WebProxy("http://mar.xxxyyy.com", True) myProxy.Credentials = cr Secondly, after these modifications it says "bad request", and I have not been able to get past this error. I also tried to connect to the web service on the same computer: I copied my executable program to the computer where the web service runs, and the error was: The underlying connection was closed: Could not establish trust relationship for SSL/TLS secure channel PS: When I try to reach the web service using Internet Explorer, I first see some warnings about accepting an unknown certificate; I click "take me to the web service" and get there without trouble. I want to know the basic elements needed to connect to a web service over SSL - could you please tell me the requirements that I have to satisfy in my Windows project? regards bk

    Read the article

  • C# 4.0 'dynamic' doesn't set ref/out arguments

    - by Buu Nguyen
    I'm experimenting with DynamicObject. One of the things I try to do is setting the values of ref/out arguments, as shown in the code below. However, I am not able to have the values of i and j in Main() set properly (even though they are set correctly in TryInvokeMember()). Does anyone know how to call a DynamicObject object with ref/out arguments and be able to retrieve the values set inside the method? class Program { static void Main(string[] args) { dynamic proxy = new Proxy(new Target()); int i = 10; int j = 20; proxy.Wrap(ref i, ref j); Console.WriteLine(i + ":" + j); // Print "10:20" while expect "20:10" } } class Proxy : DynamicObject { private readonly Target target; public Proxy(Target target) { this.target = target; } public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result) { int i = (int) args[0]; int j = (int) args[1]; target.Swap(ref i, ref j); args[0] = i; args[1] = j; result = null; return true; } } class Target { public void Swap(ref int i, ref int j) { int tmp = i; i = j; j = tmp; } }

    Read the article

  • create an independent hidden process

    - by Jessica
    I'm creating an application with its main window hidden by using the following code: STARTUPINFO siStartupInfo; PROCESS_INFORMATION piProcessInfo; memset(&siStartupInfo, 0, sizeof(siStartupInfo)); memset(&piProcessInfo, 0, sizeof(piProcessInfo)); siStartupInfo.cb = sizeof(siStartupInfo); siStartupInfo.dwFlags = STARTF_USESHOWWINDOW | STARTF_FORCEOFFFEEDBACK | STARTF_USESTDHANDLES; siStartupInfo.wShowWindow = SW_HIDE; if(CreateProcess(MyApplication, "", 0, 0, FALSE, 0, 0, 0, &siStartupInfo, &piProcessInfo) == FALSE) { // blah return 0; } Everything works correctly except my main application (the one calling this code) window loses focus when I open the new program. I tried lowering the priority of the new process but the focus problem is still there. Is there anyway to avoid this? furthermore, is there any way to create another process without using CreateProcess (or any of the API's that call CreateProcess like ShellExecute)? My guess is that my app is losing focus because it was given to the new process, even when it's hidden. To those of you curious out there that will certainly ask the usual "why do you want to do this", my answer is because I have a watchdog process that cannot be a service and it gets started whenever I open my main application. Satisfied? Thanks for the help. Code will be appreciated. Jess.

    Read the article

  • What's the correct way to do a "catch all" error check on an fstream output operation?

    - by Truncheon
    What's the correct way to check for a general error when sending data to an fstream? UPDATE: My main concern regards some things I've been hearing about a delay between output and any data being physically written to the hard disk. My assumption was that the command "save_file_obj << save_str" would only send data to some kind of buffer and that the following check "if (save_file_obj.bad())" would not be any use in determining if there was an OS or hardware problem. I just wanted to know what was the definitive "catch all" way to send a string to a file and check to make certain that it was written to the disk, before carrying out any following actions such as closing the program. I have the following code... int Saver::output() { save_file_handle.open(file_name.c_str()); if (save_file_handle.is_open()) { save_file_handle << save_str.c_str(); if (save_file_handle.bad()) { x_message("Error - failed to save file"); return 0; } save_file_handle.close(); if (save_file_handle.bad()) { x_message("Error - failed to save file"); return 0; } return 1; } else { x_message("Error - couldn't open save file"); return 0; } }

    Read the article

  • Java swing app can't find image

    - by KáGé
    Hello, I'm making a torpedo game for school in java with swing gui, please see the zipped source HERE. I use custom button icons and mouse cursors of images stored in the /bin/resource/graphics/default folder's subfolders, where the root folder is the program's root folder (it will be the root in the final .jar as well I suppose) which apart from "bin" contains a "main" folder with all the classes. The relative path of the resources is stored in MapStruct.java's shipPath and mapPath variables. Now Battlefield.java's PutPanel class finds them all right and sets up its buttons' icons fine, but every other class fail to get their icons, e.g. Table.java's setCursor, which should set the mouse cursor for all its elements for the selected ship's image or Field.java's this.button.setIcon(icon); in the constructor, which should set the icon for the buttons of the "water". I watched with debug what happens, and the images stay null after loading, though the paths seem to be correct. I've also tried to write a test file in the image folder but the method returns a filenotfound exception. I've tried to get the path of the class to see if it runs from the supposed place and it seems it does, so I really can't find the problem now. Could anyone please help me? Thank you.

    Read the article

  • Python 3.3 Webserver restarting problems

    - by IPDGino
    I have made a simple webserver in python, and had some problems with it before as described here: Python (3.3) Webserver script with an interesting error In that question, the answer was to use a While True: loop so that any crashes or errors would be resolved instantly, because it would just start itself again. I've used this for a while, and still want to make the server restart itself every few minutes, but on Linux for some reason it won't work for me. On windows the code below works fine, but on linux it keeps saying Handler class up here ... ... class Server: def __init__(self): self.server_class = HTTPServer self.server_adress = ('MY IP GOES HERE, or localhost', 8080) global httpd httpd = self.server_class(self.server_adress, Handler) self.main() def main(self): if count > 1: global SERVER_UP_SINCE HOUR_CHECK = int(((count - 1) * RESTART_INTERVAL) / 60) SERVER_UPTIME = str(HOUR_CHECK) + " MINUTES" if HOUR_CHECK > 60: minutes = int(HOUR_CHECK % 60) hours = int(HOUR_CHECK // 60) SERVER_UPTIME = ("%s HOURS, %s MINUTES" % (str(hours), str(minutes))) SERVING_ON_ADDR = self.server_adress SERVER_UP_SINCE = str(SERVER_UP_SINCE) SERVER_RESTART_NUMBER = count - 1 print(""" SERVER INFO ------------------------------------- SERVER_UPTIME: %s SERVER_UP_SINCE: %s TOTAL_FILES_SERVED: %d SERVING_ON_ADDR: %s SERVER_RESTART_NUMBER: %s \n\nSERVER HAS RESTARTED """ % (SERVER_UPTIME, SERVER_UP_SINCE, TOTAL_FILES, SERVING_ON_ADDR, SERVER_RESTART_NUMBER)) else: print("SERVER_BOOT=1\nSERVER_ONLINE=TRUE\nRESTART_LOOP=TRUE\nSERVING_ON_ADDR:%s" % str(self.server_adress)) while True: try: httpd.serve_forever() except KeyboardInterrupt: print("Shutting down...") break httpd.shutdown() httpd.socket.close() raise(SystemExit) return def server_restart(): """If you want the restart timer to be longer, replace the number after the RESTART_INTERVAL variable""" global RESTART_INTERVAL RESTART_INTERVAL = 10 threading.Timer(RESTART_INTERVAL, server_restart).start() global count count = count + 1 instance = Server() if __name__ == "__main__": global SERVER_UP_SINCE SERVER_UP_SINCE = strftime("%d-%m-%Y %H:%M:%S", gmtime()) server_restart() Basically, I make a thread to restart it every 10 seconds (For testing purposes) and start the server. After ten seconds it will say File "/home/username/Desktop/Webserver/server.py", line 199, in __init__ httpd = self.server_class(self.server_adress, Handler) File "/usr/lib/python3.3/socketserver.py", line 430, in __init__ self.server_bind() File "/usr/lib/python3.3/http/server.py", line 135, in server_bind socketserver.TCPServer.server_bind(self) File "/usr/lib/python3.3/socketserver.py", line 441, in server_bind self.socket.bind(self.server_address) OSError: [Errno 98] Address already in use As you can see in the except KeyboardInterruption line, I tried everything to make the server stop, and the program stop, but it will NOT stop. But the thing I really want to know is how to make this server able to restart, without giving some wonky errors.
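    The OSError: [Errno 98] Address already in use suggests the previous HTTPServer instance still holds the listening socket when the restart logic constructs a new Server() on the same port. A minimal sketch of one way around it (class and function names below are placeholders, not the question's code): enable SO_REUSEADDR through allow_reuse_address and close the old socket before binding again:

        from http.server import HTTPServer, BaseHTTPRequestHandler

        class ReusableHTTPServer(HTTPServer):
            # Allow the port to be rebound right after the old socket closes,
            # instead of failing with "OSError: [Errno 98] Address already in use".
            allow_reuse_address = True

        def run_server(address=("localhost", 8080), handler=BaseHTTPRequestHandler):
            httpd = ReusableHTTPServer(address, handler)
            try:
                httpd.serve_forever()
            except KeyboardInterrupt:
                print("Shutting down...")
            finally:
                httpd.server_close()  # release the listening socket before any restart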

    Read the article

  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some Dell workstations (running WinXP Pro SP2 & DeepFreeze) for development, but something was recently loaded onto these machines that prevents any OpenGL call from completing (the call locks). I know the code works, as I have tested it on 'clean' machines; I also tested with simple OpenGL apps generated by Dev-C++, which also lock on the Dell machines. I have tried to debug my own apps to see where exactly the GL calls freeze, but there is some global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread (used by ExitThread), preventing me from debugging at all (it causes the debugger, OllyDbg, to go into an access violation reporting loop, or the program to crash if the exception is passed along). The hook: ntdll.ZwQueryInformationProcess 7C90D7E0 B8 9A000000 MOV EAX,9A 7C90D7E5 BA 0003FE7F MOV EDX,7FFE0300 7C90D7EA FF12 CALL DWORD PTR DS:[EDX] 7C90D7EC - E9 0F28448D JMP 09D50000 7C90D7F1 9B WAIT 7C90D7F2 0000 ADD BYTE PTR DS:[EAX],AL 7C90D7F4 00BA 0003FE7F ADD BYTE PTR DS:[EDX+7FFE0300],BH 7C90D7FA FF12 CALL DWORD PTR DS:[EDX] 7C90D7FC C2 1400 RETN 14 7C90D7FF 90 NOP ntdll.ZwQueryInformationToken 7C90D800 B8 9C000000 MOV EAX,9C The messed-up function + call: ntdll.ZwQueryInformationThread 7C90D7F0 8D9B 000000BA LEA EBX,DWORD PTR DS:[EBX+BA000000] 7C90D7F6 0003 ADD BYTE PTR DS:[EBX],AL 7C90D7F8 FE ??? ; Unknown command 7C90D7F9 7F FF JG SHORT ntdll.7C90D7FA 7C90D7FB 12C2 ADC AL,DL 7C90D7FD 14 00 ADC AL,0 7C90D7FF 90 NOP ntdll.ZwQueryInformationToken 7C90D800 B8 9C000000 MOV EAX,9C So firstly, does anyone know what, if anything, would cause OpenGL calls to lock up indefinitely, and whether there are any ways around it? And what would be creating such a hook in kernel memory? Update: After some more fiddling, I have discovered a few more kernel hooks; a lot of them are used to nullify data returned by system information calls (such as the remote debugging port). I also managed to find out that whatever is doing this uses madchook.dll (by madshi), and this DLL is also injected into every running process (this seems to be some anti-debugging code). Also, on the OpenGL side, DirectX seems fine/unaffected (I ran one of the DX 9 demos without problems), so could one of these kernel hooks somehow affect OpenGL?

    Read the article

  • C#/.NET library for source code formatting, like the one used by Stack Overflow?

    - by Lasse V. Karlsen
    I am writing a command line tool to convert Markdown text to html output, which seems easy enough. However, I am wondering how to get nice syntax coloring for embedded code blocks, like the one used by Stack Overflow. Does anyone know either: What library StackOverflow is using or if there's a library out there that I can easily reuse? Basically it would need to have some of the same "intelligence" found in the one that Stack Overflow uses, by basically doing a best-attempt at figuring out the language in use to pick the right keywords. Basically, what I want is for my own program to handle a block like the following: if (a == 0) return true; if (a == 1) return false; // fall-back Markdown Sharp, the library I'm using, by default outputs the above as a simple pre/code html block, with no syntax coloring. I'd like the same type of handling as the formatting on Stack Overflow does, the above contains blue "return" keywords for example. Or, hmm, after checking the source of this Stack Overflow page after adding the code example, I notice that it too is formatted like a simple pre/code block. Is it pure javascript-magic at works here, so perhaps there's no such library? If there's no library that will automagically determine a possible language by the keywords used, is there one that would work if I explicitly told it the language? Since this is "my" markdown-commandline-tool, I can easily add syntax if I need to.

    Read the article

  • VSC++, virtual method at bad address, curious bug

    - by antoon.groenewoud
    Hello, This guy: virtual phTreeClass* GetTreeClass() const { return (phTreeClass*)m_entity_class; } When called, it crashed the program with an access violation, even after a full recompile. All member functions and virtual member functions had correct memory addresses (I hovered the mouse over the methods in debug mode), but this function had a bad memory address: 0xfffffffc. Everything looked okay: the 'this' pointer, and everything works fine up until this function call. This function is also pretty old and I hadn't changed it for a long time. The problem just suddenly popped up after some work, which I commented all out to see what was causing it, without any success. So I removed the virtual, compiled, and it works fine. I added the virtual back, compiled, and it still works fine! I basically changed nothing, and remember that I did do a full recompile earlier and still had the error back then. I wasn't able to reproduce the problem. But now it is back. I didn't change anything. Removing virtual fixes the problem. Sincerely, Antoon

    Read the article

  • What's the best way to resolve this scope problem?

    - by Peter Stewart
    I'm writing a program in Python that uses genetic techniques to optimize expressions. Constructing and evaluating the expression tree is the time consumer, as it can happen billions of times per run, so I thought I'd learn enough C++ to write it and then incorporate it into Python using Cython or ctypes. I've done some searching on Stack Overflow and learned a lot. This code compiles, but leaves the pointers dangling. I tried 'this_node = new Node(...'. It didn't seem to work. And I'm not at all sure how I'd delete all the references, as there would be hundreds. I'd like to use variables that stay in scope, but maybe that's not the C++ way. What is the C++ way? class Node { public: char *cargo; int depth; Node *left; Node *right; }; Node make_tree(int depth) { depth--; if(depth <= 0) { Node tthis_node("value",depth,NULL,NULL); return tthis_node; } else { Node this_node("operator", depth, &make_tree(depth), &make_tree(depth)); return this_node; } };

    Read the article

  • VS 2008 C++ build output?

    - by STingRaySC
    Why when I watch the build output from a VC++ project in VS do I see: 1Compiling... 1a.cpp 1b.cpp 1c.cpp 1d.cpp 1e.cpp [etc...] 1Generating code... 1x.cpp 1y.cpp [etc...] The output looks as though several compilation units are being handled before any code is generated. Is this really going on? I'm trying to improve build times, and by using pre-compiled headers, I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating Code..." message. I do not have "Whole Program Optimization" nor "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above). EDIT: The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps. EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools - Options - Projects and Solutions - Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed if I cancel the build before the "Generating code..." step, the ".obj" files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".

    Read the article

  • How to create a "Shell IDList Array" to support drag-and-drop of virtual files from C# to Windows Ex

    - by JustABill
    I started trying to implement drag-and-drop of virtual files (from a C# 4/WPF app) with this codeplex tutorial. After spending some time trying to figure out a DV_E_FORMATETC error, I figured out I need to support the "Shell IDList Array" data format. But there seems to be next to zero documentation on what this format actually does. After some searching, I found this page on advanced data transfer which said that a Shell IDList Array was a pointer to a CIDA structure. This CIDA structure then contains the number of PIDLs, and a list of offsets to them. So what the hell is a PIDL? After some more searching, this page sort of implies it's a pointer to an ITEMIDLIST, which itself contains a single member that is a list of SHITEMIDs. My next idea was to try dragging a file from another application with virtual files. I just got a MemoryStream back for this format. At least I know what class to provide for the thing, but that doesn't help at all for explaining what to put in it. So now that that's explained, I still have no idea how to create one of these things so that it's valid. There's two real questions here: What is a valid "abID" member for a SHITEMID? These virtual files only exist with my program; will the thing they are dragged to pass the "abID" back later when it executes GetData? Do they have to be system-unique? Why are there two levels of lists; a list of PIDLs and each PIDL has a list of SHITEMIDs? I'm assuming one of them is one for each file, but what's the other one for? Multiple IDs for the same file? Any help or even links that explain what I should be doing would be greatly appreciated.

    Read the article

  • Private member vector of vector dynamic memory allocation

    - by Geoffroy
    Hello, I'm new to C++ (I learned programming with Fortran), and I would like to dynamically allocate the memory for a multidimensional table. This table is a private member variable: class theclass{ public: void setdim(void); private: std::vector < std::vector <int> > thetable; }; I would like to set the dimensions of thetable with the function setdim(): void theclass::setdim(void){ this->thetable.assign(1000,std::vector <int> (2000)); } I have no problem compiling this program, but when I execute it, I get a segmentation fault. The strange thing for me is that this piece of code (see below) does exactly what I want, except that it doesn't use the private member variable of my class: std::vector < std::vector < int > > thetable; thetable.assign(1000,std::vector <int> (2000)); By the way, I have no trouble if thetable is a 1D vector. In theclass: std::vector < int > thetable; and in setdim: this->thetable.assign(1000,2); So my question is: why is there such a difference with "assign" between a local thetable and this->thetable for a 2D vector? And how should I do what I want? Thank you for your help, Best regards, -- Geoffroy

    Read the article

  • Refactoring code/consolidating functions (e.g. nested for-loop order)

    - by bmay2
    Just a little background: I'm making a program where a user inputs a skeleton text, two numbers (lower and upper limit), and a list of words. The outputs are a series of modifications on the skeleton text. Sample inputs: text = "Player # likes @." (replace # with inputted integers and @ with words in list) lower = 1 upper = 3 list = "apples, bananas, oranges" The user can choose to iterate over numbers first: Player 1 likes apples. Player 2 likes apples. Player 3 likes apples. Or words first: Player 1 likes apples. Player 1 likes bananas. Player 1 likes oranges. I chose to split these two methods of outputs by creating a different type of dictionary based on either number keys (integers inputted by the user) or word keys (from words in the inputted list) and then later iterating over the values in the dictionary. Here are the two types of dictionary creation: def numkey(dict): # {1: ['Player 1 likes apples', 'Player 1 likes...' ] } text, lower, upper, list = input_sort(dict) d = {} for num in range(lower,upper+1): l = [] for i in list: l.append(text.replace('#', str(num)).replace('@', i)) d[num] = l return d def wordkey(dict): # {'apples': ['Player 1 likes apples', 'Player 2 likes apples'..] } text, lower, upper, list = input_sort(dict) d = {} for i in list: l = [] for num in range(lower,upper+1): l.append(text.replace('#', str(num)).replace('@', i)) d[i] = l return d It's fine that I have two separate functions for creating different types of dictionaries but I see a lot of repetition between the two. Is there any way I could make one dictionary function and pass in different values to it that would change the order of the nested for loops to create the specific {key : value} pairs I'm looking for? I'm not sure how this would be done. Is there anything related to functional programming or other paradigms that might help with this? The question is a little abstract and more stylistic/design-oriented than anything.
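    One way to collapse the two functions into one, sketched below with placeholder names (not the question's exact code), is to pick the outer and inner sequences from a single order parameter and build the dictionary with one pair of nested loops:

        from collections import defaultdict

        def build_table(text, lower, upper, words, key="number"):
            """Build {key: [filled-in sentences]}, keyed either by numbers or by words."""
            numbers = range(lower, upper + 1)
            # The requested key decides which sequence drives the outer loop.
            outer, inner = (numbers, words) if key == "number" else (words, numbers)
            table = defaultdict(list)
            for k in outer:
                for v in inner:
                    num, word = (k, v) if key == "number" else (v, k)
                    table[k].append(text.replace("#", str(num)).replace("@", word))
            return dict(table)

        # Usage, mirroring the sample inputs:
        # build_table("Player # likes @.", 1, 3, ["apples", "bananas", "oranges"], key="word")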

    Read the article

  • C++ Problems with #import of .NET out-of-proc server.

    - by jm
    In a C++ program, I am trying to #import the TLB of a .NET out-of-proc server. I get errors like: z:\server.tlh(111) : error C2146: syntax error : missing ';' before identifier 'GetType' z:\server.tlh(111) : error C2501: 'TypePtr' : missing storage-class or type specifiers z:\server.tli(74) : error C2143: syntax error : missing ';' before 'tag::id' z:\server.tli(74) : error C2433: 'TypePtr' : 'inline' not permitted on data declarations z:\server.tli(74) : error C2501: '_TypePtr' : missing storage-class or type specifiers z:\server.tli(74) : fatal error C1004: unexpected end of file found The TLH looks like: ... _bstr_t GetToString ( ); VARIANT_BOOL Equals ( const _variant_t & obj ); long GetHashCode ( ); _TypePtr GetType ( ); long Open ( ); ... I am not really interested in having the base .NET object methods like GetType(), Equals(), etc., but GetType() seems to be causing problems. Some Google research indicates I could #import MSCORLIB.TLB (or put it in the path), but I can't get that to compile either. Any tips?

    Read the article

  • Implementing gravity for a projectile - delta time issue

    - by Murat Nafiz
    I'm trying to implement simple projectile motion in Android (with OpenGL), and I want to add gravity to my world to simulate a ball dropping realistically. I simply update my renderer with a delta time which is calculated by: float deltaTime = (System.nanoTime()-startTime) / 1000000000.0f; startTime = System.nanoTime(); screen.update(deltaTime); In my screen.update(deltaTime) method: if (isballMoving) { golfBall.updateLocationAndVelocity(deltaTime); } And in the golfBall.updateLocationAndVelocity(deltaTime) method: public final static double G = -9.81; double vz0 = getVZ0(); // Gets initial velocity(z) double z0 = getZ0(); // Gets initial height double time = getS(); // gets total time from act begin double vz = vz0 + G * deltaTime; // calculate new velocity(z) double z = z0 - vz0 * deltaTime - 0.5 * G * deltaTime * deltaTime; // calculate new position time = time + deltaTime; // Update time setS(time); // set new total time Now here is the problem: if I set deltaTime statically to 0.07, the animation runs normally. But since the update() method runs as fast as it can, the length and therefore the speed of the ball varies from device to device. If I leave deltaTime untouched and run the program (deltaTime is between 0.01 and 0.02 on my test devices), the animation length and the speed of the ball are the same on different devices, but the animation is so SLOW! What am I doing wrong?
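    The slowdown with real delta times likely comes from recomputing velocity and position from the initial values using only the last frame's delta, so each frame only ever advances the ball one small step from its starting state. The usual fix is to integrate from the current state every frame; a minimal per-frame (semi-implicit Euler) sketch, written in Python rather than the question's Java and with placeholder names:

        G = -9.81  # gravitational acceleration along z, in m/s^2

        class Ball:
            def __init__(self, z0, vz0):
                self.z = z0    # current height, carried over between frames
                self.vz = vz0  # current vertical velocity, carried over between frames

            def update(self, delta_time):
                # Integrate from the current state, not from the initial values, so
                # the trajectory is independent of how large each frame's delta is.
                self.vz += G * delta_time
                self.z += self.vz * delta_time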

    Read the article

  • How to get bit rotation function to accept any bit size?

    - by calccrypto
    I have these two functions I got from some other code: def ROR(x, n): mask = (2L**n) - 1 mask_bits = x & mask return (x >> n) | (mask_bits << (32 - n)) def ROL(x, n): return ROR(x, 32 - n) and I wanted to use them in a program where 16-bit rotations are required. However, there are also other functions that require 32-bit rotations, so I wanted to keep the 32 as a parameter, and I got: def ROR(x, n, bits = 32): mask = (2L**n) - 1 mask_bits = x & mask return (x >> n) | (mask_bits << (bits - n)) def ROL(x, n, bits = 32): return ROR(x, bits - n) However, the answers came out wrong when I tested this version. Yet the values come out correctly when the code is: def ROR(x, n): mask = (2L**n) - 1 mask_bits = x & mask return (x >> n) | (mask_bits << (16 - n)) def ROL(x, n,bits): return ROR(x, 16 - n) What is going on and how do I fix this?
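    A likely culprit, assuming the goal is a plain n-bit rotation: the parameterized ROL never forwards bits to ROR, so it falls back to the default of 32 and rotates in a 32-bit word even when a 16-bit rotation was wanted. A sketch that forwards the width and also masks the input to the requested number of bits (function names are illustrative, not the original code):

        def ror(x, n, bits=32):
            # Rotate the low `bits` bits of x right by n places.
            mask = (1 << bits) - 1
            x &= mask
            n %= bits
            return ((x >> n) | (x << (bits - n))) & mask

        def rol(x, n, bits=32):
            # Rotate left by delegating to ror, forwarding the same width.
            return ror(x, bits - (n % bits), bits)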

    Read the article

  • Memory Profiling with DotTrace Questions

    - by cam
    I ran dotTrace on my application (which is having some issues). IntPtr System.Windows.Forms.UnsafeNativeMethods.CallWindowProc(IntPtr, IntPtr, Int32, IntPtr, IntPtr) Void System.Windows.Forms.UnsafeNativeMethods.WaitMessage() Are the two main functions that came up, taking about 94% of the application time. Since I didn't know what these two functions were, I ran through my code line by line. It runs smooth and efficiently until a point where it just hangs. "newFrm.Show()". The newFrm only contains a textbox. The larger the file I load into the text box (it's a notepad program), the longer it takes. Now normally this makes sense, but it takes about 30 seconds for a 167 kB file. Now I'm not sure what to do. It runs incredibly slow/stops functioning when you load a textfile and try to resize the window containing the text file too. Then I realized that it is only struggling to open text files with a long string of hex inside (ie) "XX-XX-XX-" etc. With other similarly sized files it struggles with resizing somewhat, but opens within a couple seconds. Does this have something to do with the textbox properties? I've set it to multiline and set maximum characters to 0 (so unlimited). How do I solve this issue? Is there some way I can see what is being called in those functions?

    Read the article

  • C# / XNA - Loading objects into memory - how does it work?

    - by carl
    Hello, I'm starting with C# and XNA. In the "Update" method of the "Game" class I have this code: t = Texture2D.FromFile( [...] ); // t is a 'Texture2D t;' which loads a small image. The "Update" method works like a loop, so this code is called many times per second. Now, when I run my game, it takes 95MB of RAM and slowly climbs to about 130MB (due to the code I've posted; without this code it stays at 95MB), then drops immediately to about 100MB (garbage collection?), climbs slowly to 130MB again, drops immediately to 100MB, and so on. So my first question: can you explain why (and how) it works like that? I've found that if I change the code to: t.Dispose() t = Texture2D.FromFile( [...] ); it behaves like this: it starts at 95MB, slowly climbs to about 101MB (due to the code), and stays at that level. I don't understand why it takes this extra 6MB (101-95)...? I want it to work like this: load the image, release it from memory, load the image, release it, and so on, so the program should always take 95MB (it takes 95MB when the image is loaded only once). What instructions should I use? If it matters, the size of the image is about 10KB. Thank you!

    Read the article

  • How to set Delphi WebBrowser base directory different than the HTML location

    - by Steve
    I have a Delphi program that creates HTML files. Later, when a button is pressed, a TWebBrowser is created and WebBrowser.Navigate displays the HTML page. Is there any way to set the WebBrowser's "default directory" so it will always be the location of the Delphi executable, not the HTML file? I do not want to set the base value in the HTML to a hardcoded path, because then when the HTML is run from another Delphi exe the images are not found. For example: if the exe is run from D:\data\delphi\pgm.exe then the base location is D:\data\delphi\ and the jpgs are at D:\data\delphi\jpgs\ but if the exe is run from C:\stuff\pgm.exe I want the base location to be C:\stuff\ and the jpgs to be at C:\stuff\jpgs\ So I cannot write a line in the HTML with the base location, since when it is run from another exe it would point to the wrong location for that exe. I either need to set the base location when I create the WebBrowser, before I load the HTML, or I need a way to pass the location into the WebBrowser so I can then set the base location there. Sorry for being so long-winded, but I could not figure out how to say what I needed.

    Read the article
