Search Results

Search found 42468 results on 1699 pages for 'default program'.

Page 574/1699 | < Previous Page | 570 571 572 573 574 575 576 577 578 579 580 581  | Next Page >

  • How to eliminate tearing from animation?

    - by MusiGenesis
    I'm running an animation in a WinForms app at 18.66666... frames per second (it's synced with music at 140 BPM, which is why the frame rate is weird). Each cel of the animation is pre-calculated, and the animation is driven by a high-resolution multimedia timer. The animation itself is smooth, but I am seeing a significant amount of "tearing", or artifacts that result from cels being caught partway through a screen refresh. When I take the set of cels rendered by my program and write them out to an AVI file, and then play the AVI file in Windows Media Player, I do not see any tearing at all. I assume that WMP plays the file smoothly because it uses DirectX (or something else) and is able to synchronize the rendering with the screen's refresh activity. It's not changing the frame rate, as the animation stays in sync with the audio. Is this why WMP is able to render the animation without tearing, or am I missing something? Is there any way I can use DirectX (or something else) in order to enable my program to be aware of where the current scan line is, and if so, is there any way I can use that information to eliminate tearing without actually using DirectX for displaying the cels? Or do I have to fully use DirectX for rendering in order to deal with this problem? Update: forgot a detail. My app renders each cel onto a PictureBox using Graphics.DrawImage. Is this significantly slower than using BitBlt, such that I might eliminate at least some of the tearing by using BitBlt?

    Read the article

  • Atomic swap in GNU C++

    - by Steve
    I want to verify that my understanding is correct. This kind of thing is tricky so I'm almost sure I am missing something. I have a program consisting of a real-time thread and a non-real-time thread. I want the non-RT thread to be able to swap a pointer to memory that is used by the RT thread. From the docs, my understanding is that this can be accomplished in g++ with:

        // global
        Data *rt_data;

        Data *swap_data(Data *new_data)
        {
        #ifdef __GNUC__
            // Atomic pointer swap.
            Data *old_d = __sync_lock_test_and_set(&rt_data, new_data);
        #else
            // Non-atomic, cross your fingers.
            Data *old_d = rt_data;
            rt_data = new_data;
        #endif
            return old_d;
        }

    This is the only place in the program (other than initial setup) where rt_data is modified. When rt_data is used in the real-time context, it is copied to a local pointer. For old_d, later on when it is sure that the old memory is not used, it will be freed in the non-RT thread. Is this correct? Do I need volatile anywhere? Are there other synchronization primitives I should be calling? By the way I am doing this in C++, although I'm interested in whether the answer differs for C. Thanks ahead of time.
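
    For comparison, here is a minimal sketch of the same swap written against C++11's std::atomic (an assumption on my part; the question above targets the plain GNU builtins):

        #include <atomic>

        struct Data { /* payload */ };

        std::atomic<Data*> rt_data(nullptr);

        // Non-RT thread: publish a new buffer and get the old one back for later cleanup.
        Data *swap_data(Data *new_data)
        {
            // acq_rel: the RT thread's next load sees a fully constructed object,
            // and we observe the pointer it was previously using.
            return rt_data.exchange(new_data, std::memory_order_acq_rel);
        }

        // RT thread: take one local snapshot per cycle and use only that.
        void rt_cycle()
        {
            Data *local = rt_data.load(std::memory_order_acquire);
            // ... work with *local for this cycle only ...
            (void)local;
        }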

    Read the article

  • Help with PHP simplehtmldom - Modifying a form.

    - by onemyndseye
    I've gotten some great help here and I am so close to solving my problem that I can taste it. But I seem to be stuck. I need to scrape a simple form from a local webserver and only return the lines that match a user's local email (i.e. onemyndseye@localhost). simplehtmldom makes easy work of extracting the correct form element:

        foreach($html->find('form[action*="delete"]') as $form)
            echo $form;

    Returns:

        <form action="/delete" method="post">
        <input type="checkbox" id="D1" name="D1" /><a href="http://www.linux.com/rss/feeds.php"> http://www.linux.com/rss/feeds.php </a> [email: onemyndseye@localhost (Default) ]<br />
        <input type="checkbox" id="D2" name="D2" /><a href="http://www.ubuntu.com/rss.xml"> http://www.ubuntu.com/rss.xml </a> [email: onemyndseye@localhost (Default) ]<br />

    However, I am having trouble with the next step, which is returning the lines that contain 'onemyndseye@localhost' and removing it so that only the following is returned:

        <input type="checkbox" id="D1" name="D1" /><a href="http://www.linux.com/rss/feeds.php">http://www.linux.com/rss/feeds.php</a> <br />
        <input type="checkbox" id="D2" name="D2" /><a href="http://www.ubuntu.com/rss.xml">http://www.ubuntu.com/rss.xml</a> <br />

    Thanks to the wonderful users of this site I've gotten this far and can even return just the links, but I am having trouble getting the rest... It's important that the complete <input> tags are returned EXACTLY as shown above, as the id and name values will need to be passed back to the original form in post data later on. Thanks in advance!

    Read the article

  • SQL Server getdate() to a string like "2009-12-20"

    - by Adam Kane
    In Microsoft SQL Server 2005 and .NET 2.0, I want to convert the current date to a string of this format: "YYYY-MM-DD". For example, December 20th, 2009 would become "2009-12-20". How do I do this in SQL? The context of this SQL statement is the table definition. In other words, this is the default value, so when a new record is created the default value of the current date is stored as a string in the above format. I'm trying:

        SELECT CONVERT(VARCHAR(10), GETDATE(), 102) AS [YYYY.MM.DD]

    But SQL Server keeps converting that to:

        ('SELECT CONVERT(VARCHAR(10), GETDATE(), 102) AS [YYYY.MM.DD]')

    so the result is just: 'SELECT CONVERT(VARCHAR(10), GETDATE(), 102) AS [YYYY.MM.DD]'. Here's a screen shot of what the Visual Studio server explorer, table, table definition, properties shows: these wrapper bits are being added automatically and converting it all to a literal string: (N' '). Here's the reason I'm trying to use something other than the basic DATETIME I was using previously. This is the error I get when hooking everything to an ASP.NET GridView and trying to do an update via the grid view:

        Server Error in '/' Application.
        The version of SQL Server in use does not support datatype 'date'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.ArgumentException: The version of SQL Server in use does not support datatype 'date'.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
        Stack Trace: [ArgumentException: The version of SQL Server in use does not support datatype 'date'.]

    Note: I've added a related question to try to get around the "SQL Server in use does not support datatype 'date'" error so that I can use a DATETIME as recommended.

    Read the article

  • C++ Switch won't compile with externally defined variable used as case

    - by C Nielsen
    I'm writing C++ using the MinGW GNU compiler and the problem occurs when I try to use an externally defined integer variable as a case in a switch statement. I get the following compiler error: "case label does not reduce to an integer constant". Because I've defined the integer variable as extern I believe that it should compile; does anyone know what the problem may be? Below is an example:

    test.cpp

        #include <iostream>
        #include "x_def.h"

        int main()
        {
            std::cout << "Main Entered" << std::endl;
            switch(0)
            {
                case test_int:
                    std::cout << "Case X" << std::endl;
                    break;
                default:
                    std::cout << "Case Default" << std::endl;
                    break;
            }
            return 0;
        }

    x_def.h

        extern const int test_int;

    x_def.cpp

        const int test_int = 0;

    This code will compile correctly on Visual C++ 2008. Furthermore, a Montanan friend of mine checked the ISO C++ standard and it appears that any const-integer expression should work. Is this possibly a compiler bug or have I missed something obvious? Here's my compiler version information:

        Reading specs from C:/MinGW/bin/../lib/gcc/mingw32/3.4.5/specs
        Configured with: ../gcc-3.4.5-20060117-3/configure --with-gcc --with-gnu-ld --with-gnu-as --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls --enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shared --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm --disable-libgcj-debug --enable-interpreter --enable-hash-synchronization --enable-libstdcxx-debug
        Thread model: win32
        gcc version 3.4.5 (mingw-vista special r3)
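
    Not part of the original post, but for illustration: one commonly suggested workaround is to make the constant's value visible in the header (or use an enum), so the case label is a true compile-time integral constant expression in the translation unit that uses it:

        // x_def.h -- sketch of the workaround; the value now lives in the header
        // (and x_def.cpp would no longer define it)
        #ifndef X_DEF_H
        #define X_DEF_H

        const int test_int = 0;   // const at namespace scope has internal linkage in C++
        // or equivalently: enum { test_int = 0 };

        #endif

        // test.cpp -- unchanged apart from the header now carrying the value
        #include <iostream>
        #include "x_def.h"

        int main()
        {
            switch(0)
            {
                case test_int:    // value is known in this translation unit, so it compiles
                    std::cout << "Case X" << std::endl;
                    break;
                default:
                    std::cout << "Case Default" << std::endl;
                    break;
            }
            return 0;
        }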

    Read the article

  • How can you get the call tree with python profilers?

    - by Oliver
    I used to use a nice Apple profiler that is built into the System Monitor application. As long as your C++ code was compiled with debug information, you could sample your running application and it would print out an indented tree telling you what percent of the parent function's time was spent in this function (and the body vs. other function calls). For instance, if main calls function_1 and function_2, function_2 calls function_3, and then main calls function_3:

        main (100%, 1% in function body):
            function_1 (9%, 9% in function body):
            function_2 (90%, 85% in function body):
                function_3 (100%, 100% in function body)
            function_3 (1%, 1% in function body)

    I would see this and think, "Something is taking a long time in the code in the body of function_2. If I want my program to be faster, that's where I should start." Does anyone know how I can most easily get this exact profiling output for a python program? I've seen people say to do this:

        import cProfile, pstats
        prof = cProfile.Profile()
        prof = prof.runctx("real_main(argv)", globals(), locals())
        stats = pstats.Stats(prof)
        stats.sort_stats("time")  # Or cumulative
        stats.print_stats(80)  # 80 = how many to print

    but it's quite messy compared to that elegant call tree. Please let me know if you can easily do this, it would help quite a bit. Cheers!

    Read the article

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.) So here's the problem: with peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID) then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value. So here's the debate:

        1. abandon Entity Framework and use another ORM:
           - use NHibernate and give up LINQ support
           - use linq2sql and give up future support (not to mention get bound to SQL Server on DB)
        2. abandon GUIDs and go with another PK strategy
        3. devise a method to generate sequential GUIDs (COMBs?) at the application layer

    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.

    Read the article

  • Execute process conditionally in Windows PowerShell (e.g. the && and || operators in Bash)

    - by Dustin
    I'm wondering if anybody knows of a way to conditionally execute a program depending on the exit success/failure of the previous program. Is there any way for me to execute a program2 immediately after program1 if program1 exits successfully without testing the LASTEXITCODE variable? I tried the -band and -and operators to no avail, though I had a feeling they wouldn't work anyway, and the best substitute is a combination of a semicolon and an if statement. I mean, when it comes to building a package somewhat automatically from source on Linux, the && operator can't be beaten:

        # Configure a package, compile it and install it
        ./configure && make && sudo make install

    PowerShell would require me to do the following, assuming I could actually use the same build system in PowerShell:

        # Configure a package, compile it and install it
        .\configure ; if ($LASTEXITCODE -eq 0) { make ; if ($LASTEXITCODE -eq 0) { sudo make install } }

    Sure, I could use multiple lines, save it in a file and execute the script, but the idea is for it to be concise (save keystrokes). Perhaps it's just a difference between PowerShell and Bash (and even the built-in Windows command prompt which supports the && operator) I'll need to adjust to, but if there's a cleaner way to do it, I'd love to know.

    Read the article

  • extension methods with generics - when does caller need to include type parameters?

    - by Greg
    Hi, is there a rule for knowing when one has to pass the generic type parameters in the client code when calling an extension method? So for example, in the Program class, why can I (a) not pass type parameters for top.AddNode(node), whereas later for (b) the top.AddRelationship line I have to pass them?

        class Program
        {
            static void Main(string[] args)
            {
                // Create Graph
                var top = new TopologyImp<string>();

                // Add Node
                var node = new StringNode();
                node.Name = "asdf";
                var node2 = new StringNode();
                node2.Name = "test child";
                top.AddNode(node);
                top.AddNode(node2);

                top.AddRelationship<string, RelationshipsImp>(node, node2);   // *** HERE ***
            }
        }

        public static class TopologyExtns
        {
            public static void AddNode<T>(this ITopology<T> topIf, INode<T> node)
            {
                topIf.Nodes.Add(node.Key, node);
            }
            public static INode<T> FindNode<T>(this ITopology<T> topIf, T searchKey)
            {
                return topIf.Nodes[searchKey];
            }
            public static void AddRelationship<T, R>(this ITopology<T> topIf, INode<T> parentNode, INode<T> childNode)
                where R : IRelationship<T>, new()
            {
                var rel = new R();
                rel.Child = childNode;
                rel.Parent = parentNode;
            }
        }

        public class TopologyImp<T> : ITopology<T>
        {
            public Dictionary<T, INode<T>> Nodes { get; set; }
            public TopologyImp()
            {
                Nodes = new Dictionary<T, INode<T>>();
            }
        }

    Read the article

  • Determine if the current thread has low I/O priority

    - by Magnus Hoff
    I have a background thread that does some I/O-intensive background type work. To please the other threads and processes running, I set the thread priority to "background mode" using SetThreadPriority, like this:

        SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);

    However, THREAD_MODE_BACKGROUND_BEGIN is only available in Windows Server 2008 or newer, as well as Windows Vista and newer, but the program needs to work well on Windows Server 2003 and XP as well. So the real code is more like this:

        if (!SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN))
        {
            SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);
        }

    The problem with this is that on Windows XP it will totally disrupt the system by using too much I/O. I have a plan for an ugly and shameful way of mitigating this problem, but that depends on me being able to determine if the current thread has low I/O priority or not. Now, I know I can store which thread priority I ended up setting, but the control flow in the program is not really well suited for this. I would rather like to be able to test later whether or not the current thread has low I/O priority -- if it is in "background mode". GetThreadPriority does not seem to give me this information. Is there any way to determine if the current thread has low I/O priority?
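
    Not from the original post: a minimal sketch of the "store which priority I ended up setting" idea, kept out of the main control flow by parking the flag in thread-local storage (the helper names are hypothetical; requires an SDK new enough to define THREAD_MODE_BACKGROUND_BEGIN):

        #include <windows.h>

        namespace {
            // One flag per thread, so callers don't have to carry it through their code.
            // __declspec(thread) is the MSVC spelling of thread-local storage.
            __declspec(thread) bool g_lowIoPriority = false;
        }

        void EnterBackgroundMode()
        {
            if (SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN)) {
                g_lowIoPriority = true;    // Vista / Server 2008 and newer: real I/O throttling
            } else {
                SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);
                g_lowIoPriority = false;   // XP / Server 2003: CPU priority only
            }
        }

        bool HasLowIoPriority()
        {
            return g_lowIoPriority;
        }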

    Read the article

  • How to get application context path in spring-ws?

    - by Dhaliwal
    I am using Spring-WS to create a webservice. In my project, I have created a Helper class that reads sample response and request xml files which are located in my /src/main/resources folder. When I am unit-testing my webservice application 'locally', I use System.getProperty("user.dir") to get the application context folder. The following is a method that I created in the Helper class to help me retrieve the file that I am interested in from my resource folder:

        public static File getFileFromResources(String filename) {
            System.out.println("Getting file from resource folder");
            File request = null;
            String curDir = System.getProperty("user.dir");
            String contextpath = "\\src\\main\\resources\\";
            request = new File(curDir + contextpath + filename);
            return request;
        }

    However, after 'publishing' the compiled WAR file to the ../webapps folder of the Apache Tomcat directory, I realise that System.getProperty("user.dir") no longer returns my application context folder. Instead, it is returning the Apache Tomcat root directory, as shown:

        C:\Program Files\Apache Software Foundation\Tomcat 6.0\src\main\resources\SampleClientFile

    I can't seem to find any information about getting the root folder of my webservice. I have seen examples of Spring web applications where I can retrieve the context path by using the following:

        request.getSession().getServletContext().getContextPath()

    But those are Spring web applications where there is a servlet request with a servlet context. In my case, with Spring-WS, my entry point is an endpoint. How can I get the context path of my webservice application? I am expecting a context path of something like:

        C:\Program Files\Apache Software Foundation\Tomcat 6.0\webapps\clientWebService\WEB-INF\classes

    Could someone suggest a way to achieve this?

    Read the article

  • How to query MySQL for exact length and exact UTF-8 characters

    - by oskarae
    I have a table with a word dictionary in my language (Latvian):

        CREATE TABLE words (
            value varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    And let's say it has 3 words inside:

        INSERT INTO words (value) VALUES ('teja');
        INSERT INTO words (value) VALUES ('vejš');
        INSERT INTO words (value) VALUES ('feja');

    What I want to do is find all words that are exactly 4 characters long, where the second character is 'e' and the third character is 'j'. For me it feels that the correct query would be:

        SELECT * FROM words WHERE value LIKE '_ej_';

    But the problem with this query is that it returns not 2 entries ('teja', 'vejš') but all three. As I understand it, this is because internally MySQL converts strings to some ASCII representation? Then there is the BINARY addition possible for LIKE:

        SELECT * FROM words WHERE value LIKE BINARY '_ej_';

    But this also does not return 2 entries ('teja', 'vejš') but only one ('teja'). I believe this has something to do with UTF-8 using 2 bytes for non-ASCII chars? So the question: what MySQL query would return exactly my two words ('teja', 'vejš')? Thank you in advance

    Read the article

  • ARM Assembly - Converting Endianness

    - by SoulBeaver
    Hello people! This is currently a homework project that me and my teammate are stuck on. We haven't been given much of an introduction into Assembly, and this is supposed to be our first homework exercise. The task is to create a program that converts 0xAABBCCDD into 0xDDCCBBAA. I'm not looking for an answer, as that would defeat the purpose, but we are getting severely frustrated over the difficulty of this stupid thing. We think we have a good start in creating a viable solution, but we just cannot come up with the rest of the program. First, we mask every single tuple (aa), (bb), (cc), (dd) into a different register:

        LDR R0, LittleEndian    // 0xAABBCCDD
        AND R1, R0, #0xFF000000 // 0xAA
        AND R2, R0, #0x00FF0000 // 0xBB
        AND R3, R0, #0x0000FF00 // 0xCC
        AND R4, R0, #0x000000FF // 0xDD

    Then we try to re-align them into the R0 register, but hell if we could come up with a good solution... Our best effort came from:

        ORR R0, R1, LSL #24
        ORR R0, R2, LSL #8
        ORR R0, R3, LSR #8
        ORR R0, R4, LSR #24

    which produced 0xBBBBCCDD for some odd reason; we really don't know. Any hints would be greatly appreciated. Again, we are asking for help, but not for a solution. Cheers!

    Read the article

  • Rails redirections with new users and logins

    - by Kenji Crosland
    So I'm trying to get the user to return to the page they were looking at before they click "log in". This is what I have in my user application controller:

        def redirect_back_or_default(default)
          redirect_to(session[:return_to] || default)
          session[:return_to] = nil
        end

    And this is what I have in my sessions controller:

        def new
          @user_session = UserSession.new
          session[:return_to] = request.referer
        end

        def create
          @user_session = UserSession.new(params[:user_session])
          if @user_session.save
            flash[:notice] = "Login successful!"
            redirect_back_or_default(home_path)
          else
            render :action => :new
          end
        end

    This works fine most of the time, but if a user logs in right after they register to the site, they will get redirected to a blank page. I imagine this is the "create" action, because it was the last action before going to the user sessions' new action. So I tried this:

        def new
          @user_session = UserSession.new
          unless request.referer == join_path
            session[:return_to] = request.referer
          end
        end

    And this tries to take me back to the login page after I log in. What I'd really like to do is have the user see their profile when they log in for the very first time. This wouldn't give me a user id and raised a routing error:

        def create
          @user_session = UserSession.new(params[:user_session])
          if @user_session.save
            flash[:notice] = "Login successful!"
            redirect_back_or_default(user_path(current_user))
          else
            render :action => :new
          end
        end

    Anybody gone through these redirecting acrobatics before? I can't seem to get it to work. I'm using authlogic if that helps.

    Read the article

  • Skipping the BufferedReader readLine() method in java

    - by DDP
    Is there an easy way to skip the readLine() method in Java if it takes longer than, say, 2 seconds? Here's the context in which I'm asking this question:

        public void run() {
            boolean looping = true;
            while (looping) {
                for (int x = 0; x < clientList.size(); x++) {
                    try {
                        Comm s = clientList.get(x);
                        String str = s.recieve();
                        // code that does something based on the string in the line above
                    }
                    // other stuff like catch methods
                }
            }
        }

    Comm is a class I wrote, and the receive method, which contains a BufferedReader called "in", is this:

        public String recieve() {
            try {
                if (active)
                    return in.readLine();
            } catch (Exception e) {
                System.out.println("Comm Error 2: " + e);
            }
            return "";
        }

    I've noticed that the program stops and waits for the input stream to have something to read before continuing, which is bad, because I need the program to keep looping (as it loops, it goes to all the other clients and asks for input). Is there a way to skip the readLine() process if there's nothing to read? I'm also pretty sure that I'm not explaining this well, so please ask me questions if I'm being confusing.

    Read the article

  • [PERL Tk] printing Line number in Text widget

    - by ungalnanban
    I use the following code for printing the line number in a Text widget:

        my $c = 0;
        my $r = 0;
        $txt = $mw->Text(
            -background => 'white',
            -width  => 400,
            -height => 300,
            -selectbackground => 'skyblue',
            -insertwidth => 5,
            -borderwidth => 3,
            -highlightcolor => 'blue',        ### after visit
            -highlightbackground => 'red',    ### default before visit
            -xscrollcommand => sub { print "CHAT NO :", $c++; },  # Determines the callback used when the Text widget is scrolled horizontally.
            -yscrollcommand => sub { print "LINR NO:", $r++; },   # Determines the callback used when the Text widget is scrolled vertically.
            -padx => 5,
            -pady => 5,
        )->pack();

    The above code prints the line number and character number OK, but when I use it in a Scrolled widget the output is not printed. What is the problem in the following code, and how can I solve it?

        $txt = $mw->Scrolled('Text',
            -scrollbars => 'se',
            -background => 'white',
            -width  => 400,
            -height => 300,
            -insertwidth => 5,
            -borderwidth => 3,
            -highlightcolor => 'blue',        ### after visit
            -highlightbackground => 'red',    ### default before visit
            -padx => 5,
            -pady => 5,
            -xscrollcommand => sub { print "CHAT NO :", $c++; },  # Determines the callback used when the Text widget is scrolled horizontally.
            -yscrollcommand => sub { print "LINR NO :", $r++; },  # Determines the callback used when the Text widget is scrolled vertically.
        )->pack();

    Read the article

  • process semaphores linux - wait

    - by coubeatczech
    Hi, I'm trying to code a simple program that starts and waits on the system semaphore until it gets terminated by a signal.

        union semun {
            int val;
            struct semid_ds *buf;
            unsigned short int *array;
            struct seminfo *__buf;
        };

        int main() {
            int semaphores = semget(IPC_PRIVATE, 1, IPC_CREAT | 0666);

            union semun arg;
            arg.val = 0;
            semctl(semaphores, 0, SETVAL, arg);

            struct sembuf operations[1];
            operations[0].sem_num = 0;
            operations[0].sem_op = -1;
            operations[0].sem_flg = 0;
            semop(semaphores, operations, 1);

            fprintf(stderr, "Why?\n");
            return 0;
        }

    I expect that every time this program gets executed nothing actually happens and it waits on the semaphore, but every time it goes straight through the semaphore and writes Why?. Why?
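
    Not part of the original post: since semop() with sem_op = -1 on a zero-valued semaphore should block, one hypothetical debugging step is to check each call's return value, because a failing semget/semctl/semop would also make the program fall straight through to the fprintf. A sketch:

        // Sketch only: the same wait as above, but with the return code checked.
        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/sem.h>
        #include <cstdio>

        int checked_wait(int semid)
        {
            struct sembuf op;
            op.sem_num = 0;
            op.sem_op  = -1;   // wait until the value can be decremented
            op.sem_flg = 0;

            if (semop(semid, &op, 1) == -1) {
                std::perror("semop");   // EINVAL, EIDRM, EINTR, ... would explain the fall-through
                return -1;
            }
            return 0;
        }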

    Read the article

  • [C++] How can I handle weird errors from calculating acos / sin / atan2?

    - by Phrixus
    Has anyone seen this weird value while handling sin / cos / tan / acos... math stuff?

        ===THE WEIRD VALUE===
        -1.#IND00
        =====================

        void inverse_pos(double x, double y, double& theta_one, double& theta_two)
        {
            // Assume that L1 = 350 and L2 = 250
            double B = sqrt(x*x + y*y);
            double angle_beta  = atan2(y, x);
            double angle_alpha = acos((L2*L2 - B*B - L1*L1) / (-2*B*L1));
            theta_one = angle_beta + angle_alpha;
            theta_two = atan2((y - L1*sin(theta_one)), (x - L1*cos(theta_one)));
        }

    This is the code I was working on. In a particular condition - like when x & y are 10 & 10 - this code stores -1.#IND00 into theta_one & theta_two. It doesn't look like either characters or numbers :( Without a doubt, atan2 / acos / stuff are the problems. But the problem is, try and catch doesn't work either because those double variables have successfully stored some values in them. Moreover, the following calculations never complain about it and never break the program! I'm thinking of forcing the program to use this value somehow and make the entire program crash... so that I can catch this error. Except for that idea, I have no idea how I should check whether these theta_one and theta_two variables have stored this crazy value. Any good ideas? Thank you in advance..
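
    Not from the original post: -1.#IND00 is how the MSVC runtime prints a quiet NaN ("indeterminate"), which acos() produces when its argument falls outside [-1, 1]. A minimal sketch of detecting (and, where it makes sense for the problem, clamping) that case:

        #include <cmath>
        #include <cstdio>

        // NaN is the only value that compares unequal to itself.
        bool is_weird(double v)
        {
            return v != v;
        }

        // Hypothetical helper: clamp tiny numerical overshoots before calling acos.
        // (If the argument is far outside [-1, 1] the pose is simply unreachable,
        // and clamping would hide a real error.)
        double safe_acos(double ratio)
        {
            if (ratio < -1.0) ratio = -1.0;
            if (ratio >  1.0) ratio =  1.0;
            return std::acos(ratio);
        }

        int main()
        {
            double bad = std::acos(2.0);                   // out of range -> NaN
            if (is_weird(bad))
                std::printf("got NaN (prints as -1.#IND00 on MSVC)\n");
            std::printf("clamped: %f\n", safe_acos(2.0));  // 0.000000
            return 0;
        }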

    Read the article

  • Where do I put Javassist code?

    - by DutrowLLC
    I have an application running on Google App Engine. I'm using restlets and I have a couple of layers set up, including the restlet layer, the model layer, the business layer, and the data layer. I'm attempting to use Javassist to modify some classes, but I'm unsure where to actually put the code. I tried to put the code in the static initialization block:

        public class Person {
            String firstName;
            String getFirstName() { return null; }

            static {
                ClassPool pool = ClassPool.getDefault();
                try {
                    CtClass CtPerson = pool.get("Person");
                    CtMethod CtGetFirstName = CtPerson.getDeclaredMethod("GetFirstName");
                    CtGetFirstName.setBody("return firstName;");
                    CtPerson.toClass();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    ...but that resulted in this error: "javassist.CannotCompileException: ... attempted duplicate class definition ...". I guess it makes sense that I can't edit the class file in the middle of its generation. I know the code works because I was able to run it correctly by simply putting it in a location that would run when I sent the program a command (accessed a Restlet resource). The code ran fine if an instance of the class had not already been instantiated; however, once I instantiated an instance of the affected class, the Javassist code failed. I assume I need to put this code somewhere that it will only run either: once after the program starts, directly before a class is instantiated for the first time, or, even better, at compile time.

    Read the article

  • Connecting an overloaded PyQT signal using new-style syntax

    - by Claudio
    I am designing a custom widget which is basically a QGroupBox holding a configurable number of QCheckBox buttons, where each one of them should control a particular bit in a bitmask represented by a QBitArray. In order to do that, I added the QCheckBox instances to a QButtonGroup, with each button given an integer ID:

        def populate(self, num_bits, parent = None):
            """ Adds check boxes to the GroupBox according to the bitmask size """
            self.bitArray.resize(num_bits)
            layout = QHBoxLayout()
            for i in range(num_bits):
                cb = QCheckBox()
                cb.setText(QString.number(i))
                self.buttonGroup.addButton(cb, i)
                layout.addWidget(cb)
            self.setLayout(layout)

    Then, each time a user would click on a checkbox contained in self.buttonGroup, I'd like self.bitArray to be notified so I can set/unset the corresponding bit in the array. For that I intended to connect QButtonGroup's buttonClicked(int) signal to QBitArray's toggleBit(int) method and, to be as pythonic as possible, I wanted to use new-style signals syntax, so I tried this:

        self.buttonGroup.buttonClicked.connect(self.bitArray.toggleBit)

    The problem is that buttonClicked is an overloaded signal, so there is also the buttonClicked(QAbstractButton*) signature. In fact, when the program is executing I get this error when I click a check box:

        The debugged program raised the exception unhandled TypeError
        "QBitArray.toggleBit(int): argument 1 has unexpected type 'QCheckBox'"

    which clearly shows the toggleBit method received the buttonClicked(QAbstractButton*) signal instead of the buttonClicked(int) one. So, the question is, how can we specify, using new-style syntax, that self.buttonGroup emits the buttonClicked(int) signal instead of the default overload - buttonClicked(QAbstractButton*)?

    Read the article

  • Does fast typing influence fast programming? [closed]

    - by Lukasz Lew
    Many young programmers think that their bottleneck is typing speed. After some experience one realizes that it is not the case; you have to think much more than type. At some point my room-mate forced me to turn off the light (he sleeps during the night). I had to learn to touch type, and I experienced an actual improvement in programming skill. The most surprising part was that the improvement was not due to sheer typing speed, but to a change in mindset. I'm less afraid now to try new things and refactor them later if they work well. It's like having a new tool in the bag. Has anyone of you had a similar experience? Now I've trained touch typing a little with KTouch. I find the auto-generated lessons the best. I can use this program to create new lessons out of text files, but it's only verbatim training, not auto-generated based on a language model. Do you know any touch typing program that allows creation of custom, but randomized, lessons?

    Read the article

  • Finding character in String in Vector.

    - by SoulBeaver
    Judging from the title, I kinda did my program in a fairly complicated way. BUT! I might as well ask anyway xD This is a simple program I did in response to question 3-3 of Accelerated C++, which is an awesome book in my opinion. I created a vector:

        vector<string> countEm;

    that accepts all valid strings. Therefore, I have a vector that contains elements of strings. Next, I created a function

        int toLowerWords( vector<string> &vec )
        {
            for( int loop = 0; loop < vec.size(); loop++ )
                transform( vec[loop].begin(), vec[loop].end(), vec[loop].begin(), ::tolower );
        }

    that converts the input to all lowercase characters for easier counting. So far, so good. I created a third and final function to actually count the words, and that's where I'm stuck.

        int counter( vector<string> &vec )
        {
            for( int loop = 0; loop < vec.size(); loop++ )
                for( int secLoop = 0; secLoop < vec[loop].size(); secLoop++ )
                {
                    if( vec[loop][secLoop] == ' ' )

    That just looks ridiculous. Using a two-dimensional array to call on the characters of the vector until I find a space. Ridiculous. I don't believe that this is an elegant or even viable solution. If it was a viable solution, I would then backtrack from the space and copy all characters I've found into a separate vector and count those. My question then is: how can I dissect a vector of strings into separate words so that I can actually count them? I thought about using strchr, but it didn't give me any epiphanies.
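
    Not part of the original post: the usual way to break strings into words without indexing characters by hand is std::istringstream, whose >> operator skips whitespace and extracts one word at a time. A minimal sketch:

        #include <iostream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Count whitespace-separated words across every string in the vector.
        int counter( const std::vector<std::string> &vec )
        {
            int words = 0;
            for( std::vector<std::string>::size_type i = 0; i < vec.size(); i++ )
            {
                std::istringstream iss( vec[i] );
                std::string word;
                while( iss >> word )   // extracts one word, skipping the spaces around it
                    ++words;
            }
            return words;
        }

        int main()
        {
            std::vector<std::string> countEm;
            countEm.push_back( "the quick brown fox" );
            countEm.push_back( "jumps over" );
            std::cout << counter( countEm ) << std::endl;   // prints 6
            return 0;
        }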

    Read the article

  • Implementing a scrabble trainer

    - by bstullkid
    Hello, I've recently been playing a lot of online Scrabble, so I decided to make a program that quickly searches through a dictionary of 200,000+ words with an input of up to any 26 letters. My first attempt failed, as it took a while when you input 8 or more letters (just a basic look through the dictionary and cancel out a letter if it's found kind of thing), so I made a tree-like structure containing only an array of 26 of the same structure and a flag to indicate the end of a word. Doing that, it can output all possible words in under a second even with an input of 26 characters. But it seems that when I input 12 or more letters with some of the same characters repeated, I get duplicates; can anyone see why I would be getting duplicates with this code? (I'll post my program at the bottom.) Also, the next step once the duplicates are weeded out is to actually be able to input the letters on the game board and then have it calculate the best word you can make on a given board. I am having trouble trying to figure out a good algorithm that can analyze a scrabble board and an input of letters and output a result; the possible words that could be made I have no problem with, but actually checking a board efficiently (i.e. can this word fit here, or here, etc... without creating a non-dictionary word in the process on some other string of letters) is where I'm stuck. Anyone have an idea for an approach at that? (Given a scrabble board and an input of 7 letters, find all possible valid words or word sets that you can make.) lol crap, I forgot to email myself the code from my other computer that's in another state... I'll post it on Monday when I get back there! btw the dictionary I'm using is sowpods (http://www.calvin.edu/~rpruim/scrabble/ospd3.txt)
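
    The poster's code isn't included yet, so purely as a sketch of the tree structure described above (names and details are my own assumptions): each node holds 26 child pointers plus an end-of-word flag.

        #include <string>

        struct TrieNode
        {
            TrieNode *children[26];
            bool isWord;                      // flag marking the end of a dictionary word

            TrieNode() : isWord(false)
            {
                for (int i = 0; i < 26; ++i)
                    children[i] = 0;
            }
        };

        // Insert one lowercase a-z word into the tree.
        void insert(TrieNode *root, const std::string &word)
        {
            TrieNode *node = root;
            for (std::string::size_type i = 0; i < word.size(); ++i)
            {
                int idx = word[i] - 'a';
                if (!node->children[idx])
                    node->children[idx] = new TrieNode();
                node = node->children[idx];
            }
            node->isWord = true;
        }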

    Read the article

  • Problem with close socket

    - by zp26
    Hi, I have a problem with my socket program. I created the client program (my code is below), and I have a problem when I close the socket with the disconnect method. Can you help me? Thanks, and sorry for my English XP

        CFSocketRef s;

        -(void)CreaConnessione {
            CFSocketError errore;
            struct sockaddr_in sin;
            CFDataRef address;
            CFRunLoopSourceRef source;
            CFSocketContext context = { 0, self, NULL, NULL, NULL };

            s = CFSocketCreate(NULL, PF_INET, SOCK_STREAM, IPPROTO_TCP,
                               kCFSocketDataCallBack, AcceptDataCallback, &context);

            memset(&sin, 0, sizeof(sin));
            int port = [fieldPorta.text intValue];
            NSString *tempIp = fieldIndirizzo.text;
            const char *ip = [tempIp UTF8String];
            sin.sin_family = AF_INET;
            sin.sin_port = htons(port);
            sin.sin_addr.s_addr = (long)inet_addr(ip);

            address = CFDataCreate(NULL, (UInt8 *)&sin, sizeof(sin));
            errore = CFSocketConnectToAddress(s, address, 0);

            if (errore == 0) {
                buttonInvioMess.enabled = TRUE;
                fieldMessaggioInvio.enabled = TRUE;
                labelTemp.text = [NSString stringWithFormat:@"Connesso al Server"];
                CFRelease(address);
                source = CFSocketCreateRunLoopSource(NULL, s, 0);
                CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopDefaultMode);
                CFRelease(source);
                CFRunLoopRun();
            } else {
                labelTemp.text = [NSString stringWithFormat:@"Errore di connessione. Verificare Ip e Porta"];
                switchConnection.on = FALSE;
            }
        }

        // the socket doesn't disconnect
        -(void)Disconnetti {
            CFSocketInvalidate(s);
            CFRelease(s);
        }

        -(IBAction)Connetti {
            if (switchConnection.on)
                [self CreaConnessione];
            else
                [self Disconnetti];
        }

    Read the article

  • alternative to #include within namespace { } block

    - by Jeff
    Edit: I know that method 1 is essentially invalid and will probably use method 2, but I'm looking for the best hack or a better solution to mitigate rampant, mutable namespace proliferation. I have multiple class or method definitions in one namespace that have different dependencies, and would like to use the fewest namespace blocks or explicit scopings possible, while grouping #include directives with the definitions that require them as best as possible. I've never seen any indication that any preprocessor could be told to exclude namespace {} scoping from #include contents, but I'm here to ask if something similar to this is possible (see bottom for an explanation of why I want something dead simple):

        // NOTE: apple.h, etc., contents are *NOT* intended to be in namespace Foo!

        // would prefer something like this:
        namespace Foo {

        #include "apple.h"
        B *A::blah(B const *x) { /* ... */ }

        #include "banana.h"
        int B::whatever(C const &var) { /* ... */ }

        #include "blueberry.h"
        void B::something() { /* ... */ }

        } // namespace Foo

        ...

        // over this:
        #include "apple.h"
        #include "banana.h"
        #include "blueberry.h"

        namespace Foo {

        B *A::blah(B const *x) { /* ... */ }
        int B::whatever(C const &var) { /* ... */ }
        void B::something() { /* ... */ }

        } // namespace Foo

        ...

        // or over this:
        #include "apple.h"
        namespace Foo {
        B *A::blah(B const *x) { /* ... */ }
        } // namespace Foo

        #include "banana.h"
        namespace Foo {
        int B::whatever(C const &var) { /* ... */ }
        } // namespace Foo

        #include "blueberry.h"
        namespace Foo {
        void B::something() { /* ... */ }
        } // namespace Foo

    My real problem is that I have projects where a module may need to be branched, but have coexisting components from the branches in the same program. I have classes like FooA, etc., that I've called Foo::A in the hopes of being able to branch less painfully as Foo::v1_2::A, where some program may need both a Foo::A and a Foo::v1_2::A. I'd like "Foo" or "Foo::v1_2" to show up only really once per file, as a single namespace block, if possible. Moreover, I tend to prefer to locate blocks of #include directives immediately above the first definition in the file that requires them. What's my best choice, or alternatively, what should I be doing instead of hijacking the namespaces?
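
    Not from the original post, but to illustrate the branching goal described above: a namespace alias lets both branches coexist in one program while each client file spells out the version only once (the names here are placeholders):

        // Sketch: Foo::A and Foo::v1_2::A living in the same program.
        namespace Foo {
            struct A { int id() const { return 1; } };
            namespace v1_2 {
                struct A { int id() const { return 2; } };
            }
        }

        namespace foo_new = Foo::v1_2;   // the only place this file names the branch

        int main()
        {
            Foo::A     a1;   // original branch
            foo_new::A a2;   // branched version, coexisting with the original
            return a1.id() + a2.id();
        }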

    Read the article
