Search Results

Search found 22000 results on 880 pages for 'worker process'.

Page 325/880 | < Previous Page | 321 322 323 324 325 326 327 328 329 330 331 332  | Next Page >

  • Could this C cast to avoid a signed/unsigned comparison make any sense?

    - by sharptooth
    I'm reviewing a C++ project and see effectively the following:

        std::vector<SomeType> objects;
        // then later
        int size = (int)objects.size();
        for( int i = 0; i < size; ++i )
        {
            process( objects[i] );
        }

    Here's what I see. std::vector::size() returns size_t, whose size is unrelated to the size of int. Even if sizeof(int) == sizeof(size_t), int is signed and can't hold all possible values of size_t. So the code above would only process the lower part of a very long vector and contains a bug. That said, I'm curious why the author might have written this. My only guess is that he first omitted the (int) cast and the compiler emitted something like the Visual C++ C4018 warning:

        warning C4018: '<' : signed/unsigned mismatch

    so the author thought that the best way to avoid the compiler warning would be to simply cast the size_t to int, making the compiler shut up. Is there any other possible sane reason for that C cast?
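
    A minimal sketch of the usual warning-free alternative (SomeType and process() are taken from the question; this is one possible rewrite, not necessarily what the original author intended):

        #include <vector>

        void process_all(std::vector<SomeType>& objects)
        {
            // Indexing with the container's own size type avoids both the
            // C4018 warning and the truncation introduced by the (int) cast.
            for (std::vector<SomeType>::size_type i = 0; i < objects.size(); ++i)
            {
                process(objects[i]);
            }
        }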

    Read the article

  • Recommendation for a Pagination procedure AJAX PHP

    - by Jamex
    Hi, I am not sure of the correct terminology for the process that I am trying to describe. I don't even know which platform underlies the technique. If you understand my description, please give a link to the site(s) and/or the keyword name of the process. I think it is done with AJAX, but I am not certain. I use PHP as the backend code; I just need to find a way to dynamically display the results. Please give suggestions. I forgot the names of the sites that use this, and my link history expired. TIA. Description: The page would have a search form and options. After the user submits, the search is initiated and the results appear inside a dedicated result area. The page does not refresh, just the content inside the result area. The display area will show 20 (or whatever) results (lines). There will be next and previous buttons. If you hit next, the next set of results will display. I am writing code that generates 20 results for each display. There is no set number of results, so the results might have a start/first page but no end page. Each time the user hits 'next', the program would generate/load new results. It would also store previous results, so that when a user hits 'prev', the previous results can instantly come up. What techniques/programs are these?
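
    The behaviour described is usually just called AJAX pagination. A rough sketch of the client side, assuming a hypothetical search.php endpoint that returns an HTML fragment of 20 results for a given offset:

        <div id="results"></div>
        <button onclick="load(offset - 20)">Prev</button>
        <button onclick="load(offset + 20)">Next</button>
        <script>
        var offset = 0;
        function load(newOffset) {
            if (newOffset < 0) newOffset = 0;
            var xhr = new XMLHttpRequest();
            xhr.open('GET', 'search.php?offset=' + newOffset);
            xhr.onload = function () {
                // only the result area changes; the page itself never reloads
                document.getElementById('results').innerHTML = xhr.responseText;
                offset = newOffset;
            };
            xhr.send();
        }
        load(0);
        </script>

    Caching the previous page in a JavaScript variable before overwriting the result area gives the instant 'prev' behaviour without another round trip.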

    Read the article

  • Strange 3-second tcp connection latencies (Linux, HTTP)

    - by user25417
    Our webservers with static content are experiencing strange 3 second latencies occasionally. Typically, an ApacheBench run (10000 requests, concurrency 1 or 40, no difference, but keepalive off) looks like this:

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        2   10  152.8      3   3015
        Processing:     2    8   34.7      3    663
        Waiting:        2    8   34.7      3    663
        Total:          4   19  157.2      6   3222

        Percentage of the requests served within a certain time (ms)
          50%      6
          66%      7
          75%      7
          80%      7
          90%      9
          95%     11
          98%    223
          99%    225
         100%   3222 (longest request)

    I have tried many things:

    - Apache2 2.2.9 with worker or prefork MPM, no difference (with KeepAliveTimeout 10-15)
    - Nginx 0.6.32
    - various tcp parameters (net.core.somaxconn=3000, net.ipv4.tcp_sack=0, net.ipv4.tcp_dsack=0)
    - putting the files/DocumentRoot on tmpfs
    - shorewall on or off (i.e. empty iptables or not)
    - AllowOverride None is on for /, so no .htaccess checks (verified with strace)
    - the problem persists whether the webservers are accessed directly or through a Foundry load balancer

    Kernel is 2.6.32 (Debian Lenny backports), but it occurred with 2.6.26 also. IPv6 is enabled, but not used. Does the issue look familiar to anyone? Help/suggestions are much appreciated. It sounds a bit like a SYN,ACK packet getting lost or ignored.
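
    The ~3 second outliers match the kernel's initial SYN retransmission timeout, so one way to confirm the lost-handshake theory is to watch the handshake packets directly. A possible capture command (interface and port are assumptions):

        # show only connection-setup traffic; a retransmitted SYN appears as two
        # identical SYNs from the same source roughly 3 seconds apart
        tcpdump -ni eth0 'tcp port 80 and tcp[tcpflags] & (tcp-syn|tcp-rst) != 0'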

    Read the article

  • How to optimize my game calendar in C#?

    - by MartyIX
    Hi, I've implemented a simple calendar (message system) for my game which consists of:

    1) List<Event> calendar;

    2)

        public class Event
        {
            /// <summary>
            /// When to process the event
            /// </summary>
            public Int64 when;

            /// <summary>
            /// Which object should process the event
            /// </summary>
            public GameObject who;

            /// <summary>
            /// Type of event
            /// </summary>
            public EventType what;

            public int posX;
            public int posY;
            public int EventID;
        }

    3) calendar.Add(new Event(...))

    The problem with this code is that even though the number of messages per second is not excessive, it still allocates new memory that the GC will eventually need to collect. The garbage collection may lead to a slight lag in my game, and therefore I'd like to optimize my code. My considerations: change the Event class into a structure - but the structure is not entirely small and it takes some time to copy it wherever I need it; or reuse Event objects somehow (keep a queue of used events and when a new event is needed, take one from this queue). Does anybody have another idea how to solve the problem? Thanks for suggestions!
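
    A minimal sketch of the second consideration (an object pool); Event is the class from the question, the rest is an assumption about how the calendar hands events back:

        // requires System.Collections.Generic; reuse Event instances instead of
        // allocating a new one per message
        private readonly Queue<Event> pool = new Queue<Event>();

        private Event ObtainEvent()
        {
            // fall back to allocation only when the pool is empty
            return pool.Count > 0 ? pool.Dequeue() : new Event();
        }

        private void ReleaseEvent(Event e)
        {
            pool.Enqueue(e);   // called once the calendar has processed the event
        }

    calendar.Add(ObtainEvent()) then replaces calendar.Add(new Event(...)), and each processed event is returned with ReleaseEvent, so the steady state allocates almost nothing.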

    Read the article

  • Overlay an image over video using OpenGL ES shaders

    - by BlueVoodoo
    I am trying to understand the basic concepts of OpenGL. A week into it, I am still far from there. Once I am in GLSL I know what to do, but I find getting there is the tricky bit. I am currently able to pass in video pixels which I manipulate and present. I have then been trying to add a still image as an overlay. This is where I get lost. My end goal is to end up in the same fragment shader with pixel data from both my video and my still image. I imagine this means I need two textures and need to pass on two pixel buffers. I am currently passing the video pixels like this:

        glGenTextures(1, &textures[0]);
        // target, texture
        glBindTexture(GL_TEXTURE_2D, textures[0]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, buffer);

    Would I then repeat this process on textures[1] with the second buffer from the image? If so, do I then bind both GL_TEXTURE0 and GL_TEXTURE1? And would my shader look something like this once I am in the shader?

        uniform sampler2D videoData;
        uniform sampler2D imageData;

    It seems no matter what combination I try, image and video always end up being just video data in both of these. Sorry for the many questions merged in here, I just want to clear up my many assumptions and move on. To clarify the question a bit: what do I need to do to add pixels from a still image in the process described? ("Easy to understand" sample code or any type of hint would be appreciated.)
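
    Roughly, yes - each sampler uniform is pointed at a texture unit, and each unit has a texture bound to it. A sketch of the draw-time setup, assuming the second texture was created the same way and that program, textures[0] and textures[1] already exist:

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, textures[0]);                    /* video */
        glUniform1i(glGetUniformLocation(program, "videoData"), 0);   /* unit 0 */

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, textures[1]);                    /* still image */
        glUniform1i(glGetUniformLocation(program, "imageData"), 1);   /* unit 1 */

    A common mistake that makes both samplers show the video is leaving both uniforms at their default value of 0, so the glUniform1i calls are worth double-checking first.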

    Read the article

  • Setting up a git repository on a server

    - by lostInTransit
    Hi, I had posted this question on Super User but didn't get a helpful response. Thought I'd try here since the question does deal with some configurations and settings for using Git. I have a central server with SSO installed. All my machines are connected through the LAN to this server. I have also set up a remote git repository on this server. Now what I'd like to do is make the server act as a central repository: all my employees can commit their code to the server, and the server pushes it to the remote git repository. Can someone please help me out with this process? I am new to git and still learning how to use it effectively, so a step-by-step process or an existing document which I can refer to would help. Also, can I integrate it with SSO in any way? The server itself is set up on a Mac and SSO uses Atlassian Crowd. Thanks.
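
    A rough sketch of the usual setup (host name and paths are placeholders). A bare repository on the server plays the role of the central repository, so no second push from the server is needed:

        # on the central server
        git init --bare /srv/git/project.git

        # on each employee's machine
        git clone ssh://user@server/srv/git/project.git
        cd project
        # ...edit, git add, git commit...
        git push origin master

    Access over SSH means authentication is whatever the server's SSH accepts, so whether it can be tied to Crowd/SSO depends on the server's account setup rather than on Git itself.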

    Read the article

  • Saving information in the IO System

    - by djTeller
    Hi Kernel Gurus, I need to write a kernel module that simulates a "multicaster" using the /proc file system. Basically it needs to support the following scenarios:

    1) Allow one write access to the /proc file and many read accesses to the /proc file.

    2) The module should have a buffer with the contents of the last successful write. Each write should be matched by a read from every reader.

    Consider scenario 2: a writer wrote something and there are two readers (A and B). A read the content of the buffer, and then A tried to read again; in this case it should go into a wait_queue and wait for the next message - it should not get the same buffer again. I need to keep a map of all the pids that already read the current buffer, and in case they try to read again and the buffer was not changed, they should be blocked until there is a new buffer. I'm trying to figure out whether there is a way I can save that info without a map. I heard there are some redundant fields inside the I/O system that I can use to flag a process if it already read the current buffer. Can someone give me a tip where I should look for that field? How can I save info on the current process without keeping a "map" of pids and buffers? Thanks!
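
    One field commonly used for exactly this is the private_data pointer of struct file, which exists per open file descriptor. A very rough sketch, assuming a global buffer_generation counter bumped on every successful write and a read_queue wait queue (these names are made up for illustration):

        /* remember, per reader, which buffer generation was last seen */
        static ssize_t mcast_read(struct file *filp, char __user *buf,
                                  size_t len, loff_t *off)
        {
            unsigned long seen = (unsigned long)filp->private_data;

            /* block until the writer publishes a newer buffer than the one we saw */
            if (wait_event_interruptible(read_queue, buffer_generation != seen))
                return -ERESTARTSYS;

            filp->private_data = (void *)buffer_generation;
            /* ...copy_to_user(buf, current_buffer, len) here... */
            return 0;   /* placeholder: return the number of bytes actually copied */
        }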

    Read the article

  • In sync query calls, one query causing other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it: I was involved in optimizing an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there are some value mappings, performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls, and almost 20% on SELECT calls (the rest distributed over other parts).

    After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else) - each SELECT took longer this time! Everything was running synchronously, there were no async calls, and there was only one single thread of execution. The SELECT and INSERT queries are very simple and don't have anything special, and they are on different tables, but in the same DB. I tested with both the DB on the application machine and on a remote network machine.

    I can't think of any explanation for this, as the profiler (application profiler, not SQL Profiler) reported the changes in the method call times, and by removing the INSERT statements the SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (There can't be cache / query optimization effects, because the queries were run synchronously, in a single thread, and it was far from affecting the cache this much.) I should note that the speed bottleneck was in SQL Server, which was using most of the CPU time.

    Read the article

  • What type of data store should I use for my iOS app?

    - by mwiederrecht
    I am pretty new to iOS and using servers, so forgive me. I am building an iOS app for research. I need to monitor things that the user does and then push it up to a server for analysis (yes, with user and IRB permission). On the client side I need to keep quite a bit of data that won't really change except when pulling an updated version from the server, plus a minimal amount of user-specific data. Most of the data I collect needs to be pushed to a server for analysis and can then be deleted from the client side. I am struggling to figure out what kind of data store I need to use, especially since I am not quite sure how the pushing and pulling from the server works yet. Does it make sense to use Core Data? XML? SQLite? I like the Core Data idea, but I am not sure what kind of problems I will run into when I need to send large amounts of data to and from the server. I imagine I might need to send data in a different form than it is stored in on either end - so what kind of overhead am I likely to run into in the process of converting that data? Is there a good format to save stuff in that would work well for me on both ends AND for sending the data? As you can probably tell, I could use some advice. Thanks!

    Read the article

  • Receiving "expected expression before" Error When Using A Struct

    - by Zach Dziura
    I'm in the process of creating a simple 2D game engine in C with a group of friends at school. I'd like to write this engine in an Object-Oriented way, using structs as classes, function pointers as methods, etc. To emulate standard OOP syntax, I created a create() function which allocates space in memory for the object. I'm in the process of testing it out, and I'm receiving an error. Here is my code for the two files that I'm using to test:

    test.c:

        #include <stdio.h>

        int main()
        {
            typedef struct {
                int i;
            } Class;

            Class *test = (Class*) create(Class);
            test->i = 1;
            printf("The value of \"test\" is: %i\n", test->i);
            return 0;
        }

    utils.c:

        #include <stdio.h>
        #include <stdlib.h>
        #include "utils.h"

        void* create(const void* class)
        {
            void *obj = (void*) malloc(sizeof(class));
            if (obj == 0) {
                printf("Error allocating memory.\n");
                return (int*) -1;
            } else {
                return obj;
            }
        }

        void destroy(void* object)
        {
            free(object);
        }

    The utils.h file simply holds prototypes for the create() and destroy() functions. When I execute gcc test.c utils.c -o test, I'm receiving this error message:

        test.c: In function 'main':
        test.c:10:32: error: expected expression before 'Class'

    I know it has something to do with my typedef at the beginning, and how I'm probably not using proper syntax. But I have no idea what that proper syntax is. Can anyone help?
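
    For what it's worth, a possible direction (not the asker's code): create(Class) fails because Class is a type name rather than a value, and sizeof(class) inside create() measures a pointer. Passing the size instead and wrapping it in a macro keeps the OOP feel:

        #include <stdlib.h>

        /* allocate a zeroed object of the given size */
        void *create(size_t size)
        {
            return calloc(1, size);
        }

        /* the macro supplies sizeof(type), so callers can still "name the class" */
        #define NEW(type) ((type *)create(sizeof(type)))

        /* usage: Class *test = NEW(Class); */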

    Read the article

  • VB.NET Program Locks Up with Internet Explorer Opened

    - by aaronsj
    I'm using Visual Studio 2008 and developing a VB.NET application. I'm having strange lockup problems with my program, but only when Internet Explorer 8 is opened. When I cover my form with another window and then uncover it, I find that it has locked up. My program has no references to IE and the only thing it even has to do with IE is using Process.Start with a web address. My program works fine and exactly as it should, but only when IE is not opened. Does anyone know why a program would lock up only while IE is running?

    Edit: I've done some digging and I've found the offending thread in my program. I don't know what starts this thread or what it does, but when I kill it, my program no longer freezes. The thread is one of the CreateApplicationContext threads; here are the last few items in the stack trace of that thread:

        6  ntkrnlpa.exe+0x897bc
        7  ntdll.dll!KiFastSystemCallRet
        8  mscorwrks.dll!LogHelp_TerminateOnAssert+0x61
        9  mscorwrks.dll!DllUnregisterServerInternal+0x10523
        10 mscorwrks.dll!DllUnregisterServerInternal+0x10542
        11 mscorwrks.dll!StrongNameErrorInfo+0x34387
        12 mscorwrks.dll!StrongNameErrorInfo+0x34815
        13 mscorwrks.dll!CreateApplicationContext+0xbc35
        14 KERNEL32.dll!GetModuleHandleA+0xdf

    Process Explorer says my program is using no CPU nor throwing any exceptions while it is hung.

    Read the article

  • R from java with no graphics: is it worth moving to JRI

    - by LH
    I have a system set up that's been happily running R from a Java servlet, spawning processes and hooking into the process's stdin, stdout, and stderr streams, as in the second answer to this question. After a system upgrade (that included glibc), the input is no longer reaching the R process.* Until now, 'R --vanilla --slave -f [file] ...' was working fine for me. I also have no Swing dependencies right now, so I'm somewhat reluctant to add them. (I may actually not be able to add Swing dependencies; am I right that using REngine automatically brings Swing in? The examples import all of Swing.) Are there advantages to switching to JRI? What changes would I need to make to my R script? (It currently reads from stdin and writes to stdout.) I'm not finding the provided examples terribly helpful for how to use JRI in this situation. Thanks for your help & comments.

    *I can't even tell if the problem is data being written too soon or too late, but that's a separate issue/question; if I move to JRI I'm hoping it all becomes moot.

    Read the article

  • Maven exec bash script and save output as property

    - by djechlin
    I'm wondering if there exists a Maven plugin that runs a bash script and saves its result into a property. My actual use case is to get the git source version. I found one plugin available online but it didn't look well tested, and it occurred to me that a plugin as simple as the one in the title of this post is all I need. The plugin would look something like:

        <plugin>maven-run-script-plugin>
          <phase>process-resources</phase> <!-- not sure where most intelligent -->
          <configuration>
            <script>"git rev-parse HEAD"</script> <!-- must run from build directory -->
            <targetProperty>"properties.gitVersion"</targetProperty>
          </configuration>
        </plugin>

    Of course it is necessary to make sure this happens before the property will be needed; in my case I want to use this property to process a source file.
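
    One existing way to get this effect, sketched with the GMaven plugin's execute goal (the coordinates, phase, and property name here are assumptions to verify against the plugin's documentation):

        <plugin>
          <groupId>org.codehaus.gmaven</groupId>
          <artifactId>gmaven-plugin</artifactId>
          <executions>
            <execution>
              <phase>process-resources</phase>
              <goals><goal>execute</goal></goals>
              <configuration>
                <source>
                  // run the command and store its output as a build property
                  project.properties['gitVersion'] =
                      'git rev-parse HEAD'.execute().text.trim()
                </source>
              </configuration>
            </execution>
          </executions>
        </plugin>

    Afterwards ${gitVersion} can be referenced like any other property, e.g. in filtered resources.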

    Read the article

  • localhost yes but phpmyadmin blank

    - by Giskin Leow
    People using WAMP usually have problems with both localhost and phpMyAdmin loading blank, which is typically a port problem. In my case only phpMyAdmin is blank; sqlbuddy and phpinfo work with no problem. I tried uninstalling and reinstalling WAMP, and tried XAMPP: same problem - everything works well except phpMyAdmin.

    MySQL log:

        120905  8:03:08 [Note] Plugin 'FEDERATED' is disabled.
        120905  8:03:08 InnoDB: The InnoDB memory heap is disabled
        120905  8:03:08 InnoDB: Mutexes and rw_locks use Windows interlocked functions
        120905  8:03:08 InnoDB: Compressed tables use zlib 1.2.3
        120905  8:03:09 InnoDB: Initializing buffer pool, size = 128.0M
        120905  8:03:09 InnoDB: Completed initialization of buffer pool
        120905  8:03:09 InnoDB: highest supported file format is Barracuda.
        120905  8:03:09 InnoDB: Waiting for the background threads to start
        120905  8:03:10 InnoDB: 1.1.8 started; log sequence number 1595675
        120905  8:03:11 [Note] Server hostname (bind-address): '(null)'; port: 3306
        120905  8:03:11 [Note]   - '(null)' resolves to '::';
        120905  8:03:11 [Note]   - '(null)' resolves to '0.0.0.0';
        120905  8:03:11 [Note] Server socket created on IP: '0.0.0.0'.
        120905  8:03:13 [Note] Event Scheduler: Loaded 0 events
        120905  8:03:13 [Note] wampmysqld: ready for connections.

    Apache log:

        [Wed Sep 05 08:03:09 2012] [notice] Apache/2.2.22 (Win32) PHP/5.4.3 configured -- resuming normal operations
        [Wed Sep 05 08:03:09 2012] [notice] Server built: May 13 2012 13:32:42
        [Wed Sep 05 08:03:09 2012] [notice] Parent: Created child process 3812
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Child process is running
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Acquired the start mutex.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting 64 worker threads.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80.
        [Wed Sep 05 08:04:14 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico
        [Wed Sep 05 08:09:50 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico
        [Wed Sep 05 08:41:03 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/phpMyAdmin

    Read the article

  • Hourly SQL Server 2005 Slowness (Possibly caused by SYSTEM)

    - by Zorlack
    We're trying to diagnose the cause of slowness on our database server. We're running the latest rev of SQL Server 2005 on Windows 2008 x64. The behavior that we're seeing is this: we see the SYSTEM process spike one of the CPUs for about 2 minutes, and during this time SQL Server slows down by a factor of 10. The slowness lasts until SYSTEM is done; then in an hour everything starts again. During these slowdowns disk writes don't spike and paging doesn't spike; the only noticeable precursor we see is that SYSTEM maxes out one of the sixteen (HT) CPUs. Note that this doesn't happen at the top of the hour - it just happens once an hour, and it shifts a bit depending on the length of the incident. At the moment this is causing intermittent slowdowns, but when the server is really busy it can cause worker thread starvation. The server is a dual quad-core Dell R710 with 96GB of RAM and RAID10 data/log disks. Has anyone experienced this kind of problem? Does anyone know where we should look? Edit: SQL Server version is 9.0.4035.

    Read the article

  • Batch file recursively find files and rar them

    - by b1gf00t
    Hi there, I have a parent directory which hosts many sub-directories, and in every sub-directory there are .mpg movies. Some of the directories might contain one or more .mpg movies. I would like to automate the process below, which I have been doing manually.

    Step One: If the directory has more than one .mpg file, I create separate directories for each and move each file into its own directory, naming the directory after the file.

    Step Two: I rar each video file in its directory as per one of my profiles; that splits the movie into 50MB parts, tests the archive, deletes the source, and instructs WinRAR to wait if another rar is executing. I am doing this so I can queue jobs manually.

    Step Three: After having all the rars in the sub-directories, I create a checksum for every directory, leaving checksum.sfv in every directory.

    Step Four: I copy the parent folder and its sub-directories to my external drives.

    I was hoping that someone could assist me in creating a script. I was able to automate the process of creating directories named after the file and moving the files. However, I never succeeded in automating Step Two. I am using the software below:

    - WinRAR from rarlabs
    - exf from exactfile

    Appreciate your assistance.
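
    A rough batch sketch of Step Two only, assuming rar.exe is on the PATH; the -v50m, -t and -df switches are assumptions about the profile and should be checked against WinRAR's command-line help:

        @echo off
        rem rar each .mpg into 50MB parts next to the movie, test the archive,
        rem and delete the source file afterwards
        for /r "C:\Parent" %%F in (*.mpg) do (
            rar a -v50m -t -df "%%~dpnF.rar" "%%F"
        )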

    Read the article

  • PHP download script for "processes running limited" hosting (eg. hostgator)

    - by Joe
    I am currently with HostGator on a shared hosting plan. I have a new website I'm trying to set up with a download.php script. The issue I am having is that, while someone is "downloading" a file through the download.php script, it counts as a "process", and my hosting plan limits the processes that can be running at the same time to 25 at present. My question is, what options do I have?

    a) Move to new web hosting that doesn't limit processes running.

    b) Change the way files are downloaded.

    I would like to choose option b), but it occurs to me that I need to have the file accessed through PHP in order to restrict the number of downloads and track download statistics, as well as to protect against hotlinking. If there were a way to have the PHP script send the file so that the process doesn't need to be running the whole time the file is being downloaded, that would eliminate the problem, but to my knowledge that isn't possible. Should I make the move to a new hosting company? I really enjoy HostGator as they have provided the best hosting experience for me so far, except for this one issue of course, so I don't want to go on the hunt for another decent shared hosting company that doesn't limit processes running, only to find out there is another restriction or "catch" to the shared hosting deal.
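
    For option b), one commonly used approach is to let PHP do the authorization and statistics, then hand the actual transfer back to the web server so the PHP process exits immediately. A sketch assuming Apache with the mod_xsendfile module installed ($path and the log_download() call are placeholders):

        <?php
        // record the download / enforce limits here (placeholder)
        log_download($fileId);

        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="' . basename($path) . '"');
        // mod_xsendfile streams the file itself; PHP is done after this request
        header('X-Sendfile: ' . $path);
        exit;

    Whether the host allows the X-Sendfile module is its own question, but it removes the long-lived PHP process from the picture.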

    Read the article

  • While in a transaction, how can reads to an affected row be prevented until the transaction is done?

    - by Mahn
    I'm fairly sure this has a simple solution, but I haven't been able to find it so far. Given an InnoDB MySQL database with the isolation level set to SERIALIZABLE, and given the following operation:

        BEGIN WORK;
        SELECT * FROM users WHERE userID=1;
        UPDATE users SET credits=100 WHERE userID=1;
        COMMIT;

    I would like to make sure that as soon as the SELECT inside the transaction is issued, the row corresponding to userID=1 is locked for reads until the transaction is done. As it stands now, UPDATEs to this row will wait for the transaction to finish if it is in progress, but SELECTs will simply read the previous value. I understand this is the expected behaviour in this case, but I wonder if there is a way to lock the row in such a way that SELECTs will also wait until the transaction is finished before returning their values. The reason I'm looking for this is that at some point, with enough concurrent users, it could happen that while the previous transaction is in progress someone else reads the "credits" value to calculate something else. Ideally the code run by that someone else should wait for the transaction to finish and use the new value, because otherwise it could lead to irreversible desync issues. Note that I don't want to lock the entire table for reads, just the specific row. Also, I could add a boolean "locked" field to the tables and set it to 1 every time I start a transaction, but I don't really feel this is the most elegant solution here, unless there is absolutely no other way to handle this through MySQL directly.
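
    A sketch of the usual row-level answer in InnoDB: locking reads. The writer takes the exclusive lock up front, and any reader that must see the committed value also uses a locking read so it blocks until COMMIT:

        -- writer
        BEGIN;
        SELECT * FROM users WHERE userID = 1 FOR UPDATE;
        UPDATE users SET credits = 100 WHERE userID = 1;
        COMMIT;

        -- reader that has to wait for the new value
        SELECT credits FROM users WHERE userID = 1 LOCK IN SHARE MODE;

    Plain consistent-read SELECTs run under MVCC and return the old snapshot without blocking, which is why they come back immediately; a read has to be a locking read to participate in the row lock.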

    Read the article

  • Is it possible to get a truly unique id for a particular JVM instance?

    - by Uri
    I need a way to uniquely and permanently identify an instance of the JVM from within Java code running in that JVM. That is, if I have two JVMs running at the same time on the same machine, each is distinguishable. It is also distinguishable from running JVMs on other machines and from future executions on the same machine even if the process id is reused. I figure I could implement something like this by identifying the start time, the machine MAC, and the process id, and combining them in some way. I'm wondering if there is some standard way to achieve this. Update: I see that everyone recommended a UUID for the entire session. That seems like a good idea though possibly a little too heavyweight. Here is my problem though: I want to use the JVM id to create multiple unique identifiers in each JVM execution that somehow incorporate the JVM instance. My understanding is that you shouldn't really mix other numbers into a UUID because uniqueness is no longer guaranteed. An alternative is to make the UUID into a string and chain it, but then it becomes too long. Any ideas on overcoming this?
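
    A minimal sketch of one way around the "don't mix bits into the UUID" concern: keep one UUID per JVM and append a counter, rather than altering the UUID itself (class and method names are illustrative):

        import java.util.UUID;
        import java.util.concurrent.atomic.AtomicLong;

        public final class JvmId {
            // a static initializer runs exactly once per JVM, so this value
            // identifies the JVM instance for its whole lifetime
            public static final String JVM_INSTANCE = UUID.randomUUID().toString();

            private static final AtomicLong COUNTER = new AtomicLong();

            private JvmId() {}

            // per-JVM unique identifiers: the UUID stays intact, the counter
            // distinguishes ids generated within this JVM
            public static String nextId() {
                return JVM_INSTANCE + "-" + COUNTER.incrementAndGet();
            }
        }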

    Read the article

  • Java redirected System output to JTextArea doesn't update until calculation is finished

    - by user1806716
    I have code that redirects system output to a JTextArea, but that JTextArea doesn't update until the code is finished running. How do I modify the code to make the JTextArea update in real time during runtime?

        private void updateTextArea(final String text) {
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    consoleTextAreaInner.append(text);
                }
            });
        }

        private void redirectSystemStreams() {
            OutputStream out = new OutputStream() {
                @Override
                public void write(int b) throws IOException {
                    updateTextArea(String.valueOf((char) b));
                }

                @Override
                public void write(byte[] b, int off, int len) throws IOException {
                    updateTextArea(new String(b, off, len));
                }

                @Override
                public void write(byte[] b) throws IOException {
                    write(b, 0, b.length);
                }
            };
            System.setOut(new PrintStream(out, true));
            System.setErr(new PrintStream(out, true));
        }

    The rest of the code is mainly just an ActionListener for a button:

        private void updateButtonActionPerformed(java.awt.event.ActionEvent evt) {
            // TODO add your handling code here:
            String shopRoot = this.shopRootDirTxtField.getText();
            String updZipPath = this.updateZipTextField.getText();
            this.mainUpdater = new ShopUpdater(new File(shopRoot), updZipPath);
            this.mainUpdater.update();
        }

    The update() method begins the process of copying and pasting files on the file system, and during that process it uses System.out.println to report how far along the program is and how many more files it has to copy.
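
    A likely explanation, offered as an assumption about the code above: update() runs on the Event Dispatch Thread, so the invokeLater appends only execute after it returns. Moving the long-running work to a background thread lets the text area repaint as output arrives, for example:

        private void updateButtonActionPerformed(java.awt.event.ActionEvent evt) {
            final String shopRoot = shopRootDirTxtField.getText();
            final String updZipPath = updateZipTextField.getText();
            new javax.swing.SwingWorker<Void, Void>() {
                @Override
                protected Void doInBackground() {
                    // off the EDT: println output now reaches the JTextArea live
                    mainUpdater = new ShopUpdater(new java.io.File(shopRoot), updZipPath);
                    mainUpdater.update();
                    return null;
                }
            }.execute();
        }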

    Read the article

  • Alert Box Methods

    - by Vecta
    I just need a little advice on what may be the best method for handling my situation. I need to place three buttons in the sidebar of the website I maintain. The website is massive and hard to handle. Currently it's all HTML files (there are over 10,000 of them, believe it or not). We're transitioning to a database-driven website, so I don't want to make any sweeping site-wide changes, as they may just be scrapped in our redesign process in the coming months. However, these buttons are for an application process. When you click on one, an alert box needs to pop up to give you a bit of information and let you either cancel the action or proceed. The buttons are currently located in the left nav, which is included on every page of the website. Would it be possible to accomplish this using JS or jQuery? I'd be unable to easily add scripts into the tags on all of the applicable pages, but I'd like to avoid the browser-driven "http://www...Says: blah blah" message if possible. Any insight is greatly appreciated! Thank you!
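
    Since jQuery is on the table, a rough sketch using a jQuery UI modal dialog instead of the native confirm box; the #confirm div, the .apply-button class, and loading jQuery/jQuery UI from the shared left-nav include are all assumptions:

        <div id="confirm" style="display:none">A bit of information about applying...</div>
        <script>
        $(function () {
            var target;
            $('#confirm').dialog({
                autoOpen: false,
                modal: true,
                buttons: {
                    'Proceed': function () { window.location = target; },
                    'Cancel':  function () { $(this).dialog('close'); }
                }
            });
            $('.apply-button').on('click', function (e) {
                e.preventDefault();              // stop the normal navigation
                target = this.href;              // remember where to go on Proceed
                $('#confirm').dialog('open');
            });
        });
        </script>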

    Read the article

  • Parsing string based on initial format

    - by Kayla
    I'm trying to parse a set of lines and extract certain parts of the string based on an initial format (reading a configuration file). A little more explanation: the format can contain up to 4 parts to be extracted. In this case, %S will skip the part, %a-%c will extract the part and treat it as a string, and %d as an int. What I am trying to do now is come up with some clever way to parse it. So far I came up with the following prototype; however, my pointer arithmetic still needs some work to skip/extract the parts. Ultimately each part will be stored in an array of structs, such as:

        struct st_temp {
            char *parta;
            char *partb;
            char *partc;
            char *partd;
        };

    ...

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>   /* for exit() */

        #define DIM(x) (sizeof(x)/sizeof(*(x)))

        void process (const char *fmt, const char *line)
        {
            char c;
            const char *src = fmt;
            while ((c = *src++) != '\0') {
                if (c == 'S');      // skip part
                else if (c == 'a'); // extract %a
                else if (c == 'b'); // extract %b
                else if (c == 'c'); // extract %c
                else if (c == 'd'); // extract %d (int)
                else {
                    printf("Unknown format\n");
                    exit(1);
                }
            }
        }

        static const char *input[] = {
            "bar 200.1 / / (zaz) - \"bon 10\"",
            "foo 100.1 / / (baz) - \"apt 20\"",
        };

        int main (void)
        {
            const char *fmt = "%S %a / / (%b) - \"%c %d\"";
            size_t i;

            for(i = 0; i < DIM (input); i++) {
                process (fmt, input[i]);
            }
            return (0);
        }
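
    For the fixed layout shown in fmt, a possible shortcut is to let sscanf do the skipping and extracting; a sketch (the field widths and the int type for %d are assumptions about the intended struct):

        #include <stdio.h>

        struct parts {
            char parta[64];
            char partb[64];
            char partc[64];
            int  partd;
        };

        /* returns 1 when all four parts were extracted */
        static int parse_line(const char *line, struct parts *p)
        {
            /* %*s skips the %S token; %63[^)] reads up to the closing parenthesis */
            return sscanf(line, "%*s %63s / / (%63[^)]) - \"%63s %d\"",
                          p->parta, p->partb, p->partc, &p->partd) == 4;
        }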

    Read the article

  • How to show and update popup in 1 thread

    - by user3713986
    I have 1 app with 2 forms, MainFrm and PopupFrm, and 1 thread that updates some information in PopupFrm. To update PopupFrm I currently use:

    In MainFrm.cs:

        private PopupFrm mypop;

        MainFrm()
        {
            ....
            PopupFrm mypop = new PopupFrm();
            mypop.Show();
        }

        MyThread()
        {
            Process GetData();...
            mypop.Update();
            ...
        }

    In PopupFrm.cs:

        public void Update()
        {
            this.Invoke((MethodInvoker)delegate
            {
                ....
            });
        }

    The problem here is that the popup always displays when MainFrm displays (at application start, not when there is data to update). So I changed MainFrm.cs to:

        private PopupFrm mypop;
        private bool firstdisplay = false;

        MainFrm()
        {
            ....
            PopupFrm mypop = new PopupFrm();
            //mypop.Show();
        }

        MyThread()
        {
            Process GetData();...
            if (!firstdisplay)
            {
                mypop.Show();
                firstdisplay = true;
            }
            mypop.Update();
            ...
        }

    But now it cannot update the popup GUI. How can I fix this issue? Thanks all.
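
    A possible fix, sketched under the assumption that MyThread runs on a worker thread: Show() has to run on the UI thread, so marshal it with Invoke the first time data arrives (and assign the mypop field in the constructor without redeclaring a local variable, or the field stays null):

        void MyThread()
        {
            GetData();
            if (!firstdisplay)
            {
                // create/show the popup on the UI thread, not on the worker thread
                this.Invoke((MethodInvoker)delegate { mypop.Show(); });
                firstdisplay = true;
            }
            mypop.Update();   // PopupFrm.Update already invokes onto the UI thread
        }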

    Read the article

  • How to add a checkbox for each row in Rails 3.2 index page?

    - by user938363
    We would like to add a checkbox to each row on a Rails index page to flag the row. This checkbox is not part of the object (there is no checkbox boolean in the database). When the index page shows, a user can check the box to trigger an event for that row, along the following lines:

    # objects/checkbox_index.html.erb

        <table>
          <tr>
            <th>CheckBox</th>
            <th>Object Name</th>
            <th>Object ID</th>
          </tr>
          <%= @objects.each do |obj| %>
            <tr>
              <td><%= checkbox %></td>
              <td><%= obj.name %></td>
              <td><%= obj.id %></td>
            </tr>
          <% end %>
        </table>

    In the controller, the process will be something like this:

        @objects.each do |obj|
          some_event if obj.checked
        end

    There are a couple of things we don't quite understand:

    1. How to declare an array checkbox variable on the form and link it to each row of obj? We have been using attr_accessor to declare a var for a form.

    2. How to retrieve each row on the checkbox_index form and pass them back to the controller? We are using simple_form for new/edit.

    Can anyone point me towards any good examples of this sort of behavior, or suggest what we should be thinking about? Many thanks.
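
    A rough sketch of the usual Rails approach: the checkbox is just a form field named as an array, so the controller receives the checked ids in params (the path helper, action name, and SomeModel are assumptions standing in for the real names):

        <%# objects/checkbox_index.html.erb %>
        <%= form_tag process_checked_objects_path do %>
          <table>
            <% @objects.each do |obj| %>
              <tr>
                <td><%= check_box_tag 'obj_ids[]', obj.id %></td>
                <td><%= obj.name %></td>
                <td><%= obj.id %></td>
              </tr>
            <% end %>
          </table>
          <%= submit_tag 'Process checked rows' %>
        <% end %>

        # in the controller action (SomeModel stands for the actual model class)
        def process_checked
          SomeModel.where(id: params[:obj_ids]).each { |obj| some_event(obj) }
        end

    No attr_accessor is needed because the checkbox never lives on the model; it only exists in the submitted form data.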

    Read the article

  • Windows 2008 and wrong BPL loading [SOLVED]

    - by Beto Neto
    I have an application built with run-time packages. When the executable starts it automatically loads the required packages (.bpl). Recently we installed a Windows 2008 R2 server to use for Terminal Services. We maintain some old compiled versions of our application in different paths, like this:

        c:\app\version_1\common.bpl
        c:\app\version_1\app.exe
        c:\app\version_2\common.bpl
        c:\app\version_2\app.exe

    Common.bpl is a run-time package that app.exe depends on.

    THE PROBLEM: I start "c:\app\version_2\app.exe" and it loads "c:\app\version_2\common.bpl". When I start "c:\app\version_1\app.exe" it loads the WRONG bpl (the one from version_2). The path "c:\app\version_2\" isn't in the system search path. On a Windows 2003 server this problem doesn't occur. What can I do to solve this? Thanks!

    I downloaded Process Explorer (Microsoft Sysinternals) and checked the loaded modules of each executable; they are all correct! But I noticed another problem. After starting the second version, an entry-not-found error occurs, telling me that an initialization entry point of a unit that only exists in one of the versions could not be found. Something is very strange. Process Explorer is telling me that the process is loading the correct modules, but when they are running this seems not to be happening. It seems the applications are sharing the loaded modules.

    SOLVED: There was a MouseHook using FindVCLWindow; this was generating the AV.

    Read the article
