Search Results

Search found 9275 results on 371 pages for 'condition variables'.

Page 324/371

  • Function Returning Negative Value

    - by Geowil
    I still have not run it through enough tests, but for certain non-negative inputs this function will sometimes hand back a negative value. I have done a lot of manual testing in a calculator with different values, but I have yet to reproduce this behavior there. I was wondering if someone would take a look and see if I am missing something.

        float calcPop(int popRand1, int popRand2, int popRand3, float pERand, float pSRand)
        {
            return ((((((23000 * popRand1) * popRand2) * pERand) * pSRand) * popRand3) / 8);
        }

    The variables all contain randomly generated values:

        popRand1: between 1 and 30
        popRand2: between 10 and 30
        popRand3: between 50 and 100
        pSRand:   between 1 and 1000
        pERand:   between 1.0f and 5500.0f, which is then multiplied by 0.001f before being passed to the function above

    Edit: After following the execution a bit more closely, it is not the fault of this function directly. It produces an infinitely positive float which then flips negative when I use this code later on:

        pPMax = (int)pPStore;

    pPStore is a float that holds calcPop's return value. So the question now is: how do I stop the formula from doing this? Testing even with very high values in a calculator has never displayed this behavior. Is there something in how the compiler processes the order of operations that is causing this, or are my values simply going too high? If the latter, I could just increase the division to 16, I think.
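
    For context, here is a minimal C sketch of the suspected failure mode, using the worst-case inputs from the ranges above (the numbers are illustrative, not taken from the poster's program). The largest possible result is roughly 1.4e12, which fits comfortably in a float but far exceeds INT_MAX, so it is the later (int) cast that goes negative, not the multiplication itself.

        #include <limits.h>
        #include <stdio.h>

        int main(void)
        {
            /* Worst case from the stated ranges: 23000 * 30 * 30 * 5.5 * 1000 * 100 / 8 */
            float worst = (((((23000.0f * 30) * 30) * 5.5f) * 1000) * 100) / 8;
            printf("worst case = %.0f, INT_MAX = %d\n", worst, INT_MAX);  /* ~1.4e12 vs 2147483647 */

            /* Converting a float value larger than INT_MAX to int is undefined behaviour;
               on common compilers it shows up as INT_MIN, i.e. a negative number. */
            int pPMax = (int)worst;
            printf("(int)worst = %d\n", pPMax);
            return 0;
        }

    Note that dividing by 16 instead of 8 only halves the result, which still exceeds INT_MAX; keeping the value in a float or double, or clamping it before the cast, is what avoids the sign flip.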

    Read the article

  • Setting Up a Wordpress Function in theme's function.php File

    - by user1609391
    I am trying to create the function below in my theme's function.php file and call it from my taxonomy.php file via query_brands_geo('dealers', 'publish', '1', $taxtype, $geo, $brands); all of the variables are set in taxonomy.php. The query works perfectly if I put it directly in my taxonomy.php file. What am I missing to make this work as a function? As a function I get this warning, repeated for arguments 2 through 6: Warning: Missing argument 2 for query_brands_geo()

        function query_brands_geo($posttype, $poststatus, $paidvalue, $taxtype, $geo, $brands) {
            /* Custom Query for a brand/geo combination to display dealers
               with a certain brand and geography */
            // Query only for brands/geography combo and paid dealers
            $wp_query = new WP_Query();
            $args = array(
                'post_type'      => '$posttype',
                'post_status'    => array($poststatus),
                'orderby'        => 'rand',
                'posts_per_page' => 30,
                'meta_query'     => array(
                    array(
                        'key'     => 'wpcf-paid',
                        'value'   => array($paidvalue),
                        'compare' => 'IN',
                    )
                ),
                'tax_query' => array(
                    'relation' => 'AND',
                    array(
                        'taxonomy' => $taxtype,
                        'field'    => 'slug',
                        'terms'    => $geo
                    ),
                    array(
                        'taxonomy' => 'brands',
                        'field'    => 'slug',
                        'terms'    => $brands
                    )
                )
            );
            $wp_query->query($args);
        }
        add_action( 'after_setup_theme', 'query_brands_geo' );

    Read the article

  • How to allow all except part 1 and part 2?

    - by Stackfan
    This lets me accept dynamic input variables instead of using a static prefix like /en/ etc., but the problem is that all controllers are blocked; everything goes to index/index. Question: how can I tell this rule to allow everything as it does now, but not to match URLs like http://site.com/donotcatch/me and http://site.com/iamnotbelongstodynamic1/blabla?

        protected function _initRoutes()
        {
            ...
            $dynamic1 = new Zend_Controller_Router_Route(
                '/:variable0/:variable1',
                array(
                    'controller' => 'index',
                    'action'     => 'index'),
                array(
                    'variable0' => '^[a-zA-Z0-9_]*$',
                    'variable1' => '^[a-zA-Z0-9_]*$',
                )
            );

    Follow up: Normally I always believe "yes we can", so we can do it like this, where $dynamic1 does not interfere with my other static controllers:

        // http://site/yeswecan/blabla
        // variable0 = yeswecan
        // variable1 = blabla
        $dynamic1 = new Zend_Controller_Router_Route(
            '/:variable0/:variable1',
            array(
                'controller' => 'index',
                'action'     => 'index'),
            array(
                'variable0' => '^[a-zA-Z]*$',
                'variable1' => '^[a-z0-9_]*$',
            )
        );

        // http://site/ajax/whatever... // solves it
        $dynamic2 = new Zend_Controller_Router_Route(
            '/ajax/:variable0',
            array(
                'controller' => 'ajax',
                'action'     => '' ),
            array(
                'variable0' => '^[a-zA-Z0-9_]*$',
            )
        );

        // http://site/order/whatever... // solves it
        $dynamic3 = new Zend_Controller_Router_Route(
            '/order/:variable0',
            array(
                'controller' => 'order',
                'action'     => ''),
            array(
                'variable0' => '^[a-zA-Z0-9_]*$',
            )
        );

    Note: the controllers still fail; for example, http://site/ajax/whatever always goes to /ajax/index where I wanted it to go to /ajax/user-inserted-value. How can I fix $dynamic2 and $dynamic3 while keeping $dynamic1?

    Read the article

  • Identical files from different servers. Why might IE 8 display them differently?

    - by jasongetsdown
    I'm working on a site that will go on my company's intranet. I developed it locally on my computer, checking it in different browsers and on colleague's computers, and when it was done I handed it off to IT. They put identical copies on a staging server, and on the production server. This is a site built only with html, javascript, and css. No server side scripting. It also uses a DWF viewer plugin from Autodesk. It is a single standalone page (not part of a CMS) that allows users to load drawings into the viewer and then click to see info from a database of space info saved in a series of js arrays (the space DB software spits out a js file with all the info listed in array literals, creating a crap ton of global variables - ugh, but I digress). When I followed their links (using IE 8) the version on the staging server looked as expected, but the layout is hosed on the version from the production server. Specifically, it seems like a div that is supposed to flow to the right of a div that is float: left is displaying below the floated div at full width, as though it was clear: left (which it is not). It also has the wrong height. I downloaded the files from each and they are identical to my local version. Frustrated, I cleared my browser's cache, restarted my computer, checked it on a colleague's computer who also has IE 8. All the same issue. Staging server good. Production server bad. Finally I uninstalled IE 8 and looked at it in IE 6. Both versions looked fine. So, to recap. Two different servers. No server side scripting. Identical files. One browser agrees they are identical, the other does not. What could cause this?

    Read the article

  • unexpected behaviour of object stored in web service Session

    - by draconis
    Hi. I'm using Session variables inside a web service to maintain state between successive method calls by an external application called QBWC. I set this up by decorating my web service methods with this attribute:

        [WebMethod(EnableSession = true)]

    I'm using the Session variable to store an instance of a custom object called QueueManager. The QueueManager has a property called ChangeQueue which looks like this:

        [Serializable]
        public class QueueManager
        {
            ...
            public Queue<QBChange> ChangeQueue { get; set; }
            ...

    where QBChange is a custom business object belonging to my web service. Now, every time I get a call to a method in my web service, I use this code to retrieve my QueueManager object and access my queue:

        QueueManager qm = (QueueManager)Session[ticket];

    Then I remove an object from the queue using qm.dequeue(), and save the modified QueueManager object (modified because it contains one less object in the queue) back to the Session variable, like so:

        Session[ticket] = qm;

    ready for the next web service method call using the same ticket. Now here's the thing: if I comment out this last line (//Session[ticket] = qm;), the web service behaves exactly the same way, reducing the size of the queue between method calls. Now why is that? The web service seems to be updating a class contained in serialized form in a Session variable without being asked to. Why would it do that? When I deserialize my QueueManager object, does the qm variable hold a reference to the serialized object inside the Session[ticket] variable? This seems very unlikely.

    Read the article

  • Need help with threads in a client/server

    - by nunos
    For college, I am developing a local relay chat. I have to program a chat server and client that will only send messages between different terminal windows on the same computer, using threads and FIFOs. The FIFO part is giving me no trouble; the threads part is the one giving me headaches. The server has one thread for receiving commands from a FIFO (used by all clients) and another thread for each client that is connected. For each connected client I need to keep certain information. At first I was using global variables, which worked as long as there was only one client connected (which is not much of a chat, chatting alone). So, ideally I would have some data per connected client:

        - nickname
        - name
        - email
        - etc...

    However, I don't know how to do that. I could create a client_data[MAX_NUMBER_OF_THREADS] array, where client_data is a struct with everything I need access to, but this would require every communication between server and client to look up the client's id in the client_data array, and that does not seem very practical. I could also instantiate a client_data immediately after creating the thread, but it would only be available in that block, and that is not very practical either. As you can see, I am in need of a little guidance here. Any comment, piece of code or link to any relevant information is greatly appreciated. Thanks.
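
    One common shape for this, sketched here with POSIX threads (field names are purely illustrative, not taken from the poster's code), is to heap-allocate one struct per client and hand its pointer to the thread, so each client thread carries its own state without globals or repeated array lookups:

        #include <pthread.h>
        #include <stdlib.h>
        #include <string.h>

        /* Illustrative per-client state; add whatever the server needs (FIFO fd, etc.). */
        typedef struct {
            int  id;
            char nickname[32];
            char name[64];
            char email[64];
        } client_data;

        static void *client_thread(void *arg)
        {
            client_data *cd = arg;        /* this thread's private copy of the client state */
            /* ... serve this client, reading and updating cd->nickname etc. ... */
            free(cd);                     /* release it when the client disconnects */
            return NULL;
        }

        /* Called by the server loop for every new client. */
        static int spawn_client(int id, const char *nick)
        {
            client_data *cd = calloc(1, sizeof *cd);
            if (cd == NULL)
                return -1;
            cd->id = id;
            strncpy(cd->nickname, nick, sizeof cd->nickname - 1);
            pthread_t tid;
            return pthread_create(&tid, NULL, client_thread, cd);
        }

    The struct never needs to be visible outside the thread that owns it, which sidesteps both the global-variable problem and the "look up my id on every message" problem described above.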

    Read the article

  • Is there any point in using a volatile long?

    - by Adamski
    I occasionally use a volatile instance variable in cases where I have two threads reading from / writing to it and don't want the overhead (or potential deadlock risk) of taking out a lock; for example a timer thread periodically updating an int ID that is exposed as a getter on some class:

        public class MyClass {
            private volatile int id;

            public MyClass() {
                ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
                execService.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        ++id;
                    }
                }, 0L, 30L, TimeUnit.SECONDS);
            }

            public int getId() {
                return id;
            }
        }

    My question: Given that the JLS only guarantees that 32-bit reads will be atomic, is there any point in ever using a volatile long? (i.e. 64-bit). Caveat: Please do not reply saying that using volatile over synchronized is a case of pre-optimisation; I am well aware of how / when to use synchronized but there are cases where volatile is preferable. For example, when defining a Spring bean for use in a single-threaded application I tend to favour volatile instance variables, as there is no guarantee that the Spring context will initialise each bean's properties in the main thread.

    Read the article

  • Gradual memory leak and slowdown in loop

    - by Benji XVI
    I have a simple Foundation tool that exports every frame of a movie as a .tiff file. Here is the relevant code:

        NSString* movieLoc = [NSString stringWithCString:argv[1]];
        QTMovie *sourceMovie = [QTMovie movieWithFile:movieLoc error:nil];
        int i = 0;
        while (QTTimeCompare([sourceMovie currentTime], [sourceMovie duration]) != NSOrderedSame) {
            // save image of movie to disk
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++];
            NSData *currentImageData = [[sourceMovie currentFrameImage] TIFFRepresentation];
            [currentImageData writeToFile:filePath atomically:NO];
            NSLog(@"%@", filePath);
            [sourceMovie stepForward];
            [arp release];
        }
        [pool drain];
        return 0;

    As you can see, in order to prevent very large memory buildups from the various transparently-autoreleased variables in the loop, we create, and flush, an autorelease pool on every pass through the loop. However, over the course of stepping through a movie, the amount of memory used by the program still gradually increases, and the speed at which frames are processed drops precipitously (from ~0.5 seconds per frame at the start to ~2 seconds per frame by the 250th frame). The only thing I can think of that could be causing the gradual memory leak is a buildup of the NSAutoreleasePool objects themselves. Am I right in thinking they will only be deallocated when the outer pool is released? If so, is there a better memory management solution here? Creating a pool on every pass through the loop seems a little hacky. And if not, what is causing the slow memory leak? (It is not NSStrings, and much too slow to be NSImages or NSDatas.) And what could be causing the slowdown?

    Read the article

  • Microsoft JScript runtime error Object doesn't support this property or method

    - by Darxval
    I am trying to call this function in my JavaScript, but it gives me the error "Microsoft JScript runtime error: Object doesn't support this property or method" and I can't figure out why. It occurs when trying to call hmacObj.getHMAC. This is from the jsSHA website, http://jssha.sourceforge.net/, which I am using to compute an HMAC-SHA1. Thank you!

        hmacObj = new jsSHA(signature_base_string, "HEX");
        signature = hmacObj.getHMAC("hgkghk", "HEX", "SHA-1", "HEX");

    Above this I have copied the code from sha.js. Snippet:

        function jsSHA(srcString, inputFormat) {
            /*
             * Configurable variables. Defaults typically work
             */
            jsSHA.charSize = 8;  // Number of Bits Per character (8 for ASCII, 16 for Unicode)
            jsSHA.b64pad = "";   // base-64 pad character. "=" for strict RFC compliance
            jsSHA.hexCase = 0;   // hex output format. 0 - lowercase; 1 - uppercase

            var sha1 = null;
            var sha224 = null;

    The function it is calling (inside of the jsSHA function), snippet:

        this.getHMAC = function (key, inputFormat, variant, outputFormat) {
            var formatFunc = null;
            var keyToUse = null;
            var blockByteSize = null;
            var blockBitSize = null;
            var keyWithIPad = [];
            var keyWithOPad = [];
            var lastArrayIndex = null;
            var retVal = null;
            var keyBinLen = null;
            var hashBitSize = null;

            // Validate the output format selection
            switch (outputFormat) {
            case "HEX":
                formatFunc = binb2hex;
                break;
            case "B64":
                formatFunc = binb2b64;
                break;
            default:
                return "FORMAT NOT RECOGNIZED";
            }

    Read the article

  • How does PHP interface with Apache?

    - by Sbm007
    Hi, I've almost finished writing a HTTP/1.0 compliant web server under Java (no commercial usage as such, this is just for fun) and basically I want to include PHP support. I realize that this is no easy task at all, but I think it'll be a nice accomplishment. So I want to know how PHP exactly interfaces with the Apache web server (or any other web server really), so I can learn from it and write my own PHP wrapper. It doesn't necessarily have to be mod_php, I don't mind writing a FastCGI wrapper - which to my knowledge is capable of running PHP as well. I would've thought that all that PHP needs is the output that goes to client (so it can interpret the PHP parts), the full HTTP request from client (so it can extract POST variables and such) and the client's host name. And then you simply take the parsed PHP code and write that to the output stream. There will probably be more things, but in essence that's how I would have thought it works. From what I've gathered so far, apache2handler provides an API which PHP makes use of to 'connect' to Apache. I guess it's an idea to look at the source code for apache2handler and php5apache2.dll or so, but before I do that I thought I'd ask SO first. If anyone has more information, experience, or some sort of specification that is relevant to this then please let me know. Thanks in advance!
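
    For a rough picture of the handoff, here is a bare-bones sketch of the plain-CGI exchange (FastCGI is essentially a persistent, multiplexed version of the same idea), written in C only to keep the shape visible; the same steps apply from a Java server. The web server puts the request metadata into environment variables, feeds the request body to the interpreter's stdin, and reads the response headers plus the rendered page back from its stdout. The php-cgi path and the variable values here are assumptions.

        #include <stdio.h>
        #include <unistd.h>

        static void run_php_cgi(const char *script, const char *query)
        {
            char script_env[512], query_env[512];
            snprintf(script_env, sizeof script_env, "SCRIPT_FILENAME=%s", script);
            snprintf(query_env,  sizeof query_env,  "QUERY_STRING=%s", query);

            char *envp[] = {
                "GATEWAY_INTERFACE=CGI/1.1",
                "REQUEST_METHOD=GET",
                "REDIRECT_STATUS=200",   /* php-cgi typically refuses to run without this (cgi.force_redirect) */
                script_env,
                query_env,
                NULL
            };

            /* stdout of php-cgi carries "Content-Type: ..." headers, a blank line,
               then the interpreted page, which is what goes back to the client. */
            execle("/usr/bin/php-cgi", "php-cgi", (char *)NULL, envp);
            perror("execle");            /* only reached if the exec itself failed */
        }

    POST bodies work the same way: set CONTENT_LENGTH and CONTENT_TYPE, then write the body to the child's stdin before reading its output.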

    Read the article

  • Can't store UTF-8 in RDS despite setting up new Parameter Group using Rails on Heroku

    - by Lail
    I'm setting up a new instance of a Rails (2.3.5) app on Heroku using Amazon RDS as the database. I'd like to use UTF-8 for everything. Since RDS isn't UTF-8 by default, I set up a new Parameter Group and switched the database to use that one, basically per this. It seems to have worked:

        SHOW VARIABLES LIKE '%character%';
        character_set_client      utf8
        character_set_connection  utf8
        character_set_database    utf8
        character_set_filesystem  binary
        character_set_results     utf8
        character_set_server      utf8
        character_set_system      utf8
        character_sets_dir        /rdsdbbin/mysql-5.1.50.R3/share/mysql/charsets/

    Furthermore, I've successfully set up Heroku to use the RDS database. After rake db:migrate, everything looks good:

        CREATE TABLE `comments` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `commentable_id` int(11) DEFAULT NULL,
          `parent_id` int(11) DEFAULT NULL,
          `content` text COLLATE utf8_unicode_ci,
          `child_count` int(11) DEFAULT '0',
          `created_at` datetime DEFAULT NULL,
          `updated_at` datetime DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY `commentable_id` (`commentable_id`),
          KEY `index_comments_on_community_id` (`community_id`),
          KEY `parent_id` (`parent_id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    In the markup, I've included:

        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

    I've also set the following in database.yml, though I'm not very confident that anything is being done to honor any of those settings in this case, as Heroku seems to be doing its own config when connecting to RDS:

        production:
          encoding: utf8
          collation: utf8_general_ci

    Now, I enter a comment through the form in the app: "Úbe® ƒåiL", but in the database I've got "Úbe® Æ’Ã¥iL". It looks fine when Rails loads it back out of the database and renders it to the page, so whatever it is doing one way, it's undoing the other way. If I look at the RDS database in Sequel Pro, it looks fine if I set the encoding to "UTF-8 Unicode via Latin 1". So it seems Latin-1 is sneaking in there somewhere. Somebody must have done this before, right? What am I missing?

    Read the article

  • Client Side Only Cookies

    - by Mike Jones
    I need something like a cookie, but I specifically don't want it going back to the server. I call it a "client side session cookie" but any reasonable mechanism would be great. Basically, I want to store some data encrypted on the server, and have the user type a password into the browser. The browser decrypts the data with the password (or creates and encrypts the data with the password) and the server stores only encrypted data. To keep the data secure on the server, the server should not store and should never receive the password. Ideally there should be a cookie session expiration to clean up. Of course I need it be available on multiple pages as the user walks through the web site. The best I can come up with is some sort of iframe mechanism to store the data in javascript variables, but that is ugly. Does anyone have any ideas how to implement something like this? FWIW, the platform is ASP.NET, but I don't suppose that matters. It needs to support a broad range of browsers, including mobile. In response to one answer below, let me clarify. My question is not how to achieve the crypto, that isn't a problem. The question is where to store the password so that it is persistent from page to page, but not beyond a session, and in such a way that the server doesn't see it.

    Read the article

  • The proper way to script periodically pulling a page from an https site

    - by DarthShader
    I want to create a command-line script for Cygwin/Bash that logs into a site, navigates to a specific page and compares it with the results of the last run. So far, I have it working with Lynx like so:

        ---- snipped, just setting variables ----
        echo "# Command logfile created by Lynx 2.8.5rel.5 (29 Oct 2005)
        ---- snipped the recorded keystrokes ----
        key Right Arrow
        key p
        key Right Arrow
        key ^U" >> $tmp1

        # p, right arrow initiate the page saving
        # "type" the filename inside the "where to save" dialog
        for i in $(seq 0 $((${#tmp2} - 1)))
        do
            echo "key ${tmp2:$i:1}" >> $tmp1
        done

        # hit enter and quit
        echo "key ^J
        key y
        key q
        key y
        " >> $tmp1

        lynx -accept_all_cookies -cmd_script=$tmp1 https://thewebpage.com/login

        diff $tmp2 $oldComp
        mv $tmp2 $oldComp

    It definitely does not feel "right": the cmd_script consists of relative user actions instead of specifying the exact link names and actions, so if anything on the site ever changes, switches places, or a new link is added, I will have to re-create the actions. Also, I can't check for any errors, so I can't abort the script if something goes wrong (login failed, etc). Another alternative I have been looking at is Mechanize with Ruby (as a note, I have zero experience with Ruby). What would be the best way to improve or rewrite this?

    Read the article

  • What about parallelism across network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel extensions which are directly available in .NET 4). Now what about parallelism across a network? I mean an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like, in C#:

        NetworkParallel.ForEach(myEnumerable, () => {
            // Computing and/or access to web resource or local network database here
        });

    I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be:

        - the fact that such a parallel task would be limited to computing, without being able, for example, to use files stored locally (but why not a database?), or even local variables, because it would really be two distinct applications rather than two threads of the same application;
        - the very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines and then communicating with them over the local network.

    Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it would enable developers to easily develop extremely powerful stuff with much less pain? Example: think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.

    Read the article

  • Inline function v. Macro in C -- What's the Overhead (Memory/Speed)?

    - by Jason R. Mick
    I searched Stack Overflow for the pros/cons of function-like macros vs. inline functions. I found the following discussion: Pros and Cons of Different macro function / inline methods in C ... but it didn't answer my primary burning question. Namely, what is the overhead in C of using a macro function (with variables, possibly other function calls) vs. an inline function, in terms of memory usage and execution speed? Are there any compiler-dependent differences in overhead? I have both icc and gcc at my disposal. The code snippet I'm modularizing is:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared, 3);
        double RepulsiveTerm  = AttractiveTerm * AttractiveTerm;
        EnergyContribution   += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    My reason for turning it into an inline function/macro is so I can drop it into a C file and then conditionally compile other similar, but slightly different, functions/macros, e.g.:

        double AttractiveTerm = pow(SigmaSquared/RadialDistanceSquared, 3);
        double RepulsiveTerm  = pow(SigmaSquared/RadialDistanceSquared, 9);
        EnergyContribution   += 4 * Epsilon * (RepulsiveTerm - AttractiveTerm);

    (note the difference in the second line). This function is central to my code and gets called thousands of times per step in my program, and my program performs millions of steps, so I want the least overhead possible, hence why I'm spending time worrying about the overhead of inlining vs. transforming the code into a macro. Based on the prior discussion I already realize the other pros/cons of macros (type independence and the errors that can result from it), but what I want to know most, and don't currently know, is the PERFORMANCE. I know some of you C veterans will have some great insight for me!!
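
    For comparison, here is a hedged sketch (not the poster's actual code) of the two spellings side by side. With optimisation enabled, gcc and icc typically generate identical code for an equivalent macro and a static inline function, so the practical difference is argument handling: the macro below textually expands to three pow() calls, and whether they collapse back into one depends on the compiler proving the arguments identical and the call side-effect free, whereas the function form states that intent directly.

        #include <math.h>

        /* Macro form: pure textual substitution, no type checking; each argument
           expression reappears (and may be re-evaluated) at every use site. */
        #define ENERGY_CONTRIB(sig2, r2, eps)                                  \
            (4.0 * (eps) * (pow((sig2) / (r2), 3) * pow((sig2) / (r2), 3)      \
                            - pow((sig2) / (r2), 3)))

        /* static inline form: normal function semantics, arguments evaluated once,
           and the call itself is usually optimised away at -O2. */
        static inline double energy_contrib(double sig2, double r2, double eps)
        {
            double attractive = pow(sig2 / r2, 3);
            double repulsive  = attractive * attractive;
            return 4.0 * eps * (repulsive - attractive);
        }

    For a kernel this hot, the pow() calls themselves tend to dominate, so measuring both variants under -O2/-O3 with each compiler is worth more than any general rule.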

    Read the article

  • Process is killed without a (obvious) reason and program stops working

    - by Krzysiek Gurniak
    Here's what my program is supposed to do: create 4 child processes: process 0 is reading 1 byte at a time from STDIN, then writing it into FIFO process 1 is reading this 1 byte from fifo and write its value as HEX into shared memory process 2 is reading HEX value from shared memory and writing it into pipe finally process 3 is reading from pipe and writing into STDOUT (in my case: terminal) I can't change communication channels. FIFO, then shared memory, then pipes are the only option. My problem: Program stops at random moments when some file is directed into stdin (for example:./program < /dev/urandom). Sometimes after writing 5 HEX values, sometimes after 100. Weird thing is that when it is working and in another terminal I write "pstree -c" there is 1 main process with 4 children processes (which is what I want), but when I write "pstree -c" after it stopped writing (but still runs) there are only 3 child processes. For some reason 1 is gone even though they all have while(1) in them.. I think I might have problem with synchronization here, but I am unable to spot it (I've tried for many hours). Here's the code: #include <unistd.h> #include <fcntl.h> #include <stdio.h> #include <string.h> #include <stdlib.h> #include <sys/shm.h> #include <sys/sem.h> #include <sys/types.h> #include <sys/wait.h> #include <sys/stat.h> #include <string.h> #include <signal.h> #define BUFSIZE 1 #define R 0 #define W 1 // processes ID pid_t p0, p1, p2, p3; // FIFO variables int fifo_fd; unsigned char bufor[BUFSIZE] = {}; unsigned char bufor1[BUFSIZE] = {}; // Shared memory variables key_t key; int shmid; char * tab; // zmienne do pipes int file_des[2]; char bufor_pipe[BUFSIZE*30] = {}; void proces0() { ssize_t n; while(1) { fifo_fd = open("/tmp/fifo",O_WRONLY); if(fifo_fd == -1) { perror("blad przy otwieraniu kolejki FIFO w p0\n"); exit(1); } n = read(STDIN_FILENO, bufor, BUFSIZE); if(n<0) { perror("read error w p0\n"); exit(1); } if(n > 0) { if(write(fifo_fd, bufor, n) != n) { perror("blad zapisu do kolejki fifo w p0\n"); exit(1); } memset(bufor, 0, n); // czyszczenie bufora } close(fifo_fd); } } void proces1() { ssize_t m, x; char wartosc_hex[30] = {}; while(1) { if(tab[0] == 0) { fifo_fd = open("/tmp/fifo", O_RDONLY); // otwiera plik typu fifo do odczytu if(fifo_fd == -1) { perror("blad przy otwieraniu kolejki FIFO w p1\n"); exit(1); } m = read(fifo_fd, bufor1, BUFSIZE); x = m; if(x < 0) { perror("read error p1\n"); exit(1); } if(x > 0) { // Konwersja na HEX if(bufor1[0] < 16) { if(bufor1[0] == 10) // gdy enter { sprintf(wartosc_hex, "0x0%X\n", bufor1[0]); } else { sprintf(wartosc_hex, "0x0%X ", bufor1[0]); } } else { sprintf(wartosc_hex, "0x%X ", bufor1[0]); } // poczekaj az pamiec bedzie pusta (gotowa do zapisu) strcpy(&tab[0], wartosc_hex); memset(bufor1, 0, sizeof(bufor1)); // czyszczenie bufora memset(wartosc_hex, 0, sizeof(wartosc_hex)); // przygotowanie tablicy na zapis wartosci hex x = 0; } close(fifo_fd); } } } void proces2() { close(file_des[0]); // zablokuj kanal do odczytu while(1) { if(tab[0] != 0) { if(write(file_des[1], tab, strlen(tab)) != strlen(tab)) { perror("blad write w p2"); exit(1); } // wyczysc pamiec dzielona by przyjac kolejny bajt memset(tab, 0, sizeof(tab)); } } } void proces3() { ssize_t n; close(file_des[1]); // zablokuj kanal do zapisu while(1) { if(tab[0] == 0) { if((n = read(file_des[0], bufor_pipe, sizeof(bufor_pipe))) > 0) { if(write(STDOUT_FILENO, bufor_pipe, n) != n) { perror("write error w proces3()"); exit(1); } memset(bufor_pipe, 0, sizeof(bufor_pipe)); } } } } 
int main(void) { key = 5678; int status; // Tworzenie plikow przechowujacych ID procesow int des_pid[2] = {}; char bufor_proces[50] = {}; mknod("pid0", S_IFREG | 0777, 0); mknod("pid1", S_IFREG | 0777, 0); mknod("pid2", S_IFREG | 0777, 0); mknod("pid3", S_IFREG | 0777, 0); // Tworzenie semaforow key_t klucz; klucz = ftok(".", 'a'); // na podstawie pliku i pojedynczego znaku id wyznacza klucz semafora if(klucz == -1) { perror("blad wyznaczania klucza semafora"); exit(1); } semafor = semget(klucz, 1, IPC_CREAT | 0777); // tworzy na podstawie klucza semafor. 1 - ilosc semaforow if(semafor == -1) { perror("blad przy tworzeniu semafora"); exit(1); } if(semctl(semafor, 0, SETVAL, 0) == -1) // ustawia poczatkowa wartosc semafora (klucz, numer w zbiorze od 0, polecenie, argument 0/1/2) { perror("blad przy ustawianiu wartosci poczatkowej semafora"); exit(1); } // Tworzenie lacza nazwanego FIFO if(access("/tmp/fifo", F_OK) == -1) // sprawdza czy plik istnieje, jesli nie - tworzy go { if(mkfifo("/tmp/fifo", 0777) != 0) { perror("blad tworzenia FIFO w main"); exit(1); } } // Tworzenie pamieci dzielonej // Lista pamieci wspoldzielonych, komenda "ipcs" // usuwanie pamieci wspoldzielonej, komenta "ipcrm -m ID_PAMIECI" shmid = shmget(key, (BUFSIZE*30), 0666 | IPC_CREAT); if(shmid == -1) { perror("shmget"); exit(1); } tab = (char *) shmat(shmid, NULL, 0); if(tab == (char *)(-1)) { perror("shmat"); exit(1); } memset(tab, 0, (BUFSIZE*30)); // Tworzenie lacza nienazwanego pipe if(pipe(file_des) == -1) { perror("pipe"); exit(1); } // Tworzenie procesow potomnych if(!(p0 = fork())) { des_pid[W] = open("pid0", O_WRONLY | O_TRUNC | O_CREAT); // 1 - zapis, 0 - odczyt sprintf(bufor_proces, "Proces0 ma ID: %d\n", getpid()); if(write(des_pid[W], bufor_proces, sizeof(bufor_proces)) != sizeof(bufor_proces)) { perror("blad przy zapisie pid do pliku w p0"); exit(1); } close(des_pid[W]); proces0(); } else if(p0 == -1) { perror("blad przy p0 fork w main"); exit(1); } else { if(!(p1 = fork())) { des_pid[W] = open("pid1", O_WRONLY | O_TRUNC | O_CREAT); // 1 - zapis, 0 - odczyt sprintf(bufor_proces, "Proces1 ma ID: %d\n", getpid()); if(write(des_pid[W], bufor_proces, sizeof(bufor_proces)) != sizeof(bufor_proces)) { perror("blad przy zapisie pid do pliku w p1"); exit(1); } close(des_pid[W]); proces1(); } else if(p1 == -1) { perror("blad przy p1 fork w main"); exit(1); } else { if(!(p2 = fork())) { des_pid[W] = open("pid2", O_WRONLY | O_TRUNC | O_CREAT); // 1 - zapis, 0 - odczyt sprintf(bufor_proces, "Proces2 ma ID: %d\n", getpid()); if(write(des_pid[W], bufor_proces, sizeof(bufor_proces)) != sizeof(bufor_proces)) { perror("blad przy zapisie pid do pliku w p2"); exit(1); } close(des_pid[W]); proces2(); } else if(p2 == -1) { perror("blad przy p2 fork w main"); exit(1); } else { if(!(p3 = fork())) { des_pid[W] = open("pid3", O_WRONLY | O_TRUNC | O_CREAT); // 1 - zapis, 0 - odczyt sprintf(bufor_proces, "Proces3 ma ID: %d\n", getpid()); if(write(des_pid[W], bufor_proces, sizeof(bufor_proces)) != sizeof(bufor_proces)) { perror("blad przy zapisie pid do pliku w p3"); exit(1); } close(des_pid[W]); proces3(); } else if(p3 == -1) { perror("blad przy p3 fork w main"); exit(1); } else { // proces macierzysty waitpid(p0, &status, 0); waitpid(p1, &status, 0); waitpid(p2, &status, 0); waitpid(p3, &status, 0); //wait(NULL); unlink("/tmp/fifo"); shmdt(tab); // odlaczenie pamieci dzielonej shmctl(shmid, IPC_RMID, NULL); // usuwanie pamieci wspoldzielonej printf("\nKONIEC PROGRAMU\n"); } } } } exit(0); }

    Read the article

  • efficient sort with custom comparison, but no callback function

    - by rob
    I need an efficient sort that doesn't use a callback, but is as customizable as qsort(). What I want is for it to work like an iterator, where the caller keeps calling into the sort API in a loop until it is done, doing the comparison in the loop rather than off in a callback function. That way the custom comparison is local to the calling function (and therefore has access to local variables, is potentially more efficient, etc.). I have implemented this for an inefficient selection sort, but I need it to be efficient, so I would prefer a quicksort derivative. Has anyone done anything like this? I tried to do it for quicksort, but trying to turn the algorithm inside out hurt my brain too much. Below is how it might look in use.

        // the array of data we are sorting
        MyData array[5000], *firstP, *secondP;
        // (assume data is filled in)
        Sorter sorter;

        // initialize sorter
        int result = sortInit(&sorter, array, 5000,
                              (void **)&firstP, (void **)&secondP, sizeof(MyData));

        // loop until complete
        while (sortIteration(&sorter, result) == 0) {
            // here's where we do the custom comparison...here we
            // just sort by member "value" but we could do anything
            result = firstP->value - secondP->value;
        }

    Read the article

  • Django throws 404 at generic views

    - by x0rg
    I'm trying to get the generic views for a date-based archive working in Django. I defined the URLs as described in a tutorial, but Django returns a 404 error whenever I access a URL with a variable (such as month or year) in it. It doesn't even produce a TemplateDoesNotExist exception. Normal URLs without variables work fine. Here's my urlconf:

        from django.conf.urls.defaults import *
        from zurichlive.zhl.models import Event

        info_dict = {
            'queryset': Event.objects.all(),
            'date_field': 'date',
            'allow_future': 'True',
        }

        urlpatterns += patterns('django.views.generic.date_based',
            (r'events/(?P<year>d{4})/(?P<month>[a-z]{3})/(?P<day>w{1,2})/(?P<slug>[-w]+)/$', 'object_detail', dict(info_dict, slug_field='slug', template_name='archive/detail.html')),
            (r'^events/(?P<year>d{4})/(?P<month>[a-z]{3})/(?P<day>w{1,2})/(?P<slug>[-w]+)/$', 'object_detail', dict(info_dict, template_name='archive/list.html')),
            (r'^events/(?P<year>d{4})/(?P<month>[a-z]{3})/(?P<day>w{1,2})/$', 'archive_day', dict(info_dict, template_name='archive/list.html')),
            (r'^events/(?P<year>d{4})/(?P<month>[a-z]{3})/$', 'archive_month', dict(info_dict, template_name='archive/list.html')),
            (r'^events/(?P<year>)/$', 'archive_year', dict(info_dict, template_name='archive/list.html')),
            (r'^events/$', 'archive_index', dict(info_dict, template_name='archive/list.html')),
        )

    When I access /events/2010/may/12/this-is-a-slug I should get the detail.html template, but instead I get a 404. What am I doing wrong?

    Read the article

  • Multi-client C# ODBC (Sybase/Oracle/MSSQL) table access question.

    - by Hamish Grubijan
    I am working on a feature that would allow clients to pick a unique identifier (ci_name). The code below is a generic version that gets expanded into the right SQL depending on the vendor; hopefully it makes sense.

        #include "sql.h"

        create table client_identification
        (
            ci_id   TYPE_ID IDENTITY,
            ci_name varchar(64) not null,
            constraint ci_pk primary key (ci_name)
        );
        go
        CREATE_SEQUENCE(ci_id)

    There will be simple stored procedures for adding, retrieving, and deleting these user records. This will be used by several admins. It will not happen very frequently, but there is still a possibility that something will be added or deleted since the list was initially retrieved. I have not yet decided if I need to detect the case of a double delete, but the user name cannot be created twice; the primary key constraint will object. I want to be able to detect this particular case and display something like: "you snooze - you loose." :) I would like to leverage the pk constraint instead of doing some extra SQL gymnastics. So, how can I detect this case cleanly, so that it works in MS SQL 2008, Sybase, and Oracle? I hope to do better than catch a general ODBC exception and parse out the text looking for what Sybase, Oracle, and MSSQL would give me back. Oracle is a little different: we actually prepend these variables to the Oracle version of stored procedures because they are not available otherwise:

        Vret_val out number,
        Vtran_count in out number,
        Vmessage_count in out number,

    Thanks. General helpful tips and comments are welcome, except for naming convention ones (I do not have a choice here, plus I mangled the actual names a bit).
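
    One portable-leaning way to detect this, sketched here against the raw ODBC C API with assumed handle and function names: when the insert fails, read the diagnostic record and test whether its SQLSTATE is class 23000, the standard "integrity constraint violation", instead of parsing vendor-specific message text. MSSQL, Sybase and Oracle drivers generally map a primary key or unique violation to this class, but it is worth verifying against the exact drivers in use.

        #include <sql.h>
        #include <sqlext.h>
        #include <string.h>

        /* Returns non-zero if the last statement error was an integrity
           constraint violation (e.g. the duplicate ci_name case). */
        static int is_constraint_violation(SQLHSTMT hstmt)
        {
            SQLCHAR     sqlstate[6];
            SQLCHAR     msg[SQL_MAX_MESSAGE_LENGTH];
            SQLINTEGER  native = 0;
            SQLSMALLINT len = 0;

            if (SQL_SUCCEEDED(SQLGetDiagRec(SQL_HANDLE_STMT, hstmt, 1, sqlstate,
                                            &native, msg, (SQLSMALLINT)sizeof(msg), &len))) {
                return strncmp((const char *)sqlstate, "23000", 5) == 0;
            }
            return 0;
        }

    If the C# side goes through System.Data.Odbc, the same value is exposed on each OdbcError's SQLState property, so the check translates directly.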

    Read the article

  • php script randomly hangs up

    - by sergdev
    I installed PHP 5 (more precisely 5.3.1) as an Apache module. Since then, one of my applications randomly hangs on mysql_connect: sometimes it works, sometimes it doesn't, sometimes reloading the page helps. How can this be fixed? I use Windows Vista, Apache/2.2.14 (Win32) with the PHP/5.3.1 module, and MySQL 5.0.67-community-nt. After a minute I get the error message: Fatal error: Maximum execution time of 60 seconds exceeded in path\to\mysqlcon.php on line 9. MySQL runs locally and heavy load could not be the reason: SHOW PROCESSLIST shows about 3 processes and SHOW VARIABLES LIKE 'MAX_CONNECTIONS' is 100. UPDATE: At first I thought this was connected to mysql_connect, but now I can't say for certain. What makes it harder is that when I insert these lines to debug:

        $fh = fopen("E://_debugLog", 'a');
        fwrite($fh, __FILE__ . " : " . __LINE__ . "\n");
        fclose($fh);

    the script starts working near that location, as a rule.

    Read the article

  • .NET binary serialization conditionally without ISerializable

    - by SillyWhy
    I have two classes, for example:

        public class A
        {
            private B b;
            ...
        }

        public class B
        {
            ...
        }

    I need to serialize an object of type A using BinaryFormatter. When remoting, it shall include the field b, but not when serializing to a file. Here is what I added:

        [Serializable]
        public class A : MarshalByRefObject
        {
            private B b;

            [OnSerializing]
            private void OnSerializing(StreamingContext context)
            {
                if (context.State == StreamingContextStates.File)
                {
                    this.b = null;
                }
            }
            ...
        }

        [Serializable]
        public class B : MarshalByRefObject
        {
            ...
        }

    I think this is a bad design, because if another class C also contains B, then class C must duplicate the same OnSerializing() logic as A. Class B should decide what to do, not class A or C. I don't want to use the ISerializable interface because there are too many variables in class B that would have to be added to SerializationInfo. I could create a SerializationSurrogate for class B which does nothing in GetObjectData() and SetObjectData(), then use it when serializing to a file. However, that has the same maintenance issue, because whoever modifies class B won't notice what is going to happen during serialization, or the existence of the SerializationSurrogate. Is there a better alternative?

    Read the article

  • Asp.Net 2 integrated sites: how to log out of the second site programmatically

    - by NBrowne
    Hi, I am working with an ASP.NET 2.0 site (call it site 1) which has an iframe that loads another site (site 2), also an ASP.NET site developed by our team. When you log onto site 1, site 2 is logged in as well behind the scenes, so that when you click the iframe tab it displays site 2 with the user logged in (to prevent the user from having to log in twice). The problem I have is that when a user logs out of site 1, we call some cleanup methods to perform FormsAuthentication.SignOut and clean session variables etc., but at the moment no cleanup is done for the user on site 2. So the issue is that if the user then opens site 2 directly in a browser, it opens with the user still logged in, which is undesired. Can anyone give me some guidance as to the best approach for this? One possible approach I thought of was, on click of the logout button, to call a custom page on site 2 which does the logout. Code below:

        HttpWebRequest request;
        request = ((HttpWebRequest)(WebRequest.Create("www.mywebsite.com/Site2Logout.aspx")));
        request.Method = "POST";

        HttpCookie cookie = HttpContext.Current.Request.Cookies[FormsAuthentication.FormsCookieName];
        Cookie authenticationCookie = new Cookie(
            FormsAuthentication.FormsCookieName,
            cookie.Value,
            cookie.Path,
            HttpContext.Current.Request.Url.Authority);

        request.CookieContainer = new CookieContainer();
        request.CookieContainer.Add(authenticationCookie);
        response.GetResponse();

    The problem I am having with this code is that when I run it and debug on site 2, checking whether the user is authenticated, they are not, which I don't understand, because if I open a browser and browse to site 2 I am still authenticated. Any ideas, different directions to take, etc.? Please let me know if you need any more info or if something I have said doesn't make sense. Thanks

    Read the article

  • Why does animate not work for the first few 'selections' of my next button?

    - by Josiah
    I'll just start right off the bat and say I'm fairly new to JQuery, so if you see some glaring issues with my code, let me know what I'm doing wrong! Either way, I've been working on a script to fade divs in and out using the z-index and animate. It "works" after about 2-3 clicks, but the first two clicks do not fade or animate as I was hoping. Why is that? I'll just throw the javascript up here, but if you need/want more code, just let me know. Thanks!

        $(document).ready(function() {
            // Slide rotation/movement variables
            var first = $('#main div.slide:first').attr('title');
            var last = $('#main div.slide').length;
            // Needed for the next/prev buttons
            var next;

            // Set the first div to the front, and variable for first div
            var $active = $('#main div.slide[title=' + first + ']');

            // Hide the links until the div is hovered over and take them away when mouse leaves
            $('#main').children('a').hide();
            $('#main').mouseenter(function() {
                $('#main').children('a').fadeIn(750);
            }).mouseleave(function() {
                $('#main').children('a').fadeOut(750);
            });

            $active.css('z-index', '4');

            $('#main #next').click(function() {
                if ((next = parseInt($active.attr('title')) + 1) > last) {
                    next = 1;
                }
                $active.css('z-index', '0').stop().animate({opacity: 0}, 1000);
                $active = $('#main div[title=' + next + ']').css('z-index', '4').stop().animate({opacity: 1}, 1000);
            });
        });

    Read the article

  • Powershell: splatting after passing hashtable by reference

    - by user1815871
    Powershell newbie ... I recently learned about splatting, which is very useful. I ran into a snag when I passed a hash table by reference to a function for splatting purposes. (For brevity's sake, a silly example.)

        Function AllMyChildren {
            param (
                [ref]$ReferenceToHash
            )
            get-childitem @ReferenceToHash.Value
            # etc. etc.
        }

        $MyHash = @{
            'path'    = '*'
            'include' = '*.ps1'
            'name'    = $null
        }

        AllMyChildren ([ref]$MyHash)

    Result: an error ("Splatted variables cannot be used as part of a property or array expression. Assign the result of the expression to a temporary variable then splat the temporary variable instead."). I tried this afterward:

        $newVariable = $ReferenceToHash.Value
        get-childitem @NewVariable

    That did work and seemed right per the error message. But is it the preferred syntax in a case like this? (An "oh look, it actually worked" solution isn't always a best practice. My approach here strikes me as "Perl-minded"; perhaps in Powershell passing by value is better, though I don't yet know the syntax for it w.r.t. a hash table.)

    Read the article

  • How to Prevent PostBack Event Handler from Firing

    - by user331744
    I have a custom class (ServerSideValidator.vb) that validates user input on server side (it doesn't use any of the .NET built in validators, therefore Page.Validate() is not an option for me). I am calling the Validate() method on page.IsPostback event and the class performs without any problem My issue is, when validation fails (returns false), I want to stop the postback event handler from firing, but load the page along with all the controls and user-input values in them. If I do, Response.End(), the page comes up blank. I can programmatically instruct the page to go to the previous page (original form before postback), but it loses all user-inputs. I thought of creating a global boolean variable in the page code behind file and check the value before performing any postback method, but this approach takes away from my plan to provide all functionalities inside the class itself. The page object is being referenced to ServerSideValidator. Seems like all the postback related properties/variables I come across inside Page class are 'Readonly' and I can't assign value(s) to control/prevent postback event from firing. Any idiea on how I can accomplish this? Please let me know if you need further details

    Read the article
