Search Results

Search found 2167 results on 87 pages for 'uninitialized constant'.


  • How to fix width of DIV that contains floated elements?

    - by joe
    I have a DIV container with several inner DIVs layed out by floating them all left. The inner DIVs may change width on certain events, and the containing DIV adjusts accordingly. I use float:left in the container to keep it shrunk to the width of the inner divs. I use float:left in the inner divs so the layout is clean even when their contents change. The catch is that I want the DIV container width and height to remain constant, UNLESS a particular event causes a change to the inner widths. Conceptually I want to use float on the inners for the layout benefit, but then I want to "fix" them so they don't float around. So if the container is 700px wide, I want it to remain so even if the user narrows the browser window. I'd like the container, and it's internal DIVs to just be clipped by the browser window. I sense this can all be done nicely in CSS, I just can't quite figure out how. I'm not averse to adding another container if necessary... Since the only desired layout changes are event-based, I am also willing to use a bit of JS. But is this necessary? (And I'm still not sure I know what to modify: container dimensions? inner floatiness? other?) <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <head> <style type="text/css"> #canvas { overflow:auto; /* for clearing floats */ } #container { float:left; /* so container shrinks around contained divs */ border:thin solid blue; } .inner { float:left; /* so inner elems line up nicely w/o saying fixed coords */ padding-top:8px; padding-bottom:4px; padding-left:80px; padding-right:80px; } #inner1 { background-color:#ffdddd; } #inner2 { background-color:#ddffdd; } #inner3 { background-color:#ddddff; } </style> </head> <body> <div id="canvas"> <div id="container"> <div id="inner1" class="inner"> inner 1 </div> <div id="inner2" class="inner"> inner 2 </div> <div id="inner3" class="inner"> inner 3 </div> </div> </div> cleared element </body> </html>
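
    A minimal CSS sketch of one way to pin the container, based on the markup above (the 700px figure is the question's own example; treat the exact rule set as an assumption rather than a verified fix): give #container an explicit width so the viewport no longer reflows the floats, and let the outer #canvas clip or scroll whatever doesn't fit.

        #container {
            float: left;              /* unchanged */
            border: thin solid blue;  /* unchanged */
            width: 700px;             /* pin the outer size; only the JS event handler updates it */
            overflow: hidden;         /* clip the inner divs if they ever exceed the pinned box */
        }
        #canvas {
            overflow: auto;           /* unchanged: clears the floats and lets the window scroll/clip */
        }

    With the width pinned, narrowing the browser window only changes how much of #canvas is visible; the inner floats keep their layout until the event handler deliberately resizes #container.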


  • LLVM JIT segfaults. What am I doing wrong?

    - by bugspy.net
    It is probably something basic because I am just starting to learn LLVM.. The following creates a factorial function and tries to git and execute it (I know the generated func is correct because I was able to static compile and execute it). But I get segmentation fault upon execution of the function (in EE-runFunction(TheF, Args)) #include <iostream> #include "llvm/Module.h" #include "llvm/Function.h" #include "llvm/PassManager.h" #include "llvm/CallingConv.h" #include "llvm/Analysis/Verifier.h" #include "llvm/Assembly/PrintModulePass.h" #include "llvm/Support/IRBuilder.h" #include "llvm/Support/raw_ostream.h" #include "llvm/ExecutionEngine/JIT.h" #include "llvm/ExecutionEngine/GenericValue.h" using namespace llvm; Module* makeLLVMModule() { // Module Construction LLVMContext& ctx = getGlobalContext(); Module* mod = new Module("test", ctx); Constant* c = mod->getOrInsertFunction("fact64", /*ret type*/ IntegerType::get(ctx,64), IntegerType::get(ctx,64), /*varargs terminated with null*/ NULL); Function* fact64 = cast<Function>(c); fact64->setCallingConv(CallingConv::C); /* Arg names */ Function::arg_iterator args = fact64->arg_begin(); Value* x = args++; x->setName("x"); /* Body */ BasicBlock* block = BasicBlock::Create(ctx, "entry", fact64); BasicBlock* xLessThan2Block= BasicBlock::Create(ctx, "xlst2_block", fact64); BasicBlock* elseBlock = BasicBlock::Create(ctx, "else_block", fact64); IRBuilder<> builder(block); Value *One = ConstantInt::get(Type::getInt64Ty(ctx), 1); Value *Two = ConstantInt::get(Type::getInt64Ty(ctx), 2); Value* xLessThan2 = builder.CreateICmpULT(x, Two, "tmp"); //builder.CreateCondBr(xLessThan2, xLessThan2Block, cond_false_2); builder.CreateCondBr(xLessThan2, xLessThan2Block, elseBlock); /* Recursion */ builder.SetInsertPoint(elseBlock); Value* xMinus1 = builder.CreateSub(x, One, "tmp"); std::vector<Value*> args1; args1.push_back(xMinus1); Value* recur_1 = builder.CreateCall(fact64, args1.begin(), args1.end(), "tmp"); Value* retVal = builder.CreateBinOp(Instruction::Mul, x, recur_1, "tmp"); builder.CreateRet(retVal); /* x<2 */ builder.SetInsertPoint(xLessThan2Block); builder.CreateRet(One); return mod; } int main(int argc, char**argv) { long long x; if(argc > 1) x = atol(argv[1]); else x = 4; Module* Mod = makeLLVMModule(); verifyModule(*Mod, PrintMessageAction); PassManager PM; PM.add(createPrintModulePass(&outs())); PM.run(*Mod); // Now we going to create JIT ExecutionEngine *EE = EngineBuilder(Mod).create(); // Call the function with argument x: std::vector<GenericValue> Args(1); Args[0].IntVal = APInt(64, x); Function* TheF = cast<Function>(Mod->getFunction("fact64")) ; /* The following CRASHES.. */ GenericValue GV = EE->runFunction(TheF, Args); outs() << "Result: " << GV.IntVal << "\n"; delete Mod; return 0; }
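
    One thing worth ruling out before digging into the IR itself (an assumption, since the question doesn't show it): with the JIT, EngineBuilder::create() returns a null pointer if the native target was never initialized, and calling runFunction through that null pointer crashes at exactly this spot. A minimal guard, sketched against the same legacy API the includes above belong to:

        // Hypothetical check to drop into main() before the ExecutionEngine is used;
        // names match the question's code.
        InitializeNativeTarget();   // declared in llvm/Target/TargetSelect.h on releases of this vintage

        std::string errStr;
        ExecutionEngine *EE = EngineBuilder(Mod).setErrorStr(&errStr).create();
        if (!EE) {
            errs() << "EngineBuilder failed: " << errStr << "\n";
            return 1;
        }

    If EE comes back null, the error string usually says why (for example, that no JIT is registered for the target).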


  • C++: Declaration of template class member specialization

    - by Ziv
    When I specialize a (static) member function/constant in a template class, I'm confused as to where the declaration is meant to go. Here's an example of what I what to do - yoinked directly from IBM's reference on template specialization: ===IBM Member Specialization Example=== template<class T> class X { public: static T v; static void f(T); }; template<class T> T X<T>::v = 0; template<class T> void X<T>::f(T arg) { v = arg; } template<> char* X<char*>::v = "Hello"; template<> void X<float>::f(float arg) { v = arg * 2; } int main() { X<char*> a, b; X<float> c; c.f(10); // X<float>::v now set to 20 } The question is, how do I divide this into header/cpp files? The generic implementation is obviously in the header, but what about the specialization? It can't go in the header file, because it's concrete, leading to multiple definition. But if it goes into the .cpp file, is code which calls X::f() aware of the specialization, or might it rely on the generic X::f()? So far I've got the specialization in the .cpp only, with no declaration in the header. I'm not having trouble compiling or even running my code (on gcc, don't remember the version at the moment), and it behaves as expected - recognizing the specialization. But A) I'm not sure this is correct, and I'd like to know what is, and B) my Doxygen documentation comes out wonky and very misleading (more on that in a moment a later question). What seems most natural to me would be something like this, declaring the specialization in the header and defining it in the .cpp: ===XClass.hpp=== #ifndef XCLASS_HPP #define XCLASS_HPP template<class T> class X { public: static T v; static void f(T); }; template<class T> T X<T>::v = 0; template<class T> void X<T>::f(T arg) { v = arg; } /* declaration of specialized functions */ template<> char* X<char*>::v; template<> void X<float>::f(float arg); #endif ===XClass.cpp=== #include <XClass.hpp> /* concrete implementation of specialized functions */ template<> char* X<char*>::v = "Hello"; template<> void X<float>::f(float arg) { v = arg * 2; } ...but I have no idea if this is correct. Any ideas? Thanks much, Ziv
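
    For the member function, there is also a header-only variant worth knowing (a sketch, and it sidesteps rather than answers the placement question; it also does not help with the static data member v, which still needs exactly one out-of-line definition in a .cpp): an explicit specialization may be declared inline and defined directly in the header without violating the one-definition rule.

        // Minimal self-contained example, not the XClass code itself.
        template<class T>
        struct X {
            static void f(T arg) { /* generic version */ }
        };

        template<>
        inline void X<float>::f(float arg) { /* float-specific version; inline keeps it ODR-safe in a header */ }

        int main() {
            X<float>::f(1.0f);  // uses the specialization
            X<int>::f(42);      // uses the generic template
        }

    The split shown in XClass.hpp/XClass.cpp is also correct: declaring the specialization in the header is precisely what guarantees other translation units use it instead of silently instantiating the generic template.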


  • How do I evaluate my skillset against the current market to see what needs improvement and where my

    - by baijajusav
    First of all, this question may be out of bounds for this site. If so, remove it. I say this because this site seems to be a place for more concrete questions that are not so relative in nature. And before I begin, for those of you whom just prefer a question and not this sort of dialog, here is my question: How can I assess my current skills as a programmer and decide where and what areas to improve upon? That said, here's what I'm asking/talking about, in essence. The market is always in constant flux. As programmers we're always having to learn new things, update our skills, push ourselves into that next project. There's not a very good litmus test that I know of for us to get an idea of where we stand as programmers. I came across this blog post by Jeff Atwood talking about why can't programmers code. Instinctively (and as the post goes on to state) I rushed through the program in about 4 minutes (most of that time was b/c I was hand writing it out. Still, this doesn't really answer the question of where do my skills need to be to succeed in today's world. I real blogs, listen to podcasts, try to keep up on the latest things coming out. It has only been in the past couple of months that I made a decision to pick a focus area for my learning as I can't learn everything and trying to do so is to spread myself too thin. I chose ASP.NET MVC & C#. I plan to stick with Microsoft technologies, not out of some sense of loyalty or stubbornness, but rather because they seem to stream together and have a unifying connection between them. With Windows Phone 7 coming out, it seems that now is the obvious time to pick up WPF and Silverlight as well. Still, if you asked me to code something apart from intellisense and the internet, I probably couldn't get the syntax right. I don't have libraries memorized or know precisely where the classes I use exist within the .Net framework, namely because I haven't had to pull that knowledge out of the air. In a way, I suppose Visual Studio has insulated me, which isn't a good thing, but, at the same time, I've still been able to be productive. I'm working on my own side project to try and help my learning. In doing so, I'm trying to make use of best practices and 3rd party frameworks where I can. I'm using automapper and EF 1.0. I know everyone in the .net community seems to cry foul at the sound of EF 1.0, but I can't say why because I've never used it. There's no lazy loading and that has proven rather annoying; however, aside from that, I haven't had that much of an issue. Granted this is probably because I'm not writing tests as I go (which I'm not doing because I don't know how to test EF in tests and don't really have a clue how to write tests for ASP.NET MVC 1.0). I'm also using a custom membership provider; granted, it's a barebone implementation, but I'm using it still. My thinking in all of this is, while I am neglecting a great many important technologies that are in the mainstream, I'll have a working project in the end. I can come back and add those things after I finish. Doing it all now and at once seems like too much. I know how I work and I don't think I'd ever get it done that way. I've elected to make this a community wiki as I think this question might fight better there. If a moderator disagrees with that choice or the decision to post this here, the just delete the question. I'm not trying to make undue work for anyone. I'm just a programmer trying to assess my where his skills are now and where I should be improving.


  • How to resolve CGDirectDisplayID changing issues on newer multi-GPU Apple laptops in Core Foundation

    - by Dave Gallagher
    In Mac OS X, every display gets a unique CGDirectDisplayID number assigned to it. You can use CGGetActiveDisplayList() or [NSScreen screens] to access them, among others. Per Apple's docs: A display ID can persist across processes and system reboot, and typically remains constant as long as certain display parameters do not change. On newer mid-2010 MacBook Pro's, Apple started using auto-switching Intel/nVidia graphics. Laptops have two GPU's, a low-powered Intel, and a high-powered nVidia. Previous dual-GPU laptops (2009 models) didn't have auto-GPU switching, and required the user to make a settings change, logoff, and then logon again to make a GPU switch occur. Even older systems only had one GPU. There's an issue with the mid-2010 models where CGDirectDisplayID's don't remain the same when a display switches from one GPU to the next. For example: Laptop powers on. Built-In LCD Screen is driven by Intel chipset. Display ID: 30002 External Display is plugged in. Built-In LCD Screen switches to nVidia chipset. It's display ID changes: 30004 External Display is driven by nVidia chipset. ...at this point, the Intel chipset is dormant... User unplugs External Display. Built-In LCD Screen switches back to Intel chipset. It's display ID changes back to original: 30002 My question is, how can I match an old display ID to a new display ID when they alter due to a GPU change? Thought about: I've noticed that the display ID only changes by 2, but I don't have enough test Mac's available to determine if this is common to all new MacBook Pro's, or just mine. Kind of a kludge if "just check for display ID's which are +/-2 from one another" works, anyway. Tried: CGDisplayRegisterReconfigurationCallback(), which notifies before-and-after when displays are going to change, has no matching logic. Putting something like this inside a method registered with it doesn't work: // Run before display settings change: CGDirectDisplayID directDisplayID = ...; io_service_t servicePort = CGDisplayIOServicePort(directDisplayID); CFDictionaryRef oldInfoDict = IODisplayCreateInfoDictionary(servicePort, kIODisplayMatchingInfo); // ...display settings change... // Run after display settings change: CGDirectDisplayID directDisplayID = ...; io_service_t servicePort = CGDisplayIOServicePort(directDisplayID); CFDictionaryRef newInfoDict = IODisplayCreateInfoDictionary(servicePort, kIODisplayMatchingInfo); BOOL match = IODisplayMatchDictionaries(oldInfoDict, newInfoDict, 0); if (match) NSLog(@"Displays are a match"); else NSLog(@"Displays are not a match"); What's happening above is I'm caching oldInfoDict before display settings change, letting them change, and then comparing it to newInfoDict by using IODisplayMatchDictionaries(), which will say either "yes, both displays are the same!" or "no, both displays are not the same." Unfortunately, it does not return YES if GPU's have changed for a monitor. 
    Example of the dictionaries it's comparing:

        // oldInfoDict (Display ID: 30002)
        oldInfoDict: {
            DisplayProductID = 40144;
            DisplayVendorID = 1552;
            IODisplayLocation = "IOService:/AppleACPIPlatformExpert/PCI0@0/AppleACPIPCI/IGPU@2/AppleIntelFramebuffer/display0/AppleBacklightDisplay";
        }

        // newInfoDict (Display ID: 30004)
        newInfoDict: {
            DisplayProductID = 40144;
            DisplayVendorID = 1552;
            IODisplayLocation = "IOService:/AppleACPIPlatformExpert/PCI0@0/AppleACPIPCI/P0P2@1/IOPCI2PCIBridge/GFX0@0/NVDA,Display-A@0/NVDA/display0/AppleBacklightDisplay";
        }

    As you can see, the IODisplayLocation key changes when GPUs are switched, hence IODisplayMatchDictionaries() doesn't work. I can, theoretically, compare just the DisplayProductID and DisplayVendorID keys, but I'm writing end-user software, and am worried about a situation where users have two or more identical monitors plugged in. Any help is greatly appreciated! :)
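
    A small Core Foundation sketch of the fallback described in the last paragraph, comparing only the keys that are visible in both dictionary dumps above and ignoring IODisplayLocation (hedged: as noted, this cannot tell two physically identical monitors apart, so it is a heuristic rather than a complete answer):

        #include <CoreFoundation/CoreFoundation.h>

        // Returns true when the EDID-derived keys agree; the key names are the
        // ones printed in the dictionaries above.
        static Boolean DisplaysProbablyMatch(CFDictionaryRef oldInfo, CFDictionaryRef newInfo)
        {
            const CFStringRef keys[] = { CFSTR("DisplayVendorID"), CFSTR("DisplayProductID") };
            for (size_t i = 0; i < sizeof(keys) / sizeof(keys[0]); i++) {
                CFTypeRef a = CFDictionaryGetValue(oldInfo, keys[i]);
                CFTypeRef b = CFDictionaryGetValue(newInfo, keys[i]);
                if (a == NULL || b == NULL || !CFEqual(a, b))
                    return false;   // missing key or differing value: treat as different displays
            }
            return true;
        }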


  • Is it possible to implement bitwise operators using integer arithmetic?

    - by Statement
    Hello World! I am facing a rather peculiar problem. I am working on a compiler for an architecture that doesn't support bitwise operations. However, it handles signed 16-bit integer arithmetic and I was wondering if it would be possible to implement bitwise operations using only: Addition (c = a + b) Subtraction (c = a - b) Division (c = a / b) Multiplication (c = a * b) Modulus (c = a % b) Minimum (c = min(a, b)) Maximum (c = max(a, b)) Comparisons (c = (a < b), c = (a == b), c = (a <= b), etc.) Jumps (goto, for, etc.) The bitwise operations I want to be able to support are: Or (c = a | b) And (c = a & b) Xor (c = a ^ b) Left Shift (c = a << b) Right Shift (c = a >> b) (All integers are signed so this is a problem) Signed Shift (c = a >>> b) One's Complement (a = ~b) (Already found a solution, see below) Normally the problem is the other way around: how to achieve arithmetic optimizations using bitwise hacks. However, not in this case. Writable memory is very scarce on this architecture, hence the need for bitwise operations. The bitwise functions themselves should not use a lot of temporary variables. However, constant read-only data & instruction memory is abundant. A side note here also is that jumps and branches are not expensive and all data is readily cached. Jumps cost half as many cycles as arithmetic (including load/store) instructions do. In other words, all of the above supported functions cost twice the cycles of a single jump. Some thoughts that might help: I figured out that you can do one's complement (negate bits) with the following code: // Bitwise one's complement b = ~a; // Arithmetic one's complement b = -1 - a; I also remember the old shift hack when dividing with a power of two, so the bitwise shift can be expressed as: // Bitwise left shift b = a << 4; // Arithmetic left shift b = a * 16; // 2^4 = 16 // Signed right shift b = a >>> 4; // Arithmetic right shift b = a / 16; For the rest of the bitwise operations I am slightly clueless. I wish the architects of this architecture would have supplied bit-operations. I would also like to know if there is a fast/easy way of computing the power of two (for shift operations) without using a memory data table. A naive solution would be to jump into a field of multiplications: b = 1; switch (a) { case 15: b = b * 2; case 14: b = b * 2; // ... exploiting fallthrough (instruction memory is magnitudes larger) case 2: b = b * 2; case 1: b = b * 2; } Or a Set & Jump approach: switch (a) { case 15: b = 32768; break; case 14: b = 16384; break; // ... exploiting the fact that a jump is faster than one additional mul // at the cost of doubling the instruction memory footprint. case 2: b = 4; break; case 1: b = 2; break; }
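
    A minimal sketch of the bit-at-a-time construction for Or, And and Xor using only the listed operations (division, modulus, addition, multiplication, comparison and a loop). It assumes non-negative operands for clarity; the signed case needs the usual two's-complement bias first, and nothing here is tuned to the architecture's cost model.

        #include <stdio.h>

        /* op: 0 = AND, 1 = OR, 2 = XOR. Processes one bit per iteration. */
        static int bitwise(int a, int b, int op)
        {
            int result = 0;
            int bit = 1;                      /* current power of two */
            int i;
            for (i = 0; i < 15; i++) {        /* the 15 value bits of a signed 16-bit word */
                int abit = (a / bit) % 2;     /* bit i of a */
                int bbit = (b / bit) % 2;     /* bit i of b */
                int r;
                if (op == 0)      r = abit * bbit;               /* AND */
                else if (op == 1) r = abit + bbit - abit * bbit; /* OR  */
                else              r = (abit + bbit) % 2;         /* XOR */
                result = result + r * bit;
                bit = bit * 2;
            }
            return result;
        }

        int main(void)
        {
            printf("%d %d %d\n", bitwise(12, 10, 0), bitwise(12, 10, 1), bitwise(12, 10, 2));
            /* expected: 8 14 6 */
            return 0;
        }

    Once And exists, Or and Xor can also be had without the per-bit loop via a | b = a + b - (a & b) and a ^ b = a + b - 2*(a & b), overflow permitting, which trades the loop for two extra full-width operations.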


  • I don't understand how call_once works

    - by SABROG
    Please help me understand how call_once works. Here is thread-safe code. I don't understand why it needs the thread-local storage and global_epoch variables. Couldn't the variable _fast_pthread_once_per_thread_epoch be changed to a constant/enum like {FAST_PTHREAD_ONCE_INIT, BEING_INITIALIZED, FINISH_INITIALIZED}? Why does it need to count calls in global_epoch? I think this code could be rewritten with the logic: if the flag is FINISH_INITIALIZED do nothing, else go to the block with mutexes, and that's all. #ifndef FAST_PTHREAD_ONCE_H #define FAST_PTHREAD_ONCE_H #include <pthread.h> #include <signal.h> typedef sig_atomic_t fast_pthread_once_t; #define FAST_PTHREAD_ONCE_INIT SIG_ATOMIC_MAX extern __thread fast_pthread_once_t _fast_pthread_once_per_thread_epoch; #ifdef __cplusplus extern "C" { #endif extern void fast_pthread_once( fast_pthread_once_t *once, void (*func)(void) ); inline static void fast_pthread_once_inline( fast_pthread_once_t *once, void (*func)(void) ) { fast_pthread_once_t x = *once; /* unprotected access */ if ( x > _fast_pthread_once_per_thread_epoch ) { fast_pthread_once( once, func ); } } #ifdef __cplusplus } #endif #endif FAST_PTHREAD_ONCE_H Source fast_pthread_once.c The source is written in C. The lines of the primary function are numbered for reference in the subsequent correctness argument. #include "fast_pthread_once.h" #include <stdlib.h> static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER; /* protects global_epoch and all fast_pthread_once_t writes */ static pthread_cond_t cv = PTHREAD_COND_INITIALIZER; /* signalled whenever a fast_pthread_once_t is finalized */ #define BEING_INITIALIZED (FAST_PTHREAD_ONCE_INIT - 1) static fast_pthread_once_t global_epoch = 0; /* under mu */ __thread fast_pthread_once_t _fast_pthread_once_per_thread_epoch; static void check( int x ) { if ( x == 0 ) abort(); } void fast_pthread_once( fast_pthread_once_t *once, void (*func)(void) ) { /*01*/ fast_pthread_once_t x = *once; /* unprotected access */ /*02*/ if ( x > _fast_pthread_once_per_thread_epoch ) { /*03*/ check( pthread_mutex_lock(&mu) == 0 ); /*04*/ if ( *once == FAST_PTHREAD_ONCE_INIT ) { /*05*/ *once = BEING_INITIALIZED; /*06*/ check( pthread_mutex_unlock(&mu) == 0 ); /*07*/ (*func)(); /*08*/ check( pthread_mutex_lock(&mu) == 0 ); /*09*/ global_epoch++; /*10*/ *once = global_epoch; /*11*/ check( pthread_cond_broadcast(&cv) == 0 ); /*12*/ } else { /*13*/ while ( *once == BEING_INITIALIZED ) { /*14*/ check( pthread_cond_wait(&cv, &mu) == 0 ); /*15*/ } /*16*/ } /*17*/ _fast_pthread_once_per_thread_epoch = global_epoch; /*18*/ check (pthread_mutex_unlock(&mu) == 0); } } This is the code from BOOST: #ifndef BOOST_THREAD_PTHREAD_ONCE_HPP #define BOOST_THREAD_PTHREAD_ONCE_HPP // once.hpp // // (C) Copyright 2007-8 Anthony Williams // // Distributed under the Boost Software License, Version 1.0.
    (See // accompanying file LICENSE_1_0.txt or copy at // http://www.boost.org/LICENSE_1_0.txt) #include #include #include #include "pthread_mutex_scoped_lock.hpp" #include #include #include namespace boost { struct once_flag { boost::uintmax_t epoch; }; namespace detail { BOOST_THREAD_DECL boost::uintmax_t& get_once_per_thread_epoch(); BOOST_THREAD_DECL extern boost::uintmax_t once_global_epoch; BOOST_THREAD_DECL extern pthread_mutex_t once_epoch_mutex; BOOST_THREAD_DECL extern pthread_cond_t once_epoch_cv; } #define BOOST_ONCE_INITIAL_FLAG_VALUE 0 #define BOOST_ONCE_INIT {BOOST_ONCE_INITIAL_FLAG_VALUE} // Based on Mike Burrows fast_pthread_once algorithm as described in // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2444.html template<typename Function> void call_once(once_flag& flag,Function f) { static boost::uintmax_t const uninitialized_flag=BOOST_ONCE_INITIAL_FLAG_VALUE; static boost::uintmax_t const being_initialized=uninitialized_flag+1; boost::uintmax_t const epoch=flag.epoch; boost::uintmax_t& this_thread_epoch=detail::get_once_per_thread_epoch(); if(epoch #endif Do I understand right that Boost doesn't use an atomic operation here, so the code from Boost is not thread-safe?


  • How can I make this Java code run faster?

    - by Martin Wiboe
    Hello all, I am trying to make a Java port of a simple feed-forward neural network. This obviously involves lots of numeric calculations, so I am trying to optimize my central loop as much as possible. The results should be correct within the limits of the float data type. My current code looks as follows (error handling & initialization removed): /** * Simple implementation of a feedforward neural network. The network supports * including a bias neuron with a constant output of 1.0 and weighted synapses * to hidden and output layers. * * @author Martin Wiboe */ public class FeedForwardNetwork { private final int outputNeurons; // No of neurons in output layer private final int inputNeurons; // No of neurons in input layer private int largestLayerNeurons; // No of neurons in largest layer private final int numberLayers; // No of layers private final int[] neuronCounts; // Neuron count in each layer, 0 is input // layer. private final float[][][] fWeights; // Weights between neurons. // fWeight[fromLayer][fromNeuron][toNeuron] // is the weight from fromNeuron in // fromLayer to toNeuron in layer // fromLayer+1. private float[][] neuronOutput; // Temporary storage of output from previous layer public float[] compute(float[] input) { // Copy input values to input layer output for (int i = 0; i < inputNeurons; i++) { neuronOutput[0][i] = input[i]; } // Loop through layers for (int layer = 1; layer < numberLayers; layer++) { // Loop over neurons in the layer and determine weighted input sum for (int neuron = 0; neuron < neuronCounts[layer]; neuron++) { // Bias neuron is the last neuron in the previous layer int biasNeuron = neuronCounts[layer - 1]; // Get weighted input from bias neuron - output is always 1.0 float activation = 1.0F * fWeights[layer - 1][biasNeuron][neuron]; // Get weighted inputs from rest of neurons in previous layer for (int inputNeuron = 0; inputNeuron < biasNeuron; inputNeuron++) { activation += neuronOutput[layer-1][inputNeuron] * fWeights[layer - 1][inputNeuron][neuron]; } // Store neuron output for next round of computation neuronOutput[layer][neuron] = sigmoid(activation); } } // Return output from network = output from last layer float[] result = new float[outputNeurons]; for (int i = 0; i < outputNeurons; i++) result[i] = neuronOutput[numberLayers - 1][i]; return result; } private final static float sigmoid(final float input) { return (float) (1.0F / (1.0F + Math.exp(-1.0F * input))); } } I am running the JVM with the -server option, and as of now my code is between 25% and 50% slower than similar C code. What can I do to improve this situation? Thank you, Martin Wiboe
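
    One low-risk change to try on the hot loop (a sketch, not benchmarked against the original): hoist the repeated array dereferences into locals so fWeights[layer - 1] and the two neuronOutput rows are resolved once per layer instead of once per synapse. Field names mirror the class above; this is a drop-in body for the layer loop in compute().

        for (int layer = 1; layer < numberLayers; layer++) {
            final float[][] weights  = fWeights[layer - 1];    // resolved once per layer
            final float[] prevOutput = neuronOutput[layer - 1];
            final float[] thisOutput = neuronOutput[layer];
            final int biasNeuron     = neuronCounts[layer - 1];

            for (int neuron = 0; neuron < neuronCounts[layer]; neuron++) {
                float activation = weights[biasNeuron][neuron];        // bias output is always 1.0
                for (int inputNeuron = 0; inputNeuron < biasNeuron; inputNeuron++) {
                    activation += prevOutput[inputNeuron] * weights[inputNeuron][neuron];
                }
                thisOutput[neuron] = sigmoid(activation);
            }
        }

    The remaining weights[inputNeuron][neuron] access still walks a column of the weight matrix; if profiling shows the loop is memory-bound, transposing fWeights so the innermost index is the one being iterated is the next experiment, at the cost of changing the weight layout everywhere else.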


  • Socket in C: recv overwrites a char[]

    - by Possa
    Hi all, I'm trying to make a little client-server script like many others that I've done in the past. But in this one I have a problem. It is better if I post the code and the output it give me. Code: #include <mysql.h> //not important now #include <stdlib.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <netdb.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <signal.h> #include <string.h> //constant definition #define SERVER_PORT 2121 #define LINESIZE 21 //global var definition char victim_ip[LINESIZE], file_write[LINESIZE], hacker_ip[LINESIZE]; //function void leggi (int); //not use now for debugging purpose //void scriviDB (); //not important now main () { int sock, client_len, fd; struct sockaddr_in server, client; // transport end point if((sock = socket(AF_INET, SOCK_STREAM, 0)) == -1) { perror("system call socket fail"); exit(1); } server.sin_family = AF_INET; server.sin_addr.s_addr = inet_addr("10.10.10.1"); server.sin_port = htons(SERVER_PORT); // binding address at transport end point if (bind(sock, (struct sockaddr *)&server, sizeof server) == -1) { perror("system call bind fail"); exit(1); } //fprintf(stderr, "Server open: listening.\n"); listen(sock, 5); /* managae client connection */ while (1) { client_len = sizeof(client); if ((fd = accept(sock, (struct sockaddr *)&client, &client_len)) < 0) { perror("accepting connection"); exit(1); } strcpy(hacker_ip, inet_ntoa(client.sin_addr)); printf("1 %s\n", hacker_ip); //debugging purpose //leggi(fd); ////////////////////////// //receive client recv(fd, victim_ip, LINESIZE, 0); victim_ip[sizeof(victim_ip)] = '\0'; printf("2 %s\n", hacker_ip); //debugging purpose recv(fd, file_write, LINESIZE, 0); file_write[sizeof(file_write)] = '\0'; printf("3 %s\n", hacker_ip); //debugging purpose printf("%s@%s for %s\n", file_write, victim_ip, hacker_ip); //send to client send(fd, hacker_ip, 40, 0); //now is hacker_ip for debug ///////////////////////// close(fd); }//end while exit(0); } //end main Client send string: ./send -i 10.10.10.4 -f filename.ext so the script send -i (IP) and -f (FILE) at the server. Here's my output server side: 1 10.10.10.6 2 10.10.10.6 3 [email protected] for As you can see the printf(3) and the printf(ip,file,ip) fail. I don't know how and where but someone overwrite my hacker_ip string. Thanks for your help! :)
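
    A likely culprit is visible in the receive path itself: victim_ip[sizeof(victim_ip)] and file_write[sizeof(file_write)] each write one element past the end of their arrays, and recv() may also fill all LINESIZE bytes without a terminator; with the three buffers sitting next to each other as globals, that is exactly the kind of thing that tramples hacker_ip. A hedged sketch of a bounds-safe version of that block (a drop-in for the lines between accept() and close(fd), with error handling kept minimal):

        /* Receive at most LINESIZE-1 bytes and terminate using the length recv()
           actually returned, so the NUL always lands inside the array. */
        ssize_t n;

        n = recv(fd, victim_ip, sizeof(victim_ip) - 1, 0);
        if (n < 0) { perror("recv victim_ip"); close(fd); continue; }
        victim_ip[n] = '\0';

        printf("2 %s\n", hacker_ip);                 /* debugging, as before */

        n = recv(fd, file_write, sizeof(file_write) - 1, 0);
        if (n < 0) { perror("recv file_write"); close(fd); continue; }
        file_write[n] = '\0';

        printf("3 %s\n", hacker_ip);                 /* debugging, as before */
        printf("%s@%s for %s\n", file_write, victim_ip, hacker_ip);

        /* Reply with only what the buffer actually holds; the original sends
           40 bytes out of a 21-byte array. */
        send(fd, hacker_ip, strlen(hacker_ip) + 1, 0);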


  • How do I get a JPanel with an empty JLabel to take up space in a GridBagLayout

    - by user2888663
    I am working on a GUI for a project at school. I am using a GridBagLayout in swing. I want to have a label indicating the input(a type of file @ x = 0, y = 0), followed by another label(the actual file name once selected @ x = 1, y = 0), followed by a browse button for a file chooser( @ x = 2, y = 0). The label at (1,0) is initially blank, however I want the area that the text will occupy to take up some space when the label contains no text. I also want the space between the label at (0,0) and the button at (2,0) to remain constant. To achieve this, I'm trying to put the label onto a panel and then play with the layouts. However I can't seam to achieve the desired results. Could anyone offer some suggestions? The next three rows of the GridBagLayout will be laid out exactly the same way. Here is a link to a screen shot of the GUI. calibrationFileSelectionValueLabel = new JLabel("",Label.LEFT); calibrationFileSelectionValueLabel.setName("calibrationFileSelection"); calibrationFileSelectionValueLabel.setMinimumSize(new Dimension(100,0)); calibrationFileSelectionValuePanel = new JPanel(); calibrationFileSelectionValuePanel.setBorder(BorderFactory.createEtchedBorder()); calibrationFileSelectionValuePanel.add(calibrationFileSelectionValueLabel); c.gridx = 0; c.gridy = 0; c.fill = GridBagConstraints.NONE; filesPanel.add(calibrationFileLabel,c); c.gridy = 1; filesPanel.add(frequencyFileLabel,c); c.gridy = 2; filesPanel.add(sampleFileLabel,c); c.gridy = 3; filesPanel.add(outputFileLabel,c); c.gridx = 1; c.gridy = 0; c.fill = GridBagConstraints.BOTH; // filesPanel.add(calibrationFileSelection,c); filesPanel.add(calibrationFileSelectionValuePanel,c); c.gridy = 1; // filesPanel.add(frequencyFileSelection,c); filesPanel.add(frequencyFileSelectionValueLabel,c); c.gridy = 2; // filesPanel.add(sampleFileSelection,c); filesPanel.add(sampleFileSelectionValueLabel,c); c.gridy = 3; // filesPanel.add(outputFileSelection,c); filesPanel.add(outputFileSelectionValueLabel,c); c.gridx = 2; c.gridy = 0; c.fill = GridBagConstraints.NONE; filesPanel.add(calibrationFileSelectionButton,c); c.gridy = 1; filesPanel.add(frequencyFileSelectionButton,c); c.gridy = 2; filesPanel.add(sampleFileSelectionButton,c); c.gridy = 3; filesPanel.add(createOutputFileButton,c); panelForFilesPanelBorder = new JPanel(); panelForFilesPanelBorder.setBorder(BorderFactory.createCompoundBorder(new EmptyBorder(5,10,5,10), BorderFactory.createEtchedBorder())); panelForFilesPanelBorder.add(filesPanel); buttonsPanel = new JPanel(); buttonsPanel.setLayout(new FlowLayout(FlowLayout.CENTER)); buttonsPanel.setBorder(BorderFactory.createCompoundBorder(new EmptyBorder(5,10,10,10), BorderFactory.createEtchedBorder())); buttonsPanel.add(startButton); buttonsPanel.add(stopButton); basePanel.add(panelForFilesPanelBorder); basePanel.add(numericInputPanel); basePanel.add(buttonsPanel);
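
    A sketch of one way to keep the middle column from collapsing while the label is empty (the sizes and insets are illustrative, not taken from the project): give the wrapper panel an explicit preferred/minimum size, and let the constraints own the horizontal space between the caption and the button instead of the label text.

        calibrationFileSelectionValuePanel.setPreferredSize(new Dimension(200, 24));
        calibrationFileSelectionValuePanel.setMinimumSize(new Dimension(200, 24));

        c.gridx = 1;
        c.gridy = 0;
        c.weightx = 1.0;                          // this column absorbs any extra width
        c.fill = GridBagConstraints.HORIZONTAL;   // stretch the panel across the column
        c.insets = new Insets(0, 10, 0, 10);      // constant gap to the caption and the button
        filesPanel.add(calibrationFileSelectionValuePanel, c);

        // reset before laying out the fixed-size columns
        c.weightx = 0;
        c.fill = GridBagConstraints.NONE;
        c.insets = new Insets(0, 0, 0, 0);

    Note that the label itself is currently given new Dimension(100, 0) as its minimum size; the zero height lets it collapse vertically, which works against the goal here.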


  • Random Complete System Unresponsiveness Running Mathematical Functions

    - by Computer Guru
    I have a program that loads a file (anywhere from 10MB to 5GB) a chunk at a time (ReadFile), and for each chunk performs a set of mathematical operations (basically calculates the hash). After calculating the hash, it stores info about the chunk in an STL map (basically <chunkID, hash>) and then writes the chunk itself to another file (WriteFile). That's all it does. This program will cause certain PCs to choke and die. The mouse begins to stutter, the task manager takes 2 min to show, ctrl+alt+del is unresponsive, running programs are slow.... the works. I've done literally everything I can think of to optimize the program, and have triple-checked all objects. What I've done: Tried different (less intensive) hashing algorithms. Switched all allocations to nedmalloc instead of the default new operator Switched from stl::map to unordered_set, found the performance to still be abysmal, so I switched again to Google's dense_hash_map. Converted all objects to store pointers to objects instead of the objects themselves. Caching all Read and Write operations. Instead of reading a 16k chunk of the file and performing the math on it, I read 4MB into a buffer and read 16k chunks from there instead. Same for all write operations - they are coalesced into 4MB blocks before being written to disk. Run extensive profiling with Visual Studio 2010, AMD Code Analyst, and perfmon. Set the thread priority to THREAD_MODE_BACKGROUND_BEGIN Set the thread priority to THREAD_PRIORITY_IDLE Added a Sleep(100) call after every loop. Even after all this, the application still results in a system-wide hang on certain machines under certain circumstances. Perfmon and Process Explorer show minimal CPU usage (with the sleep), no constant reads/writes from disk, few hard pagefaults (and only ~30k pagefaults in the lifetime of the application on a 5GB input file), little virtual memory (never more than 150MB), no leaked handles, no memory leaks. The machines I've tested it on run Windows XP - Windows 7, x86 and x64 versions included. None have less than 2GB RAM, though the problem is always exacerbated under lower memory conditions. I'm at a loss as to what to do next. I don't know what's causing it - I'm torn between CPU or Memory as the culprit. CPU because without the sleep and under different thread priorities the system performances changes noticeably. Memory because there's a huge difference in how often the issue occurs when using unordered_set vs Google's dense_hash_map. What's really weird? Obviously, the NT kernel design is supposed to prevent this sort of behavior from ever occurring (a user-mode application driving the system to this sort of extreme poor performance!?)..... but when I compile the code and run it on OS X or Linux (it's fairly standard C++ throughout) it performs excellently even on poor machines with little RAM and weaker CPUs. What am I supposed to do next? How do I know what the hell it is that Windows is doing behind the scenes that's killing system performance, when all the indicators are that the application itself isn't doing anything extreme? Any advice would be most welcome.


  • How do I create something like a negated character class with a string instead of characters?

    - by Chas. Owens
    I am trying to write a tokenizer for Mustache in Perl. I can easily handle most of the tokens like this: #!/usr/bin/perl use strict; use warnings; my $comment = qr/ \G \{\{ ! (?<comment> .+? ) }} /xs; my $variable = qr/ \G \{\{ (?<variable> .+? ) }} /xs; my $text = qr/ \G (?<text> .+? ) (?= \{\{ | \z ) /xs; my $tokens = qr/ $comment | $variable | $text /x; my $s = do { local $/; <DATA> }; while ($s =~ /$tokens/g) { my ($type) = keys %+; (my $contents = $+{$type}) =~ s/\n/\\n/; print "type [$type] contents [$contents]\n"; } __DATA__ {{!this is a comment}} Hi {{name}}, I like {{thing}}. But I am running into trouble with the Set Delimiters directive: #!/usr/bin/perl use strict; use warnings; my $delimiters = qr/ \G \{\{ (?<start> .+? ) = [ ] = (?<end> .+?) }} /xs; my $comment = qr/ \G \{\{ ! (?<comment> .+? ) }} /xs; my $variable = qr/ \G \{\{ (?<variable> .+? ) }} /xs; my $text = qr/ \G (?<text> .+? ) (?= \{\{ | \z ) /xs; my $tokens = qr/ $comment | $delimiters | $variable | $text /x; my $s = do { local $/; <DATA> }; while ($s =~ /$tokens/g) { for my $type (keys %+) { (my $contents = $+{$type}) =~ s/\n/\\n/; print "type [$type] contents [$contents]\n"; } } __DATA__ {{!this is a comment}} Hi {{name}}, I like {{thing}}. {{(= =)}} If I change it to my $delimiters = qr/ \G \{\{ (?<start> [^{]+? ) = [ ] = (?<end> .+?) }} /xs; It works fine, but the point of the Set Delimiters directive is to change the delimiters, so the code will wind up looking like my $variable = qr/ \G $start (?<variable> .+? ) $end /xs; And it is perfectly valid to say {{{== ==}}} (i.e. change the delimiters to {= and =}). What I want, but maybe not what I need, is the ability to say something like (?:not starting string)+?. I figure I am just going to have to give up being clean about it and drop code into the regex to force it to match only what I want. I am trying to avoid that for four reasons: I don't think it is very clean. It is marked as experimental. I am not very familier with it (I think it comes down to (?{CODE}) and returning special values. I am hoping someone knows some other exotic feature that I am not familiar with that fits the situation better (e.g. (?(condition)yes-pattern|no-pattern)). Just to make things clear (I hope), I am trying to match a constant length starting delimiter followed by the shortest string that allows a match and does not contain the starting delimiter followed by a space followed by an equals sign followed by the shortest string that allows a match that ends with the ending delimiter.
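
    The usual stand-in for a "negated character class over a string" is a lookahead-guarded dot: match any single character that does not begin the start delimiter. A sketch with the delimiters held in variables, which is the situation Set Delimiters creates (quotemeta keeps user-chosen delimiters such as {= literal):

        my $start = quotemeta '{{';    # whatever the current Set Delimiters state is
        my $end   = quotemeta '}}';

        my $variable = qr/
            \G $start
            (?<variable> (?: (?! $start ) . )+? )   # shortest run that never starts $start
            $end
        /xs;

    This stays within ordinary regex features, so it avoids both the experimental (?{CODE}) blocks and the [^{] compromise, at the cost of evaluating the lookahead at every position inside the token.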


  • Optimizing Python code with many attribute and dictionary lookups

    - by gotgenes
    I have written a program in Python which spends a large amount of time looking up attributes of objects and values from dictionary keys. I would like to know if there's any way I can optimize these lookup times, potentially with a C extension, to reduce the time of execution, or if I need to simply re-implement the program in a compiled language. The program implements some algorithms using a graph. It runs prohibitively slowly on our data sets, so I profiled the code with cProfile using a reduced data set that could actually complete. The vast majority of the time is being burned in one function, and specifically in two statements, generator expressions, within the function: The generator expression at line 202 is neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors if neighbor in selected_nodes) and the generator expression at line 204 is neighbor_z_scores = (interaction_graph.node[neighbor]['weight'] for neighbor in neighbors_in_selected_nodes) The source code for this function of context provided below. selected_nodes is a set of nodes in the interaction_graph, which is a NetworkX Graph instance. node_neighbors is an iterator from Graph.neighbors_iter(). Graph itself uses dictionaries for storing nodes and edges. Its Graph.node attribute is a dictionary which stores nodes and their attributes (e.g., 'weight') in dictionaries belonging to each node. Each of these lookups should be amortized constant time (i.e., O(1)), however, I am still paying a large penalty for the lookups. Is there some way which I can speed up these lookups (e.g., by writing parts of this as a C extension), or do I need to move the program to a compiled language? Below is the full source code for the function that provides the context; the vast majority of execution time is spent within this function. def calculate_node_z_prime( node, interaction_graph, selected_nodes ): """Calculates a z'-score for a given node. The z'-score is based on the z-scores (weights) of the neighbors of the given node, and proportional to the z-score (weight) of the given node. Specifically, we find the maximum z-score of all neighbors of the given node that are also members of the given set of selected nodes, multiply this z-score by the z-score of the given node, and return this value as the z'-score for the given node. If the given node has no neighbors in the interaction graph, the z'-score is defined as zero. Returns the z'-score as zero or a positive floating point value. :Parameters: - `node`: the node for which to compute the z-prime score - `interaction_graph`: graph containing the gene-gene or gene product-gene product interactions - `selected_nodes`: a `set` of nodes fitting some criterion of interest (e.g., annotated with a term of interest) """ node_neighbors = interaction_graph.neighbors_iter(node) neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors if neighbor in selected_nodes) neighbor_z_scores = (interaction_graph.node[neighbor]['weight'] for neighbor in neighbors_in_selected_nodes) try: max_z_score = max(neighbor_z_scores) # max() throws a ValueError if its argument has no elements; in this # case, we need to set the max_z_score to zero except ValueError, e: # Check to make certain max() raised this error if 'max()' in e.args[0]: max_z_score = 0 else: raise e z_prime = interaction_graph.node[node]['weight'] * max_z_score return z_prime Here are the top couple of calls according to cProfiler, sorted by time. 
        ncalls     tottime  percall  cumtime  percall  filename:lineno(function)
        156067701  352.313    0.000  642.072    0.000  bpln_contextual.py:204(<genexpr>)
        156067701  289.759    0.000  289.759    0.000  bpln_contextual.py:202(<genexpr>)
         13963893  174.047    0.000  816.119    0.000  {max}
         13963885   69.804    0.000  936.754    0.000  bpln_contextual.py:171(calculate_node_z_prime)
          7116883   61.982    0.000   61.982    0.000  {method 'update' of 'set' objects}
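
    The standard pure-Python mitigation is to bind the repeated lookups to local names once per call, so the hot path does LOAD_FAST work instead of chained attribute and global lookups per neighbor. A sketch of the same function with that applied (same algorithm and same return values; whether it buys enough is something only the profiler can confirm):

        def calculate_node_z_prime(node, interaction_graph, selected_nodes):
            node_attrs = interaction_graph.node            # node -> attribute dict, looked up once
            in_selected = selected_nodes.__contains__      # bound method, avoids repeated lookups

            max_z_score = None
            for neighbor in interaction_graph.neighbors_iter(node):
                if in_selected(neighbor):
                    z = node_attrs[neighbor]['weight']
                    if max_z_score is None or z > max_z_score:
                        max_z_score = z

            if max_z_score is None:                        # no selected neighbors
                return 0
            return node_attrs[node]['weight'] * max_z_score

    Folding the two generator expressions into one explicit loop also removes the generator-frame overhead the profile attributes to lines 202 and 204; if that is still too slow, the next step up is usually Cython or a C extension rather than further micro-tuning.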


  • How to find minimum weight with maximum cost in 0-1 Knapsack algorithm?

    - by Nitin9791
    I am trying to solve a spoj problem Party Schedule the problem statement is- You just received another bill which you cannot pay because you lack the money. Unfortunately, this is not the first time to happen, and now you decide to investigate the cause of your constant monetary shortness. The reason is quite obvious: the lion's share of your money routinely disappears at the entrance of party localities. You make up your mind to solve the problem where it arises, namely at the parties themselves. You introduce a limit for your party budget and try to have the most possible fun with regard to this limit. You inquire beforehand about the entrance fee to each party and estimate how much fun you might have there. The list is readily compiled, but how do you actually pick the parties that give you the most fun and do not exceed your budget? Write a program which finds this optimal set of parties that offer the most fun. Keep in mind that your budget need not necessarily be reached exactly. Achieve the highest possible fun level, and do not spend more money than is absolutely necessary. Input The first line of the input specifies your party budget and the number n of parties. The following n lines contain two numbers each. The first number indicates the entrance fee of each party. Parties cost between 5 and 25 francs. The second number indicates the amount of fun of each party, given as an integer number ranging from 0 to 10. The budget will not exceed 500 and there will be at most 100 parties. All numbers are separated by a single space. There are many test cases. Input ends with 0 0. Output For each test case your program must output the sum of the entrance fees and the sum of all fun values of an optimal solution. Both numbers must be separated by a single space. Example Sample input: 50 10 12 3 15 8 16 9 16 6 10 2 21 9 18 4 12 4 17 8 18 9 50 10 13 8 19 10 16 8 12 9 10 2 12 8 13 5 15 5 11 7 16 2 0 0 Sample output: 49 26 48 32 now I know that it is an advance version of 0/1 knapsack problem where along with maximum cost we also have to find minimum weight that is less than a a given weight and have maximum cost. so I have used dp to solve this problem but still get a wrong awnser on submission while it is perfectly fine with given test cases. My code is typedef vector<int> vi; #define pb push_back #define FOR(i,n) for(int i=0;i<n;i++) int main() { //freopen("input.txt","r",stdin); while(1) { int W,n; cin>>W>>n; if(W==0 && n==0) break; int K[n+1][W+1]; vi val,wt; FOR(i,n) { int x,y; cin>>x>>y; wt.pb(x); val.pb(y); } FOR(i,n+1) { FOR(w,W+1) { if(i==0 || w==0) { K[i][w]=0; } else if (wt[i-1] <= w) { if(val[i-1] + K[i-1][w-wt[i-1]]>=K[i-1][w]) { K[i][w]=val[i-1] + K[i-1][w-wt[i-1]]; } else { K[i][w]=K[i-1][w]; } } else { K[i][w] = K[i-1][w]; } } } int a1=K[n][W],a2; for(int j=0;j<W;j++) { if(K[n][j]==a1) { a2=j; break; } } cout<<a2<<" "<<a1<<"\n"; } return 0; } Could anyone suggest what am I missing??
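
    One concrete suspect in the code above (hedged: it matches the symptom but has not been run against the judge): the final scan uses j < W, so whenever the optimum is only reached at full capacity the loop never finds a match and a2 is printed uninitialized. Initializing the answer to W and scanning the whole range keeps the "minimum spend for maximum fun" idea intact, since K[n][j] is non-decreasing in j:

        int best_fun  = K[n][W];
        int min_spend = W;                    // fallback: the whole budget is needed
        for (int j = 0; j <= W; j++) {
            if (K[n][j] == best_fun) {        // first (smallest) capacity reaching the optimum
                min_spend = j;
                break;
            }
        }
        cout << min_spend << " " << best_fun << "\n";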


  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid, or instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective, I am asking for specific examples of languages or features, and ideas for optimization of these features, or important features that I haven't considered. Also, any references to implementations that prove my theories right or wrong. Top on my list of hard to optimize features and my theories (some of my theories are untested and are based on thought experiments): 1) Runtime method overloading (aka multi-method dispatch or signature based dispatch). Is it hard to optimize when combined with features that allow runtime recompilation or method addition. Or is it just hard, anyway? Call site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods. 2) Type morphing / variants (aka value based typing as opposed to variable based) Traditional optimizations simply cannot be applied when you don't know if the type of someting can change in a basic block. Combined with multi-methods, inlining must be done carefully if at all, and probably only for a given threshold of size of the callee. ie. it is easy to consider inlining simple property fetches (getters / setters) but inlining complex methods may result in code bloat. The other issue is I cannot just assign a variant to a register and JIT it to the native instructions because I have to carry around the type info, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved with x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective. 3) First class continuations - There are multiple ways to implement them, and I have done so in both of the most common approaches, one being stack copying and the other as implementing the runtime to use continuation passing style, cactus stacks, copy-on-write stack frames, and garbage collection. First class continuations have resource management issues, ie. we must save everything, in case the continuation is resumed, and I'm not aware if any languages support leaving a continuation with "intent" (ie. "I am not coming back here, so you may discard this copy of the world"). Having programmed in the threading model and the contination model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and also may affect cache efficienty (locality of stack changes more with use of continuations and co-routines). The other issue is they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast, and the less-common cases should be correct. 4) Pointer arithmetic and ability to mask pointers (storing in integers, etc.) Had to throw this in, but I could actually live without this quite easily. My feelings are that many of the high-level features, particularly in dynamic languages just don't map to hardware. 
Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of these features (features like caching, aliasing top of stack to register, instruction parallelism, return address buffers, loop buffers and branch prediction). Macro-applications of micro-features don't necessarily pan out like some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (ie. the more dynamic a language is the more we must lookup/cache at runtime, nothing can be assumed, so our instruction mix is made up of a higher percentage of non-local branching than traditional, statically compiled code) and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.


  • Javascript physics in a 2D space

    - by eroo
    So, I am working on teaching myself Canvas (HTML5) and have most of a simple game engine coded up. It is a 2d representation of a space scene (planets, stars, celestial bodies, etc). My default "Sprite" class has a frame listener like such: "baseClass" contains a function that allows inheritance and applies "a" to "this.a". So, "var aTest = new Sprite({foo: 'bar'});" would make "aTest.foo = 'bar'". This is how I expose my objects to each other. { Sprite = baseClass.extend({ init: function(a){ baseClass.init(this, a); this.fields = new Array(); // list of fields of gravity one is in. Not sure if this is a good idea. this.addFL(function(tick){ // this will change to be independent of framerate soon. // and this is where I need help // gobjs is an array of all the Sprite objects in the "world". for(i = 0; i < gobjs.length; i++){ // Make sure its got setup correctly, make sure it -wants- gravity, and make sure it's not -this- sprite. if(typeof(gobjs[i].a) != undefined && !gobjs[i].a.ignoreGravity && gobjs[i].id != this.id){ // Check if it's within a certain range (obviously, gravity doesn't work this way... But I plan on having a large "space" area, // And I can't very well have all objects accounted for at all times, can I? if(this.distanceTo(gobjs[i]) < this.s.size*10 && gobjs[i].fields.indexOf(this.id) == -1){ gobjs[i].fields.push(this.id); } } } for(i = 0; i < this.fields.length; i++){ distance = this.distanceTo(gobjs[this.fields[i]]); angletosun = this.angleTo(gobjs[this.fields[i]])*(180/Math.PI); // .angleTo works very well, returning the angle in radians, which I convert to degrees here. // I have no idea what should happen here, although through trial and error (and attempting to read Maths papers on gravity (eeeeek!)), this sort of mimics gravity. // angle is its orientation, currently I assign a constant velocity to one of my objects, and leave the other static (it ignores gravity, but still emits it). this.a.angle = angletosun+(75+(distance*-1)/5); //todo: omg learn math if(this.distanceTo(gobjs[this.fields[i]]) > gobjs[this.fields[i]].a.size*10) this.fields.splice(i); // out of range, stop effecting. } }); } }); } Thanks in advance. The real trick is that one line: { this.a.angle = angletosun+(75+(distance*-1)/5); } This is more a physics question than Javascript, but I've searched and searched and read way to many wiki articles on orbital mathematics. It gets over my head very quickly. Edit: There is a weirdness with the SO formatting; forgives me, I is noobie.
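
    For the line in question, the usual structure is to stop steering this.a.angle directly and instead accumulate an acceleration vector from each attracting body, then integrate it into a velocity. A sketch in plain JavaScript (the sprite.x/y/vx/vy and mass fields are assumptions about the engine, and G is a gameplay constant to tune, not the physical value):

        var G = 0.5;  // tuning constant for the game, not real-world units

        function applyGravity(sprite, attractor, dt) {
            var dx = attractor.x - sprite.x;
            var dy = attractor.y - sprite.y;
            var distSq = dx * dx + dy * dy;
            if (distSq < 1) return;                     // avoid blow-ups at zero distance

            var dist  = Math.sqrt(distSq);
            var accel = G * attractor.mass / distSq;    // a = G*m/d^2, directed at the attractor

            sprite.vx += (dx / dist) * accel * dt;      // unit direction times acceleration
            sprite.vy += (dy / dist) * accel * dt;
        }

        // per frame, after every attractor has been applied:
        // sprite.x += sprite.vx * dt;
        // sprite.y += sprite.vy * dt;

    With velocities integrated this way, the "fields" bookkeeping can stay as a simple range check, and orbital motion comes out of the integration instead of the hand-tuned angle offset.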


  • How to close a JFrame in the middle of a program

    - by Nick Gibson
    public class JFrameWithPanel extends JFrame implements ActionListener, ItemListener { int packageIndex; double price; double[] prices = {49.99, 39.99, 34.99, 99.99}; DecimalFormat money = new DecimalFormat("$0.00"); JLabel priceLabel = new JLabel("Total Price: "+price); JButton button = new JButton("Check Price"); JComboBox packageChoice = new JComboBox(); JPanel pane = new JPanel(); TextField text = new TextField(5); JButton accept = new JButton("Accept"); JButton decline = new JButton("Decline"); JCheckBox serviceTerms = new JCheckBox("I Agree to the Terms of Service.", false); JTextArea termsOfService = new JTextArea("This is a text area", 5, 10); public JFrameWithPanel() { super("JFrame with Panel"); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); pane.add(packageChoice); setContentPane(pane); setSize(250,250); setVisible(true); packageChoice.addItem("A+ Certification"); packageChoice.addItem("Network+ Certification "); packageChoice.addItem("Security+ Certifictation"); packageChoice.addItem("CIT Full Test Package"); pane.add(button); button.addActionListener(this); pane.add(text); text.setEditable(false); text.setBackground(Color.WHITE); text.addActionListener(this); pane.add(termsOfService); termsOfService.setEditable(false); termsOfService.setBackground(Color.lightGray); pane.add(serviceTerms); serviceTerms.addItemListener(this); pane.add(accept); accept.addActionListener(this); pane.add(decline); decline.addActionListener(this); } public void actionPerformed(ActionEvent e) { packageIndex = packageChoice.getSelectedIndex(); price = prices[packageIndex]; text.setText("$"+price); Object source = e.getSource(); if(source == accept) { if(serviceTerms.isSelected() == false) { JOptionPane.showMessageDialog(null,"Please accept the terms of service.", "Terms of Service", JOptionPane.ERROR_MESSAGE); } else { JOptionPane.showMessageDialog(null,"Thank you. We will now move on to registering your product."); pane.dispose(); } } else if(source == decline) { System.exit(0); } } public void itemStateChanged(ItemEvent e) { int select = e.getStateChange(); } public static void main(String[] args) { String value1; int constant = 1, invalidNum = 0, answerParse, packNum, packPrice; JOptionPane.showMessageDialog(null,"Hello!"+"\nWelcome to the CIT Test Program."); JOptionPane.showMessageDialog(null,"IT WORKS!"); } }//class How do I get this frame to close so that my JOptionPane Message Dialogs can continue in the program without me exiting the program completely. EDIT: I tried .dispose() but I get this: cannot find symbol symbol : method dispose() location: class javax.swing.JPanel pane.dispose(); ^
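
    The compiler error at the bottom is the clue: dispose() is defined on java.awt.Window (and therefore on JFrame), not on JPanel. Since this class itself extends JFrame, a sketch of the accept branch that closes the window and lets the remaining JOptionPane dialogs carry on:

        else {
            JOptionPane.showMessageDialog(null,
                    "Thank you. We will now move on to registering your product.");
            setVisible(false);   // hide this JFrameWithPanel (it is the JFrame)
            dispose();           // release its native resources; a programmatic dispose()
                                 // does not trigger EXIT_ON_CLOSE, so the program continues
        }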


  • Odd C++ template behaviour with static member vars

    - by jon hanson
    This piece of code is supposed to calculate an approximation to e (i.e. the mathematical constant ~ 2.71828183) at compile-time, using the following approach; e1 = 2 / 1 e2 = (2 * 2 + 1) / (2 * 1) = 5 / 2 = 2.5 e3 = (3 * 5 + 1) / (3 * 2) = 16 / 6 ~ 2.67 e4 = (4 * 16 + 1) / (4 * 6) = 65 / 24 ~ 2.708 ... e(i) = (e(i-1).numer * i + 1) / (e(i-1).denom * i) The computation is returned via the result static member however, after 2 iterations it yields zero instead of the expected value. I've added a static member function f() to compute the same value and that doesn't exhibit the same problem. #include <iostream> #include <iomanip> // Recursive case. template<int ITERS, int NUMERATOR = 2, int DENOMINATOR = 1, int I = 2> struct CalcE { static const double result; static double f () {return CalcE<ITERS, NUMERATOR * I + 1, DENOMINATOR * I, I + 1>::f ();} }; template<int ITERS, int NUMERATOR, int DENOMINATOR, int I> const double CalcE<ITERS, NUMERATOR, DENOMINATOR, I>::result = CalcE<ITERS, NUMERATOR * I + 1, DENOMINATOR * I, I + 1>::result; // Base case. template<int ITERS, int NUMERATOR, int DENOMINATOR> struct CalcE<ITERS, NUMERATOR, DENOMINATOR, ITERS> { static const double result; static double f () {return result;} }; template<int ITERS, int NUMERATOR, int DENOMINATOR> const double CalcE<ITERS, NUMERATOR, DENOMINATOR, ITERS>::result = static_cast<double>(NUMERATOR) / DENOMINATOR; // Test it. int main (int argc, char* argv[]) { std::cout << std::setprecision (8); std::cout << "e2 ~ " << CalcE<2>::result << std::endl; std::cout << "e3 ~ " << CalcE<3>::result << std::endl; std::cout << "e4 ~ " << CalcE<4>::result << std::endl; std::cout << "e5 ~ " << CalcE<5>::result << std::endl; std::cout << std::endl; std::cout << "e2 ~ " << CalcE<2>::f () << std::endl; std::cout << "e3 ~ " << CalcE<3>::f () << std::endl; std::cout << "e4 ~ " << CalcE<4>::f () << std::endl; std::cout << "e5 ~ " << CalcE<5>::f () << std::endl; return 0; } I've tested this with VS 2008 and VS 2010, and get the same results in each case: e2 ~ 2 e3 ~ 2.5 e4 ~ 0 e5 ~ 0 e2 ~ 2 e3 ~ 2.5 e4 ~ 2.6666667 e5 ~ 2.7083333 Why does result not yield the expected values whereas f() does? According to Rotsor's comment below, this does work with GCC, so I guess the question is, am i relying on some type of undefined behaviour with regards to static initialisation order, or is this a bug with Visual Studio?


  • Facebook Registration Plugin redirects in wrong situation

    - by AVProgrammer
    I am using the PHP Facebook SDK to leverage the Facebook registration/login plugin. I am referring to: http://www.masteringapi.com/tutorials/how-to-use-facebook-registration-plugin-as-your-registration-system/15/ Currently, the browser redirects away from the registration page, no matter if I am signed in to Facebook or not. <div id="fb-root"></div> <script> window.fbAsyncInit = function() { FB.init({ appId: '<?php echo $facebook->getAppID() ?>', cookie: true, xfbml: true, oauth: true }); FB.Event.subscribe('auth.login', function(response) { window.location.reload(); }); FB.Event.subscribe('auth.logout', function(response) { window.location.reload(); }); <?php if($_SERVER['PHP_SELF'] == '/register/index.php'){?> FB.getLoginStatus(function(response) { if (response.status == "connected" || response.status == "unknown") { window.location = "http://<?=SITE_HOST?>/"; } }); <?php }?> }; (function() { var e = document.createElement('script'); e.async = true; e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js'; document.getElementById('fb-root').appendChild(e); }()); The part that is causing this unwanted redirect is this: <?php if($_SERVER['PHP_SELF'] == '/register/index.php'){?> FB.getLoginStatus(function(response) { if (response.status == "connected" || response.status == "unknown") { window.location = "http://<?=SITE_HOST?>/"; } }); <?php }?> This function is supposed to redirect a "connected"1 user, away from the registration page. Unfortunately, even when I access this page from a brand-new browser (Firefox, just installed, which means no cookies or sessions exist) and I am still redirected. Note the PHP code below[2]. Also, the SITE_HOST is a constant for the www. address. I think the actual condition that is being met is response.status being equal to "unknown". This status check implies an unsuccessful response from Facebook, right? Without this getLoginStatus check in the code, it will actually load the the registration page, and a Facebook registration form: Note how it says: "You have registered", which I did a month ago (assuming this qualification is met by allowing the Facebook App permissions, which was done when I initially installed the Facebook SDK). So why would the redirect code work for me (seemingly), then still redirect me when I am using a browser NOT signed in to Facebook? Also, the jQuery function at the bottom of the script loads all.js. This is all I need, right? Footnotes: Someone with "connected" status is logged in to Facebook and has approved your Facebook App permissions. No matter which page this was on it caused this, and since this code appears in a global include file, I've restricting from printing to only on the registration page, which was the intention for this code to my understanding.

    Read the article

  • Help in building a 16-bit OS

    - by Barshan Das
    I am trying to build an old 16-bit DOS-like OS. My bootloader code:
    ; This is not my code. May be of Fritzos; I forgot the source.
    ORG 7c00h
    jmp Start
    drive db 0
    msg   db " Loader Initialization",0
    msg2  db "ACos Loaded",0
    print:
        lodsb
        cmp al, 0
        je end
        mov ah, 0Eh
        int 10h
        jmp print
    end:
        ret
    Start:
        mov [ drive ], dl       ; Get the floppy OS booted from
        ; Update the segment registers
        xor ax, ax              ; XOR ax
        mov ds, ax              ; Mov AX into DS
        mov si, msg
        call print
    ; Load Kernel.
    ResetFloppy:
        mov ax, 0x00            ; Select Floppy Reset BIOS Function
        mov dl, [ drive ]       ; Select the floppy ADos booted from
        int 13h                 ; Reset the floppy drive
        jc ResetFloppy          ; If there was an error, try again.
    ReadFloppy:
        mov bx, 0x9000          ; Load kernel at 9000h.
        mov ah, 0x02            ; Load disk data to ES:BX
        mov al, 17              ; Load two floppy heads' worth of data.
        mov ch, 0               ; First cylinder
        mov cl, 2               ; Start at the 2nd sector to load the kernel
        mov dh, 0               ; Use first floppy head
        mov dl, [ drive ]       ; Load from the drive the kernel booted from.
        int 13h                 ; Read the floppy disk.
        jc ReadFloppy           ; Error, try again.
        ; Clear text mode screen
        mov ax, 3
        int 10h
        ; Print starting message
        mov si, msg2
        call print
        mov ax, 0x0
        mov ss, ax
        mov sp, 0xFFFF
        jmp 9000h
    ; This part makes sure the bootsector is 512 bytes.
    times 510-($-$$) db 0
    ; Bootable sector signature
    dw 0xAA55
    My example kernel code:
    asm(".code16\n");
    void putchar(char);
    int main() {
        putchar('A');
        return 0;
    }
    void putchar(char val) {
        asm("movb %0, %%al\n"
            "movb $0x0E, %%ah\n"
            "int $0x10\n"
            :
            : "r"(val));
    }
    This is how I compile it:
    nasm -f bin -o ./bin/boot.bin ./source/boot.asm
    gcc -nostdinc -fno-builtin -I./include -c -o ./bin/kernel.o ./source/kernel.c
    ld -Ttext=0x9000 -o ./bin/kernel.bin ./bin/kernel.o -e 0x0
    dd if=/dev/zero of=./bin/empty.bin bs=1440K count=1
    cat ./bin/boot.bin ./bin/kernel.bin ./bin/empty.bin | head -c 1440K > ./bin/os
    rm ./bin/empty.bin
    and I run it in a virtual machine. When I make the putchar function (in the kernel code) print a constant value, i.e. like this:
    void putchar() {
        char val = 'A';
        asm("movb %0, %%al\n"
            "movb $0x0E, %%ah\n"
            "int $0x10\n"
            :
            : "r"(val));
    }
    then it works fine. But when I pass the character as an argument (as in the previous code), it prints a space for any character. What should I do?
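    Two frequent culprits in this setup, offered as educated guesses rather than a verified diagnosis: plain .code16 assembles gcc's 32-bit output as true 16-bit code, so the argument is fetched from the wrong stack offset (hence the garbage character), and the inline asm writes AL and AH without telling the compiler, so the "r" operand may be handed one of those very registers. A minimal sketch of a more defensive putchar under those assumptions (not the poster's verified fix):
    /* Sketch only. .code16gcc (a real gas directive, unlike plain .code16 for
       this purpose) emits operand/address-size prefixes so 32-bit gcc code
       still executes correctly in real mode. */
    asm(".code16gcc\n");
    void putchar(char val)
    {
        /* Pack AH=0x0E (BIOS teletype) and AL=val into one operand and let the
           compiler place the whole value in AX, so no register is modified
           behind its back. */
        unsigned short ax = 0x0E00 | (unsigned char)val;
        asm volatile("int $0x10"
                     :
                     : "a"(ax), "b"((unsigned short)0)  /* BH = display page 0 */
                     : "memory");
    }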

    Read the article

  • Decomposing a rotation matrix

    - by DeadMG
    I have a rotation matrix. How can I get the rotation around a specified axis contained within this matrix? Edit: It's a 3D matrix (4x4), and I want to know how far around a predetermined (not contained) axis the matrix rotates. I can already decompose the matrix, but D3DX will only give the entire matrix as one rotation around one axis, whereas I need to split the matrix up into the angle of rotation around an already-known axis, plus the rest. Sample code and brief problem description:
    D3DXMATRIX CameraRotationMatrix;
    D3DXVECTOR3 CameraPosition;
    //D3DXVECTOR3 CameraRotation;
    inline D3DXMATRIX GetRotationMatrix() { return CameraRotationMatrix; }
    inline void TranslateCamera(float x, float y, float z) {
        D3DXVECTOR3 rvec, vec(x, y, z);
    #pragma warning(disable : 4238)
        D3DXVec3TransformNormal(&rvec, &vec, &GetRotationMatrix());
    #pragma warning(default : 4238)
        CameraPosition += rvec;
        RecomputeVPMatrix();
    }
    inline void RotateCamera(float x, float y, float z) {
        D3DXVECTOR3 RotationRequested(x, y, z);
        D3DXVECTOR3 XAxis, YAxis, ZAxis;
        D3DXMATRIX rotationx, rotationy, rotationz;
        XAxis = D3DXVECTOR3(1, 0, 0);
        YAxis = D3DXVECTOR3(0, 1, 0);
        ZAxis = D3DXVECTOR3(0, 0, 1);
    #pragma warning(disable : 4238)
        D3DXVec3TransformNormal(&XAxis, &XAxis, &GetRotationMatrix());
        D3DXVec3TransformNormal(&YAxis, &YAxis, &GetRotationMatrix());
        D3DXVec3TransformNormal(&ZAxis, &ZAxis, &GetRotationMatrix());
    #pragma warning(default : 4238)
        D3DXMatrixIdentity(&rotationx);
        D3DXMatrixIdentity(&rotationy);
        D3DXMatrixIdentity(&rotationz);
        D3DXMatrixRotationAxis(&rotationx, &XAxis, RotationRequested.x);
        D3DXMatrixRotationAxis(&rotationy, &YAxis, RotationRequested.y);
        D3DXMatrixRotationAxis(&rotationz, &ZAxis, RotationRequested.z);
        CameraRotationMatrix *= rotationz;
        CameraRotationMatrix *= rotationy;
        CameraRotationMatrix *= rotationx;
        RecomputeVPMatrix();
    }
    inline void RecomputeVPMatrix() {
        D3DXMATRIX ProjectionMatrix;
        D3DXMatrixPerspectiveFovLH(
            &ProjectionMatrix,
            FoV,
            (float)D3DDeviceParameters.BackBufferWidth / (float)D3DDeviceParameters.BackBufferHeight,
            FarPlane,
            NearPlane
        );
        D3DXVECTOR3 CamLookAt;
        D3DXVECTOR3 CamUpVec;
    #pragma warning(disable : 4238)
        D3DXVec3TransformNormal(&CamLookAt, &D3DXVECTOR3(1, 0, 0), &GetRotationMatrix());
        D3DXVec3TransformNormal(&CamUpVec, &D3DXVECTOR3(0, 1, 0), &GetRotationMatrix());
    #pragma warning(default : 4238)
        D3DXMATRIX ViewMatrix;
    #pragma warning(disable : 4238)
        D3DXMatrixLookAtLH(&ViewMatrix, &CameraPosition, &(CamLookAt + CameraPosition), &CamUpVec);
    #pragma warning(default : 4238)
        ViewProjectionMatrix = ViewMatrix * ProjectionMatrix;
        D3DVIEWPORT9 vp = { 0, 0, D3DDeviceParameters.BackBufferWidth, D3DDeviceParameters.BackBufferHeight, 0, 1 };
        D3DDev->SetViewport(&vp);
    }
    Effectively, after a certain time, when RotateCamera is called it begins to rotate around the relative X axis, even though a constant zero is passed in for that component when responding to mouse input, so I know that moving the mouse should not roll the camera at all. I tried spamming 0,0,0 requests and saw no change (one per frame at 1500 frames per second), so I'm fairly sure I'm not seeing FP error or matrix accumulation error. I tried writing a RotateCameraYZ function and stripping all X-axis handling from it. I've spent several days trying to discover why this is the case, and eventually decided on just hacking around it.
    Just for reference: I've seen some diagrams on Wikipedia, and I actually have a relatively unusual axis layout, with the Y axis up but the X axis forwards and the Z axis right, so the Y axis is yaw, the Z axis is pitch, and the X axis is roll.
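    For the decomposition itself, one standard technique is the swing-twist split (offered here as a sketch of the idea, not as a D3DX API call): convert the matrix to a unit quaternion (for example via D3DXQuaternionRotationMatrix) and project the quaternion's vector part onto the known axis; that projection is the twist about the axis, and whatever is left over is the swing. The helper below is hypothetical and assumes a unit quaternion and a unit axis.
    #include <cmath>
    struct Vec3 { float x, y, z; };      // assumed unit axis
    struct Quat { float w, x, y, z; };   // assumed unit quaternion
    // Hypothetical helper: angle of the "twist" component of q about the given
    // unit axis. The remaining rotation (the swing) is q times the inverse of
    // the twist quaternion (q.w, d * axis) after normalisation.
    float TwistAngleAboutAxis(const Quat& q, const Vec3& axis)
    {
        // Component of the quaternion's vector part along the axis.
        float d = q.x * axis.x + q.y * axis.y + q.z * axis.z;
        // atan2 ignores the common scale of (q.w, d), so the twist quaternion
        // never needs to be normalised explicitly. If both q.w and d are ~0 the
        // rotation is a pure 180-degree swing and the twist is undefined; treat
        // that case separately.
        return 2.0f * std::atan2(d, q.w);
    }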

    Read the article

  • Java algorithm for normalizing audio

    - by Marty Pitt
    I'm trying to normalize an audio file of speech. Specifically, where the audio file contains peaks in volume, I'm trying to level it out, so the quiet sections are louder and the peaks are quieter. I know very little about audio manipulation beyond what I've learnt from working on this task. Also, my math is embarrassingly weak. I've done some research, and the Xuggle site provides a sample which shows reducing the volume using the following code (full version here):
    @Override
    public void onAudioSamples(IAudioSamplesEvent event) {
        // get the raw audio bytes and adjust their values
        ShortBuffer buffer = event.getAudioSamples().getByteBuffer().asShortBuffer();
        for (int i = 0; i < buffer.limit(); ++i)
            buffer.put(i, (short) (buffer.get(i) * mVolume));
        super.onAudioSamples(event);
    }
    Here, they scale the samples in getAudioSamples() by a constant factor mVolume. Building on this approach, I've attempted a normalisation that maps the samples in getAudioSamples() to a normalised value, considering the max/min in the file (see below for details). I have a simple filter to leave "silence" alone (i.e., anything below a threshold value). I'm finding that the output file is very noisy (i.e., the quality is seriously degraded). I assume that the error is either in my normalisation algorithm or in the way I manipulate the bytes, but I'm unsure of where to go next. Here's an abridged version of what I'm currently doing.
    Step 1: Find peaks in the file. Reads the full audio file and finds the highest and lowest values of buffer.get() for all AudioSamples:
    @Override
    public void onAudioSamples(IAudioSamplesEvent event) {
        IAudioSamples audioSamples = event.getAudioSamples();
        ShortBuffer buffer = audioSamples.getByteBuffer().asShortBuffer();
        short min = Short.MAX_VALUE;
        short max = Short.MIN_VALUE;
        for (int i = 0; i < buffer.limit(); ++i) {
            short value = buffer.get(i);
            min = (short) Math.min(min, value);
            max = (short) Math.max(max, value);
        }
        // assignment of min/max omitted for brevity.
        super.onAudioSamples(event);
    }
    Step 2: Normalize all values. In a loop similar to step 1, replace the buffer with normalized values, calling:
    buffer.put(i, normalize(buffer.get(i)));
    public short normalize(short value) {
        if (isBackgroundNoise(value))
            return value;
        short rawMin = // min from step 1
        short rawMax = // max from step 1
        short targetRangeMin = 1000;
        short targetRangeMax = 8000;
        int abs = Math.abs(value);
        double a = (abs - rawMin) * (targetRangeMax - targetRangeMin);
        double b = (rawMax - rawMin);
        double result = targetRangeMin + (a / b);
        // Copy the sign of value to result.
        result = Math.copySign(result, value);
        return (short) result;
    }
    Questions: Is this a valid approach for attempting to normalize an audio file? Is my math in normalize() valid? Why would this cause the file to become noisy, when a similar approach in the demo code doesn't?
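    A hedged side note on why the output likely degrades: remapping each sample's magnitude into [1000, 8000] independently is not a constant gain, so the waveform shape is distorted, which is heard as noise. Levelling quiet and loud sections differently is really dynamic-range compression and needs a gain that varies slowly over time; plain normalisation applies one constant gain to every sample. For reference, a minimal sketch of constant-gain peak normalisation (written in C++ for brevity; the arithmetic maps directly onto the ShortBuffer code above, and the 16-bit signed PCM assumption is mine):
    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <cstdlib>
    #include <vector>
    // Sketch only: scale every sample by the same factor so the loudest sample
    // reaches the target peak. Because the gain is constant, the waveform shape
    // (and therefore the sound) is preserved; only the overall level changes.
    void normalizePeak(std::vector<int16_t>& samples, double targetPeak = 0.9 * 32767.0)
    {
        int peak = 0;
        for (int16_t s : samples)
            peak = std::max(peak, std::abs(static_cast<int>(s)));
        if (peak == 0)
            return;                               // silence: nothing to do
        const double gain = targetPeak / peak;    // one gain for the whole file
        for (int16_t& s : samples) {
            double v = s * gain;
            v = std::max(-32768.0, std::min(32767.0, v));   // clamp, just in case
            s = static_cast<int16_t>(std::lrint(v));
        }
    }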

    Read the article

  • Problem with PHP; Posting Hidden Value!!?

    - by Derek
    Hi, I have a page which basically allows an admin user to create manager-type users (basically a register function), so when the values are submitted they are stored in the DB; very basic stuff. However, I have a hidden input; the reason is that I have 3 different user levels, and I have declared their identifier as an integer (e.g. 7 = manager, 8 = user, etc.). Can someone help me out with how to correctly pass this hidden value so it is stored in the database? Here is my form:
    <form id="userreg" name="userreg" method="post" action="adminadduser-process.php">
        <label>Full Name:</label>
        <input name="fullname" size="40" id="fullname" value="<?php if (isset($_POST['fullname'])); ?>"/>
        <br />
        <label>Username:</label>
        <input name="username" size="40" id="username" value="<?php if (isset($_POST['username'])); ?>"/>
        <br />
        <label>Password:</label>
        <input name="password" size="40" id="password" value="<?php if (isset($_POST['password'])); ?>"/>
        <br />
        <label>Email Address:</label>
        <input name="emailaddress" size="40" id="emailaddress" value="<?php if (isset($_POST['emailaddress'])); ?>"/>
        <br />
        <input name="userlevel" type="hidden" size="1" id="userlevel" value="<?php $_POST[5]; ?>" />
        <br />
        <input value="Add User" class="addbtn" type="submit" />
    </form></div>
    Next, here is the script that runs the query:
    <?php
    require_once "config.php";
    $fullname = $_POST['fullname'];
    $username = $_POST['username'];
    $password = $_POST['password'];
    $emailaddress = $_POST['emailaddress'];
    $userlevel = $_POST[5];
    $sql = "INSERT INTO users_tb VALUES('".$user_id."','".$fullname."','".$username."',MD5('".$password."'),'".$emailaddress."','".$userlevel."')";
    $result = mysql_query($sql, $connection) or die("MySQL Error: ".mysql_error());
    header("Location: adminhome.php");
    exit();
    ?>
    I'm basically trying to pass the hidden input with a constant value of '5' just for this form, as it will not be changed. Also, while I'm here: for some reason the 'fullname' is not stored in the DB either, although all other fields are processed fine. Any help is much appreciated! Thank you.

    Read the article

  • C++ Optimize if/else condition

    - by Heye
    I have a single line of code that consumes 25% - 30% of the runtime of my application. It is a less-than comparator for an std::set (the set is implemented with a red-black tree). It is called about 180 million times within 52 seconds.
    struct Entry {
        const float _cost;
        const long _id;
        // some other vars
        Entry(float cost, float id) : _cost(cost), _id(id) {
        }
    };
    template<class T>
    struct lt_entry : public binary_function<T, T, bool>
    {
        bool operator()(const T &l, const T &r) const
        {
            // Most readable shape
            if (l._cost != r._cost) {
                return r._cost < l._cost;
            } else {
                return l._id < r._id;
            }
        }
    };
    The entries should be sorted by cost and, if the cost is the same, by their id. I have many insertions for each extraction of the minimum. I thought about using Fibonacci heaps, but I have been told that they are theoretically nice but suffer from high constants and are pretty complicated to implement. And since insert is in O(log(n)), the runtime increase is nearly constant with large n, so I think it's okay to stick with the set. To improve performance I tried to express it in different shapes:
    return l._cost < r._cost || r._cost > l._cost || l._id < r._id;
    return l._cost < r._cost || (l._cost == r._cost && l._id < r._id);
    Even this:
    typedef union {
        float _f;
        int _i;
    } flint;
    //...
    flint diff;
    diff._f = (l._cost - r._cost);
    return (diff._i && diff._i >> 31) || l._id < r._id;
    But the compiler seems to be smart enough already, because I haven't been able to improve the runtime. I also thought about SSE, but this problem is really not very applicable to SSE... The assembly looks somewhat like this:
    movss  (%rbx),%xmm1
    mov    $0x1,%r8d
    movss  0x20(%rdx),%xmm0
    ucomiss %xmm1,%xmm0
    ja     0x410600 <_ZNSt8_Rb_tree[..]+96>
    ucomiss %xmm0,%xmm1
    jp     0x4105fd <_ZNSt8_Rb_[..]_+93>
    jne    0x4105fd <_ZNSt8_Rb_[..]_+93>
    mov    0x28(%rdx),%rax
    cmp    %rax,0x8(%rbx)
    jb     0x410600 <_ZNSt8_Rb_[..]_+96>
    xor    %r8d,%r8d
    I have a tiny bit of experience with assembly language, but not much. I thought it would be the best (only?) point to squeeze out some performance, but is it really worth the effort? Can you see any shortcuts that could save some cycles? The platform the code will run on is Ubuntu 12 with gcc 4.6 (-std=c++0x) on a many-core Intel machine. The only libraries available are Boost, OpenMP and TBB. I am really stuck on this one; it seems so simple, but takes that much time. I have been crunching my head for days thinking about how I could improve this line... Can you give me a suggestion on how to improve this part, or is it already at its best?
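    One compact rewrite worth noting (a sketch, and explicitly not a guaranteed speed-up; only the profiler can say): the two-key comparison can be folded into a single lexicographic compare with std::tie. The hypothetical lt_entry_tie below orders exactly like the original comparator, descending by _cost and ascending by _id on ties, and hands the optimiser the whole condition as one expression.
    #include <tuple>
    template<class T>
    struct lt_entry_tie
    {
        bool operator()(const T& l, const T& r) const
        {
            // r._cost < l._cost is tested first (descending cost); _id breaks
            // ties only when the costs compare equal, ascending.
            return std::tie(r._cost, l._id) < std::tie(l._cost, r._id);
        }
    };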

    Read the article

  • Count specific things within code in C++

    - by shap
    Can anyone help me make this more generalised and more professional?
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>
    using namespace std;
    int main()
    {
        // open text file for input:
        string file_name;
        cout << "please enter file name: ";
        cin >> file_name;
        // associate the input file stream with a text file
        ifstream infile(file_name.c_str());
        // error checking for a valid filename
        if ( !infile ) {
            cerr << "Unable to open file " << file_name << " -- quitting!\n";
            return( -1 );
        }
        else
            cout << "\n";
        // some data structures to perform the function
        vector<string> lines_of_text;
        string textline;
        // read in text file, line by line
        while (getline( infile, textline, '\n' )) {
            // add the new element to the vector
            lines_of_text.push_back( textline );
            // print the 'back' vector element - see the STL documentation
            cout << lines_of_text.back() << "\n";
        }
        cout << "OUTPUT BEGINS HERE: " << endl << endl;
        cout << "the total capacity of vector: lines_of_text is: " << lines_of_text.capacity() << endl;
        int PLOC = (lines_of_text.size()+1);
        int numbComments = 0;
        int numbClasses = 0;
        cout << "\nThe total number of physical lines of code is: " << PLOC << endl;
        // Reads each line of the vector and counts it as a comment line if it
        // contains "//". The "< 100" check relies on no line being 100 characters
        // long, because find() returns string::npos (a huge value) when the
        // substring is not present.
        for (int i = 0; i < (PLOC-1); i++)
        {
            string temp(lines_of_text [i]);
            if (temp.find("//") < 100)
                numbComments += 1;
        }
        cout << "The total number of comment lines is: " << numbComments << endl;
        for (int j = 0; j < (PLOC-1); j++)
        {
            string temp(lines_of_text [j]);
            if (temp.find("};") < 100)
                numbClasses += 1;
        }
        cout << "The total number of classes is: " << numbClasses << endl;
        return 0;
    }
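    As a starting point for the "more generalised" part (a sketch of the idea, not a drop-in rewrite of the program above): comparing find() against std::string::npos removes the no-line-is-100-characters assumption, and both counting loops collapse into one hypothetical helper.
    #include <string>
    #include <vector>
    // Sketch: count how many lines contain a given token, using npos rather than
    // a magic length threshold, so lines of any length are handled correctly.
    static int countLinesContaining(const std::vector<std::string>& lines,
                                    const std::string& token)
    {
        int count = 0;
        for (std::vector<std::string>::size_type i = 0; i < lines.size(); ++i) {
            if (lines[i].find(token) != std::string::npos)
                ++count;
        }
        return count;
    }
    // Usage with the vector from the question:
    //   int numbComments = countLinesContaining(lines_of_text, "//");
    //   int numbClasses  = countLinesContaining(lines_of_text, "};");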

    Read the article

< Previous Page | 79 80 81 82 83 84 85 86 87  | Next Page >