Search Results

Search found 20929 results on 838 pages for 'check mk'.


  • how to check log in mysql

    - by joe
    How would I monitor a MySQL server to detect SELECT statements that are running slowly? And having identified a poorly performing SELECT, how would I analyse it with a view to improving it?
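
    One common starting point (a sketch only, assuming MySQL 5.1 or later and a user with the SUPER privilege; the threshold, log path and table names are illustrative) is to enable the slow query log and then inspect the offending statement with EXPLAIN:

        -- Log any statement that takes longer than one second.
        SET GLOBAL slow_query_log = 'ON';
        SET GLOBAL long_query_time = 1;
        SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';

        -- For a SELECT found in the log, look at its execution plan
        -- (customers/orders are made-up tables for illustration):
        EXPLAIN SELECT c.name, COUNT(*)
        FROM customers c JOIN orders o ON o.customer_id = c.id
        GROUP BY c.name;

    The EXPLAIN output shows which indexes (if any) are used and roughly how many rows each step examines, which usually points to the missing index or the join that needs rewriting.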

    Read the article

  • How would you refactor nested IF Statements?

    - by saunderl
    I was cruising around the programming blogosphere when I happened upon this post about GOTOs: http://giuliozambon.blogspot.com/2010/12/programmers-tabu.html

    Here the writer talks about how "one must come to the conclusion that there are situations where GOTOs make for more readable and more maintainable code" and then goes on to show an example similar to this:

        if (Check#1) {
            CodeBlock#1
            if (Check#2) {
                CodeBlock#2
                if (Check#3) {
                    CodeBlock#3
                    if (Check#4) {
                        CodeBlock#4
                        if (Check#5) {
                            CodeBlock#5
                            if (Check#6) {
                                CodeBlock#6
                                if (Check#7) {
                                    CodeBlock#7
                                } else {
                                    rest - of - the - program
                                }
                            }
                        }
                    }
                }
            }
        }

    The writer then proposes that using GOTOs would make this code much easier to read and maintain. I personally can think of at least 3 different ways to flatten it out and make this code more readable without resorting to flow-breaking GOTOs. Here are my two favorites.

    1 - Nested small functions. Take each if and its code block and turn it into a function. If the boolean check fails, just return. If it passes, call the next function in the chain. (Boy, that sounds a lot like recursion; could you do it in a single loop with function pointers?)

    2 - Sentinel variable. To me this is the easiest. Just use a blnContinueProcessing variable and check whether it is still true in your if check. If the check fails, set the variable to false.

    How many different ways can this type of coding problem be refactored to reduce nesting and increase maintainability?
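
    For illustration, a third variant often suggested for this shape of code is the guard-clause style: invert each check and return early, which removes the nesting entirely without a GOTO. This is a sketch in the same pseudo-code notation as the example above (Check#/CodeBlock# and the function name are placeholders, not real code):

        ProcessRequest() {
            if (!Check#1) return;
            CodeBlock#1
            if (!Check#2) return;
            CodeBlock#2
            ...
            if (!Check#7) { rest - of - the - program; return; }
            CodeBlock#7
        }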

    Read the article

  • When modeling a virtual circuit board, what is the best design pattern to check for cycles?

    - by Wallace Brown
    To make it simple, assume you have only AND and OR gates. Each has two inputs and one output, and the output of one gate can be used as an input to the next gate. For example:

        A AND B -> E
        C AND D -> F
        E OR F  -> G

    Assuming an arbitrary number of gates, we want to check whether the circuit ever connects back into itself at an earlier state. For example:

        E AND F -> A

    This should be illegal since it creates an endless cycle. What design pattern would best be able to check for these cycles?
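
    Not so much a design pattern as a graph algorithm, but one common way to implement the check is to treat each gate as a node in a directed graph (an edge points from a gate to every gate that consumes its output) and run a depth-first search: an edge back to a node that is still on the DFS stack means there is a cycle. A minimal sketch, assuming the circuit is represented as an adjacency dict of gate names (the representation is an illustrative assumption):

        def has_cycle(graph):
            # graph: {gate: [gates it feeds into]}
            WHITE, GREY, BLACK = 0, 1, 2              # unvisited, on the DFS stack, finished
            colour = {node: WHITE for node in graph}

            def visit(node):
                colour[node] = GREY
                for succ in graph.get(node, []):
                    if colour.get(succ, WHITE) == GREY:        # back edge -> cycle
                        return True
                    if colour.get(succ, WHITE) == WHITE and visit(succ):
                        return True
                colour[node] = BLACK
                return False

            return any(colour[n] == WHITE and visit(n) for n in graph)

        # The example from the question: G = (A AND B) OR (C AND D) has no cycle,
        # but feeding a later output back into A does.
        print(has_cycle({'A': ['E'], 'B': ['E'], 'C': ['F'], 'D': ['F'], 'E': ['G'], 'F': ['G']}))  # False
        print(has_cycle({'A': ['E'], 'B': ['E'], 'E': ['A']}))                                      # True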

    Read the article

  • Is there a way to check if redistributed code has been altered?

    - by onlineapplab.com
    I would like to redistribute my app (PHP) in a way that the user gets the front-end (presentation) layer, which uses the API on my server through a web service. I want the user to be able to alter his part of the app, but at the same time exclude such an altered app from normal support and offer support on a pay-by-the-hour basis. Is there a way to check if the source code was altered? The only solution I can think of would be to take checksums of all the files, send them through my API and compare them with the original app. Is there any more secure way to do it, so it would be harder for the user to break such protection?
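
    A minimal sketch of that checksum idea (PHP 5.3.6+; the directory path is illustrative, and the API call is left as a comment because the endpoint is not defined here): hash every distributed file and send the manifest to the vendor's API to be compared against the hashes recorded for the released version.

        <?php
        // Build a checksum manifest of every PHP file under the app directory.
        $appDir = '/path/to/app';                       // illustrative path
        $manifest = array();
        $it = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($appDir));
        foreach ($it as $file) {
            if ($file->isFile() && $file->getExtension() === 'php') {
                $relative = substr($file->getPathname(), strlen($appDir) + 1);
                $manifest[$relative] = hash_file('sha256', $file->getPathname());
            }
        }
        // POST $manifest to the vendor API and diff it there against the original hashes.
        echo json_encode($manifest);

    Note that this only detects changes; a user who controls the client code can also alter the code that computes or sends the hashes, so it raises the bar rather than providing real protection.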

    Read the article

  • How to avoid or minimise use of check/conditional statement in my scenario?

    - by Muneeb Nasir
    I have a scenario where I receive a stream and need to check for certain values. If I get a new value, I have to store it in some data structure. It seems very easy: I can place a conditional statement (if-else) or use the contains method of a set/map to check whether the received value is new. But the problem is that this checking will affect my application's performance; the stream delivers hundreds of values per second, and if I start checking each and every value I receive, it will surely hurt performance. Can anybody suggest a mechanism or algorithm to solve this, either by bypassing the checks or at least minimising them?
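
    For what it is worth, a hash-based set already combines the membership test and the insert in a single O(1) operation, so the "check" costs little more than the store itself. A minimal Java sketch (class, method and value types are illustrative):

        import java.util.HashSet;
        import java.util.Set;

        public class StreamDeduplicator {
            private final Set<String> seen = new HashSet<String>();

            // Called for every value arriving on the stream.
            public void onValue(String value) {
                // add() returns true only if the value was not already present,
                // so the membership check and the store are one hash lookup.
                if (seen.add(value)) {
                    process(value);           // handle genuinely new values only
                }
            }

            private void process(String value) { /* store or forward the new value */ }
        }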

    Read the article

  • Check if the vector is behind another or maybe opposite directions?

    - by Gilson
    I'm doing a network game, and on the client side I interpolate the client position with the extrapolated position sent by the server. The client has its own physics simulation, which is corrected by the server in steps. The problem is that when it lags and I 'kick' the ball, the server gets a delayed message and sends me a position that is behind the client's current position, which makes the ball go back and forth. I want to ignore those messages and maybe compensate for that on the server, though I'm not sure. The problem is that the clock difference in those cases is 0.07 ms or 0.10 ms, which isn't high enough to simply ignore the message, I guess. When I get the server position, I extrapolate with the clock interval * serverBallVelocity. Can I check whether the new server ball position is behind my actual ball position vector? I tried using the dot product after normalising the two vectors to check if they are opposite, but it isn't working properly. Any suggestions on checking that?
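
    One sketch of such a check (the Vec2 struct and field names are placeholders, not the poster's engine types): "behind" only makes sense relative to the direction of travel, so instead of comparing the two position vectors directly, project the offset from the client position to the server position onto the ball's current velocity; a negative dot product means the server position lies behind the client position.

        struct Vec2 { float x, y; };

        static float dot(const Vec2& a, const Vec2& b) { return a.x * b.x + a.y * b.y; }

        // True when serverPos lies behind clientPos relative to the ball's velocity.
        bool isBehind(const Vec2& clientPos, const Vec2& serverPos, const Vec2& clientVel)
        {
            const Vec2 offset = { serverPos.x - clientPos.x, serverPos.y - clientPos.y };
            const float speed2 = dot(clientVel, clientVel);
            if (speed2 < 1e-6f)            // ball is (almost) at rest: "behind" is undefined
                return false;
            return dot(offset, clientVel) < 0.0f;
        }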

    Read the article

  • Is it okay for an application to check for automatic updates in less than 20 hour interval?

    - by FlameStream
    I have a desktop application that can automatically update itself on the next restart (without the user's consent, but that is another issue altogether). Assuming that the user would never notice anything related to application updating (such as a progress bar or a pop-up requiring a restart), and that our server could support the request load, is there any reason why it should not check for updates at intervals of less than 20 hours? I'm asking because every application I know of with auto-update capability checks for updates every 20 to 24 hours and at startup. I was just wondering whether there is an ethical rule about it, or whether it is simply because of the risk of overloading the server.

    Read the article

  • Configure PERL DBI and DBD in Linux

    - by Balualways
    I am new to Perl and I work on a Linux OEL 5.x server. I am trying to configure the Perl DB modules for Oracle connectivity (the DBD and DBI modules). Can anyone help me out with the installation procedure? I tried CPAN, but it didn't really work out. Any help would be appreciated. I am not quite sure whether I need to initialize any variables other than $LD_LIBRARY_PATH and $ORACLE_HOME. These are my observations:

    ISSUE: I am getting the following error while using the DBI module to connect to Oracle:

        install_driver(Oracle) failed: Can't locate loadable object for module DBD::Oracle in @INC
        (@INC contains: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/5.8.8 .) at (eval 3) line 3
        Compilation failed in require at (eval 3) line 3.
        Perhaps a module that DBD::Oracle requires hasn't been fully installed at connectdb.pl line 57

    I had installed the DBD for Oracle from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DBD/DBD-Oracle-1.50. Could you please take a look at the steps and correct me if I am wrong?

    Observations:

        $ echo $LD_LIBRARY_PATH
        /opt/CA/UnicenterAutoSysJM/autosys/lib:/opt/CA/SharedComponents/Csam/SockAdapter/lib:/opt/CA/SharedComponents/ETPKI/lib:/opt/CA/CAlib
        $ echo $ORACLE_HOME
        /usr/local/oracle/ORA

    This is how I tried to install the DBD module:

    1. Download the file DBD 1.50 for Oracle
    2. Copy to /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DBD
    3. Untar and run Makefile.PL

    Message:

        Using DBI 1.52 (for perl 5.008008 on x86_64-linux-thread-multi) installed in /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi/auto/DBI/
        Configuring DBD::Oracle for perl 5.008008 on linux (x86_64-linux-thread-multi)
        Remember to actually *READ* the README file! Especially if you have any problems.
        Installing on a linux, Ver#2.6
        Using Oracle in /opt/oracle/product/10.2
        DEFINE _SQLPLUS_RELEASE = "1002000400" (CHAR)
        Oracle version 10.2.0.4 (10.2)
        Found /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk
        Found /opt/oracle/product/10.2/rdbms/demo/demo_rdbms64.mk
        Found /opt/oracle/product/10.2/rdbms/lib/ins_rdbms.mk
        Using /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk
        Your LD_LIBRARY_PATH env var is set to '/usr/local/oracle/ORA/lib:/usr/dt/lib:/usr/openwin/lib:/usr/local/oracle/ORA/ows/cartx/wodbc/1.0/util/lib:/usr/local/oracle/ORA/lib:/usr/local/sybase/OCS-12_0/lib:/usr/local/sybase/lib:/home/oracle/jdbc/jdbcoci73/lib:./'
        WARNING: Your LD_LIBRARY_PATH env var doesn't include '/opt/oracle/product/10.2/lib' but probably needs to.
        Reading /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk
        Reading /usr/local/oracle/ORA/rdbms/lib/env_rdbms.mk
        Attempting to discover Oracle OCI build rules
        sh: make: command not found
        by executing: [make -f /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk build ECHODO=echo ECHO=echo GENCLNTSH='echo genclntsh' CC=true OPTIMIZE= CCFLAGS= EXE=DBD_ORA_EXE OBJS=DBD_ORA_OBJ.o]
        WARNING: Oracle build rule discovery failed (32512)
        Add path to make command into your PATH environment variable.
        Oracle oci build prolog: [sh: make: command not found]
        Oracle oci build command: []
        WARNING: Unable to interpret Oracle build commands from /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk.
        (Will continue by using fallback approach.)
        Please report this to [email protected]. See README for what to include.
        Found header files in /opt/oracle/product/10.2/rdbms/public.
        client_version=10.2
        DEFINE= -Wall -Wno-comment -DUTF8_SUPPORT -DORA_OCI_VERSION=\"10.2.0.4\" -DORA_OCI_102
        Checking for functioning wait.ph
        System: perl5.008008 linux ca-build9.us.oracle.com 2.6.20-1.3002.fc6xen #1 smp thu apr 30 18:08:39 pdt 2009 x86_64 x86_64 x86_64 gnulinux
        Compiler: gcc -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -Wdeclaration-after-statement -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm
        Linker: not found
        Sysliblist: -ldl -lm -lpthread -lnsl -lirc
        Oracle makefiles would have used these definitions but we override them:
        CC: cc
        CFLAGS: $(GFLAG) $(OPTIMIZE) $(CDEBUG) $(CCFLAGS) $(PFLAGS)\ $(SHARED_CFLAG) $(USRFLAGS)
        [$(GFLAG) -O3 $(CDEBUG) -m32 $(TRIGRAPHS_CCFLAGS) -fPIC -I/usr/local/oracle/ORA/rdbms/demo -I/usr/local/oracle/ORA/rdbms/public -I/usr/local/oracle/ORA/plsql/public -I/usr/local/oracle/ORA/network/public -DLINUX -D_GNU_SOURCE -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -DSLTS_ENABLE -DSLMXMX_ENABLE -D_REENTRANT -DNS_THREADS -fno-strict-aliasing $(LPFLAGS) $(USRFLAGS)]
        build: $(CC) $(ORALIBPATH) -o $(EXE) $(OBJS) $(OCISHAREDLIBS)
        [ cc -L$(LIBHOME) -L/usr/local/oracle/ORA/rdbms/lib/ -o $(EXE) $(OBJS) -lclntsh $(EXPDLIBS) $(EXOSLIBS) -ldl -lm -lpthread -lnsl -lirc -ldl -lm $(USRLIBS) -lpthread]
        LDFLAGS: $(LDFLAGS32)
        [-m32 -o $@ -L/usr/local/oracle/ORA/rdbms//lib32/ -L/usr/local/oracle/ORA/lib32/ -L/usr/local/oracle/ORA/lib32/stubs/]
        Linking with /usr/local/oracle/ORA/rdbms/lib/defopt.o -lclntsh -ldl -lm -lpthread -lnsl -lirc -ldl -lm -lpthread [from $(DEF_OPT) $(OCISHAREDLIBS)]
        Checking if your kit is complete...
        Looks good
        LD_RUN_PATH=/usr/local/oracle/ORA/lib
        Using DBD::Oracle 1.50.
        Using DBD::Oracle 1.50.
        Using DBI 1.52 (for perl 5.008008 on x86_64-linux-thread-multi) installed in /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi/auto/DBI/
        Writing Makefile for DBD::Oracle
        Writing MYMETA.yml and MYMETA.json
        *** If you have problems... read all the log printed above, and the README and README.help.txt files.
        (Of course, you have read README by now anyway, haven't you?)
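
    For what it's worth, the Makefile.PL output above twice reports "sh: make: command not found", so a typical prerequisite sketch for a DBD::Oracle build on a system like this would be the following (package names and the test step are assumptions; ORACLE_HOME is taken from the observations above):

        # Build tools must be on PATH before running Makefile.PL (the log shows make was missing).
        sudo yum install -y make gcc

        # Point the build at the Oracle client and its libraries.
        export ORACLE_HOME=/usr/local/oracle/ORA
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

        perl Makefile.PL
        make
        make test      # typically needs ORACLE_USERID=user/password and a reachable database
        make install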

    Read the article

  • How to check if windows has automatic updates waiting from the command line?

    - by Lee
    If a Windows server (2003/2008) is set to "download but not install" updates, is there a way to check from the command line, perhaps a log file or something I can query with PowerShell, to see whether there are any updates actually waiting to be installed? I'm trying to avoid having to log on to each server manually to check, even though they want the "trigger" pulled manually. We have an automation system in place I can use (CA AutoSys); I'm just not sure what to have it look for.
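
    One possible approach (a sketch using the Windows Update Agent COM API, assuming PowerShell is installed on the server and the script runs with local administrator rights) is to ask the agent how many applicable updates have not yet been installed:

        # Count updates that have been detected but not yet installed.
        $session  = New-Object -ComObject Microsoft.Update.Session
        $searcher = $session.CreateUpdateSearcher()
        $pending  = $searcher.Search("IsInstalled=0 and IsHidden=0")

        "{0} update(s) waiting to be installed" -f $pending.Updates.Count
        $pending.Updates | ForEach-Object { $_.Title }

    The count (or a non-zero exit code derived from it) is something an automation system such as AutoSys could poll for.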

    Read the article

  • Why does integrity check fail for the 12.04.1 Alternate ISO?

    - by mghg
    I have followed various recommendations from the Ubuntu Documentation to create a bootable Ubuntu USB flash drive using the 12.04.1 Alternate install ISO file for 64-bit PC, but the integrity test of the USB stick has failed and I do not see why. These are the steps I took:

    1. Download of the 12.04.1 Alternate install ISO file for 64-bit PC (ubuntu-12.04.1-alternate-amd64.iso) from http://releases.ubuntu.com/12.04.1/, as well as the MD5, SHA-1 and SHA-256 hash files and related PGP signatures
    2. Verification of the data integrity of the ISO file using the MD5, SHA-1 and SHA-256 hash files, after having verified the hash files using the related PGP signature files (see e.g. https://help.ubuntu.com/community/HowToSHA256SUM and https://help.ubuntu.com/community/VerifyIsoHowto)
    3. Creation of a bootable USB stick using Ubuntu's Startup Disk Creator program (see http://www.ubuntu.com/download/help/create-a-usb-stick-on-ubuntu)
    4. Boot of my computer using the newly made 12.04.1 Alternate install USB stick
    5. Selection of the option "Check disc for defects" (see https://help.ubuntu.com/community/Installation/CDIntegrityCheck)

    Steps 1, 2, 3 and 4 went without any problem or error messages. However, step 5 ended with an error message entitled "Integrity test failed" and with the following content:

        The ./install/netboot/ubuntu-installer/amd64/pxelinux.cfg/default file failed the MD5 checksum verification. Your CD-ROM or this file may have been corrupted.

    I have experienced the same error message (it might only be similar, since I have no exact notes) in previous attempts using the 12.04 (i.e. not the maintenance release) Alternate install ISO file. In those cases I tried to install anyway and have so far not experienced any problems, to my knowledge. Is the failed integrity check described above a serious error? What is the solution? Or can it be ignored without further problems?

    Read the article

  • PCI-e SEALEVEL dual serial card not recognized on Ubuntu 14.04

    - by kwhunter
    New to Ubuntu, running 14.04. I am having trouble with the serial port setup: I have a plug-in PCI-e Sealevel dual serial card that is not recognized.

        user1@WSIWORKSTATION2:~$ dmesg | grep tty
        [ 0.000000] console [tty0] enabled
        [ 1.577197] 00:06: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
        [ 1.943326] tty ttyprintk: hash matches
        [ 17.240880] ttyS4: LSR safety check engaged!
        [ 17.241589] ttyS4: LSR safety check engaged!
        [ 17.242354] ttyS5: LSR safety check engaged!
        [ 17.243058] ttyS5: LSR safety check engaged!
        [ 17.243918] ttyS6: LSR safety check engaged!
        [ 17.244485] ttyS6: LSR safety check engaged!
        [ 17.245195] ttyS7: LSR safety check engaged!
        [ 17.245830] ttyS7: LSR safety check engaged!
        [ 17.246554] ttyS8: LSR safety check engaged!
        [ 17.247191] ttyS8: LSR safety check engaged!

        user1@WSIWORKSTATION2:~/Desktop/seacom$ lspci -d 135e: -vvv
        02:04.0 Bridge: Sealevel Systems Inc Device e205 (rev aa)
            Subsystem: Sealevel Systems Inc Device e205
            Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
            Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium TAbort- SERR-
            Capabilities:

    However, the manufacturer's drivers won't install:

    1.  user1@WSIWORKSTATION2:~/Desktop/seacom$ make install
        ----------------------------------------------------------------
        Installing seacom suite.
        ----------------------------------------------------------------
        --Installing utilities--
        /usr/bin/ld: skipping incompatible ../../lib/libftd2xx.a when searching for -lftd2xx
        /usr/bin/ld: cannot find -lftd2xx
        collect2: error: ld returned 1 exit status
        make[2]: *** [setusb] Error 1
        make[1]: *** [install] Error 1
        make: *** [install] Error 1

    2.  user1@WSIWORKSTATION2:~/Desktop/seaio$ make install
        ----------------------------------------------------------------
        Installing SD suite.
        ----------------------------------------------------------------
        *Compiling SeaIO Library source file: digitalIoDevice.cpp
        gcc: error trying to exec 'cc1plus': execvp: No such file or directory
        make[1]: *** [digitalIoDevice.o] Error 1
        make: *** [install] Error 1

    I was not able to find a solution, so if anyone can help it will be much appreciated. PS: I had some trouble formatting my post, apologies...
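
    Not a full answer, but the second failure looks environmental: cc1plus is the C++ compiler front end, and that error normally means g++ is not installed. The first failure suggests the bundled libftd2xx.a does not match the machine's architecture (the linker skips it as "incompatible"). A sketch of the usual first step on 14.04 (package names as commonly used; verify against the vendor's README):

        # g++ provides cc1plus, which the SeaIO build needs to compile digitalIoDevice.cpp
        sudo apt-get update
        sudo apt-get install build-essential g++

        # then retry the vendor build
        cd ~/Desktop/seaio && sudo make install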

    Read the article

  • What is the best practice for when to check if something needs to be done?

    - by changokun
    Let's say I have a function that does x. I pass it a variable, and if the variable is not null, it does some action. And I have an array of variables and I'm going to run this function on each one.

    Inside the function, it seems like good practice is to check if the argument is null before proceeding. A null argument is not an error, it just causes an early return. I could loop through the array and pass each value to the function, and the function will work great.

    Is there any value in checking if the var is null and only calling the function if it is not null during the loop? This doubles up on the checking for null, but: Is there any gained value? Is there any gain in not calling a function? Any readability gain on the loop in the parent code?

    For the sake of my question, let's assume that checking for null will always be the case. I can see how checking for some object property might change over time, which makes the first check a bad idea.

    Pseudo code example:

        for(thing in array) {
            x(thing)
        }

    Versus:

        for(thing in array) {
            if(thing not null) x(thing)
        }

    If there are language-specific concerns, I'm a web developer working in PHP and JavaScript.

    Read the article

  • What is the best way to check if there is overlap between player and static, non-collidable items in bullet physic engine

    - by tigrou
    I'd like to add non-collidable objects (e.g. power-ups, items, ...) to a game world using the Bullet Physics Engine and to know when there is a collision between the player and them. Some info: there are a lot of items (~1000), all are box shapes and they don't overlap. Here is what I have tried:

        btDbvt* bvtItems = new btDbvt();    // btDbvt is a hierarchical AABB tree, used by Bullet
        foreach(var item ...)
        {
            btDbvtVolume volume = ...       // compute item AABB
            bvtItems->insert(volume, (void*)someExtraData);
        }

    Then, to find collisions between items and the player:

        playerRigidBody->getAabb(min, max);
        btDbvtVolume playervolume = ...     // compute player AABB
        bvtItems->collideTV(bvtItems->m_root, playervolume, *someCollisionHandler);

    This works fairly well (and it's very fast); however, there is a problem: it only checks the items' AABBs against the player's AABB. That loss of precision is acceptable for the items, but not for the player, which is not a box. It would actually need another check to make sure the player really collides with the item, but I don't know how to do this in Bullet. It would have been nice to have a function like this:

        playerRigidBody->checkCollisionWithAABB();

    After trying that, I discovered that btGhostObject exists and seems to have been made for this. I changed my code like this:

        foreach(var item...)
        {
            btCollisionObject* ghostObject = new btGhostObject();
            ghostObject->setCollisionShape(boxShape);
            ghostObject->setCollisionFlags(ghostObject->getCollisionFlags() | btCollisionObject::CF_NO_CONTACT_RESPONSE);
            startTransform.setOrigin(...);  // item position
            ghostObject->setWorldTransform(startTransform);
            dynamicsWorld->addCollisionObject(ghostObject, btBroadphaseProxy::SensorTrigger, btBroadphaseProxy::CharacterFilter);
        }

    It also works OK, but there is a huge FPS drop (almost ten times slower), which is not acceptable. Maybe there is something missing (a flag I forgot to set) and Bullet is doing extra work for nothing, or maybe all those ghost objects are polluting the broad phase and btGhostObject is not the right tool for this. Any help would be appreciated.

    Read the article

  • Is there an open source version check library and web app?

    - by user52485
    I'm a developer for a cross platform (Win, MacOS, Linux) open source C++ application. I would like to have the program occasionally check for the latest version from our web site. Between the security, privacy, and cross platform network issues, I'd rather not roll our own solution. It seems like this is a common enough thing that there 'ought' to be a library/app which will do this. Unfortunately, the searches I've tried come up empty. Ideally, the web app would track requests and process the logs into some nice reports (number of users, what version, what platform, frequency of use, maybe even geographical info from IP address, etc.). While appropriately respecting privacy, etc. What pre-existing tools can help solve this problem? Edits: I am looking for a reporting tool, not a dependency checker. Our project has the challenge of keeping up with our users. Most do not join the mailing list. Our project has not been picked up by major distributions -- most of our users are Windows/MacOS anyway. When a new version comes out, we have no way of informing our users of its existence. Development is moving pretty fast, major features added every few months. We would like to provide the user with a way to check for an updated version. While we're at it, we would like to use these requests for some simple & anonymous usage tracking (X users running version Y with Z frequency, etc.). We do not need/want something that auto-updates or tracks dependencies on the system. We are not currently worried about update size -- when the user chooses to update, we expect them to download the complete latest version. We would like to keep this as simple as possible.
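
    As a rough illustration of the client side of such a check (a sketch only: the URL, file format and version strings are made-up assumptions, not an existing service, and it is shown in Python purely for brevity even though the application itself is C++), the program can periodically fetch a small text file and compare it with its own version; that same HTTP request is what the web app would log to build the usage reports:

        import urllib.request

        CURRENT_VERSION = "1.4.2"                                        # version baked into this build (illustrative)
        VERSION_URL = "https://example.org/myapp/latest_version.txt"     # hypothetical endpoint

        def update_available(timeout=5):
            """Return the newer version string if one is published, else None."""
            try:
                with urllib.request.urlopen(VERSION_URL, timeout=timeout) as resp:
                    latest = resp.read().decode("ascii").strip()
            except OSError:
                return None                                              # offline or server down: fail quietly

            def key(v):                                                  # "1.10.0" > "1.9.9" when compared numerically
                return [int(part) for part in v.split(".")]

            return latest if key(latest) > key(CURRENT_VERSION) else None

        if __name__ == "__main__":
            print(update_available())

    On the server side, the request's query string (or user agent) can carry the version and platform, and ordinary web-server logs plus a log-analysis step would give the per-version and per-platform counts described above.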

    Read the article

  • Check if a String is a double or an int?

    - by user69514
    I have a string for a Date in the form mm/dd, and I need to check if either the month or the day was entered as a double.

        public Date(String dateStr){
            int slash = 0;
            //check slash is present
            try{
                slash = dateStr.indexOf('/');
            }catch(StringIndexOutOfBoundsException e){
                error = "Invalid date format: " + dateStr;
            }
            //check if month is a number
            try{
                month = Integer.parseInt(dateStr.substring(0, slash));
                //day = Integer.parseInt(dateStr.substring(slash + 1, dateStr.length()));
            } catch(NumberFormatException e){
                System.out.println("Invalid format for input string: " + dateStr.substring(0, slash));
            }
            //check if day is a number
            try{
                day = Integer.parseInt(dateStr.substring(slash + 1, dateStr.length()));
            } catch(NumberFormatException e){
                System.out.println("Invalid format for input string: " + dateStr.substring(slash + 1, dateStr.length()));
            }
            //check if month was entered as a double
        }
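
    One way to fill in that last step (a sketch, not the asker's code; the helper name is made up): try the integer parse first, and only if it fails see whether the token parses as a double, which would mean something like "3.5" was typed for the month or day.

        // Returns true if the token is a valid double but not a valid int (e.g. "3.5").
        private static boolean enteredAsDouble(String token) {
            try {
                Integer.parseInt(token);
                return false;                 // plain integer, nothing to complain about
            } catch (NumberFormatException notAnInt) {
                try {
                    Double.parseDouble(token);
                    return true;              // parses only as a double, e.g. "3.5"
                } catch (NumberFormatException notANumber) {
                    return false;             // not numeric at all
                }
            }
        }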

    Read the article

  • Why it is necessary to put comments on check-ins? [closed]

    - by Mik Kardash
    In fact, I always have something to put in when I perform a check-in of my code. However, my question is: is it really so necessary? Does it help that much, and how? From one point of view, comments can help you keep track of the changes made in every check-in, so I will be able to analyze the changes and identify a hypothetical problem a little more quickly. On the other hand, it takes some time to write useful information into a check-in. Is it worth it? What are the pros and cons of writing comments on every check-in? Is there any way to write "efficient" check-in comments?

    Read the article

  • Android CheckBox -- Restoring State After Screen Rotation

    - by Jared M
    I have come across some very unexpected (and incredibly frustrating) behavior while trying to restore the state of a list of CheckBoxes after a screen rotation. I figured I would first try to give a textual explanation without the code, in case someone is able to determine a solution without all the gory details. If anyone needs more details I can post the code.

    I have a scrolling list of complex Views that contain CheckBoxes. I have been unsuccessful in restoring the state of these check boxes after a screen rotation. I have implemented onSaveInstanceState and have successfully transferred the list of selected check boxes to the onCreate method. This is handled by passing a long[] of database ids in the Bundle. In onCreate() I check the Bundle for the array of ids. If the array is there, I use it to determine which check boxes to check while the list is being built. I have created a number of test methods and have confirmed that the check boxes are being set correctly, based on the id array. As a last check I inspect the states of all check boxes at the very end of onCreate(). Everything looks good... unless I rotate the screen.

    When I rotate the screen, one of two things happens: 1) if any number of the check boxes are selected, except for the last one, all check boxes are off after the rotation; 2) if the last check box is checked before rotation, then all check boxes are checked after the rotation.

    Like I said, I check the state of the boxes at the very end of my onCreate(), and the state of the boxes at that point is correct based on what I selected before the rotation. However, the state of the boxes on the screen does not reflect this. In addition, I have implemented each check box's setOnCheckChangedListener() and have confirmed that my check boxes' states are being altered after my onCreate method returns.

    Anyone have an idea of what is going on? Why would the state of my check boxes change after my onCreate method returns? Thanks in advance for your help. I have been trying to debug this for a couple of days now; after I found that my check boxes were apparently changing somewhere outside my own code, I figured it was time to ask around.

    Read the article

  • How can I check if Python's ConfigParser has a section or not?

    - by Voidcode
    How can I check whether the ConfigParser main file has a section or not, and if it doesn't, add a new section to it? I am trying with this code:

        import ConfigParser

        config = ConfigParser.ConfigParser()
        config.read("/tmp/myfile.conf")   # load the existing file first, otherwise has_section() is always False

        # if the config file doesn't have the section 'globel', then add it to the config file
        if not config.has_section("globel"):
            config.add_section("globel")
            config.set("globel", "settings", "someting")
            config.write(open("/tmp/myfile.conf", "w"))
            print "The new section globel is now added!!"
        # else just return the value of globel->settings
        else:
            print "else... globel->settings == " + config.get("globel", "settings")

    Read the article

  • High traffic chat - how to check if there is new message and show it for all users

    - by user2633999
    I already asked a question about this, but it was not received very well; apparently it was too long, when it was actually just more information that could have produced a better answer. OK, I will be much clearer now.

    What is the best logic for developing a scalable chat, in terms of stability, storing and reading chat messages, updating the chat with new messages for all users, and so on? I have most of this developed; the piece I think I'm missing is checking whether there is a new message and showing it to all users. I have this implemented, but it crashes the site under its traffic of 300k-400k people, so that is my main question.

    The chat is PHP based and uses Pusher (www.pusher.com) for instant messaging, but that lacks what I need because it's more like a websocket. I'm using hardcoded files to keep messages (I want to avoid a database as much as possible); the files have no extension. I'm getting the crash with:

        $fp = fopen(..., "w"); // pretend ... is the path and filename
        fwrite($fp, $msg);     // hardcode the message
        fclose($fp);

    where $msg is the message itself. I'm using one file per message, and I show the last 150 messages, which means 150 file accesses and reads; yes, that's too much, I guess. I have better logic now which I'm pursuing: one file holding the last 50-100 messages at all times. That should be much better.

    As for how it crashes, that's the trickiest part, because everything seems ordinary. Believe me, it is difficult to determine what exactly crashes the site, but within about 5 minutes, when I try to open the site, it's gone; then I put back the old content without the chat and it is online again. I have a jQuery post every 1 second to check if there is a new message. I keep a timestamp in a special file recording when the last message was sent, and if ((time() - time in file) <= 2) I reload the last 150 messages, including the last one. Too much input/output (writes and reads), or however you say it, is what I think crashes the site.
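
    A minimal sketch of the single-file approach described above (the file path and message format are illustrative assumptions): append each message under an exclusive lock so concurrent requests don't interleave writes, and let readers fetch the recent history in one file read instead of 150.

        <?php
        // Append one chat message to the shared log file (illustrative path/format).
        function append_message($msg) {
            $fp = fopen('/var/www/chat/messages.log', 'a');    // append mode, single shared file
            if ($fp && flock($fp, LOCK_EX)) {                  // exclusive lock against concurrent writers
                fwrite($fp, time() . "\t" . str_replace("\n", ' ', $msg) . "\n");
                fflush($fp);
                flock($fp, LOCK_UN);
            }
            if ($fp) fclose($fp);
        }

        // Readers need one read per poll instead of one per message.
        function last_messages($count = 100) {
            $lines = file('/var/www/chat/messages.log', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
            return array_slice($lines, -$count);
        }

    Periodically trimming the file (or rotating it) keeps the read cheap; the existing timestamp file can still be used to decide whether a poll needs to re-read it at all.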

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate in details the following points What is static code analysis? When to use? Supported platforms Supported Visual Studio versions How to use Run Code Analysis Manually Run Code Analysis Automatically Run Code Analysis while check-in source code to TFS version control (TFSVC) Run Code Analysis as part of Team Build Understand the Code Analysis results & learn how to fix them Create your custom rule set Q & A References What is static Rule analysis? Static Code Analysis feature of Visual Studio performs static code analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and a lot of other categories of potential problems according to Microsoft's rules that mainly targets best practices in writing code, and there is a large set of those rules included with Visual Studio grouped into different categorized targeting specific coding issues like security, design, Interoperability, globalizations and others. Static here means analyzing the source code without executing it and this type of analysis can be performed through automated tools (like Visual Studio 2013 Code Analysis Tool) or manually through Code Review which already supported in Visual Studio 2012 and 2013 (check Using Code Review to Improve Quality video on Channel9) There is also Dynamic analysis which performed on executing programs using software testing techniques such as Code Coverage for example. When to use? Running Code analysis tool at regular intervals during your development process can enhance the quality of your software, examines your code for a set of common defects and violations is always a good programming practice. Adding that Code analysis can also find defects in your code that are difficult to discover through testing allowing you to achieve first level quality gate for you application during development phase before you release it to the testing team. Supported platforms .NET Framework, native (C and C++) Database applications. Support Visual Studio versions All version of Visual Studio starting Visual Studio 2013 (except Visual Studio Test Professional) check Feature comparisons Create and modify a custom rule set required Visual Studio Premium or Ultimate. How to use? Code Analysis can be run manually at any time from within the Visual Studio IDE, or even setup to automatically run as part of a Team Build or check-in policy for Team Foundation Server. Run Code Analysis Manually To run code analysis manually on a project, on the Analyze menu, click Run Code Analysis on your project or simply right click on the project name on the Solution Explorer choose Run Code Analysis from the context menu Run Code Analysis Automatically To run code analysis each time that you build a project, you select Enable Code Analysis on Build on the project's Property Page Run Code Analysis while check-in source code to TFS version control (TFSVC) Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through Check-in policies which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in. 
(This is available only if you're using Team Foundation Server) Require permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow typically your account must be part of Project Administrators, Project Collection Administrators, for more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy. Double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or Uncheck the policy option based on the configurations you need to perform as illustrated below: Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed. Enforce C/C++ Code Analysis (/analyze): Requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in. Enforce Code Analysis for Managed Code: Requires that all managed projects run code analysis and build before they can be checked in. Check Code analysis rule set reference on MSDN What is Rule Set? Rule Set is a group of code analysis rules like the example below where Microsoft.Design is the rule set name where "Do not declare static members on generic types" is the code analysis rule Once you configured the Analysis rule the policy will be enabled for all the team member in this project whenever a team member check-in any source code to the TFSVC the policy section will highlight the Code Analysis policy as below TFS is a very extensible platform so you can simply implement your own custom Code Analysis Check-in policy, check this link for more details http://msdn.microsoft.com/en-us/library/dd492668.aspx but you have to be aware also about compatibility between different TFS versions check http://msdn.microsoft.com/en-us/library/bb907157.aspx Run Code Analysis as part of Team Build With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications, and perform other important functions. Code Analysis can be enabled in the Build Definition file by selecting the correct value for the build process parameter "Perform Code Analysis" Once configure, Kick-off your build definition to queue a new build, Code Analysis will run as part of build workflow and you will be able to see code analysis warning as part of build report Understand the Code Analysis results & learn how to fix them Now after you went through Code Analysis configurations and the different ways of running it, we will go through the Code Analysis result how to understand them and how to resolve them. Code Analysis window in Visual Studio will show all the analysis results based on the rule sets you configured in the project file properties, let's dig deep into what each result item contains: 1 Check ID The unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.       
2 Title The title of warning message       3 Description A description of the problem or suggested fix 4 File Name File name and the line of code number which violate the code analysis rule set 5 Category The code analysis category for this error 6 Warning /Error Depend on how you configure it in the rule set the default is Warning level 7 Action Copy: copy the warning information to the clipboard Create Work Item: If you're connected to Team Foundation Server you can create a work item most probably you may create a Task or Bug and assign it for a developer to fix certain code analysis warning Suppress Message: There are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code. Or you might believe that the analysis that is used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning. This makes the suppression more discoverable. In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project. This can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppression.cs also targets the method that generated the warning. It does not suppress the warning globally.       Visual Studio makes it very easy to fix Code analysis warning, all you have to do is clicking on the Check Id hyperlink if you are not aware how to fix the warring and you'll be directed to MSDN online or local copy based on the configuration you did while installing Visual Studio and you will find all the information about the warring including how to fix it. Create a Custom Code Analysis Rule Set The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set, you can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Create and modify a custom rule set required Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details http://msdn.microsoft.com/en-us/library/dd264974.aspx Q & A Visual Studio static code analysis vs. FxCop vs. StyleCpp http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/ Code Analysis for SharePoint Apps and SPDisposeCheck? This post lists some of the rule set you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well. Code Analysis for SQL Server Database Projects? This post illustrate how to run static code analysis on T-SQL through SSDT ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013. 
References A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext What is New in Code Analysis for Visual Studio 2013 http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx Analyze the code quality of Windows Store apps using Visual Studio static code analysis http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx Originally posted at "Hosam Kamel| Developer & Platform Evangelist" http://blogs.msdn.com/hkamel
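
    For illustration of the in-source suppression described above, a minimal C# sketch (the class, method and justification text are made up; the rule shown, CA1062 in the Microsoft.Design category, is one of the standard rules mentioned in the post):

        using System.Diagnostics.CodeAnalysis;

        public static class OrderParser
        {
            // In-source suppression: the category plus CheckId identify the rule,
            // and Justification records why this particular warning is being ignored.
            [SuppressMessage("Microsoft.Design", "CA1062:ValidateArgumentsOfPublicMethods",
                             Justification = "Input is validated by the caller.")]
            public static int Parse(string raw)
            {
                return raw.Length;   // illustrative body only
            }
        }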

    Read the article

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    Hi, I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check for bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know a) why the file system is modified at all, and b) why this seems to happen every time I check, and not only in the case of an error (like bad blocks)? Here's the output:

        linux-box# fsck.ext3 -c /dev/sdx1
        e2fsck 1.40.2 (12-Jul-2007)
        Checking for bad blocks (read-only test): done
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
        Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks

    Thanks, Chris

    Read the article

  • How to stop jucheck from running? Java won't remember "Check for Updates Automatically" setting.

    - by Ken
    I've installed Java on Windows Vista, and every day I get a Vista security warning asking me if I want to run "jucheck". Apparently this is the Java automatic updater. Well, I don't want it to run on its own, ever. I cancel it, and quit it. I right-click on the taskbar and unclick "Check for Updates Automatically", and then click "Never Check", and "Apply". And yet, it never remembers this setting. If I come back to the "Java Control Panel" right after clicking "OK", the very same box is checked again, all on its own. Is there some way to kill jucheck once and for all? If I simply delete jucheck.exe, will Java (other than the automatic check) still work, and will manual updates still work, and will it stop even trying to update every morning?

    Read the article
