Search Results

Search found 27011 results on 1081 pages for 'build vs buy'.

Page 100/1081 | < Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >

  • Normal C++ code in Qt doesn't build and run

    - by Nick
    Hello. I am using Qt under Linux, if it matters. I ran the following successfully under Geany (a simple C++ IDE): //my first program in C++ Hello World! #include <iostream> using namespace std; int main () {cout << "Hello World!"; return 0;} I opened a Qt source file and copied the exact same code, but I can't build or run it. Thank you for your responses to this simple problem.
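    The listing above lost its include line to the site's formatting; a minimal sketch of what the intended program most likely looks like (assuming the standard <iostream> hello world) is:

        // my first program in C++: Hello World!
        #include <iostream>
        using namespace std;

        int main()
        {
            cout << "Hello World!" << endl;   // print the greeting
            return 0;
        }

    For what it's worth, in Qt Creator a plain program like this only builds cleanly as a console project; a qmake .pro file along the lines of CONFIG += console, QT -= gui and SOURCES += main.cpp (illustrative, not taken from the question) is what separates it from the default GUI project template, and pasting the code into a GUI project's existing source file is a common reason the same code suddenly refuses to build or run.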

    Read the article

  • Build comma separated string from the struct in C#

    - by acadia
    Hello, I have the following struct in a C# class: public struct Employee { public const string EMPID = "EMP_ID"; public const string FName = "FIRST_NAME"; public const string LNAME = "LAST_NAME"; public const string DEPTID = "DEPT_ID"; } Is there an easy way to build a string as follows: const string mainquery = "INSERT INTO EMP(EMP_ID,FIRST_NAME,LAST_NAME,DEPT_ID) VALUES(:EMP_ID,:FIRST_NAME,:LAST_NAME,:DEPT_ID)" instead of building the pieces as follows and then concatenating them? const string EMP_COLS = Employee.EMPID + "," + Employee.FName + "," + Employee.LNAME + "," + Employee.DEPTID; const string EMP_Values = Employee.EMPID + ":" + Employee.FName + ":" + Employee.LNAME + ":" + Employee.DEPTID;
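    One common way to avoid the hand-written concatenation is to keep the column names in a single array and let string.Join assemble both lists. A sketch based on the struct above (note the result can no longer be a const, since it is built at run time; static readonly works instead):

        using System;

        public struct Employee
        {
            public const string EMPID  = "EMP_ID";
            public const string FName  = "FIRST_NAME";
            public const string LNAME  = "LAST_NAME";
            public const string DEPTID = "DEPT_ID";
        }

        class Program
        {
            // The column list is defined exactly once.
            static readonly string[] Columns =
                { Employee.EMPID, Employee.FName, Employee.LNAME, Employee.DEPTID };

            static readonly string MainQuery =
                "INSERT INTO EMP(" + string.Join(",", Columns) + ") " +
                "VALUES(:" + string.Join(",:", Columns) + ")";

            static void Main()
            {
                // INSERT INTO EMP(EMP_ID,FIRST_NAME,LAST_NAME,DEPT_ID) VALUES(:EMP_ID,:FIRST_NAME,:LAST_NAME,:DEPT_ID)
                Console.WriteLine(MainQuery);
            }
        }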

    Read the article

  • PHP OOP build array

    - by Industrial
    Hi! If I needed to build up an array with OOP-based PHP, would this be the proper way to do it? class MyClass { $array = array(); function addElement($value) { $this->array[] = $value; } function fetch() { $return = $this->memcached->getMulti($this->array); return $return; } } PHP file where it will be used: <?php $this->myClass->addElement('key1'); $this->myClass->addElement('key1'); $this->myClass->addElement('key1'); $var = $this->myClass->fetch(); Thanks a lot
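    As posted, the class won't parse: a property needs a visibility keyword (or var), the methods should be declared with a visibility as well, and $this->memcached is used without ever being set. A corrected sketch, assuming the Memcached client is handed in through the constructor (the property and variable names below are illustrative):

        <?php
        class MyClass
        {
            private $keys = array();   // collected keys
            private $memcached;        // injected Memcached client

            public function __construct(Memcached $memcached)
            {
                $this->memcached = $memcached;
            }

            public function addElement($value)
            {
                $this->keys[] = $value;
            }

            public function fetch()
            {
                // getMulti() takes the whole array of keys in one call
                return $this->memcached->getMulti($this->keys);
            }
        }

        // Usage sketch:
        $myClass = new MyClass(new Memcached());
        $myClass->addElement('key1');
        $myClass->addElement('key2');
        $var = $myClass->fetch();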

    Read the article

  • Visual Studio 2010 doesn't pick up my changes when I do a build/compile

    - by Tom
    OK, so I change some code and rebuild. Say, for argument's sake, I had a print statement outputting 'test2'; if I change it to 'test3', the build is still reproducing the old output, 'test2'. I've deleted the debug folder and rebuilt, but no good. Then, randomly, about 10 builds later it will catch up. I've also closed VS 2010 and re-opened the project, but that doesn't help. What can I do, as I need to see the changes ASAP? P.S. It's definitely the correct file.

    Read the article

  • openvpn WARNING: No server certificate verification method has been enabled

    - by tmedtcom
    I tried to install OpenVPN on Debian Squeeze (server) and connect from my Fedora 17 machine (client). Here is my configuration: server configuration ### cat server.conf # TCP server proto tcp port 1194 dev tun # Keys and certificates ca /etc/openvpn/easy-rsa/keys/ca.crt cert /etc/openvpn/easy-rsa/keys/server.crt key /etc/openvpn/easy-rsa/keys/server.key dh /etc/openvpn/easy-rsa/keys/dh1024.pem # Network # Virtual address of the VPN network server 192.170.70.0 255.255.255.0 # This line pushes to the client the route to the network behind the server push "route 192.168.1.0 255.255.255.0" # Create a route from the server to the tun interface. #route 192.170.70.0 255.255.255.0 # Security keepalive 10 120 # Data encryption cipher cipher AES-128-CBC # Enable compression comp-lzo # Maximum number of allowed clients max-clients 10 # No particular user or group for the VPN user nobody group nogroup # Make the connection persistent persist-key persist-tun # OpenVPN status log status /var/log/openvpn-status.log # OpenVPN logs log /var/log/openvpn.log log-append /var/log/openvpn.log # Verbosity level verb 5
### cat client.conf # Client client dev tun proto tcp-client remote <my server wan IP> 1194 resolv-retry infinite cipher AES-128-CBC # Keys ca ca.crt cert client.crt key client.key # Security nobind persist-key persist-tun comp-lzo verb 3
Messages from the client host (Fedora 17) in the log file /var/log/messages: Dec 6 21:56:00 GlobalTIC NetworkManager[691]: <info> Starting VPN service 'openvpn'... Dec 6 21:56:00 GlobalTIC NetworkManager[691]: <info> VPN service 'openvpn' started (org.freedesktop.NetworkManager.openvpn), PID 7470 Dec 6 21:56:00 GlobalTIC NetworkManager[691]: <info> VPN service 'openvpn' appeared; activating connections Dec 6 21:56:00 GlobalTIC NetworkManager[691]: <info> VPN plugin state changed: starting (3) Dec 6 21:56:01 GlobalTIC NetworkManager[691]: <info> VPN connection 'Connexion VPN 1' (Connect) reply received. Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: OpenVPN 2.2.2 x86_64-redhat-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] built on Sep 5 2012 Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info. Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: WARNING: file '/home/login/client/client.key' is group or others accessible Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: UDPv4 link local: [undef] Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: UDPv4 link remote: <my server wan IP>:1194 Dec 6 21:56:01 GlobalTIC nm-openvpn[7472]: read UDPv4 [ECONNREFUSED]: Connection refused (code=111) Dec 6 21:56:03 GlobalTIC nm-openvpn[7472]: read UDPv4 [ECONNREFUSED]: Connection refused (code=111) Dec 6 21:56:07 GlobalTIC nm-openvpn[7472]: read UDPv4 [ECONNREFUSED]: Connection refused (code=111) Dec 6 21:56:15 GlobalTIC nm-openvpn[7472]: read UDPv4 [ECONNREFUSED]: Connection refused (code=111) Dec 6 21:56:31 GlobalTIC nm-openvpn[7472]: read UDPv4 [ECONNREFUSED]: Connection refused (code=111) Dec 6 21:56:41 GlobalTIC NetworkManager[691]: <warn> VPN connection 'Connexion VPN 1' (IP Conf
ifconfig on the server host (Debian): eth0 Link encap:Ethernet HWaddr 08:00:27:16:21:ac inet addr:192.168.1.6 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe16:21ac/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:9059 errors:0 dropped:0 overruns:0 frame:0 TX packets:5660 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:919427 (897.8 KiB) TX bytes:1273891 (1.2 MiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:192.170.70.1 P-t-P:192.170.70.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ifconfig on the client host (Fedora 17): as0t0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500 inet 5.5.0.1 netmask 255.255.252.0 destination 5.5.0.1 unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 200 (UNSPEC) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2 bytes 321 (321.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 as0t1: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500 inet 5.5.4.1 netmask 255.255.252.0 destination 5.5.4.1 unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 200 (UNSPEC) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2 bytes 321 (321.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 as0t2: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500 inet 5.5.8.1 netmask 255.255.252.0 destination 5.5.8.1 unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 200 (UNSPEC) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2 bytes 321 (321.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 as0t3: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1500 inet 5.5.12.1 netmask 255.255.252.0 destination 5.5.12.1 unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 200 (UNSPEC) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 2 bytes 321 (321.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 p255p1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.1.2 netmask 255.255.255.0 broadcast 192.168.1.255 inet6 fe80::21d:baff:fe20:b7e6 prefixlen 64 scopeid 0x20<link> ether 00:1d:ba:20:b7:e6 txqueuelen 1000 (Ethernet) RX packets 4842070 bytes 3579798184 (3.3 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 3996158 bytes 2436442882 (2.2 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 device interrupt 16 (p255p1 is the label for the eth0 interface)
And on the server: root@hoteserver:/etc/openvpn# tree . +-- client | +-- ca.crt | +-- client.conf | +-- client.crt | +-- client.csr | +-- client.key | +-- client.ovpn | | +-- easy-rsa | +-- build-ca | +-- build-dh | +-- build-inter | +-- build-key | +-- build-key-pass | +-- build-key-pkcs12 | +-- build-key-server | +-- build-req | +-- build-req-pass | +-- clean-all | +-- inherit-inter | +-- keys | | +-- 01.pem | | +-- 02.pem | | +-- ca.crt | | +-- ca.key | | +-- client.crt | | +-- client.csr | | +-- client.key | | +-- dh1024.pem | | +-- index.txt | | +-- index.txt.attr | | +-- index.txt.attr.old | | +-- index.txt.old | | +-- serial | | +-- serial.old | | +-- server.crt | | +-- server.csr | | +-- server.key | +-- list-crl | +-- Makefile | +-- openssl-0.9.6.cnf.gz | +-- openssl.cnf | +-- pkitool | +-- README.gz | +-- revoke-full | +-- sign-req | +-- vars | +-- whichopensslcnf +-- openvpn.log +-- openvpn-status.log +-- server.conf +-- update-resolv-conf On the client: [login@hoteclient openvpn]$ tree . |-- easy-rsa | |-- 1.0 | | |-- build-ca | | |-- build-dh | | |-- build-inter | | |-- build-key | | |-- build-key-pass | | |-- build-key-pkcs12 | | |-- build-key-server | | |-- build-req | | |-- build-req-pass | | |-- clean-all | | |-- list-crl | | |-- make-crl | | |-- openssl.cnf | | |-- README | | |-- revoke-crt | | |-- revoke-full | | |-- sign-req | | `-- vars | `-- 2.0 | |-- build-ca | |-- build-dh | |-- build-inter | |-- build-key | |-- build-key-pass | |-- build-key-pkcs12 | |-- build-key-server | |-- build-req | |-- build-req-pass | |-- clean-all | |-- inherit-inter | |-- keys [error opening dir] | |-- list-crl | |-- Makefile | |-- openssl-0.9.6.cnf | |-- openssl-0.9.8.cnf | |-- openssl-1.0.0.cnf | |-- pkitool | |-- README | |-- revoke-full | |-- sign-req | |-- vars | `-- whichopensslcnf |-- keys -> ./easy-rsa/2.0/keys/ `-- server.conf
Is the source of the problem the cipher AES-128-CBC, the proto tcp-client vs. UDP mismatch, the p255p1 interface on Fedora 17, or that the authentication file ta.key is not found?
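    The warning itself comes from the client config never telling OpenVPN to verify that the peer's certificate is a server certificate; on OpenVPN 2.2 that is normally done with ns-cert-type server (or remote-cert-tls server on newer builds), which works as long as the server certificate was generated with build-key-server. The log also shows the client talking UDP ("UDPv4 link remote ... Connection refused") even though both configs say TCP, which suggests the NetworkManager connection is not actually using this client.conf and is falling back to its own UDP defaults. A hedged sketch of a client.conf that matches the server above (adjust paths and the IP):

        # client.conf - sketch, not the poster's verified config
        client
        dev tun
        proto tcp-client            # must match the server's "proto tcp"
        remote <my server wan IP> 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun

        ca ca.crt
        cert client.crt
        key client.key              # chmod 600 client.key silences the "group or others accessible" warning
        ns-cert-type server         # silences the MITM warning on OpenVPN 2.2
        # remote-cert-tls server    # preferred equivalent on newer OpenVPN versions

        cipher AES-128-CBC
        comp-lzo
        verb 3

    A ta.key file is only needed if the server config uses tls-auth, which the server.conf above does not, so the missing ta.key is unlikely to be the cause.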

    Read the article

  • Visual Studio 2010 Beta 2 Startup Failures

    - by Rick Strahl
    I've been working with VS 2010 Beta 2 for a while now, and while it works OK most of the time, it seems the environment is very, very fragile when it comes to crashes and installed packages. Specifically, I've been working just fine for days; then, when VS 2010 crashes, it will not restart. Instead I get the good old Application cannot start dialog: Other failures I've seen bring forth other, just as useful, information-overload dialogs like Operation cannot be performed, which for me specifically happens when trying to compile any project. After a bit of digging around and a post to Microsoft Connect, the solution boils down to resetting the VS.NET environment. The Application Cannot Start issue stems from a package load failure of some sort, so the workaround for this is typically: c:\program files\Visual Studio 2010\Common7\IDE\devenv.exe /ResetSkipPkgs In most cases that should do the trick. If it doesn't and the error doesn't go away, the more drastic: c:\program files\Visual Studio 2010\Common7\IDE\devenv.exe /ResetSettings is required, which resets all settings in VS to its installation defaults. Between these two I've always been able to get VS to start up and run properly. BTW, it's handy to keep a list of command line options for Visual Studio around: http://msdn.microsoft.com/en-us/library/xee0c8y7%28VS.100%29.aspx Note that the /? option in VS 2010 doesn't display all the options available but rather displays the 'demo version' message instead, so the above should be helpful. Also note that unless you install Visual C++, the Visual Studio Command Prompt icon is not automatically installed, so you may have to navigate manually to the appropriate folder above. Cannot Build Failures If you get the Cannot compile error dialog, there is another thing that has worked for me: change your project build target from Debug to Release (or whatever – just change it) and compile again. If that doesn't work, doing the reset steps above will do it for me. It appears this failure comes from some sort of interference from other versions of Visual Studio installed on the system and from running another version first. Resetting the build target explicitly seems to reset the build providers to a normalized state so that things work in many cases. But not all. Worst case – resetting settings will do it. The bottom line for working in VS 2010 has been – don't get too attached to your custom settings, as they will get blown away quite a bit. I've probably been through 20 or more of these VS resets, although I've been working with it quite a bit on an internal project. It's kind of frustrating to see this kind of high-level instability in a Beta 2 product, which is supposedly the last public beta they will put out. On the other hand, this beta has been otherwise rather stable and performance is roughly equivalent to VS 2008. Although I mention the crash above – crashes I've seen have been relatively rare and no more frequent than in VS 2008, it seems. Given the drastic UI changes in VS 2010 (using WPF for the shell and editor) I'm actually impressed that the product is as stable as it is at this point. Also, I was seriously worried about text quality going to a WPF model, but thankfully WPF 4.0 addresses the blurry text issue with native font rendering to render text crisply on non-ClearType-enabled systems. Anyway, I hope that these notes are helpful to some of you playing around with the beta and running into problems. Hopefully you won't need them :-} © Rick Strahl, West Wind Technologies, 2005-2010

    Read the article

  • Where to get glib-config for Kubuntu?

    - by Carl Smotricz
    I'm trying to compile Midnight Commander on a KUbuntu 9.10 (Karmic) box with no root access. I've set up a directory under $HOME, downloaded the mc source package and various stuff required for building, such as autotools. I've unpacked the CONTENTS of all those packages into this working directory such that I have the usual ./usr, ./lib, ./etc hierarchy. I manage to get configure through a lot of tests, but I can't seem to fool it into finding glib. checking for glib-2.0... checking for glib-config... no checking for glib12-config... no checking for glib-config... no checking for GLIB - version >= 1.2.6... no *** The glib-config script installed by GLIB could not be found *** If GLIB was installed in PREFIX, make sure PREFIX/bin is in *** your path, or set the GLIB_CONFIG environment variable to the *** full path to glib-config. configure: error: Test for glib failed. GNU Midnight Commander requires glib 1.2.6 or above. My system has glib installed: /lib/libglib-2.0.so.0 /lib/libglib-2.0.so.0.2200.3 ... and I've also downloaded and unpacked the glib package into my working directory: libglib2.0-0_2.22.2-0ubuntu1_i386.deb libglib2.0-dev_2.22.2-0ubuntu1_i386.deb ... but still the elusive glib-config is nowhere to be found. It's not in any debian package for Karmic, either. So I'd appreciate any help getting over this hurdle. Please note, again, that I don't have root, so I can't just merrily apt-get stuff.
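    For what it's worth, glib-config only ever shipped with GLib 1.2; GLib 2.x replaced it with pkg-config metadata (glib-2.0.pc), which is why no Karmic package contains the script. A sketch of how a no-root build can be pointed at the GLib 2.x development files unpacked under the working directory (paths are examples, and the .pc files extracted from the .deb still contain prefix=/usr, so that line may need editing to match the real location):

        # Assume the .deb contents were unpacked under $HOME/mc-build/usr
        PREFIX="$HOME/mc-build/usr"
        export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH"
        export CPPFLAGS="-I$PREFIX/include"
        export LDFLAGS="-L$PREFIX/lib"

        # Sanity check: pkg-config should now report the GLib version
        pkg-config --modversion glib-2.0

        # Then configure mc against it (newer mc releases look for GLib via
        # pkg-config rather than the old glib-config script)
        ./configure --prefix="$PREFIX"
        make && make install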

    Read the article

  • Need help with automating a CMD Java tool which queries Alexa AWS using batch

    - by Eli.C
    Hi everyone, I need to get all available info on 600 URLs from the "Alexa Web Information Service". I downloaded the Java tool and I'm able to run a single query each time with a single switch/Response Group. I would like to ask how to write a batch file that would automate the process. The Java tool runs from the CMD with the following: C:\>java UrlInfo (key1) (key2) (URL) (Response Group) UrlInfo - constant key1 - constant key2 - constant URL - variable (I guess I need to use the "(" sign to read from a file) Response Group - variable - (14 total, and I need to run each Response Group on each of the URLs once) The app returns data in clear text formatted as XML after each query; here is an example: C:\>java UrlInfo (key1) (key2) www.url.com Rank Response: (?xml version="1.0"?) (aws:UrlInfoResponse xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/") (aws:Response xmlns:aws="http://awis.amazonaws.com/doc/2005-07-11") (aws:OperationRequest) (aws:RequestId)ec2b6-e8ae-b392(/aws:RequestId) (/aws:OperationRequest) (aws:UrlInfoResult) (aws:Alexa) (aws:TrafficData) (aws:DataUrl type="canonical")url.com/(/aws:DataUrl) (aws:Rank)472906(/aws:Rank) (/aws:TrafficData) (/aws:Alexa) (/aws:UrlInfoResult) (aws:ResponseStatus xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/") (aws:StatusCode)Success(/aws:StatusCode) (/aws:ResponseStatus) (/aws:Response) (/aws:UrlInfoResponse) Any help would be really appreciated. Thanks and regards, Eli.C
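    A sketch of a batch file that loops over both variables, assuming the 600 URLs live one per line in urls.txt and the 14 Response Group names one per line in groups.txt (the file names, key placeholders and output file are illustrative; the java command line follows the one shown above):

        @echo off
        rem Run UrlInfo for every URL x Response Group combination.
        set KEY1=YOUR_KEY1
        set KEY2=YOUR_KEY2

        for /f "usebackq delims=" %%U in ("urls.txt") do (
            for /f "usebackq delims=" %%G in ("groups.txt") do (
                echo Querying %%U with response group %%G
                java UrlInfo %KEY1% %KEY2% %%U %%G >> results.xml
            )
        )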

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to catch code inconsistencies and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are: Bugs are found faster Forces developers to write readable code (code that can be read without explanation or introduction!) Optimization methods/tricks/productive programs spread faster Programmers as specialists "evolve" faster It's fun “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” Wikipedia Nowhere does the definition mention whether it's better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check in and after check in. Let's weigh the pros and cons of the approaches independently. Code Review Before Check In or Code Review After Check In? Approach 1 – Code Review before Check in The developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality. Image 1 – code review before check in Pros Everything that gets committed to source control is reviewed. Minimizes the chances of smelly code making its way into the code base. Decreases the cost of fixing bugs; remember, the earlier you find them, the less painful they are to fix. Cons Development code freeze – since the changes aren't in source control yet, further development can only be done offline. The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards. Inconsistent! Cumbersome to track the actual code review process. Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain. Approach 2 – Code Review after Check in The developer checks in, and random code reviews are performed on the checked-in code. Image 2 – Code review after check in Pros The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure ZERO FxCop, StyleCop and static code analysis warnings before check in. Code is cleaner and smell-free even before the code review. No offline development; developers can continue to develop against source control. Cons Bad code can easily make its way into the code base. Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher. Approach 3 – Hybrid Approach The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient. Image 3 – Hybrid Approach 1. Code review high-impact check-ins.
It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build either. 2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate a manual code review for individuals who keep making it onto this list of shame. 3. During Merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is still easier because cherry-picking the merges is a possibility and the size of the code being reviewed is still limited. Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete. What do the experts say? So I asked a few experts what they thought of a "code review quality gate before checking in code". Terje Sandstrom | Microsoft ALM MVP You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even having been through a CI build, with a green CI build being the main criteria for going further, e.g. to the review state. I would not like code lying around with no check-ins. Having a requirement that code is checked in in small pieces, 4-8 hours of work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time – it would be great to have it on CI – so it's done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises. David V. Corbin | Visual Studio ALM Ranger I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics). - The "pre-checkin" is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
- The "post-checkin" may very well not be at the changeset level, but correlate to a review at the "user story" level. Again, this depends on the team dynamics in play…. Robert MacLean | Microsoft ALM MVP I do not think there is a single right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with it after check-in, while in high-risk areas you need to do it before check-in. An example: those new to a team, or juniors, need it much earlier (maybe that is before check-in, maybe that is soon after) than seniors who have shipped twenty sprints on the team. Abhimanyu Singhal | Visual Studio ALM Ranger It depends on a per-scenario basis. We recommend post check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post-checkin code can be promoted to Accepted branches, or can be rejected. Pre-checkin reviews are used when: 1. There is a high risk factor associated. 2. Reviewers generally (most of the time) have immediate availability. 3. The team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project. Thomas Schissler | Visual Studio ALM Ranger This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well I'd like to point out. 1.) If you do reviews per check-in, this is not very practical as a hard rule because it will disturb the flow of the team very often, or it will lead to reducing the check-in frequency of the devs, which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed by a later check-in and the dev may have already fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes from other PBIs. Jakob Leander | Sr. Director, Avanade In my experience, manual code review: 1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project) 2. When a project actually does it, they often do not do it right away = errors pile up 3. Requires a lot of time discussing/defining the standard and for the team to learn it However, code review is very important since e.g. even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach for code review: - Architects up front do "at least one best practice example" of each type of component and tell the team: copy from this one. This should include error handling, logging, security etc. - The dev lead on the project continuously browses code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to "go bad" even during later code changes. - Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard.
This has HUUGE benefits: - You can easily tweak it to reach the level you desire together with the customer - It is easy to measure for both developers/management - It is 100% consistent across the code base - It gets validated all the time, so you never end up getting hammered by a customer review in the end - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication You need to track this at least during nightly builds and make sure the team sees the total # of issues. Do not allow the # of issues to grow uncontrolled. On the project I run I require code analysis to have run on the code before check-in (check-in rule). This means: - You have to have a clean compile (or CA won't run), so this is an extra benefit = very few broken builds - You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues, which you REALLY do not want in your app Tip: Place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues listed at the start, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective. Tirthankar Dutta | Director, Avanade I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… it scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design. Bruno Capuano | Microsoft ALM MVP For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx David Jobling | Global Sr. Director, Avanade Peer review is the only way to scale and it's a great practice for all in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention. Mikkel Toudal Kristiansen | Manager, Avanade If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches. It offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution. Jesse Houwing | Visual Studio ALM Ranger I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html I'd like to add a few more points: - Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-checkin policy. The peer programming and regular feedback during development can take care of most of the review requirements as long as the team isn't under stress.
- Under stress, enforce pre-checkin reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues start to be created or pile up. - Use tools to catch the most common errors; Code Analysis/FxCop was already mentioned. HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of 3rd-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong. If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch. So my philosophy: - Use tooling as much as possible. - Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click suppress-all to ignore the warnings. - Under stress, tighten the process; it's under stress that the problems of late reviews will really surface. - Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-checkin/post-checkin doesn't really matter, as long as the review is done before the code is released. It'll just be much more expensive to fix any review outcomes the later you find them. --- I would love to hear what you think!

    Read the article

  • Need help with automating a CMD Java tool which queries Alexa AWS using batch

    - by Eli.C
    Hi everyone, I need to get all available info on 600 URLs from the "Alexa Web Information Service". I downloaded the Java tool and I'm able to run a single query each time with a single switch/Response Group. I would like to ask how to write a batch file that would automate the process. The Java tool runs from the CMD with the following: C:\>java UrlInfo (key1) (key2) (URL) (Response Group) UrlInfo - constant key1 - constant key2 - constant URL - variable (I guess I need to use the "(" sign to read from a file) Response Group - variable - (14 total, and I need to run each Response Group on each of the URLs once) The app returns data in clear text formatted as XML after each query; here is an example: C:\>java UrlInfo (key1) (key2) www.url.com Rank Response: (?xml version="1.0"?) (aws:UrlInfoResponse xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/") (aws:Response xmlns:aws="http://awis.amazonaws.com/doc/2005-07-11") (aws:OperationRequest) (aws:RequestId)ec2b6-e8ae-b392(/aws:RequestId) (/aws:OperationRequest) (aws:UrlInfoResult) (aws:Alexa) (aws:TrafficData) (aws:DataUrl type="canonical")url.com/(/aws:DataUrl) (aws:Rank)472906(/aws:Rank) (/aws:TrafficData) (/aws:Alexa) (/aws:UrlInfoResult) (aws:ResponseStatus xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/") (aws:StatusCode)Success(/aws:StatusCode) (/aws:ResponseStatus) (/aws:Response) (/aws:UrlInfoResponse) Any help would be really appreciated. Thanks and regards, Eli.C

    Read the article

  • Mac 10.5 Python libsvm 64 bit vs 32 bit

    - by shadowsoul
    I have a Mac running 10.5. When I type "python" in the terminal, it says: Enthought Python Distribution -- www.enthought.com Version: 7.3-2 (64-bit) Python 2.7.3 |EPD 7.3-2 (64-bit)| (default, Apr 12 2012, 11:14:05) [GCC 4.0.1 (Apple Inc. build 5493)] on darwin Type "credits", "demo" or "enthought" for more information. Then I go to my libsvm/python folder and type "make", which results in: make -C .. lib if [ "Darwin" = "Darwin" ]; then \ SHARED_LIB_FLAG="-dynamiclib -W1,-install_name,libsvm.so.2"; \ else \ SHARED_LIB_FLAG="-shared -W1,-soname,libsvm.so.2"; \ fi; \ g++ ${SHARED_LIB_FLAG} svm.o -o libsvm.so.2 When I try to do "from svmutil import *" I get the error: OSError: dlopen(.../libsvm-3.12/python/../libsvm.so.2, 6): no suitable image found. Did find: .../libsvm-3.12/python/../libsvm.so.2: mach-o, but wrong architecture When I do "lipo -info libsvm.so.2", I get: Non-fat file: libsvm.so.2 is architecture: i386 So it looks like I'm running 64-bit Python but libsvm ends up as a 32-bit library. Is there any way I can get it to compile as a 64-bit library?
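    The compile and link lines above never pass an architecture flag, so the Apple toolchain produced a 32-bit dylib while the EPD interpreter is 64-bit. One sketch of a fix is to rebuild the shared library by hand with -arch x86_64 (the commands mirror what "make lib" runs for libsvm 3.12, with flags adjusted; note -Wl, not -W1, when handing the install_name to the linker):

        cd libsvm-3.12
        g++ -Wall -O3 -fPIC -arch x86_64 -c svm.cpp -o svm.o
        g++ -dynamiclib -arch x86_64 -Wl,-install_name,libsvm.so.2 svm.o -o libsvm.so.2

        # Verify before retrying "from svmutil import *":
        lipo -info libsvm.so.2   # should now report x86_64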

    Read the article

  • SLES AutoYaST Script Validity Verification

    - by Xerxes
    Does anyone here write their own customized AutoYaST scripts for building SLES servers? I'm not talking about generating them with yast2 autoyast. If so, have you found a way to verify the syntax? xmllint is good as far as telling you that the XML syntax is valid, but without an up-to-date DTD it can't tell you anything more, and the shipped DTDs are out of date. I've opened a ticket with Novell on this, but who knows when and what I'll hear back.
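    One option worth checking (a sketch, not a guaranteed recipe): recent SUSE releases ship a RELAX NG schema for AutoYaST profiles in the yast2-schema package, which xmllint can validate against in addition to the plain well-formedness check; the exact path may differ between SLES versions.

        # Well-formedness only:
        xmllint --noout autoinst.xml

        # Structural validation against the AutoYaST RELAX NG schema
        # (path as shipped by yast2-schema; adjust if your release differs):
        xmllint --noout --relaxng /usr/share/YaST2/schema/autoyast/rng/profile.rng autoinst.xml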

    Read the article

  • FIX: Visual Studio Post Build Event Returns –1 when it should not.

    - by ChrisD
    I had written a console application that I run as part of my post-build for other projects. The console application logs a series of messages to the console as it executes. I use the Environment.ExitCode value to specify an error or success condition. When the application executes without issue, the ExitCode is 0; when there is a problem, it's –1. As part of my logging, I log the value of the exit code right before the application terminates. When I run this executable from the command line, it behaves as it should; error scenarios return –1 and success scenarios return 0. When I run the same command line as part of the post-build event, Visual Studio reports the exit code as –1, even when the application reports the exit code as 0. A snippet of the build output follows: Verbose: Exiting with ExitCode=0 C:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(3397,13): error MSB3073: The command ""MGC.exe" "-TargetPath=C:\TFS\Solutions\Research\Source\Framework\Services\Identity\STS\_STSBuilder\bin\Debug\_STSBuilder.dll" C:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(3397,13): error MSB3073:  C:\Windows\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(3397,13): error MSB3073: " exited with code -1. The application returns a 0 exit code, but Visual Studio is reporting an error. Why? The answer is in the way I format my log messages. Apparently Visual Studio watches the messages that get streamed to the output console. If those messages match a pattern used by Visual Studio to communicate errors, Visual Studio assumes an error has occurred in the executable and returns a –1. This post details the formats used by Visual Studio to determine error conditions. In my case, the presence of the colon was tripping up Visual Studio. I replaced all occurrences of the colon with an equal sign and Visual Studio once again respected the exit code of the application. Verbose= Exiting with ExitCode=0 ========== Build: 3 succeeded or up-to-date, 0 failed, 0 skipped ==========
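    A minimal sketch of a post-build console tool following the workaround described above (the class name and message text are illustrative, not the actual MGC.exe source):

        using System;

        class PostBuildTool
        {
            static int Main(string[] args)
            {
                try
                {
                    // ... real post-build work goes here ...

                    // Note the '=' instead of ':' - as described above, a
                    // "Something: text" style message can be picked up by Visual
                    // Studio's output parsing and reported as a build error even
                    // though the process exits with 0.
                    Console.WriteLine("Verbose= Exiting with ExitCode=0");
                    return 0;
                }
                catch (Exception ex)
                {
                    Console.Error.WriteLine("Verbose= Exiting with ExitCode=-1 reason=" + ex.Message);
                    return -1;
                }
            }
        }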

    Read the article

  • How to install svn 1.8.5 with neon on Mavericks?

    - by Alex
    Has anyone of you installed svn 1.8.* together with neon on OS X Mavericks? I followed this tutorial: http://jason.pureconcepts.net/2012/10/updating-svn-mac-os-x/ But after trying to configure svn to use neon: ./configure --prefix=/usr/local --with-neon I get this warning: configure: WARNING: unrecognized options: --with-neon Building and installation work fine after this, but of course I cannot connect to WebDAV repositories.
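    For context (worth verifying against the 1.8 release notes): Subversion 1.8 removed the neon HTTP library entirely, which is why configure no longer recognizes --with-neon; HTTP/HTTPS (WebDAV) access is provided by serf instead. A sketch of a serf-based build, with example paths:

        # 1. Build and install serf first (serf 1.3.x builds with scons), e.g. into /usr/local
        # 2. Point Subversion's configure at it:
        cd subversion-1.8.5
        ./configure --prefix=/usr/local --with-serf=/usr/local
        make && sudo make install

        # Verify the WebDAV module was built:
        svn --version        # the output should list ra_serf for http/https schemes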

    Read the article

  • CruiseControl.Net - not able to see Project Statistics

    - by Anders Juul
    Hi all, I've reinstalled the build server and can no longer see the standard graphs of project statistics. The error message shown is "Missing/Invalid statistics reports. Please check if you have enabled the Statistics Publisher, and statistics have been collected atleast once after that." To the best of my knowledge, the ccnet.config file has not been changed in this respect, and by inspection I have verified that I have a Statistics / statisticsList section for the project. Furthermore, the values appear in the Artifacts\statistics.csv and Artifacts\report.xml files. My guess would then be StatisticsGraph.xslt, which I have copied fresh from the distribution to both Server\xslt and WebDashboard\xslt (why are they located in both places, by the way!?). Rebuild and check - still the same error message. Any hints on how to debug this would be appreciated!
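    For reference, a sketch of the ccnet.config fragment the error message is asking about (element names as in the CruiseControl.NET 1.x documentation; verify against the version installed on the rebuilt server, since the dashboard only draws the graphs once this publisher has written statistics at least once):

        <project name="MyProject">
          <!-- ... source control, tasks ... -->
          <publishers>
            <xmllogger />
            <statistics>
              <statisticList>
                <!-- example of one custom statistic; the built-in ones
                     (duration etc.) are collected even without entries here -->
                <firstMatch name="Testcount" xpath="//test-results/@total" />
              </statisticList>
            </statistics>
          </publishers>
        </project>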

    Read the article

  • Reader Poll: Are You Going to Buy the New iPad 2?

    - by Jason Fitzpatrick
    Steve Jobs announced the iPad 2 moments ago, which will touch off a flurry of new purchases, upgrades, and general Apple-centric muttering and fist shaking. Will you be buying an iPad 2? Photo courtesy of Engadget's liveblog coverage of the iPad 2 launch. The first iPad's sales exceeded everyone's expectations, Apple fans and detractors alike, with a crazy 15 million units moved last year. The new iPad rocks a dual-core processor, front- and rear-facing cameras, improved graphics, and a razor thinness (33% thinner than the current model), among other improvements. Are the improvements enough to entice you into buying one? Hit up the poll below to log your vote and then fill in the details in the comments.

    Read the article

  • How much does it cost to make a phone?

    - by geoffreyf67
    I was curious if there are any websites that detail how much it costs to make a phone. Not a cell phone but a landline phone. It seems that the ones with any decent features have always cost $100+ and I'd have thought that the price would have dropped over the years but that doesn't seem to be happening. So I figured I'd look into the cost of making the phones. G-Man

    Read the article

  • Announcing the Winnipeg VS.NET 2012 Community Launch Event!

    - by D'Arcy Lussier
    Back in May 2010 the local Winnipeg technical community got together and put on a launch event for VS.NET 2010. That event was such a good time that we're doing it again this year for the VS.NET 2012 launch! On December 6th, the Winnipeg .NET User Group is hosting a full-day VS.NET 2012 Community Launch Event at the IMAX theatre in Portage Place! We have 4 sessions planned covering dev tools, ALM/TFS, web development, and cloud development, presented by Dylan Smith, Tyler Doerksen, and myself. You can get all the details and register on our Eventbrite site: http://wpgvsnet2012launch.eventbrite.ca/ I've included the details below as well for convenience: Winnipeg VS.NET 2012 Community Launch Event Join us for a full day of sessions highlighting the new features and capabilities of Visual Studio .NET 2012 and the .NET 4.5 Framework! Hosted by the Winnipeg .NET User Group, this community event is FREE thanks to the generous support from our event sponsors: Imaginet Online Business Systems Prairie Developer Conference Event Details When: Thursday, December 6th from 8:00 AM - 4:00 PM Where: IMAX Theatre, Portage Place Cost: *FREE!* Agenda 8:00 - 9:00 Continental Breakfast and Registration 9:00 - 9:15 Welcome 9:15 - 10:30 End-To-End Application Lifecycle Management with TFS 2012 10:30 - 10:45 Break 10:45 - 12:00 Improving Developer Productivity with Visual Studio 2012 12:00 - 1:00 Lunch Break (Lunch Not Provided) 1:00 - 2:15 Web Development in Visual Studio 2012 and .NET 4.5 2:15 - 2:30 Break 2:30 - 3:45 Microsoft Cloud Development with Azure and Visual Studio 2012 3:45 - 4:00 Prizes and Thanks Session Abstracts End-To-End Application Lifecycle Management with TFS 2012 Dylan Smith, Imaginet In this session we'll walk through the application development lifecycle from end-to-end and see how some of the new capabilities in TFS 2012 help streamline the software delivery process. There are some exciting new capabilities around Agile Project Management, Gathering Feedback, Code Reviews, Unit Testing, Version Control, Storyboarding, etc. During this session we'll follow a fictional software development team through the process of planning, developing, testing, and deployment, focusing on where the new functionality in VS/TFS 2012 fits in to make teams more effective. Improving Developer Productivity with Visual Studio 2012 Dylan Smith, Imaginet Microsoft Visual Studio 2012 enables developers to take full advantage of the capability of Windows using the skills and technologies developers already know and love to deliver exceptional and compelling apps. Whether working individually or in a small, medium or large development team, Visual Studio 2012 sets a new standard for development tools, helping teams deliver superior results for their customers that help set them apart from their competitors. In this session we'll walk through new features in Visual Studio 2012, specifically focusing on how these improve developer productivity. Web Development in Visual Studio 2012 and .NET 4.5 D'Arcy Lussier, Online Business Systems It's an exciting time to be a web developer in the Microsoft ecosystem! The launch of Visual Studio 2012 and .NET 4.5 brings new tooling and features, and the ASP.NET team is continually releasing updates for MVC, SignalR, Web API, and other platform features. In this session we'll take a tour of the new features and technologies available for Microsoft web developers here in 2012!
Microsoft Cloud Development with Azure and Visual Studio 2012 Tyler Doerksen, Imaginet Microsoft's public cloud platform is nearing its third year of public availability, supporting web site/service hosting, storage, relational databases, virtual machines, virtual networks and much more. Windows Azure provides both power and flexibility. But to capture this power you need to have the right tools! This session will demonstrate the primary ways you can harness Windows Azure with the .NET platform. We'll explain cloud service development, packaging, deployment, and testing, and show how Visual Studio 2012 with the Windows Azure SDK and other Microsoft tools can be used to develop for and manage Windows Azure. Harness the power of the cloud from the comfort of Visual Studio 2012!

    Read the article

  • Palit GeForce 8800GT 512MB Minimum Power Requirement?

    - by Wesley
    Hi all, I am building a system for a friend. The potential specs are like this so far: ASUS A8N-VM motherboard AMD Athlon 64 3200+ @ 2.0 GHz Any 7200RPM SATA HDD Palit GeForce 8800GT 512MB GDDR3 PCIe One DVD/CD combo drive Creative SB Live! 5.1 sound card I was wondering what wattage of power supply would be able to support this hardware. I had a 350W in mind... would that do? Thanks in advance.

    Read the article

  • Can builds be hidden or stepped through in Apple's Keynote editor so they don't obstruct other objects?

    - by meeeee
    I'm new to Keynote and as such have not figured out the correct workflow yet. While the transitions (builds) and actions are nice, how are you supposed to work with several objects if the previously edited objects are still displayed in their original state, thus obstructing the view of new elements that I would like to display later? Is there a way to step through the builds like in presentation mode?

    Read the article

  • What is the best bang-for-buck desktop CPU?

    - by dev5
    What is the best bang-for-buck desktop CPU available at the moment? AMD or Intel are both OK, although I have a slight bias toward AMD since I prefer their motherboards. It's for an all-round machine; I do a bit of everything, from gaming to web development.

    Read the article

  • Recommendation for a good chassis (case) for a first-time PC builder

    - by studiohack
    I've been thinking about building my own machine for some time now, and whenever I look at the PC case market, it seems like cases are a dime a dozen. As a result, I'm wondering what cases Super Users would recommend in the areas of ease of use, cable management, cooling, etc. - in other words, an all-around case for a first-time PC builder. Thanks!

    Read the article

  • Diagnosing PCI issues

    - by dtsazza
    I'm upgrading a PC for a friend and have run into a problem with upgrading the motherboard. I've been assembling custom PCs for the best part of a decade now, so I'm happy enough with the basics at the very least. The motherboard, CPU and graphics card were all updated at once - after this was done, the machine POSTs, but the PCI wireless card, as well as the PCI-E graphics card, do not seem to be recognised at all by the system. There is no trace of them anywhere in the BIOS, the POST output, or Windows. I booted into Linux and ran lspci, which also showed no sign of them. What is the best way to go about diagnosing this? Is it likely/feasible that the motherboard's PCI bus is just defective and it needs to be RMAed? Are there any other common gotchas that might cause these symptoms? For reference, the components in question are: CPU: Celeron E1400 Motherboard: Gigabyte GA-G31M-ES2L Graphics card: TBC (a low-end card from a couple of years ago; it worked flawlessly before the mobo change) PCI WNIC: Edimax 7128G Thanks in advance for any help.

    Read the article
