Search Results

Search found 31421 results on 1257 pages for 'software performance'.


  • Elfsign Object Signing on Solaris

    - by danx
    Don't let this happen to you—use elfsign! Solaris elfsign(1) is a command that signs and verifies ELF format executables. That includes not just executable programs (such as ls or cp) but other ELF format files, including libraries (such as libnvpair.so) and kernel modules (such as autofs). Elfsign has been available since Solaris 10, and ELF format files distributed with Solaris, since Solaris 10, are signed by either Sun Microsystems or its successor, Oracle Corporation.

    When an ELF file is signed, elfsign adds a new section to the ELF file, .SUNW_signature, that contains an RSA public key signature and other information about the signer: the algorithm used, the algorithm OID, the signer CN/OU, and a time stamp. The signature section can later be verified by elfsign or other software by matching the signature in the file against the ELF file contents (excluding the signature).

    ELF executable files may also be signed by a third party or by the customer. This is useful for verifying the origin and authenticity of executable files installed on a system. The third-party or customer public key certificate should be installed in /etc/certs/ to allow verification by elfsign. For currently-released versions of Solaris, only cryptographic framework plugin libraries are verified by Solaris. However, all ELF files may be verified by the elfsign command at any time.

    Elfsign Algorithms

    Elfsign signatures are created by taking a digest of the ELF section contents, then signing the digest with RSA. To verify, one takes a digest of the ELF file and compares it with the expected digest that's computed from the signature and the RSA public key. Originally elfsign took an MD5 digest of a SHA-1 digest of the ELF file sections, then signed the resulting digest with RSA. In Solaris 11.1, and then in Solaris 11.1 SRU 7 (5/2013), the available elfsign crypto algorithms were expanded to keep up with evolving cryptography. The following table shows the available elfsign algorithms:

        Elfsign Algorithm               Solaris Release       Comments
        elfsign sign -F rsa_md5_sha1    S10, S11.0, S11.1     Default for S10. Not recommended*
        elfsign sign -F rsa_sha1        S11.1                 Default for S11.1. Not recommended
        elfsign sign -F rsa_sha256      S11.1 SRU 7+          Recommended

    * Most or all CAs do not accept MD5 CSRs and do not issue MD5 certs due to MD5 hash collision problems.

    RSA Key Length. I recommend using an RSA-2048 key length with elfsign as the best balance between a long expected "life time", interoperability, and performance. RSA-2048 keys have an expected lifetime through 2030 (and probably beyond). For details, see Recommendation for Key Management: Part 1: General, NIST Publication SP 800-57 part 1 (rev. 3, 7/2012, PDF), tables 2 and 4 (pp. 64, 67).

    Step 1: Create or obtain a key and cert

    The first step in using elfsign is to obtain a key and cert from a public Certificate Authority (CA), or create your own self-signed key and cert. I'll briefly explain both methods.

    Obtaining a Certificate from a CA. To obtain a cert from a CA, such as Verisign, Thawte, or Go Daddy (to name a few random examples), you create a private key and a Certificate Signing Request (CSR) file and send it to the CA, following the instructions of the CA on their website. They send back a signed public key certificate. The public key cert, along with the private key you created, is used by elfsign to sign an ELF file. The public key cert is distributed with the software and is used by elfsign to verify elfsign signatures in ELF files.
    You need to request an RSA "Class 3 public key certificate", which is used for servers and software signing. Elfsign uses RSA and we recommend RSA-2048 keys. The private key and CSR can be generated with openssl(1) or pktool(1) on Solaris. Here's a simple example that uses pktool to generate a private RSA-2048 key and a CSR for sending to a CA:

        $ pktool gencsr keystore=file format=pem outcsr=MYCSR.p10 \
            subject="CN=canineswworks.com,OU=Canine SW object signing" \
            outkey=MYPRIVATEKEY.key
        $ openssl rsa -noout -text -in MYPRIVATEKEY.key
        Private-Key: (2048 bit)
        modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7
        publicExponent: 65537 (0x10001)
        privateExponent: 26:14:fc:49:26:bc:a3:14:ee:31:5e:6b:ac:69:83: . . . [omitted for brevity] . . . 81
        prime1: 00:f6:b7:52:73:bc:26:57:26:c8:11:eb:6c:dc:cb: . . . [omitted for brevity] . . . bc:91:d0:40:d6:9d:ac:b5:69
        prime2: 00:da:df:3f:56:b2:18:46:e1:89:5b:6c:f1:1a:41: . . . [omitted for brevity] . . . f3:b7:48:de:c3:d9:ce:af:af
        exponent1: 00:b9:a2:00:11:02:ed:9a:3f:9c:e4:16:ce:c7:67: . . . [omitted for brevity] . . . 55:50:25:70:d3:ca:b9:ab:99
        exponent2: 00:c8:fc:f5:57:11:98:85:8e:9a:ea:1f:f2:8f:df: . . . [omitted for brevity] . . . 23:57:0e:4d:b2:a0:12:d2:f5
        coefficient: 2f:60:21:cd:dc:52:76:67:1a:d8:75:3e:7f:b0:64: . . . [omitted for brevity] . . . 06:94:56:d8:9d:5c:8e:9b
        $ openssl req -noout -text -in MYCSR.p10
        Certificate Request:
            Data:
                Version: 2 (0x2)
                Subject: OU=Canine SW object signing, CN=canineswworks.com
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                        Public-Key: (2048 bit)
                        Modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7
                        Exponent: 65537 (0x10001)
                Attributes:
            Signature Algorithm: sha1WithRSAEncryption b3:e8:30:5b:88:37:68:1c:26:6b:45:af:5e:de:ea:60:87:ea: . . . [omitted for brevity] . . . 06:f9:ed:b4

    Secure storage of the RSA private key. The private key needs to be protected if the key signing is used for production (as opposed to just testing). That is, protect the key to guard against unauthorized signatures by others. One method is to use a PIN-protected PKCS#11 keystore: the private key you generate should be stored in a secure manner, such as in a PKCS#11 keystore, using pktool(1). Otherwise, others could create signatures in your name. Other secure key storage mechanisms include a SCA-6000 crypto card, a USB thumb drive stored in a locked area, a dedicated server with restricted access, Oracle Key Manager (OKM), or some combination of these. I also recommend secure backup of the private key.

    Here's an example of generating a private key protected in the PKCS#11 keystore, and a CSR:

        $ pktool setpin # use if PIN not set yet
        Enter token passphrase: changeme
        Create new passphrase:
        Re-enter new passphrase:
        Passphrase changed.
        $ pktool gencsr keystore=pkcs11 label=MYPRIVATEKEY \
            format=pem outcsr=MYCSR.p10 \
            subject="CN=canineswworks.com,OU=Canine SW object signing"
        $ pktool list keystore=pkcs11
        Enter PIN for Sun Software PKCS#11 softtoken:
        Found 1 asymmetric public keys.
        Key #1 - RSA public key: MYPRIVATEKEY

    Here's another example that uses openssl instead of pktool to generate a private key and CSR:

        $ openssl genrsa -out cert.key 2048
        $ openssl req -new -key cert.key -out MYCSR.p10

    Self-Signed Cert. You can use openssl or pktool to create a private key and a self-signed public key certificate. A self-signed cert is useful for development, testing, and internal use. The private key created should be stored in a secure manner, as mentioned above.
    The following example creates a private key, MYSELFSIGNED.key, and a public key cert, MYSELFSIGNED.pem, using pktool, and displays the contents with the openssl command:

        $ pktool gencert keystore=file format=pem serial=0xD06F00D lifetime=20-year \
            keytype=rsa hash=sha256 outcert=MYSELFSIGNED.pem outkey=MYSELFSIGNED.key \
            subject="O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com"
        $ pktool list keystore=file objtype=cert infile=MYSELFSIGNED.pem
        Found 1 certificates.
        1. (X.509 certificate)
            Filename: MYSELFSIGNED.pem
            ID: c8:24:59:08:2b:ae:6e:5c:bc:26:bd:ef:0a:9c:54:de:dd:0f:60:46
            Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
            Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
            Not Before: Oct 17 23:18:00 2013 GMT
            Not After: Oct 12 23:18:00 2033 GMT
            Serial: 0xD06F00D0
            Signature Algorithm: sha256WithRSAEncryption
        $ openssl x509 -noout -text -in MYSELFSIGNED.pem
        Certificate:
            Data:
                Version: 3 (0x2)
                Serial Number: 3496935632 (0xd06f00d0)
                Signature Algorithm: sha256WithRSAEncryption
                Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
                Validity
                    Not Before: Oct 17 23:18:00 2013 GMT
                    Not After : Oct 12 23:18:00 2033 GMT
                Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                        Public-Key: (2048 bit)
                        Modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77
                        Exponent: 65537 (0x10001)
            Signature Algorithm: sha256WithRSAEncryption 9e:39:fe:c8:44:5c:87:2c:8f:f4:24:f6:0c:9a:2f:64:84:d1: . . . [omitted for brevity] . . . 5f:78:8e:e8
        $ openssl rsa -noout -text -in MYSELFSIGNED.key
        Private-Key: (2048 bit)
        modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77
        publicExponent: 65537 (0x10001)
        privateExponent: 0a:06:0f:23:e7:1b:88:62:2c:85:d3:2d:c1:e6:6e: . . . [omitted for brevity] . . . 9c:e1:e0:0a:52:77:29:4a:75:aa:02:d8:af:53:24:c1
        prime1: 00:ea:12:02:bb:5a:0f:5a:d8:a9:95:b2:ba:30:15: . . . [omitted for brevity] . . . 5b:ca:9c:7c:19:48:77:1e:5d
        prime2: 00:cd:82:da:84:71:1d:18:52:cb:c6:4d:74:14:be: . . . [omitted for brevity] . . . 5f:db:d5:5e:47:89:a7:ef:e3
        exponent1: 32:37:62:f6:a6:bf:9c:91:d6:f0:12:c3:f7:04:e9: . . . [omitted for brevity] . . . 97:3e:33:31:89:66:64:d1
        exponent2: 00:88:a2:e8:90:47:f8:75:34:8f:41:50:3b:ce:93: . . . [omitted for brevity] . . . ff:74:d4:be:f3:47:45:bd:cb
        coefficient: 4d:7c:09:4c:34:73:c4:26:f0:58:f5:e1:45:3c:af: . . . [omitted for brevity] . . . af:01:5f:af:ad:6a:09:bf

    Step 2: Sign the ELF file object

    By now you should have your private key and have obtained, by hook or by crook, a cert (either from a CA or a self-signed one you created). The next step is to sign one or more objects with your private key and cert. Here's a simple example that creates an object file, signs it, verifies it, and lists the contents of the ELF signature:

        $ echo '#include <stdio.h>\nint main(){printf("Hello\\n");}'>hello.c
        $ make hello
        cc -o hello hello.c
        $ elfsign verify -v -c MYSELFSIGNED.pem -e hello
        elfsign: no signature found in hello.
        $ elfsign sign -F rsa_sha256 -v -k MYSELFSIGNED.key -c MYSELFSIGNED.pem -e hello
        elfsign: hello signed successfully.
        format: rsa_sha256.
        signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
        signed on: October 17, 2013 04:22:49 PM PDT.
        $ elfsign list -f format -e hello
        rsa_sha256
        $ elfsign list -f signer -e hello
        O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
        $ elfsign list -f time -e hello
        October 17, 2013 04:22:49 PM PDT
        $ elfsign verify -v -c MYSELFSIGNED.key -e hello
        elfsign: verification of hello failed.
        format: rsa_sha256.
        signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
        signed on: October 17, 2013 04:22:49 PM PDT.

    Signing using the pkcs11 keystore. To sign the ELF file using a private key in the secure pkcs11 keystore, replace "-k MYSELFSIGNED.key" in the "elfsign sign" command line with "-T MYPRIVATEKEY", where MYPRIVATEKEY is the pkcs11 token label.

    Step 3: Install the cert and test on another system

    Just signing the object isn't enough. You need to copy or install the cert and the signed ELF file(s) on another system to test that the signature is OK. Your public key cert should be installed in /etc/certs. Use elfsign verify to verify the signature. Elfsign verify checks each cert in /etc/certs until it finds one that matches the elfsign signature in the file. If one isn't found, the verification fails. Here's an example:

        $ su
        Password:
        # rm /etc/certs/MYSELFSIGNED.key
        # cp MYSELFSIGNED.pem /etc/certs
        # exit
        $ elfsign verify -v hello
        elfsign: verification of hello passed.
        format: rsa_sha256.
        signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
        signed on: October 17, 2013 04:24:20 PM PDT.

    After testing, package your cert along with your ELF object to allow elfsign verification after your cert and object are installed or copied.

    Under the Hood: elfsign verification

    Here are the steps taken to verify an ELF file signed with elfsign. The steps to sign the file are similar, except that the private key exponent is used instead of the public key exponent, and the .SUNW_signature section is written to the ELF file instead of being read from the file.

    1. Generate a digest (SHA-256) of the ELF file sections. This digest uses all ELF sections loaded in memory, but excludes the ELF header, the .SUNW_signature section, and the symbol table.
    2. Extract the RSA signature (RSA-2048) from the .SUNW_signature section.
    3. Extract the RSA public key modulus and public key exponent (65537) from the public key cert.
    4. Calculate the expected digest as signature ^ publicKeyExponent mod publicKeyModulus.
    5. Strip the PKCS#1 padding (most significant bytes) from the above. The padding is 0x00, 0x01, 0xff, 0xff, . . ., 0xff, 0x00.
    6. If the actual digest == expected digest, the ELF file is verified (OK).

    Further Information

    elfsign(1), pktool(1), and openssl(1) man pages.
    "Signed Solaris 10 Binaries?" blog by Darren Moffat (2005) shows how to use elfsign.
    "Simple CLI based CA on Solaris" blog by Darren Moffat (2008) shows how to set up a simple CA for use with self-signed certificates.
    "How to Create a Certificate by Using the pktool gencert Command" in the System Administration Guide: Security Services (available at docs.oracle.com).
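
    For the curious, the arithmetic in steps 4 through 6 above is easy to sketch. The following is a minimal illustration in Java using BigInteger; it is not elfsign's actual implementation, the byte arrays are assumed to have been extracted already, and real code would normally go through java.security.Signature instead of doing the math by hand:

        import java.math.BigInteger;
        import java.security.MessageDigest;
        import java.util.Arrays;

        public class ElfsignVerifySketch {

            // sigBytes comes from .SUNW_signature, modulusBytes from the public
            // key cert, sectionBytes is the concatenation of the signed ELF
            // sections (everything elfsign digests). SHA-256 matches the
            // rsa_sha256 format; the older formats use different digests.
            static boolean verify(byte[] sigBytes, byte[] modulusBytes,
                                  byte[] sectionBytes) throws Exception {
                BigInteger sig = new BigInteger(1, sigBytes);
                BigInteger mod = new BigInteger(1, modulusBytes);
                BigInteger exp = BigInteger.valueOf(65537);   // public exponent

                // expected block = signature ^ publicKeyExponent mod modulus
                byte[] block = sig.modPow(exp, mod).toByteArray();

                // Strip the PKCS#1 padding: 0x00 0x01 0xff ... 0xff 0x00.
                int i = 0;
                if (block[i] == 0x00) i++;               // tolerate a leading zero
                if (block[i] != 0x01) return false;
                i++;
                while (i < block.length && block[i] == (byte) 0xff) i++;
                if (i >= block.length || block[i] != 0x00) return false;
                i++;
                byte[] expected = Arrays.copyOfRange(block, i, block.length);

                // actual digest of the ELF sections
                byte[] actual = MessageDigest.getInstance("SHA-256")
                                             .digest(sectionBytes);
                return Arrays.equals(actual, expected);
            }
        }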


  • Java: I am trying to create a shortcut to any abc.exe through a Java program

    - by Sanjeev
    I am making an installer in Java Swing. It is almost complete; only one thing is left to do, which is to create a desktop shortcut to our software. I do not want to copy the software to the desktop, but I want to create a link to it, like other MS software does. How can this be done? Please help me. I have already copied my software into C:/Program Files by copying the directory, and I want to create a shortcut on the desktop.
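
    The JDK itself has no API for Windows .lnk shortcuts, so one common workaround is to generate a tiny VBScript that asks the Windows Script Host to create the shortcut, then run it and delete it. A hedged sketch follows; the target path and shortcut name are made up, and WSH is assumed to be available, as it is on stock Windows:

        import java.io.File;
        import java.io.PrintWriter;

        public class ShortcutMaker {
            public static void main(String[] args) throws Exception {
                String target = "C:\\Program Files\\MySoftware\\abc.exe"; // illustrative path
                File vbs = File.createTempFile("mklink", ".vbs");
                try (PrintWriter out = new PrintWriter(vbs)) {
                    // WScript.Shell can create real .lnk shortcuts on the desktop.
                    out.println("Set ws = WScript.CreateObject(\"WScript.Shell\")");
                    out.println("Set lnk = ws.CreateShortcut(ws.SpecialFolders(\"Desktop\") & \"\\My Software.lnk\")");
                    out.println("lnk.TargetPath = \"" + target + "\"");
                    out.println("lnk.Save");
                }
                // Run the script with the console-mode script host and wait for it.
                new ProcessBuilder("cscript", "//nologo", vbs.getAbsolutePath())
                        .inheritIO().start().waitFor();
                vbs.delete();
            }
        }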


  • How to Make Ubuntu Play MP3 Files

    - by Trevor Bekolay
    Because of licensing issues, Ubuntu is unable to play MP3s out of the box. We'll show you how to play MP3s and other restricted file formats in about four mouse clicks. The philosophy behind Ubuntu is that software should be free and accessible to all. Whether MP3 and other file formats are free is unclear in many countries, so Ubuntu does not include software to read these file formats by default. Fortunately, it does include a package that installs the most commonly used file formats all at once, including a Flash plugin for Firefox. Note: These instructions are for Ubuntu 10.04. There are small differences for earlier versions of Ubuntu.

    Play MP3 Files. Open the Ubuntu Software Center, found in the Applications menu. Click on View and ensure that All Software is selected. Type "restricted extras" into the search box at the top-right. Find the Ubuntu restricted extras package and click Install. Enter your password when prompted. Once the install is complete, close out of Ubuntu Software Center, and you'll be able to play MP3 files! To confirm this, we'll open up Rhythmbox, found in the Sound & Video section of the Applications menu. Our test MP3 plays with no problems!

    Note: If Rhythmbox tells you that MP3 plugins are not installed, close Rhythmbox and reopen it. You should not have to install anything extra through Rhythmbox. Despite this extra step, playing the most common audio and video file formats, including Flash videos on the internet, is simple. All the software comes installed; you just have to teach it how to read your files.


  • URL protocol handlers in basic Ubuntu Desktop

    - by Hibou57
    There was a way to register URL protocol handlers with GConf, which is now obsolete, and there seems to be no way to do the same with DConf (or GSettings, its recommended wrapper). How does one properly register a URL protocol handler now that GConf is gone? Additionally, something looks strange to me (as I don't understand it) on my Ubuntu 12.04. The protocol apt:// should be handled by the apturl command. It is so with my Opera browser, but only because I added this specific association using the browser's configuration facility. Otherwise, in the rest of the environment:

    Running xdg-open apt://foo.bar opens elinks (my www-browser alternative).
    Running gnome-open apt://foo.bar opens the Software Center.
    Opening gconf-editor, I see a key /desktop/gnome/url-handlers/apt whose value is apturl "%s", and it is enabled. This configuration seems to be ignored, which is reasonably expected, as GConf is considered obsolete.
    Opening dconf-editor, I can't see anything related to URL handlers or protocols in /desktop/gnome.

    It looks a bit messy to my eyes (just teasing with this wording, nothing bad). What's underneath? Side note: I'm looking for something which preferably works even when the full desktop environment is not loaded, like when running an i3wm session with only gsettings-daemon (and other stuff unrelated to this case) loaded.

    Update: Another way to "register" a protocol handler is with *.desktop files and their MIME type; e.g. MimeType=application/<the-protocol>;. I found a /usr/share/applications/ubuntu-software-center.desktop with this content:

        [Desktop Entry]
        Name=Ubuntu Software Center
        GenericName=Software Center
        Comment=Lets you choose from thousands of applications available for Ubuntu
        Exec=/usr/bin/software-center %u
        Icon=softwarecenter
        Terminal=false
        Type=Application
        Categories=PackageManager;GTK;System;Settings;
        MimeType=application/x-deb;application/x-debian-package;x-scheme-handler/apt;
        StartupNotify=true
        X-Ubuntu-Gettext-Domain=software-center
        Keywords=Sources;PPA;Install;Uninstall;Remove;Purchase;Catalogue;Store;

    This one explains why gnome-open apt://foo.bar opens the Software Center instead of apturl. So I installed this apturl.desktop in ~/.local/share/applications:

        [Desktop Entry]
        Encoding=UTF-8
        Version=1.0
        Type=Application
        Terminal=false
        Exec=/usr/bin/apturl %u
        Name=APT-URL
        Comment=APT-URL handler
        Icon=
        Categories=Application;Network;
        MimeType=x-scheme-handler/apt;

    After update-desktop-database, and even after rebooting, both xdg-open and gnome-open still do the same and ignore this user desktop file, which is unusual, since it would normally override the one in /usr/share/applications/. Maybe there is something special about desktop files specifying an x-scheme-handler MIME type and they are not handled the usual way. The desktop-file way does not answer the question.


  • Oracle Could Lead In Cloud Business Apps Within Year

    - by Richard Lefebvre
    Below is a reprint of an article written by Pete Barlas, Investor's Business Daily, published on Investors.com: Oracle (ORCL) is all but destined to become the largest seller of cloud business-software applications, analysts say, and perhaps within a year. What that means in the long run is much debated, though, as analysts aren't sure whether pricing competition might cut into profit or what other issues might develop in the fast-emerging cloud software field. But the database leader, which is either No. 1 or 2 to SAP (SAP) in business apps overall, simply has the size and scope to overtake the current cloud business-app leader, Salesforce.com (CRM), analysts say. Oracle rolled out its first full suite of cloud applications on June 6. Cloud computing lets companies store data and apps on the Internet "cloud" and access them quickly and easily. The applications run the gamut from customer relationship management software to social networking sites for employees, partners and customers. For longtime software giants like Oracle, the cloud is a big switch. They get the great bulk of revenue from companies and other enterprises buying or licensing software that the customers keep on their own computer systems. Vendors also get annual maintenance fees. Analysts estimate Oracle is taking in a mere $1 billion or so a year from cloud-based software sales and services now. But while that's just a sliver of the company's $37 billion in sales last year, it's already about a third of the total sales for Salesforce, which is expected to end this year with some $3 billion in revenue.

    Operates In 145 Countries

    Oracle operates in more than 145 countries vs. about 70 for Salesforce. And Oracle has far more apps than Salesforce. Revenue doesn't equate to profit, but it's inevitable that huge Oracle will become the largest seller of cloud applications, says Trip Chowdhry, an analyst for Global Equities Research. "What Oracle has is global presence," he said. "They have two things driving the revenue: breadth of the offering and breadth of the distribution. You put those applications in those sales reps' hands and you get deployments not in just one country but several countries." At the June 6 event, Oracle CEO Larry Ellison emphasized that his company could and would beat Salesforce.com in head-to-head battles for customers. Oracle makes software to help companies manage such tasks as customer relationships, recruiting, supply chains, projects, finances and more. That range gives it an edge over all rivals, says Michael Fauscette, an analyst for research firm IDC.


  • How quickly to leave contract-to-hire gig where you don't want to be hired? [closed]

    - by nono
    So you move to a big new city with tons of software development opportunity, having taken a six month contract-to-hire job. The company treats you really well and has a good team and work environment. However, the recruiter assured you when offering the gig that it would be a good position in which you can advance your learning from more senior developers (a primary concern of yours) but you're starting to realize that a job recruiter isn't going to understand that the team in question isn't very up on modern software practices (you start to sympathize with this guy and read his post over and over again: http://stackoverflow.com/questions/1586166/career-killer-nhibernate-oop-design-patterns-domain-driven-design-test-driv) and that much of the company's software is very old and very very poorly architected, and the company (like so many others) seems to be only concerned with continually extending the software without investing in any structural improvements. You're absolutely dismayed at how long it takes your team (yourself included) to fulfill simple feature requests (maybe 500-1000% longer than with better designed software that you've worked on in the past), but no one else there seems to think anything of it. You find that the work and the company's business are intensely uninteresting to you, but due to the convoluted design of their various software systems, fulfilling the work will require as much mental engagement as any other development position. You feel a bit naive about not having asked the right questions during your interview process, and for not having anticipated that your team at your former podunk company might possibly be light-years ahead of any team in Big Shiny City, but you know you don't want to stay at this place, and (were it not for your personal, after-hours studying and personal programming efforts) fear that you might actually give a worse interview after completing your 6 months than you did when you started at the place. You read about how hard of a time local companies are having filling their positions with qualified software development candidates. You read all sorts of fabulous sounding job postings online and feel like you're really missing out. In spite of the comfortable environment you feel like you would willingly accept a somewhat more demanding or aggressive lifestyle to feel like you are learning and progressing and producing something meaningful. My questions are: how quickly do you leave, and how do you go about giving a polite reason for departing? The contract is written to allow them to "can" you and to allow you to leave with 2 weeks notice. Do you ethically owe the 6 months? Upon taking the position, the company told you they were not interested in candidates who were intending to only stay for 6 months and then leave (you were not intending to bail after 6 months, at that time), so perhaps they might be fine if you split now, knowing that you don't want to stick around for the full-time hire?


  • Working for Web using open-source Technologies

    - by anirudha
    As Web developers we all have our own dream of making a great web application. A great application is built upon high discipline and best practices in the development process, so that we can make modifications more easily in the future if we want. User feedback also matters, because users tell us what they want or expect from the application we make day and night. Sometimes they report a nice story, an experience, or a problem they encountered with our application. That matters, because they are telling us about our application precisely because they use our software, and they are part of the process of future development for the next version of the application we make.

    The Web has the advantage that applications can be updated as soon as possible. With a desktop application there are a number of troubles clients run into when they want to use our application. First, the installation of software never goes right on every system; big companies spend large amounts of money troubleshooting the problems users have with their software. A web application is a nicer way to deliver an application because there is no trouble with installation: everyone has the same experience, and if something goes wrong a patch comes soon, with no waiting for a new version. Chrome is a desktop application [a browser], but it updates itself automatically, so there is no trouble for the user in getting the next version, no hassles.

    Web application development the Microsoft way has its own rules, patterns, and practices for making better applications in less time. The technologies I want to show you here are some great open-source examples: MySQL, jQuery, and ASP.NET MVC, a framework based on the ASP.NET server-side language.

    Before going to the next step, here is a list of software you need in order to fully experience this tutorial:

    Visual Web Developer 2010 Express Edition
    MySQL [an open-source RDBMS]
    jQuery [an open-source JavaScript library]

    You need to pay nothing for these. Visual Web Developer can be obtained from Microsoft.com/Express, or if you are a student or Web developer you are eligible to get Visual Studio Professional and many other great software packages from Microsoft through their DreamSpark or WebsiteSpark programmes. MySQL is a great relational database management system, freely available from MySQL.com. As a database monitoring tool you can use MySQL Workbench, which can be obtained free from the official MySQL website, or many other free tools are available for beginning development with MySQL. jQuery is a great library that makes JavaScript development easier and faster; you can obtain jQuery from jQuery.com, its official website.


  • Adding more RAM at different speeds? Will it hurt performance enough to make it worse than not adding it?

    - by user1676874
    I got a new laptop with 4 GB of RAM, expandable to 8 GB. It has one 4 GB DDR3 PC3-12800 stick at 1600 MHz. I can't seem to find another one exactly the same locally; the closest I've found is one 4 GB DDR3 PC3-10600 stick at 1333 MHz. So my question is: I know they will both run at the slower speed, so even though I'd have more available RAM, it would run slower. Is the performance loss big enough to make the upgrade not worth the hassle?


  • We are moving to two servers from one server due to performance issues. How do we monitor the value of this change?

    - by MikeN
    We are moving to two servers from one server due to performance issues. We are moving our MySQL DB to its own dedicated server and will keep the original machine as the front-end machine running nginx/Apache (Django backend). How do we monitor the value of this change? Is it possible our whole site could actually get slower, since the MySQL queries will be going out over a secure remote connection instead of locally?
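
    One stack-agnostic way to put a number on the change is to time a representative query from the front-end machine against the database before and after the move, so local-socket latency and the secure remote link are compared with real measurements. The site above is Django/MySQL, but the idea carries to any language; here is a hedged sketch in Java/JDBC (hostname, credentials, and query are placeholders, and the MySQL Connector/J driver is assumed to be on the classpath):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class QueryLatencyProbe {
            public static void main(String[] args) throws Exception {
                // Hypothetical DSN; point it first at the local DB, then at the
                // new dedicated server, and compare the averages.
                String url = "jdbc:mysql://db.example.com:3306/mysite?useSSL=true";
                try (Connection conn = DriverManager.getConnection(url, "app", "secret")) {
                    String sql = "SELECT 1";   // swap in a representative query
                    int runs = 100;
                    long total = 0;
                    try (Statement st = conn.createStatement()) {
                        for (int i = 0; i < runs; i++) {
                            long t0 = System.nanoTime();
                            try (ResultSet rs = st.executeQuery(sql)) {
                                while (rs.next()) { /* drain the result */ }
                            }
                            total += System.nanoTime() - t0;
                        }
                    }
                    System.out.printf("avg round trip: %.2f ms%n",
                                      total / runs / 1_000_000.0);
                }
            }
        }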


  • calculate AUC (GAM) in R [migrated]

    - by ahmad
    I used the following script to calculate AUC in R:

        library(mgcv)
        library(ROCR)
        library(AUC)
        data1 = read.table("d:\\2005.txt", header=T)
        GAM <- gam(tuna ~ s(chla)+s(sst)+s(ssha), family=binomial, data=data1)
        gampred <- predict(GAM, type="response")
        rp <- prediction(gampred, data1$tuna)
        auc <- performance(rp, "auc")@y.values[[1]]
        auc
        roc <- performance(rp, "tpr", "fpr")
        plot(roc)

    But when I run the script, the result is:

        rp <- prediction(gampred, data1$tuna)
        Error in prediction(gampred, data1$tuna) : Format of predictions is invalid.
        > auc <- performance(rp, "auc")@y.values[[1]]
        Error in performance(rp, "auc") : object 'rp' not found
        > auc
        [R prints the source of the AUC package's own auc(x, min = 0, max = 1) function here,
        ending with <bytecode: 0x03012f10> <environment: namespace:AUC>]
        > roc <- performance(rp, "tpr", "fpr")
        Error in performance(rp, "tpr", "fpr") : object 'rp' not found
        > plot(roc)
        Error in levels(labels) : argument "labels" is missing, with no default

    Can anybody help me to solve this problem? Thank you in advance.


  • Change Comes from Within

    - by John K. Hines
    I am in the midst of witnessing a variety of teams moving away from Scrum. Some of them are doing things like replacing Scrum terms with more commonly understood terminology. Mainly they have gone back to using industry standard terms and more traditional processes like the RAPID decision making process. For example: Scrum Master becomes Project Lead. Scrum Team becomes Project Team. Product Owner becomes Stakeholders. I'm actually quite sad to see this happening, but I understand that Scrum is a radical change for most organizations. Teams are slowly but surely moving away from Scrum to a process that non-software engineers can understand and follow. Some could never secure the education or personnel (like a Product Owner) to get the whole team engaged. And many people with decision-making authority do not see the value in Scrum besides task planning and tracking. You see, Scrum cannot be mandated. No one can force a team to be Agile, collaborate, continuously improve, and self-reflect. Agile adoptions must start from a position of mutual trust and willingness to change. And most software teams aren't like that. Here is my personal epiphany from over a year of attempting to promote Agile on a small development team: The desire to embrace Agile methodologies must come from each and every member of the team. If this desire does not exist - if the team is satisfied with its current process, if the team is not motivated to improve, or if the team is afraid of change - the actual demonstration of all the benefits prescribed by Agile and Scrum will take years. I've read some blog posts lately that criticise Scrum for demanding "Big Change Up Front." One's opinion of software methodologies boils down to one's perspective. If you see modern software development as successful, you will advocate for small, incremental changes to how it is done. If you see it as broken, you'll be much more motivated to take risks and try something different. So my question to you is this - is modern software development healthy or in need of dramatic improvement? I can tell you from personal experience that any project that requires exploration, planning, development, stabilisation, and deployment is hard. Trying to make that process better with only a slightly modified approach is a mistake. You will become completely dependent upon the skillset of your team (the only variable you can change). But the difficulty of planned work isn't one of skill. It isn't until you solve the fundamental challenges of communication, collaboration, quality, and efficiency that skill even comes into play. So I advocate for Big Change Up Front. And I advocate for it to happen often until those involved can say, from experience, that it is no longer needed. I hope every engineer has the opportunity to see the benefits of Agile and Scrum on a highly functional team. I'll close with more key learnings that can help with a Scrum adoption: Your leaders must understand Scrum. They must understand software development, its inherent difficulties, and how Scrum helps. If you attempt to adopt Scrum before the understanding is there, your leaders will apply traditional solutions to your problems - often creating more problems. Success should be measured by quality, not revenue. Namely, the value of software to an organization is the revenue it generates minus ongoing support costs. You should identify quality-based metrics that show the effect Agile techniques have on your software. Motivation is everything. 
    I finally understand why so many Agile advocates say that if you are not on a team using Agile, you should leave and find one. Scrum and especially Agile encompass many elegant solutions to a wide variety of problems. If you are working on a team that has not encountered these problems, then the team may never see the value in the solutions. Having said all that, I'm not giving up on Agile or Scrum. I am convinced it is a better approach for software development. But reality is saying that its adoption is not straightforward and is highly subject to disruption. Unless, that is, everyone really, really wants it.


  • The Enterprise is a Curmudgeon

    - by John K. Hines
    Working in an enterprise environment is a unique challenge.  There's a lot more to software development than developing software.  A project lead or Scrum Master has to manage personalities and intra-team politics, has to manage accomplishing the task at hand while creating the opportunities and a reputation for handling desirable future work, has to create a competent, happy team that actually delivers while being careful not to burn bridges or hurt feelings outside the team.  Which makes me feel surprised to read advice like: " The enterprise should figure out what is likely to work best for itself and try to use it." - Ken Schwaber, The Enterprise and Scrum. The enterprises I have experience with are fundamentally unable to be self-reflective.  It's like asking a Roman gladiator if he'd like to carve out a little space in the arena for some silent meditation.  I'm currently wondering how compatible Scrum is with the top-down hierarchy of life in a large organization.  Specifically, manufacturing-mindset, fixed-release, harmony-valuing large organizations.  Now I understand why Agile can be a better fit for companies without much organizational inertia. Recently I've talked with nearly two dozen software professionals and their managers about Scrum and Agile.  I've become convinced that a developer, team, organization, or enterprise can be Agile without using Scrum.  But I'm not sure about what process would be the best fit, in general, for an enterprise that wants to become Agile.  It's possible I should read more than just the introduction to Ken's book. I do feel prepared to answer some of the questions I had asked in a previous post: How can Agile practices (including but not limited to Scrum) be adopted in situations where the highest-placed managers in a company demand software within extremely aggressive deadlines? Answer: In a very limited capacity at the individual level.  The situation here is that the senior management of this company values any software release more than it values developer well-being, end-user experience, or software quality.  Only if the developing organization is given an immediate refactoring opportunity does this sort of development make sense to a person who values sustainable software.   How can Agile practices be adopted by teams that do not perform a continuous cycle of new development, such as those whose sole purpose is to reproduce and debug customer issues? Answer: It depends.  For Scrum in particular, I don't believe Scrum is meant to manage unpredictable work.  While you can easily adopt XP practices for bug fixing, the project-management aspects of Scrum require some predictability.  My question here was meant toward those who want to apply Scrum to non-development teams.  In some cases it works, in others it does not. How can a team measure if its development efforts are both Agile and employ sound engineering practices? Answer: I'm currently leaning toward measuring these independently.  The Agile Principles are a terrific way to measure if a software team is agile.  Sound engineering practices are those practices which help developers meet the principles.  I think Scrum is being mistakenly applied as an engineering practice when it is essentially a project management practice.  In my opinion, XP and Lean are examples of good engineering practices. How can Agile be explained in an accurate way that describes its benefits to sceptical developers and/or revenue-focused non-developers? 
Answer: Agile techniques will result in higher-quality, lower-cost software development.  This comes primarily from finding defects earlier in the development cycle.  If there are individual developers who do not want to collaborate, write unit tests, or refactor, then these are simply developers who are either working in an area where adding these techniques will not add value (i.e. they are an expert) or they are a developer who is satisfied with the status quo.  In the first case they should be left alone.  In the second case, the results of Agile should be demonstrated by other developers who are willing to receive recognition for their efforts.  It all comes down to individuals, doesn't it?  If you're working in an organization whose Agile adoption consists exclusively of Scrum, consider ways to form individual Agile teams to demonstrate its benefits.  These can even be virtual teams that span people across org-chart boundaries.  Once you can measure real value, whether it's Scrum, Lean, or something else, people will follow.  Even the curmudgeons.


  • SQL Server 2005: Improving performance for thousands of Insert requests (logout-login time = 120ms)

    - by Rad
    Can somebody shed some light on how SQL Server 2005 deals with many requests issued by a client using ADO.NET 2.0? Below is the shortened output of a SQL Trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout for each iteration of the for loop below. I can see that there is constant switching between the tempdb and master databases, which leads me to conclude that we lose the context when the next connection is created by fetching it from the pool based on the ConnectionString argument. I can see that every 15ms I get 100-200 login/logouts per second (reported at the same time by Profiler), and then after 15ms I again have a series of 100-200 login/logouts per second. I need clarification on how this might affect much more complex insert queries in a production environment. I use Enterprise Library 2006; the code is compiled with VS 2005, and it is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows. It runs on an application server and runs two stored procedures on a remote SQL Server 2005: it inserts a parent record, retrieves the Identity value, and using it calls the second stored procedure 1, 2, or multiple times (sometimes several thousand), inserting child records. The child table has close to 10 million records with 5-10 indexes, some of them being covering non-clustered. There is a pretty complex Insert trigger that copies each inserted detail record to an archive table. All in all I only get 7 inserts per second, which means it can take 2-4 hours for 50 thousand records. When I run Profiler on the test server (which is almost equivalent to the production server) I can see that there is about 120ms between the Audit Logout and Audit Login trace entries, which almost gives me the chance to insert about 8 records. So my question is whether there is some way to improve the inserting of records, since the company loads 100 thousand records and does daily planning, and has an SLA to fulfill client requests coming as flat file orders, and some big files of 10 thousand rows have to be processed (imported) quickly. 4 hours to import 60 thousand records should be reduced to 30 minutes. I was thinking of using the BatchSize of DataAdapter to send multiple stored procedure calls, SQL bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze re-indexing and statistics population, and maybe this has to take some time to finish. What is worse is that the company uses the biggest table for reporting and other online processing, and the indexes cannot be dropped. I manage transactions manually by setting a field to a value and doing a transactional update, changing that value to a new value that other applications use to get committed rows. Please advise how to approach this problem. For now I am trying to have staging tables with minimal logging in a separate database and no indexes, and I will try to do batched (massive) parent-child inserts. I believe the production DB has the simple recovery model, but it could be full recovery. If the DB user that is being used by my .NET console application has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered and many non-clustered indexes, inserts are still logged for each row. Connection pooling is working, but with many login/logouts. Why?
        for (int i = 1; i <= 10000; i++)
        {
            using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;"))
            {
                conn.Open();
                using (SqlCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "use tempdb";
                    cmd.ExecuteNonQuery();
                }
            }
        }

    SQL Server Profiler trace:

        Audit Login                            master  2010-01-13 23:18:45.337  1 - Nonpooled
        SQL:BatchStarting  use tempdb          master  2010-01-13 23:18:45.337
        RPC:Starting       exec sp_reset_conn  tempdb  2010-01-13 23:18:45.337
        Audit Logout                           tempdb  2010-01-13 23:18:45.337  2 - Pooled
        Audit Login        -- network protocol master  2010-01-13 23:18:45.383  2 - Pooled
        SQL:BatchStarting  use tempdb          master  2010-01-13 23:18:45.383
        RPC:Starting       exec sp_reset_conn  tempdb  2010-01-13 23:18:45.383
        Audit Logout                           tempdb  2010-01-13 23:18:45.383  2 - Pooled
        Audit Login        -- network protocol master  2010-01-13 23:18:45.383  2 - Pooled
        SQL:BatchStarting  use tempdb          master  2010-01-13 23:18:45.383
        RPC:Starting       exec sp_reset_conn  tempdb  2010-01-13 23:18:45.383
        Audit Logout                           tempdb  2010-01-13 23:18:45.383  2 - Pooled
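
    The code above is ADO.NET, but the batching remedy being considered translates directly to other stacks. Here is a hedged sketch of the same idea in Java/JDBC (table, columns, and connection details are invented, and the Microsoft JDBC driver is assumed to be on the classpath): hold one connection for the whole load instead of re-pooling per row, and send child inserts in batches so each round trip carries many rows:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class BatchedChildInserts {
            public static void main(String[] args) throws Exception {
                // One connection for the whole load: no per-row login/logout,
                // no sp_reset_connection churn from a tight open/close loop.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:sqlserver://dbhost;databaseName=Orders", "app", "secret")) {
                    conn.setAutoCommit(false);   // commit per batch, not per row
                    String sql = "INSERT INTO OrderDetail (OrderId, Sku, Qty) VALUES (?, ?, ?)";
                    try (PreparedStatement ps = conn.prepareStatement(sql)) {
                        for (int i = 0; i < 50_000; i++) {
                            ps.setInt(1, 42);            // parent id, illustrative
                            ps.setString(2, "SKU" + i);
                            ps.setInt(3, 1);
                            ps.addBatch();
                            if ((i + 1) % 1_000 == 0) {  // flush in chunks
                                ps.executeBatch();
                                conn.commit();
                            }
                        }
                        ps.executeBatch();               // flush the remainder
                        conn.commit();
                    }
                }
            }
        }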


  • How to export SSIS to Microsoft Excel without additional software?

    - by Dr. Zim
    This question is long-winded because I have been updating it over a very long time trying to get SSIS to properly export Excel data. I managed to solve this issue, although not correctly. Aside from someone providing a correct answer, the solution listed in this question is not terrible.

    The only answer I found was to create a single-row named range wide enough for my columns. In the named range put sample data and hide it. SSIS appends the data and reads metadata from the single row (that is close enough for it to drop stuff in it). The data takes the format of the hidden single row. This allows headers, etc. WOW, what a pain in the butt. It will take over 450 days of exports to recover the time lost. However, I still love SSIS and will continue to use it because it is still way better than FileMaker, LOL. My next attempt will be doing the same thing in the report server.

    Original question notes: If you are in the SQL Server Integration Services designer and want to export data to an Excel file starting on something other than the first line, let's say the fourth line, how do you specify this? I tried going into the Excel Destination of the Data Flow, changed the AccessMode to OpenRowSet from Variable, then set the variable to "YPlatters$A4:I20000". This fails, saying it cannot find the sheet. The sheet is called YPlatters. I thought you could specify (Sheet$)(Starting Cell):(Ending Cell)?

    Update: Apparently in Excel you can select a set of cells and name them with the name box. This allows you to select the name instead of the sheet without the $ dollar sign. Oddly enough, whatever range you specify, it appends the data to the next row after the range. Oddly, as you add data, it increases the named selection's row count. Another odd thing is the data takes the format of the last line of the range specified. My header rows are bold. If I specify a range that ends with the header row, the data appends to the row below and makes all the entries bold. If you specify one row lower, it puts a blank line between the header row and the data, but the data is not bold.

    Another update: No matter what I try, SSIS samples the "first row" of the file and sets the metadata according to what it finds. However, if you have sample data that has a value of zero but is formatted as the first row, it treats that column as text and inserts numeric values with a single quote in front ('123.34). I also tried headers that do not reflect the data types of the columns. I tried changing the metadata of the Excel destination, but it always changes it back when I run the project, then fails saying it will truncate data. If I tell it to ignore errors, it imports everything except that column. Several days of several hours apiece later...

    Another update: I tried every combination. A mostly working example is to create the named range starting with the column headers. Format your column headers as you want the data to look, as the data takes on this format. In my example, these exist from A4 to E4, which is my defined range. SSIS appends to the row after the defined range, so defining A4 to E68 appends the rows starting at A69. You define the connection as having the first row contain the field names. It takes on the metadata of the header row (oddly, not the second row), and it guesses at the data type, not the formatted data type of the column; i.e., headers are text, so all my metadata is text. If your headers are bold, so is all of your data. I even tried making a sample data row, without success. I don't think anyone actually uses Excel with the default MS SSIS export. If you could define the "insert range" (A5 to E5) with no header row and format those columns (currency, not bold, etc.) without it skipping a row in Excel, this would be very helpful. From what I gather, no one uses SSIS to export Excel without a third-party connection manager. Any ideas on how to set this up properly so that the data is formatted correctly, i.e., the metadata read from Excel matches the real data, and formatting inherits from the first row of data, not the headers in Excel?

    One last update (July 17, 2009): I got this to work very well. One thing I added to Excel was IMEX=1 in the Excel connection string: "Excel 8.0;HDR=Yes;IMEX=1". This forces Excel (I think) to look at all rows to see what kind of data is in them. Generally, this does not drop information; say for instance you have a zip code, then about 9 rows down you have a zip+4: Excel without this blanks that field entirely without error. With IMEX=1, it recognizes that Zip is actually a character field instead of numeric.

    And of course, one more update (August 27, 2009): IMEX=1 will succeed importing data with missing contents in the first 8 rows, but it will fail exporting data where no data exists. So, have it on your import connection string, but not your export Excel connection string. I have to say, after so much fiddling, it works pretty well.
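
    If a dependency outside SSIS is acceptable, a library that writes native Excel files sidesteps the Jet/ACE row-sampling entirely, because cell types and styles are set explicitly instead of guessed. A minimal hedged sketch with Apache POI, a recent (3.15+) version assumed; the sheet name and cell positions echo the question, and the column is invented:

        import java.io.FileOutputStream;
        import org.apache.poi.hssf.usermodel.HSSFWorkbook;
        import org.apache.poi.ss.usermodel.*;

        public class YPlattersExport {
            public static void main(String[] args) throws Exception {
                try (Workbook wb = new HSSFWorkbook();
                     FileOutputStream out = new FileOutputStream("YPlatters.xls")) {
                    Sheet sheet = wb.createSheet("YPlatters");

                    // Bold style stays on the header row only.
                    CellStyle header = wb.createCellStyle();
                    Font bold = wb.createFont();
                    bold.setBold(true);
                    header.setFont(bold);

                    // Currency format applied per data cell, not guessed per column.
                    CellStyle money = wb.createCellStyle();
                    money.setDataFormat(wb.createDataFormat().getFormat("$#,##0.00"));

                    Row h = sheet.createRow(3);         // row 4, as in A4
                    Cell c = h.createCell(0);
                    c.setCellValue("Amount");
                    c.setCellStyle(header);

                    Row r = sheet.createRow(4);         // data starts at A5, no skipped row
                    Cell v = r.createCell(0);
                    v.setCellValue(123.34);             // numeric cell, no leading quote
                    v.setCellStyle(money);

                    wb.write(out);
                }
            }
        }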


  • How do the performance characteristics of jQuery selectors differ from those of CSS selectors?

    - by Moss
    I came across Google's Page Speed add-on for Firebug yesterday. The page about using efficient CSS selectors says not to use overqualified selectors, i.e. use #foo instead of div#foo. I thought the latter would be faster, but Google is saying otherwise, and who am I to go against that? So that got me wondering whether the same applies to jQuery selectors. This page I found the link to on SO says I should use $("div#foo"), which is what I was doing all along, since I thought that things would speed up by limiting the selector to match div elements only. But is it really better than writing $("#foo"), as Google says for CSS selectors, or do CSS and jQuery element matching work in different ways, so that I should stick with $("div#foo")?


  • What are the most popular RSS readers? (software/web apps)

    - by Daniel Magliola
    I'm creating an application that generates RSS feeds that include an <enclosure> element (for showing an audio player). Since RSS readers are kinda flaky, in my experience, I'd like to test how my feeds look in as many readers as possible. I only use Google Reader. What other RSS readers (websites or installable apps, for Windows, Linux AND Mac) are popular? Which are the ones I must test on? Thanks! Daniel


  • VTD-XML Parsing Performance (speed critical factor). Requesting Feedback/Comments

    - by andreas
    Hello, I am about to use VTD-XML (found at http://vtd-xml.sourceforge.net/), but I am interested in getting real-case usage feedback from anyone who has used the library and has any comments. At that URL there are benchmarks, but if someone has used VTD-XML and has comments for it, I would like to hear them. Speed is a critical factor in the application, and comments from developers after real-case usage are what I am looking for. Regards,
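
    For anyone else evaluating the library, the core API is compact. A minimal hedged sketch of a parse-and-XPath pass (the file name, XPath, and document shape are invented for illustration):

        import com.ximpleware.AutoPilot;
        import com.ximpleware.VTDGen;
        import com.ximpleware.VTDNav;

        public class VtdSketch {
            public static void main(String[] args) throws Exception {
                VTDGen gen = new VTDGen();
                // 'true' enables namespace awareness; parseFile returns false
                // instead of throwing on failure.
                if (!gen.parseFile("orders.xml", true)) {
                    System.err.println("parse failed");
                    return;
                }
                VTDNav nav = gen.getNav();
                AutoPilot ap = new AutoPilot(nav);
                ap.selectXPath("/orders/order");
                int i;
                while ((i = ap.evalXPath()) != -1) {
                    // VTD records token offsets rather than building DOM nodes;
                    // strings are materialized only on demand, which is where
                    // much of its speed advantage comes from.
                    int t = nav.getText();           // text token index, -1 if none
                    if (t != -1) System.out.println(nav.toNormalizedString(t));
                }
            }
        }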


  • Why does my performance increase when touching the screen?

    - by Smills
    For some reason my FPS jumps up considerably when I move my mouse around on the screen (on the emulator) while holding the left mouse button. Normally my game is very laggy, but if I touch the screen (and as long as I am moving the mouse around while touching) it goes perfectly smooth. I have tried sleeping for 20ms in the onTouchEvent, but it doesn't appear to make any difference. Here is the code I use in my onTouchEvent:

        // events when touching the screen
        public boolean onTouchEvent(MotionEvent event) {
            int eventaction = event.getAction();
            touchX = event.getX();
            touchY = event.getY();
            switch (eventaction) {
                case MotionEvent.ACTION_DOWN: {
                    touch = true;
                } break;
                case MotionEvent.ACTION_MOVE: {
                } break;
                case MotionEvent.ACTION_UP: {
                    touch = false;
                } break;
            }
            /*try {
                AscentThread.sleep(20);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }*/
            return true;
        }

    In the logcat log, FPS is the current FPS (average of the last 20 frames); touch is whether or not the screen is being touched (from onTouchEvent). What on earth is going on? Has anyone else had this odd behaviour before? Logcat log:

        12-21 19:43:26.154: INFO/myActivity(786): FPS: 31.686569159606414 Touch: false
        12-21 19:43:27.624: INFO/myActivity(786): FPS: 19.46310293212206 Touch: false
        [... roughly thirty more lines in the 15-28 FPS range, all with Touch: false ...]
        12-21 19:44:13.594: INFO/myActivity(786): FPS: 18.71932038510891 Touch: false
        12-21 19:44:14.504: INFO/myActivity(786): FPS: 40.94571503868066 Touch: true
        12-21 19:44:14.924: INFO/myActivity(786): FPS: 57.061200121138576 Touch: true
        12-21 19:44:15.364: INFO/myActivity(786): FPS: 62.54377946377936 Touch: true
        12-21 19:44:15.764: INFO/myActivity(786): FPS: 64.05005071818726 Touch: true
        12-21 19:44:16.384: INFO/myActivity(786): FPS: 50.912951172948155 Touch: true
        12-21 19:44:16.874: INFO/myActivity(786): FPS: 55.31242053078078 Touch: true
        12-21 19:44:17.364: INFO/myActivity(786): FPS: 59.31625410615102 Touch: true
        12-21 19:44:18.413: INFO/myActivity(786): FPS: 36.63504170925923 Touch: false
        12-21 19:44:19.885: INFO/myActivity(786): FPS: 18.099130467755923 Touch: false
        12-21 19:44:21.363: INFO/myActivity(786): FPS: 18.458978222946566 Touch: false
        12-21 19:44:22.683: INFO/myActivity(786): FPS: 25.582179409330823 Touch: true
        12-21 19:44:23.044: INFO/myActivity(786): FPS: 60.99865521942455 Touch: true
        [... a dozen more lines between 60 and 74 FPS, all with Touch: true ...]
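
    Not a diagnosis of the emulator behaviour above, but one way to take input activity out of the FPS equation is a frame-limited game loop that sleeps away whatever is left of a fixed frame budget, so the loop runs at the same cadence whether or not touch events are streaming in. A hedged sketch of such a loop for a SurfaceView-based game thread; the update() and draw() bodies stand in for the game's own logic:

        // Sketch of a frame-limited loop for a SurfaceView game thread.
        class GameLoop extends Thread {
            private static final long FRAME_MS = 33;   // ~30 FPS budget
            private final android.view.SurfaceHolder holder;
            private volatile boolean running = true;

            GameLoop(android.view.SurfaceHolder holder) { this.holder = holder; }

            @Override
            public void run() {
                while (running) {
                    long start = System.currentTimeMillis();
                    android.graphics.Canvas canvas = holder.lockCanvas();
                    try {
                        update();                       // game logic tick
                        if (canvas != null) draw(canvas);
                    } finally {
                        if (canvas != null) holder.unlockCanvasAndPost(canvas);
                    }
                    // Sleep away the remainder of the frame budget so the
                    // cadence is the same with or without input activity.
                    long left = FRAME_MS - (System.currentTimeMillis() - start);
                    if (left > 0) {
                        try { Thread.sleep(left); } catch (InterruptedException e) { return; }
                    }
                }
            }

            private void update() { /* assumed: advance game state */ }
            private void draw(android.graphics.Canvas c) { /* assumed: render frame */ }
        }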


  • Is there CI server software that can do all of this?

    - by SnOrfus
    I'm trying to put together a Continuous Integration server that will do the following:

    1. Work with Subversion
    2. Use NUnit tests (fail the build on failed tests)
    3. Use PartCover (fail the build on < X% coverage)
    4. Run code against FxCop (fail the build on FxCop warnings, given settings)
    5. Run code against StyleCop (fail the build on StyleCop warnings, given settings)

    Not as important:

    Be able to run from a sln file
    Be able to publish the application (ClickOnce is set up for the project already)

    I'm using TeamCity right now and it doesn't seem to do 3 or 5, and it doesn't have a runner for the newest NUnit. From the list of plugins that Hudson has, it looks like it can do all of these except 3 (and the not-as-important requests). I've considered writing a plugin for Hudson to use PartCover, but that's adding more time to setting up a build server.


  • Error installing Sony Remote Play

    - by Iszi Rory or Isznti
    I'm trying to install Remote Play software to connect my laptop to my PS3. I've found a guide with instructions which seems to be in fairly wide use (I found similar walk-throughs on numerous other sites) for running the software on a non-Vaio PC: Tech-Recipes: Playstation 3 – Use Remote Play on any Windows 7 PC. The setup essentially goes like this:

    1. Download the Remote Play software.
    2. Download the patch by NTAuthority.
    3. Install Remote Play as normal.
    4. Reboot.
    5. Extract the NTAuthority patch to the Remote Play program folder.
    6. Manually register the patched DLLs via CLI.
    7. Run the Remote Play software.

    Sadly, my problem comes early, in step 3. I had to use Google to find the software download, as the link from Tech-Recipes seems broken. I found the download on Sony's site here: Sony eSupport: Remote Play with PlayStation®3. After downloading and running the software, I hit "Next" at the welcome screen and "I Agree" at the EULA screen. After this, a popup informs me that Setup is checking my computer's information. Then Setup terminates with an error (the error screenshot is not preserved here). I'm running Windows 7 Ultimate x64. Is anyone familiar with this error in this software? Is there a way to work around it? Did I perhaps pick the wrong download from Sony's site?


  • Silverlight performance with a large number of objects in an org chart

    - by KC
    Hello, I'm working on an org chart project (Silverlight 3) and I'm seeing the UI thread hang when the chart is building around 2,000 nodes; when it renders, it takes about a minute, and then the FPS drops to a crawl. Here is the code flow. Page.xaml.cs calls a WCF service that returns a list of AD users. Then we use LINQ to build a collection of nodes to bind to the OrgChart.cs. OrgChart.cs is a canvas that displays a collection of nodes and connecting lines. Node.cs is a canvas that holds user data and can contain child nodes. NodeContent.xaml is a user control that has borders so I can set the background, textblocks to display the user's data, events that handle the selected and expanded nodes, and storyboards that resize the nodes when they are selected or expanded. I noticed during hours of debugging that the InitializeComponent() call, where it loads the XAML, seems to be where the performance hit is happening:

        System.Windows.Application.LoadComponent(this, new System.Uri("/Silverlight.Custom;component/NodeContent.xaml", System.UriKind.Relative));

    So I guess I have two questions. Can threading help with the UI thread hanging while drawing the nodes? How can I avoid the hit when calling this user control? Any advice or direction anyone can lend would be greatly appreciated. Thanks, KC


  • Linq Query Performance: comparing compiled vs. non-compiled queries

    - by AG.
    Hello guys, I was wondering: if I extract the common where clause into a shared expression, would it make my queries much faster? Say I have something like 10 LINQ queries on a collection with the exact same first part of the where clause. I have put together a small example to explain a bit more:

        public class Person {
            public string First { get; set; }
            public string Last { get; set; }
            public int Age { get; set; }
            public String Born { get; set; }
            public string Living { get; set; }
        }

        public sealed class PersonDetails : List<Person> { }

        PersonDetails d = new PersonDetails();
        d.Add(new Person() { Age = 29, Born = "Timbuk Tu", First = "Joe", Last = "Bloggs", Living = "London" });
        d.Add(new Person() { Age = 29, Born = "Timbuk Tu", First = "Foo", Last = "Bar", Living = "NewYork" });

        Expression<Func<Person, bool>> exp = (a) => a.Age == 29;
        Func<Person, bool> commonQuery = exp.Compile();

        var lx = from y in d
                 where commonQuery.Invoke(y) && y.Living == "London"
                 select y;
        var bx = from y in d
                 where y.Age == 29 && y.Living == "NewYork"
                 select y;

        Console.WriteLine("All Details {0}, {1}, {2}, {3}, {4}", lx.Single().Age, lx.Single().First, lx.Single().Last, lx.Single().Living, lx.Single().Born);
        Console.WriteLine("All Details {0}, {1}, {2}, {3}, {4}", bx.Single().Age, bx.Single().First, bx.Single().Last, bx.Single().Living, bx.Single().Born);

    So can some of the gurus here give me some advice on whether it would be a good practice to write queries like lx or like bx? Any input would be highly appreciated. Thanks, AG
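
    The question is C#, but the "extract the common predicate" shape is easy to see in Java terms with java.util.function.Predicate. This is a sketch of the pattern only (records need Java 16+); Java streams have no analogue of LINQ's compiled expression trees, so it says nothing about the Compile() cost being asked about:

        import java.util.List;
        import java.util.function.Predicate;
        import java.util.stream.Collectors;

        public class PredicateReuse {
            record Person(String first, String last, int age, String born, String living) {}

            public static void main(String[] args) {
                List<Person> d = List.of(
                    new Person("Joe", "Bloggs", 29, "Timbuk Tu", "London"),
                    new Person("Foo", "Bar", 29, "Timbuk Tu", "NewYork"));

                // The shared first half of every where clause, built once.
                Predicate<Person> is29 = p -> p.age() == 29;

                List<Person> lx = d.stream()
                                   .filter(is29.and(p -> p.living().equals("London")))
                                   .collect(Collectors.toList());
                List<Person> bx = d.stream()
                                   .filter(is29.and(p -> p.living().equals("NewYork")))
                                   .collect(Collectors.toList());

                System.out.println(lx.get(0).first() + " / " + bx.get(0).first());
            }
        }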

