Search Results

Search found 5438 results on 218 pages for 'ide hdd'.


  • Guidance: How to lay out your files for an Ideal Solution

    - by Martin Hinshelwood
    Creating a solution and keeping it maintainable over time is an art, not a science. I like being pedantic and having a place for everything, no matter how small. For setting up the Areas to run multiple projects under one solution, see my post on When should I use Areas in TFS instead of Team Projects, and for an explanation of branching see Guidance: A Branching strategy for Scrum Teams. Update 17th May 2010 – We are currently trialling running a single Sprint branch to improve our history.

    Whenever I set up a new Team Project I implement the basic version control structure. I put “readme.txt” files in the folder structure explaining the different levels, and a solution file called “[Client].[Product].sln” located at “$/[Client]/[Product]/DEV/Main” within version control. Developers should add any projects they need to create to that solution, in the format “[Client].[Product].[ProductArea].[Assembly]”, and they will automatically be picked up and built when you set up automated builds using Team Foundation Build. All test projects need to be written using MSTest to get proper IDE and Team Foundation Build integration out-of-the-box, and should be named for the assembly they are testing, with a naming convention of “[Client].[Product].[ProductArea].[Assembly].Tests”.
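    To make that naming convention concrete, here is a minimal sketch of what such a test project might contain. The names (Contoso, CodeAuditor, Rules, Engine) are hypothetical, chosen purely to illustrate the pattern; the MSTest attributes are the standard ones that Team Foundation Build picks up.

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical names for illustration only: client "Contoso", product
    // "CodeAuditor", product area "Rules", assembly under test "Engine".
    // The test project mirrors the production assembly with a ".Tests" suffix.
    namespace Contoso.CodeAuditor.Rules.Engine.Tests
    {
        [TestClass]
        public class RuleParserTests
        {
            [TestMethod]
            public void Parse_EmptyInput_ReturnsNoRules()
            {
                // Placeholder assertion; real tests would exercise the
                // Contoso.CodeAuditor.Rules.Engine assembly.
                Assert.IsTrue(true);
            }
        }
    }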
    Here is a description of the folder layout; this content should be replicated in readme files under version control in the relevant locations, so that even developers new to the project can see how to do it.

    Figure: The Team Project level - at this level there should be a folder for each of the products that you are building, if you are using Areas correctly in TFS 2010. You should try very hard to avoid spaces, as these things always end up in a URL eventually, e.g. "Code Auditor" should be "CodeAuditor".

    Figure: Product level - at this level there should be only 3 folders (DEV, RELEASE and SAFE), all of which should be in capitals. These folders represent the three stages of your application production line. Each of them may contain multiple branches, but this format leaves all of your branches at the same level.

    Figure: The DEV folder is where all of the development branches reside. The DEV folder will contain the "Main" branch and all feature branches, if they are being used. The DEV designation specifies that all code in every branch under this folder has not been released or made ready for release. Feature branches MUST merge (Forward Integrate) from Main and stabilise prior to merging (Reverse Integrate) back down into Main and being decommissioned.

    Figure: In the feature branching scenario, only merges are allowed onto Main; no development can be done there. Once we have a mature product it is important that new features being developed in parallel are kept separate. This would most likely be used if we had more than one Scrum team working on a single product.

    Figure: When we are ready to do a release of our software we will create a release branch that is then stabilised prior to deployment. This protects the serviceability of our released code, allowing developers to fix bugs and re-release an existing version.

    Figure: All bugs found in a release are fixed on the release branch, and a new deployment is created. After the deployment is created, the bug fixes are then merged (Reverse Integrate) into the Main branch. We do this so that we separate our development from our production-ready code.

    Figure: SAFE or RTM is a read-only record of what you actually released. Labels are not immutable, so they are useless in this circumstance. When we have completed stabilisation of the release branch and we are ready to deploy to production, we create a read-only copy of the code for reference. In some cases this could be a regulatory concern, but in most cases it protects the company building the product from legal entanglements based on what you did or did not release.

    Figure: This allows us to reference any particular version of our application that was ever shipped.

    In addition, I am an advocate of having a single solution, with all the project folders directly under the “Trunk”/“Main” folder, and using the full name for the project folders.

    Figure: The ideal solution. If you must have multiple solutions, because you need to use more than one version of Visual Studio, name the solutions “[Client].[Product][VSVersion].sln” and have them reside in the same folder as the other solution. This makes things easier for automated builds and improves the discoverability of your code and its dependencies.

    Send me your feedback!

    Technorati Tags: VS ALM,VSTS Developing,VS 2010,VS 2008,TFS 2010,TFS 2008,TFBS

    Read the article

  • Adventures in Lab Management Configuration: Part 2 of 3

    - by Enrique Lima
    The first post was the high level overview. Now it is time for the details of what was done to the existing CMMI project based on CMMI v4.2. The first step was to go into Visual Studio, then to the Team Project Collection Settings, and then to the Process Template Manager. Once there, it was a matter of selecting the appropriate template (MSF for CMMI Process Improvement v5.0) and downloading it to a location I could reference later (for example C:\Templates). Then on to using the steps from the guidance post. Since I was using an x64 deployment, I will make reference to the path as <toolpath>; the actual path to reference in a 64-bit environment is “C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE”.

    As I mentioned in the previous post, make sure to first perform a backup of the Configuration, Collection and Warehouse DBs. If you did not apply any changes to the names and such, then you will find those as tfs_Configuration, tfs_DefaultCollection and tfs_Warehouse.

    Now, the work needed with the witadmin tool. That includes the uploading of the structures that differ from v4.2 to v5.0. There is likely going to be an issue with the naming of some fields. For example, TFS 2010 likes something along the lines of “Area ID”, whereas TFS 2008 would have had it as “AreaID”. So, this will need to be corrected. Some posts will have you go through this after the errors pop up; I would recommend doing it prior to executing the importwitd process.

    witadmin listfields /collection:<path to collection> > c:\ListFields.txt

    Review the following fields: for AreaID, review the Name property and validate whether it states “AreaID”; if so, you will need to rename it to “Area ID”. ExternalLinkCount, RelatedLinkCount, HyperLinkCount, AttachedFileCount and IterationID are the other fields to check. To correct the issue, execute the following:

    witadmin changefield /collection:<path to collection> /n:"System.ExternalLinkCount" /name:"External Link Count"

    Repeat for Area ID, Related Link Count, Hyperlink Count, Attached File Count and Iteration ID.
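    Since that is six nearly identical invocations, a small driver can save some typing. The sketch below is mine, not from the guidance post: it simply shells out to witadmin once per field. It assumes witadmin.exe is on the PATH (it lives in the <toolpath> folder mentioned above) and that the collection URL placeholder is replaced with your own; the /noprompt switch is the documented way to suppress the confirmation prompt.

    using System.Diagnostics;

    class FixFieldNames
    {
        static void Main()
        {
            // Placeholder collection URL; substitute your own.
            string collection = "http://yourserver:8080/tfs/DefaultCollection";

            // Reference name -> friendly name expected by TFS 2010.
            var fields = new[,]
            {
                { "System.AreaId",            "Area ID"             },
                { "System.ExternalLinkCount", "External Link Count" },
                { "System.RelatedLinkCount",  "Related Link Count"  },
                { "System.HyperLinkCount",    "Hyperlink Count"     },
                { "System.AttachedFileCount", "Attached File Count" },
                { "System.IterationId",       "Iteration ID"        }
            };

            for (int i = 0; i < fields.GetLength(0); i++)
            {
                string args = string.Format(
                    "changefield /collection:{0} /n:\"{1}\" /name:\"{2}\" /noprompt",
                    collection, fields[i, 0], fields[i, 1]);
                var psi = new ProcessStartInfo("witadmin", args) { UseShellExecute = false };
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit(); // run the renames one at a time
                }
            }
        }
    }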
    Once this is done, proceed with the commands below.

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\TestCase.xml"
    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\TypeDefinitions\SharedStep.xml"
    witadmin importcategories /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\WorkItem Tracking\categories.xml"

    Modifications to the Bug definition: the first step is to export the existing definition.

    witadmin exportwitd /collection:<path to collection> /p:<project> /n:bug /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Make the modifications to the recently exported MyBug.xml file. Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask. Once the changes are done, proceed with the import command:

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyBug.xml"

    Repeat the process for the Scenario or Requirement type definition:

    witadmin exportwitd /collection:<path to collection> /p:<project> /n:requirement /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Make the modifications to the recently exported MyRequirement.xml file. Details for the modification are here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#ModifyTask. Once the changes are done, proceed with the import command:

    witadmin importwitd /collection:<path to collection> /p:<project> /f:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\MyRequirement.xml"

    Provide the Bug Field Mapping definition, after creating the file as specified here: http://msdn.microsoft.com/en-us/library/ff452591.aspx#TCMBugFieldMapping

    tcm bugfieldmapping /import /mappingfile:"<path to downloaded template>\MSF for CMMI Process Improvement v5.0\bugfieldmappings.xml" /collection:<path to collection> /teamproject:<project name>

    Read the article

  • Day 1 - Finding Like Minds

    - by dapostolov
    So, is being a Game Developer any different from being an IT Developer? I picture a poorly lit environment where I get to purchase my own desk lamp; I'm thinking one of those huge lava lamps that pump out so much heat you could fry an egg on it. To my right: a "great wall" of empty coke cans dwarfs me. Eating my last slice of pizza I look across my desk to see a fellow developer with a smug look on his face; he's just coded his latest module for the game and it looks like he's in nirvana. My duty, of course, is to remind him to keep focused on the job at hand. So, picking up my trusty elastic and aerodynamically crafted paper bullet I begin a 10 minute war of welts and laughter which is promptly interrupted by our Project Manager demanding more details from our morning Scrum meeting. After providing about 5 minutes of geek speak and several words of comfort to make his eyes glaze over...it hits me, the idea for the module...beckoning my developer friend over, we quickly shoo the Project Manager away and begin our brainstorming frenzy ... now, where'd I put that full can of coke?

    OK. OK. This probably isn't the most ideal game developer environment, but it definitely sounds fun to me...and from what I gather is nothing like most game development companies. But I'm not doing this blog series to "go pro"; like I stated in my first post, I want to make a 2D game from an idea my best friend and I drummed up long, long ago. I'm in this for the passion AND I want to see how easy it is for us .Net Developers to create a game. So where do I start? Where can I find like minded individuals? What technologies are there? What do I need to make a video game? The questions are endless....AND...since I already have an idea ... let's start with ... Technology (yes, I'm a geek, live with it...)

    Technology

    OK. Predominantly, games are still made in C++ or even C. I'm not sure how much assembly code is floating around lately; however, that is not my concern. I do know C/C++ from my past, enough to even get me by, but I'm mainly interested in a recent, not-so-new, technology called XNA. What is XNA? XNA allows us .Net Developers to make 2D/3D games for Windows, Xbox*, and Windows Phone 7*. * = for a nominal fee *cough*

    The following link is your one stop shop for XNA game development: http://creators.xna.com/en-US/education/gettingstarted

    The above site hosts information such as: getting started; a sample/instructional shooter game in 2D/3D with code (if I'm taking too long for you in this blog series); downloads; starter kits... http://creators.xna.com/en-US/education/starterkits/ And of course...forums. You can also subscribe and pay for their premium membership, which gets you some pretty awesome tutorials, resources, downloads, and premium community support.

    Some general Wiki information about XNA: http://en.wikipedia.org/wiki/XNA_%28Microsoft%29
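    For a taste of what those getting-started tutorials build up to, here is roughly the shape of every XNA 3.1 game: a Game subclass with Update and Draw overrides. This is a minimal sketch from memory, not the tutorial's exact code.

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public class MyFirstGame : Game
    {
        GraphicsDeviceManager graphics;

        public MyFirstGame()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
        }

        protected override void Update(GameTime gameTime)
        {
            // Game logic (input, movement, collisions) goes here.
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue); // the famous default
            base.Draw(gameTime);
        }

        static void Main()
        {
            using (var game = new MyFirstGame())
            {
                game.Run(); // starts the Update/Draw loop
            }
        }
    }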
    Community Support

    OK. Let's move on to industry and community support. Apart from XNA, there are some really cool sites out there; I just haven't found all of them yet. However, I found a really cool game development website called Gamasutra. You can click on the following link to get there: http://www.gamasutra.com/ The site is 100% dedicated to "The Art & Business of Making Games". Armed with blogs, twitter, jobs/resumes and, most importantly, industry news, one could subscribe to the feed and get lost in the wealth of information it provides.

    On a side note: I remember Gamasutra being around when my best friend and I wanted to make a video game...meaning, they've been around for a while now. I think the most beneficial aspect of this site is to understand the industry you want to get into. Otherwise, it's just a cool site to keep up to date with the industry in general.

    Another community support option is LinkedIn. Amongst the land of extremely bloated achievements and responsibilities lie 3 groups (that I have found) that deal with game development:

    http://www.linkedin.com/groups?gid=59205 - Game Developers
    http://www.linkedin.com/groups?gid=824817 - DirectX Game Developer Network
    http://www.linkedin.com/groups?gid=756587 - DirectX Developers

    The Game Developers group on LinkedIn is by far the most active of the three and could possibly provide a wealth of support.

    What I've done thus far:
    - I lightly researched the XNA technology
    - I looked around for some community sites to assist me
    - I downloaded XNA Game Studio 3.1 to my PC and installed it into my IDE
    - I even tried both tutorials! http://creators.xna.com/en-US/education/gettingstarted/bgintro/chapter1

    Best Regards D.

    Read the article

  • Writing an ASP.Net Web based TFS Client

    - by Glav
    So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc., to be able to attribute whether the item was related to Research and Development, and if so, what kind. We are using TFS Azure and don’t have the option of custom templates. I have decided on using a string based field within the template that is not very visible and which we don’t otherwise use, to store a small set of custom values which will determine the research and development association. However, this string munging on the field is not very user friendly, so we need a simple tool that can display attributes against items in a simple dropdown list or something similar. Enter a custom web app that accesses our TFS items in Azure (note: we are also using Visual Studio 2012).

    Now TFS Azure uses your Live ID, and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won’t work. Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find, and there are some answers out there which don’t appear to be answers at all, given they didn’t work in my scenario. So for anyone else who wants to do this, here is a simple breakdown of what you have to do:

    Go here and get the “TFS Service Credential Viewer”. Install it, run it, connect to your TFS instance in Azure and create a service account. Note the username and password exactly as it presents them to you. This is the magic identity that will allow unattended, programmatic access. Without this step, don’t bother trying to do anything else.

    In your MVC app, reference the following assemblies from “C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0”:

    Microsoft.TeamFoundation.Client.dll
    Microsoft.TeamFoundation.Common.dll
    Microsoft.TeamFoundation.VersionControl.Client.dll
    Microsoft.TeamFoundation.VersionControl.Common.dll
    Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
    Microsoft.TeamFoundation.WorkItemTracking.Client.dll
    Microsoft.TeamFoundation.WorkItemTracking.Common.dll

    If hosting this in Internet Information Server, you will need to enable 32-bit support for the application pool this app runs under.

    You also have to allow the TFS client assemblies to store a cache of files on your system. If you don’t do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the <appSettings> element as shown below:

    <appSettings>
      <!-- Add reference to TFS Client Cache -->
      <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" />
    </appSettings>

    With all that in place, you can write the following code:

    var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential("{your-service-account-name}", "{your-service-account-password}");
    var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
    var currentCollection = new TfsTeamProjectCollection(new Uri("https://{yourdomain}.visualstudio.com/defaultcollection"), clientCreds);
    currentCollection.EnsureAuthenticated();

    In the above code, note the URL contains “defaultcollection” at the end. Obviously replace {yourdomain} with whatever is defined for your TFS in Azure instance. In addition, make sure the service account username and password that were generated in the first step are substituted in here.

    Note: if something is not right, the EnsureAuthenticated() call will throw an exception with the message being that you are not authorised. If you forget the “defaultcollection” on the URL, it will still fail, but with a message saying you are not authorised. That is, a similar but different exception message.

    And that is it. You can then query the collection using something like:

    var service = currentCollection.GetService<WorkItemStore>();
    var proj = service.Projects[0];
    var allQueries = proj.StoredQueries;
    for (int qcnt = 0; qcnt < allQueries.Count; qcnt++)
    {
        var query = allQueries[qcnt];
        var queryDesc = string.Format("Query found named: {0}", query.Name);
    }

    You get the idea.
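    If stored queries are not what you are after, the same authenticated collection can also run ad hoc WIQL. The snippet below is a hedged sketch along the same lines as the code above, using the standard WorkItemStore.Query API; the project name is a placeholder.

    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    // ...continuing from the authenticated currentCollection above.
    var store = currentCollection.GetService<WorkItemStore>();
    string wiql =
        "SELECT [System.Id], [System.Title] " +
        "FROM WorkItems " +
        "WHERE [System.TeamProject] = 'YourProject' " + // placeholder project name
        "AND [System.WorkItemType] = 'Bug'";
    WorkItemCollection results = store.Query(wiql);
    foreach (WorkItem item in results)
    {
        // Each WorkItem exposes Id, Title, Fields, and so on.
        System.Diagnostics.Debug.WriteLine("{0}: {1}", item.Id, item.Title);
    }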
    If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with this method, and it all looked too hard since it required an extra KB article and other magic sauce. So I hope that helps. This article certainly would have helped me save a boat load of time and frustration.

    Read the article

  • Oracle Fusion Middleware gives you Choice and Portability for Public and Private Cloud

    - by Michelle Kimihira
    Author: Margaret Lee, Senior Director, Product Management, Oracle Fusion Middleware. Cloud Computing allows customers to quickly develop and deploy applications in a shared environment. The environment can span hardware (IaaS), foundation layer software (PaaS), and end-user software (SaaS). Cloud Computing provides compelling benefits in terms of business agility and IT cost savings. However, with complex, existing heterogeneous architectures, and concerns for security and manageability, enterprises are challenged to define their Cloud strategy. For most enterprises, the solution is a hybrid of private and public cloud. Fusion Middleware supports customers’ Cloud requirements through choice and portability.

    Fusion Middleware supports a variety of cloud development and deployment models: Oracle [Public] Cloud; customer private cloud; a hybrid of these two; and the traditional dedicated, on-premise model. Customers can develop applications in any of these models and deploy them in another, providing the flexibility and portability they need.

    Oracle Cloud is a public cloud offering. Within Oracle Cloud, Fusion Middleware provides two key offerings: the Developer Cloud Service and the Java Cloud Service.

    Developer Cloud Service
    - Simplify Development: automated provisioned environment; pre-configured and integrated; web-based administration
    - Deploy Automatically: fully integrated with Oracle Cloud for Java deployment; workflow ensures build & test
    - Collaborate & Manage: fits any size team; integrated team source repository; continuous integration; task/defect tracking
    - Integrated with all major IDEs: Oracle JDeveloper; NetBeans; Eclipse

    Java Cloud Service
    Java Cloud Service provides a flexible Java deployment environment for departmental applications and for development, staging, QA, training, and demo environments. It also supports customization deployments for SaaS-based Fusion Applications customers. Some key features of Java Cloud Service include:
    - WebLogic Server on Exalogic: secure, highly available infrastructure
    - Database Service & IDE integration
    - Open, standards-based
    - Deploy Web Apps, Web Services, REST Services
    - Fully managed and supported by Oracle

    For more information, please visit Oracle Cloud, Oracle Cloud Java Service and Oracle Cloud Developer Service.

    If your enterprise prefers a private cloud, for reasons such as security, control, manageability, and complex integration that prevent your applications from being deployed on a public cloud, Fusion Middleware also provides you with the products and tools you need. Sometimes called Private PaaS, private clouds have their predecessors in the shared-services arrangements many large companies have been building in the past decade. The differences, however, are in the scope of the services and the depth of their capabilities. In terms of vertical stack depth, private clouds not only provide hardware and software infrastructure to run your applications, they also provide services, such as integration and security, that your applications need. Horizontally, private clouds provide monitoring, management, lifecycle, and charge back capabilities out-of-the-box that shared-services platforms did not have before.
    Oracle Fusion Middleware includes the complete stack of hardware and software for you to build private clouds:
    - SOA Suite and BPM Suite to support systems integration and process flow between applications deployed on your private cloud and the rest of your organization
    - Identity and Access Management Suite to provide security, provisioning, and access services for applications deployed on your private cloud
    - WebLogic Server to run your applications
    - Enterprise Manager's Cloud Management pack to monitor, manage, and upgrade applications running on your private cloud
    - Exalogic or optimized Oracle-Sun hardware to build out your private cloud

    The most important key differentiator for Oracle's cloud solutions is portability between private and public clouds. This is unique to Oracle, because portability requires the vendor to have product depth and breadth in both public cloud services and private cloud product offerings. Most public cloud vendors cannot provide the infrastructure and tools customers need to build their own private clouds. In reverse, traditional software tools vendors typically do not have the product and expertise breadth to build out and offer a public cloud. Oracle can. It is important for customers that the products and technologies Oracle uses to build its public cloud are the same set that it sells to customers for them to build private clouds. Fundamentally, that enables skills reuse, as well as application portability. For more information on Oracle PaaS offerings, please visit Oracle's product information page.

    Resources: Follow us on Twitter and Facebook. Subscribe to our regular Fusion Middleware Newsletter.

    Read the article

  • CD/DVD drive not mounted when inserted with Disc of any kind

    - by Cisco Sán
    I just noticed that if I insert a CD or a DVD of any kind, the drive will start spinning but it will not show the mounted disc. Before, it used to ask me what to do with the inserted media. Now it doesn't even do that. I ran this in the terminal: eject -n and it displays: "eject: device is `/dev/sr0'". What can I do to get the functionality back on my drive? I also ran this command: sudo mount -o ro,unhide,uid=1000 /dev/cdrom /mnt/cdrom but in return I get this: "mount: mount point /mnt/cdrom does not exist". Running Ubuntu 11.10.

    HERE IS THE HISTORY UNTIL NOW

    Thanks Waltinator: I ran 'dmesg' but don't know what I'm looking for. I'm a newbie at this. The same thing with the 'ls -rlt /var/log' command. Should I create the directory for the mount? At this point I really don't know what to do. – Cisco Sán 7 hours ago

    Here are 3 lines from my dmesg after I successfully inserted a CD: [ 4804.416018] wlan0: no IPv6 routers present [ 8214.125450] ISO 9660 Extensions: Microsoft Joliet Level 3 [ 8214.136556] ISO 9660 Extensions: RRIP_1991A The first line is a previous event, my wireless going online. The next 2 lines are a good result. The number in square brackets is "seconds since boot"; the rest of the line is usually helpful. And no, you should NOT create the mount point. Let's try to get the automatic mounting to work. – waltinator 7 hours ago

    OK, these are my last 3 lines in 'dmesg': [ 18.130819] init: plymouth-stop pre-start process (1396) terminated with status 1 [ 28.780011] wlan0: no IPv6 routers present [ 505.632119] CE: hpet increased min_delta_ns to 20113 nsec – Cisco Sán 6 hours ago

    It looks like your CD/DVD drive is not connected to the data bus, and not causing an interrupt when you insert a platter. – waltinator 6 hours ago

    Try dmesg | grep -A8 CD-ROM which should show you what the system thought was available when it came up. – waltinator 6 hours ago

    Here is my printout: [0.774351] scsi 0:0:0:0: CD-ROM HL-DT-ST DVD+-RW GSA-T40N A100 PQ: 0 ANSI: 5 [0.778117] sr0: scsi3-mmc drive: 24x/24x writer dvd-ram cd/rw xa/form2 cdda tray [0.778122] cdrom: Uniform CD-ROM driver Revision: 3.20 [0.778282] sr 0:0:0:0: Attached scsi CD-ROM sr0 [0.778340] sr 0:0:0:0: Attached scsi generic sg0 type 5 [0.780416] Freeing unused kernel memory: 984k freed [0.780732] Write protecting the kernel read-only data: 10240k [0.780986] Freeing unused kernel memory: 20k freed [0.786331] Freeing unused kernel memory: 1400k freed [0.804912] udevd[90]: starting version 173 [0.874178] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded [0.874208] r8169 0000:02:00.0: PCI INT A - GSI 16 (level, low) - IRQ 16

    OK, your system sees the drive. Can you open and close the tray with eject and eject -t? Run udevadm monitor while you insert a CD (type ^C when done) and see if you get "change" and "add" messages. – waltinator 6 hours ago

    OK, "eject" works perfectly; "eject -t" does nothing. This is the message for "udevadm monitor": KERNEL[13771.009267] change /devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0/block/sr0 (block) UDEV [13773.878887] change /devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0/block/sr0 (block) – Cisco Sán 6 hours ago

    sudo hwinfo --cdrom (the hwinfo package is installable through Software Center) describes my CD-ROM; try it.
– waltinator 4 hours ago My read out from the "sudo hwinfo --cdrom" are the following: hal.1: read hal dataprocess 2753: arguments to dbus_move_error() were incorrect, assertion "(dest) == NULL || !dbus_error_is_set ((dest))" failed in file ../../dbus/dbus-errors.c line 280. This is normally a bug in some application using the D-Bus library. libhal.c 3483 : Error unsubscribing to signals, error=The name org.freedesktop.Hal was not provided by any .service files 22: SCSI 00.0: 10602 CD-ROM (DVD) [Created at block.247] Unique ID: KD9E.JgkxTS4hgl2 Parent ID: 3p2J.gdUMCD83e+E SysFS ID: /class/block/sr0 SysFS BusID: 0:0:0:0 SysFS Device Link: /devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0 Hardware Class: cdrom Model: "HL-DT-ST DVD+-RW GSA-T40N" Vendor: "HL-DT-ST" Device: "DVD+-RW GSA-T40N" Revision: "A100" Driver: "ata_piix", "sr" Driver Modules: "ata_piix" Device File: /dev/sr0 (/dev/sg0) Device Files: /dev/sr0, /dev/scd0, /dev/disk/by-id/ata-HL-DT-ST_DVD+_-RW_GSA-T40N_K048BJ74257, /dev/disk/by-path/pci-0000:00:1f.1-scsi-0:0:0:0, /dev/cdrom, /dev/cdrw, /dev/dvd, /dev/dvdrw Device Number: block 11:0 (char 21:0) Features: DVD Config Status: cfg=new, avail=yes, need=no, active=unknown Attached to: #17 (IDE interface) Drive Speed: 31 Volume ID: "Movie" Publisher: "INTERVIDEO" Creation date: "20050424162207000" Thanks for the help. To Castro, hope this is what you meant and sorry for the comments..

    Read the article

  • SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

    - by Brian
    Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 11. The SPARC T4-4 server achieved throughput of 430,000 cube-queries/hour, with an average response time of 0.85 seconds and a median response time of 0.43 seconds. This was achieved by using only 60% of the available CPU resources, leaving plenty of headroom for future growth. The SPARC T4-4 server operated on an Oracle OLAP cube with a 4 billion row fact table of sales data containing 4 dimensions. This represents as many as 90 quintillion aggregate rows (90 followed by 18 zeros).

    Performance Landscape

    Oracle OLAP Perf Version 2 Benchmark, 4 billion fact table rows:
    System: SPARC T4-4; Queries/hour: 430,000; Users*: 7,300; Response Time (sec): 0.85 average, 0.43 median
    * Users - the supported number of users with a given think time of 60 seconds

    Configuration Summary and Results

    Hardware Configuration: SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 1 TB memory. Data Storage: 1 x Sun Fire X4275 (using COMSTAR), 2 x Sun Storage F5100 Flash Array (each with 80 FMODs). Redo Storage: 1 x Sun Fire X4275 (using COMSTAR with 8 HDD).

    Software Configuration: Oracle Solaris 11 11/11; Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option.

    Benchmark Description

    The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced data warehousing. The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube consisting of a number of years of sales data with fully pre-computed aggregations. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g. South America) for a particular time period (e.g. Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g. Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g. major household appliances) and then issue further queries drilling down into particular products (e.g. refrigerators, stoves, etc.), particular regions, particular customers, etc. Results from version 2 of the benchmark are not comparable with version 1. The primary difference is the type of queries, along with the query mix.

    Key Points and Best Practices

    Since typical BI users are likely to issue similar queries with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide additional query throughput and a larger number of potential users. Except for this setting, together with making full use of available memory, out-of-the-box performance for the OLAP Perf workload should provide results similar to what is reported here. For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support achieving the measured response time, assuming some non-zero think time.

    The calculation of the maximum number of users follows from the well-known response-time law N = (rt + tt) * tp, where rt is the average response time, tt is the think time and tp is the measured throughput. Setting tt to 60 seconds, rt to 0.85 seconds and tp to 119.44 queries/sec (430,000 queries/hour), the above formula shows that the T4-4 server will support 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds.
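    As a quick arithmetic sanity check, this short sketch just plugs the published figures into that law:

    // Verifies the response-time law N = (rt + tt) * tp with the numbers above.
    class ResponseTimeLaw
    {
        static void Main()
        {
            double rt = 0.85;              // average response time, seconds
            double tt = 60.0;              // think time, seconds
            double tp = 430000.0 / 3600.0; // throughput: 430,000 queries/hour -> ~119.44 queries/sec
            double n = (rt + tt) * tp;     // ~7,268, i.e. roughly the quoted 7,300 users
            System.Console.WriteLine("Supported users: {0:F0}", n);
        }
    }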
    For more information, see chapter 3 of the book "Quantitative System Performance" cited below.

    See Also
    - Quantitative System Performance: Computer System Analysis Using Queueing Network Models; Edward D. Lazowska, John Zahorjan, G. Scott Graham, Kenneth C. Sevcik
    - Oracle Database 11g – Oracle OLAP
    - SPARC T4-4 Server
    - Oracle Solaris
    - Oracle Database 11g Release 2

    Disclosure Statement
    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.

    Read the article

  • JustCode Provides Reflector Alternative

    - by Joe Mayo
    If you've been a loyal Reflector user, you've probably been exposed to the debacle surrounding RedGate's decision to no longer offer a free version. Since then, the race has begun for a replacement from a provider that would stand by its promises to the community. Mono has an ongoing free alternative, which has been available for a long time. However, other vendors are stepping up to the plate with their own offerings.

    If Not Reflector, Then What?

    One of these vendors is Telerik. In their recent Q1 2011 release of JustCode, Telerik offers a decompilation utility rivaling what we've become accustomed to in Reflector. Not only does Telerik offer a usable replacement, but they've (in my opinion) produced a product that integrates more naturally with Visual Studio than any other product ever has. Telerik's decompilation process is so easy that the accompanying demo in this post is blindingly short (except for the presence of verbose narrative). If you want to follow along with this demo, you'll need to have Telerik JustCode installed. If you don't have JustCode yet, you can buy it or download a trial at the Telerik Web site.

    A Tall Tale; Prove It!

    With JustCode, you can view code in the .NET Framework or any other 3rd party library (that isn't well obfuscated). This demo depends on LINQ to Twitter, which you can download from CodePlex.com and reference, or install the package online as described in my previous post on NuGet. Regardless of the method, you'll have a project with a reference to LINQ to Twitter. Use a Console project if you want to follow along with this demo.

    Note: If you've created a Console project, remember to ensure that the Target Framework is set to .NET Framework 4. The default is .NET Framework 4 Client Profile, which doesn't work with LINQ to Twitter. You can check by double-clicking the Properties folder on the project and inspecting the Target Framework setting.

    Next, you'll need to add some code to your program that you want to inspect. Here, I add code to instantiate a TwitterContext, which is like a LINQ to SQL DataContext, but works with Twitter:

    var l2tCtx = new TwitterContext();

    If you're following along, add the code above to the Main method, which will look similar to this:

    using LinqToTwitter;

    namespace NuGetInstall
    {
        class Program
        {
            static void Main(string[] args)
            {
                var l2tCtx = new TwitterContext();
            }
        }
    }

    The code above doesn't really do anything, but it does give me something to show and a way to demonstrate how JustCode decompilation works. Once the code is in place, click on TwitterContext and press the F12 (Go to Definition) key. As expected, Visual Studio opens a metadata file with prototypes for the TwitterContext class. Here's the result:

    Opening a metadata file is the normal way that Visual Studio works when navigating to the definition of a type where you don't have the code. The scenario with TwitterContext happens because you don't have the source code to the file. Visual Studio has always done this, and you can experiment by selecting any .NET type, i.e. a string type, and observing that Visual Studio opens a metadata file for the .NET String type. The point I'm making here is that JustCode works the way Visual Studio works, and you'll see how this can make your job easier.

    In the previous figure, you only saw prototypes associated with the code, i.e. notice that the default constructor is empty. Again, this is normal, because Visual Studio doesn't have the ability to decompile code.
    However, that's the purpose of this post: showing you how JustCode fills that gap. To decompile code, right-click on TwitterContext in the metadata file and select JustCode Navigate -> Decompile from the context menu. The shortcut keys are Ctrl+1. After a brief pause, accompanied by a progress window, you'll see the metadata expand into fully decompiled code. Notice below how the default constructor now has code, as opposed to the empty member prototype in the original metadata:

    And Why is This So Different?

    Again, the big deal is that Telerik JustCode decompilation works in harmony with the way that Visual Studio works. The navigate-to functionality already exists, and you can use that, along with a simple context menu option (or shortcut key), to transform prototypes into decompiled code. Telerik is filling the Reflector/Red Gate gap by providing a supported alternative for decompiling code. Many people, including myself, used Reflector to decompile code when we were stuck with buggy libraries or insufficient documentation. Now we have an alternative that's officially supported by a company with an excellent track record for customer (developer) service, Telerik. Not only that, JustCode has several other IDE productivity tools that make the deal even sweeter. Joe

    Read the article

  • Defaulting the HLSL Vertex and Pixel Shader Levels to Feature Level 9_1 in VS 2012

    - by Michael B. McLaughlin
    I love Visual Studio 2012. But this is not a post about that. This is a post about tweaking one particular parameter that I’ve found a bit annoying.

    Disclaimer: You will be modifying important MSBuild files. If you screw up you will break your build tools. And maybe your computer will catch fire. I’m not responsible. No warranties or guarantees of any sort. This info is provided “as is”.

    By default, if you add a new vertex shader or pixel shader item to a project, it will be set to build with shader profile 4.0_level_9_3. If you need 9_3 functionality, this is all well and good. But (especially for Windows Store apps) you really want to target the lowest shader profile possible so that your game will run on as many computers as possible. So it’s a good idea to default to 9_1.

    To do this you could add in new HLSL files via “Add->New Item->Visual C++->HLSL->______ Shader File (.hlsl)” and then edit the shader files’ properties to set them manually to use 9_1 via “Properties->HLSL Compiler->General->Shader Model”. This is fine unless you forget to do this once and then submit your game with 9_3 shaders instead of 9_1 shaders to the Windows Store or to some other game store. Then you’d wind up with either rejection or angry “this doesn’t work on my computer! ripoff!” messages.

    There’s another option, though. In “Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\VC\HLSL\1033\VertexShader” (note the path might vary slightly for you if you are using a 32-bit system or have a non-ENU version of Visual Studio 2012) you will find a “VertexShader.vstemplate” file. If you open this file in a text editor (e.g. Notepad++), then inside the CustomParameters tag within the TemplateContent tag you should see a CustomParameter tag for the ShaderType, i.e.:

    <CustomParameter Name="$ShaderType$" Value="Vertex"/>

    On a new line, we are going to add another CustomParameter tag to the CustomParameters tag. It will look like this:

    <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>

    such that we now have:

    <CustomParameters>
      <CustomParameter Name="$ShaderType$" Value="Vertex"/>
      <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>
    </CustomParameters>

    You can then save the file (you will need to be an Administrator or have Administrator access). Back in the 1033 directory (or whatever the number is for your language), go into the “PixelShader” directory. Edit the “PixelShader.vstemplate” file and make the same change (note that this time $ShaderType$ is “Pixel”, not “Vertex”; you shouldn’t be changing that line anyway, but if you were to just copy and replace the above four lines then you would wind up creating pixel shaders that the HLSL compiler would try to compile as vertex shaders, with all sorts of weird errors as a result).

    Once you’ve added the $ShaderModel$ line to “PixelShader.vstemplate” and have saved it, everything should be done. Since Feature Level 9_1 and 9_3 don’t support any of the other shader types, those are set to default to their appropriate minimums already (Compute and Geometry are set to “4.0” and Domain and Hull are set to “5.0”, which are their respective minimums; though not all 4.0 cards support Compute shaders, as they were an optional feature added with DirectX 10.1 and only became required for DirectX 11 hardware).
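    If you would rather script this tweak than hand-edit two XML files, here is a hedged sketch (my own, not part of the original instructions) that applies the same change with LINQ to XML. It assumes the 64-bit ENU vertex shader template path shown above; run it elevated, adjust the path for your install, and run it again against PixelShader.vstemplate.

    using System.Xml.Linq;

    class PatchShaderTemplate
    {
        static void Main()
        {
            // Path assumed from the post; adjust for 32-bit systems or other locales.
            string path = @"C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE" +
                          @"\ItemTemplates\VC\HLSL\1033\VertexShader\VertexShader.vstemplate";

            XDocument doc = XDocument.Load(path);
            XNamespace ns = doc.Root.Name.Namespace; // vstemplate files carry a default namespace

            foreach (var parent in doc.Descendants(ns + "CustomParameters"))
            {
                // Add the $ShaderModel$ parameter alongside the existing $ShaderType$ one.
                parent.Add(new XElement(ns + "CustomParameter",
                    new XAttribute("Name", "$ShaderModel$"),
                    new XAttribute("Value", "4.0_level_9_1")));
            }
            doc.Save(path);
        }
    }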
In case you are wondering where these magic values come from, you can find them all in the “fxc.xml” file in the “\Program Files (x86)\MSBuild\Microsoft.CPP\v4.0\V110\1033” directory (or whatever your language number is; 1033 is ENU and various other product languages have their own respective numbers (see: http://msdn.microsoft.com/en-us/goglobal/bb964664.aspx ) such that Japanese is 1041 (for example), though for all I know MSBuild tasks might be 1033 for everyone). If, like me, you installed VS 2012 to a drive other than the C:\ drive, you will find the vstemplate files in the drive to which you installed VS 2012 (D:\ in my case) but you will find the fxc.xml file on the C:\ drive. You should not edit fxc.xml. You will almost definitely break things by doing that; it’s just something you can look through to see all the other options that the FXC task takes such that you could, if needed, add further CustomParameter tags if you wanted to default to other supported options. I haven’t tried any others though so I don’t have any advice on how to set them.

    Read the article

  • Seven Random Thoughts on JavaOne

    - by HecklerMark
    As most people reading this blog may know, last week was JavaOne. There are a lot of summary/recap articles popping up now, and while I didn't want to just "add to pile", I did want to share a few observations. Disclaimer: I am an Oracle employee, but most of these observations are either externally verifiable or based upon a collection of opinions from Oracle and non-Oracle attendees alike. Anyway, here are a few take-aways: The Java ecosystem is alive and well, with a breadth and depth that is impossible to adequately describe in a short post...or a long post, for that matter. If there is any one area within the Java language or JVM that you would like to - or need to - know more about, it's well-represented at J1. While there are several IDEs that are used to great effect by the developer community, NetBeans is on a roll. I lost count how many sessions mentioned or used NetBeans, but it was by far the dominant IDE in use at J1. As a recent re-convert to NetBeans, I wasn't surprised others liked it so well, only how many. OpenJDK, OpenJFX, etc. Many developers were understandably concerned with the change of sponsorship/leadership when Java creator and longtime steward Sun Microsystems was acquired by Oracle. The read I got from attendees regarding Oracle's stewardship was almost universally positive, and the push for "openness" is deep and wide within the current Java environs. Few would probably have imagined it to be this good, this soon. Someone observed that "Larry (Ellison) is competitive, and he wants to be the best...so if he wants to have a community, it will be the best community on the planet." Like any company, Oracle is bound to make missteps, but leadership seems to be striking an excellent balance between embracing open efforts and innovating in competitive paid offerings. JavaFX (2.x) isn't perfect or comprehensive, but a great many people (myself included) see great potential, are developing for it, and are really excited about where it is and where it may be headed. This is another part of the Java ecosystem that has impressive depth for being so new (JavaFX 1.x aside). If you haven't kicked the tires yet, give it a try! You'll be surprised at how capable and versatile it is, and you'll probably catch yourself smiling while coding again.  :-) JavaEE is everywhere. Not exactly a newsflash, but there is a lot of buzz around EE still/again/anew. Sessions ranged from updated component specs/technologies to Websockets/HTML5, from frameworks to profiles and application servers. Programming "server-side" Java isn't confined to the server (as you no doubt realize), and if you still consider JavaEE a cumbersome beast, you clearly haven't been using the last couple of versions. Download GlassFish or the WebLogic Zip distro (or another JavaEE 6 implementation) and treat yourself. JavaOne is not inexpensive, but to paraphrase an old saying, "If you think that's expensive, you should try ignorance." :-) I suppose it's possible to attend J1 and learn nothing, but you'd have to really work at it! Attending even a single session is bound to expand your horizons and make you approach your code, your problem domain, differently...even if it's a session about something you already know quite well. The various presenters offer vastly different perspectives and challenge you to re-think your own approach(es). 
And finally, if you think the scheduled sessions are great - and make no mistake, most are clearly outstanding - wait until you see what you pick up from what I like to call the "hallway sessions". Between the presentations, people freely mingle in the hallways, go to lunch and dinner together, and talk. And talk. And talk. Ideas flow freely, sparking other ideas and the "crowdsourcing" of knowledge in a way that is hard to imagine outside of a conference of this magnitude. Consider this the "GO" part of a "BOGO" (Buy One, Get One) offer: you buy the ticket to the "structured" part of JavaOne and get the hallway sessions at no additional charge. They're really that good. If you weren't able to make it to JavaOne this year, you can still watch/listen to the sessions online by visiting the JavaOne course catalog and clicking the media link(s) in the right column - another demonstration of Oracle's commitment to the Java community. But make plans to be there next year to get the full benefit! You'll be glad you did. All the best,Mark P.S. - I didn't mention several other exciting developments in areas like the embedded space and the "internet of things" (M2M), robotics, optimization, and the cloud (among others), but I think you get the idea. JavaOne == brainExpansion;  Hope to see you there next year!

    Read the article

  • Jersey 2 in GlassFish 4 - First Java EE 7 Implementation Now Integrated (TOTD #182)

    - by arungupta
    The JAX-RS 2.0 specification released its Early Draft 3 recently. One of my earlier blogs explained the features as they were first introduced in the very first draft of the JAX-RS 2.0 specification. Last week was another milestone, when the first Java EE 7 specification implementation was added to GlassFish 4 builds. Jakub blogged about Jersey 2 integration in GlassFish 4 builds. Most of the basic functionality is working, but EJB, CDI, and Validation are still TBD. Here is a simple Tip Of The Day (TOTD) sample to get you started with using that functionality.

    Create a Java EE 6-style Maven project:

    mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=example -DartifactId=jersey2-helloworld -DarchetypeVersion=1.5 -DinteractiveMode=false

    Note, this is still a Java EE 6 archetype, at least for now. Open the project in NetBeans IDE, as it makes it much easier to edit/add the files.

    Add the following <repositories>:

    <repositories>
      <repository>
        <id>snapshot-repository.java.net</id>
        <name>Java.net Snapshot Repository for Maven</name>
        <url>https://maven.java.net/content/repositories/snapshots/</url>
        <layout>default</layout>
      </repository>
    </repositories>

    Add the following <dependency>s:

    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.10</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>javax.ws.rs</groupId>
      <artifactId>javax.ws.rs-api</artifactId>
      <version>2.0-m09</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.glassfish.jersey.core</groupId>
      <artifactId>jersey-client</artifactId>
      <version>2.0-m05</version>
      <scope>test</scope>
    </dependency>

    The complete list of Maven coordinates for Jersey 2 is available here. An up-to-date status of Jersey 2 can always be obtained from here.

    Here is a simple resource class:

    @Path("movies")
    public class MoviesResource {
        @GET
        @Path("list")
        public List<Movie> getMovies() {
            List<Movie> movies = new ArrayList<Movie>();
            movies.add(new Movie("Million Dollar Baby", "Hillary Swank"));
            movies.add(new Movie("Toy Story", "Buzz Light Year"));
            movies.add(new Movie("Hunger Games", "Jennifer Lawrence"));
            return movies;
        }
    }

    This resource publishes a list of movies and is accessible at the "movies/list" path with HTTP GET. The project is using the standard JAX-RS APIs. Of course, you need the trivial "Movie" and "Application" classes as well. They are available in the downloadable project anyway.

    Build the project:

    mvn package

    And deploy to GlassFish 4.0 promoted build 43 (download, unzip, and start as "bin/asadmin start-domain") as:

    asadmin deploy --force=true target/jersey2-helloworld.war

    Add a simple test case by right-clicking on the MoviesResource class, selecting "Tools", "Create Tests", and taking the defaults. Replace the function "testGetMovies" with:

    @Test
    public void testGetMovies() {
        System.out.println("getMovies");
        Client client = ClientFactory.newClient();
        List<Movie> movieList = client.target("http://localhost:8080/jersey2-helloworld/webresources/movies/list")
            .request()
            .get(new GenericType<List<Movie>>() {});
        assertEquals(3, movieList.size());
    }

    This test uses the newly defined JAX-RS 2 client APIs to access the RESTful resource.
    Run the test by giving the command "mvn test" and see output like:

    -------------------------------------------------------
     T E S T S
    -------------------------------------------------------
    Running example.MoviesResourceTest
    getMovies
    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec

    Results :

    Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

    GlassFish 4 contains Jersey 2 as the JAX-RS implementation. If you want to use Jersey 1.1 functionality, then Martin's blog provides more details on that. All JAX-RS 1.x functionality will be supported using standard APIs anyway. This workaround is only required if Jersey 1.x functionality needs to be accessed. The complete source code explained in this project can be downloaded from here.

    Here are some pointers to follow:
    - JAX-RS 2 Specification Early Draft 3
    - Latest status on the specification (jax-rs-spec.java.net)
    - Latest JAX-RS 2.0 Javadocs
    - Latest status on Jersey (Reference Implementation of JAX-RS 2 - jersey.java.net)
    - Latest Jersey API Javadocs
    - Latest GlassFish 4.0 Promoted Build
    - Follow @gf_jersey

    Provide feedback on Jersey 2 to [email protected] and on the JAX-RS specification to [email protected].

    Read the article

  • Easiest, most fun way to program 2D games? Flash? XNA? Some other engine?

    - by Maxi
    Hi, this is a post detailing my search for the most enjoyable way for a hobbyist game programmer to sweeten his free time with making a game. My requirements: I looked at Flash first. I made a couple of small games, but I'm doubtful of the performance. I would like to make a fairly large strategy game, with several hundred units fighting simultaneously, explosions and animations included. Also zoomable maps. I saw that Adobe has a new 3D API for Flash, but I don't know if that improves 2D performance as well; I couldn't find anything related to that question in their MAX10 sessions. Would you say that Flash is a good technology for making large 2D games easily? I really like ActionScript, and I love how easy everything is in Flash. There are several engines available which make it even easier. I just do this for fun, and it would be even better if there were proper animation/particle editors available and if the engine I were to use were available for multiple platforms (so more people can play my game once finished). I'd like to have it available on many mobile platforms as well (because I love touch input for some reason).

    I do know the XNA framework pretty well, but there are no good engines available for it, and it will only run on Windows, which is a huge turn-off. Even bigger is that you need to install the XNA redistributable each time you want to give the game to someone. If I use XNA, I would have to make all the tools myself, and I'd probably have to make them with WPF. (I'd love to make tools with Adobe AIR, but unfortunately the APIs for image manipulation etc. are far worse in Flash than they are in XNA/WPF.)

    Now, I'm aware that I could make my own engine that supports each of those platforms, but quite frankly, that would be too much work plowing through APIs. After all, I want to make a game, not an engine. So the question becomes: is there maybe a cross-platform (free, or free to develop with?) engine available that I could use for 2D development? I prefer: C#, ActionScript. I don't mind using C++ if the toolset is above average, but I highly doubt that there is something out there like that. Please prove me wrong :)

    So, summary: I'd like to use Flash, but I don't know if it scales well enough. I'm not a scripter; I want some real APIs that I can work with inside a proper IDE.

    Just for information, I looked at several alternatives; I've actually been looking for a long time already. You'd help me a lot to finally make a decision.

    Feature-wise the FlatRedBall engine would be ideal. But I tried their tools, and quite frankly, they are horrible. Absolutely unusable; I'd need to make my own for sure. I didn't look at their API, but if their tools are so bad, I'm not inclined to look further.

    Unity3D. This one is quite nice, but I really don't need 3D, and it is quite ...a lot of work to learn. I also don't like that it is so expensive to use for different platforms and that I can only code for it through scripting. You have to buy each platform separately. The editor usability is average; the product overall is good enough for most purposes, but learning it myself would be overkill.

    Shiva 3D. It looks good enough, but again: I don't really need 3D. The editor usability is a little worse than Unity3D in my opinion, and it wasn't clear to me how to start programming. I think it requires C++ for coding, so that's a negative too. I want to have fun, and C# is fun ;)

    SDL. Quite frankly, I'd still need to port to all those different SDL implementations.
    And I don't like OpenGL-style programming; it's just plain ugly. And it needs C++. I know that there might be some wrappers available, but I don't like to use wrappers, because...

    Irrlicht. A lot of features, but support seems to be low and it is aimed at enthusiasts. C# bindings get dropped repeatedly. I'm not an engine enthusiast, I just want to make a game. I don't see this happening with Irrlicht.

    Ogre3D. Way too much work; it's just a graphics engine. Also no multiple platform support, and C++.

    Torque2D. Costs something to use, and I didn't hear a lot of good things about support and documentation. Also costs extra for each platform.

    Read the article

  • Code refactoring with Visual Studio 2010 Part-4

    - by Jalpesh P. Vadgama
    I have been writing a few posts about code refactoring features in Visual Studio 2010. This post is also part of that series, and it will be the last of the series. In this post I am going to explain two features: 1) Encapsulate Field and 2) Extract Interface. Let's explore both features in detail.

    Encapsulate Field: This is a nice code refactoring feature provided by Visual Studio 2010. With the help of this feature we can create properties from the existing private fields of a class. Let's take a simple example of a Customer class. In it there are two private fields called firstName and lastName. Below is the code for the class.

    public class Customer
    {
        private string firstName;
        private string lastName;

        public string Address { get; set; }
        public string City { get; set; }
    }

    Now let's encapsulate the first field, firstName, with the Encapsulate Field feature. First select that field, go to the Refactor menu in Visual Studio 2010 and click Encapsulate Field. Once you click that, a dialog box will appear like the following. Now once you click OK, a preview dialog box will open, as we have selected "preview reference changes". I think it's a good idea to check that option and preview the code being changed by the IDE itself. The dialog will look like the following. Once you click Apply, it creates a new property called FirstName. I did the same for lastName, and now my Customer class code looks like the following.

    public class Customer
    {
        private string firstName;
        public string FirstName
        {
            get { return firstName; }
            set { firstName = value; }
        }

        private string lastName;
        public string LastName
        {
            get { return lastName; }
            set { lastName = value; }
        }

        public string Address { get; set; }
        public string City { get; set; }
    }

    So you can see that it's very easy to create properties from existing fields, and you don't have to change anything in the code yourself; the IDE makes all the changes.

    Extract Interface: When you are writing a software prototype and don't know the future implementation, it's good practice to use an interface. I am going to explain here how we can extract an interface from existing code, without writing a single line of code, with the help of the code refactoring features of Visual Studio 2010. For that I have created a simple repository class called CustomerRespository with three methods, like the following.

    public class CustomerRespository
    {
        public void Add()
        {
            // Some code to add customer
        }
        public void Update()
        {
            //some code to update customer
        }
        public void Delete()
        {
            //some code delete customer
        }
    }

    In the above class there are three methods, Add, Update and Delete, where we are going to implement some code for each one. Now I want to create an interface which I can use for my other entities in the project. So let's create an interface from the above class with the help of Visual Studio 2010. First select the class, go to the Refactor menu and click Extract Interface. It will open a dialog box like the following. Here I have selected all the methods for the interface, and once I click OK it will create a new file called ICustomerRespository containing the interface, just like the following. Here is the code for that interface.

    using System;

    namespace CodeRefractoring
    {
        interface ICustomerRespository
        {
            void Add();
            void Delete();
            void Update();
        }
    }

    Now let's see the code for our class. It will also be changed, like the following, to implement the interface.
        public class CustomerRespository : ICustomerRespository
        {
            public void Add()
            {
                // Some code to add customer
            }

            public void Update()
            {
                //some code to update customer
            }

            public void Delete()
            {
                //some code delete customer
            }
        }

    Isn't that great? We have created an interface and implemented it without writing a single line of code. Hope you liked it. Stay tuned for more. Until then, happy programming.
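    To see why the extracted interface is worth having, here is a minimal sketch of consuming code that depends on ICustomerRespository rather than on the concrete class. This is my own illustration, not part of the original post; the CustomerService and FakeCustomerRespository names are hypothetical.

        // Depends only on the abstraction, so any implementation can be supplied.
        public class CustomerService
        {
            private readonly ICustomerRespository repository;

            public CustomerService(ICustomerRespository repository)
            {
                this.repository = repository;
            }

            public void RegisterCustomer()
            {
                repository.Add();
            }
        }

        // A stand-in implementation that a unit test could pass to
        // CustomerService instead of the real repository.
        public class FakeCustomerRespository : ICustomerRespository
        {
            public bool AddCalled { get; private set; }

            public void Add() { AddCalled = true; }
            public void Update() { }
            public void Delete() { }
        }

    This is the usual payoff of Extract Interface: the prototype can later swap in a different persistence implementation without touching any of the callers.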

    Read the article

  • Battery life starts at 2:30 hrs (99%), but less than 1 minute later is only 1:30 hrs (99%)

    - by zondu
    After searching this and other forums, I haven't seen this same issue listed anywhere for Ubuntu 12. Prior to installing Ubuntu 12.10, my netbook (Acer AspireOne D250, SATA HDD) was consistently getting 2:30-3 hrs of battery life under Windows XP Home, SP3. However, immediately after installing Ubuntu 12.10, the battery life starts out at 2:30 hrs (99%), but less than 1 minute later suddenly drops to 1:30 hrs (99%), which seems very odd. It could be a complete coincidence that the battery became flaky at the exact same moment that Ubuntu 12.10 was installed, but that doesn't seem likely. I'm a newbie to Ubuntu, so I don't have much experience tweaking/trouble-shooting yet. Here's what I've tried so far:

    - enabled laptop mode (sudo su, then echo 5 > /proc/sys/vm/laptop_mode) and checked that it is running when the A/C adapter is unplugged, but it doesn't seem to have made any noticeable difference in battery life,
    - installed Jupiter, but it didn't work and messed up the system, so I had to uninstall it,
    - disabled bluetooth (wifi is still on because it is necessary), set the screen to the lowest brightness, etc.,
    - run through at least 1 full power cycle (running until the netbook shut itself off due to critical battery) and have been using it normally (sometimes plugged in, often unplugged until the battery gets very low) for a week since installing Ubuntu 12.10,
    - installed powertop, but have no idea how to interpret its results.

    Here are the results of acpi -b:

    w/ A/C adapter: Battery 0: Full, 100%
    immediately after unplugging: Battery 0: Discharging, 99%, 02:30:20 remaining
    1 minute after unplugging: Battery 0: Discharging, 99%, 01:37:49 remaining
    2-3 minutes after unplugging: Battery 0: Discharging, 95%, 01:33:01 remaining
    10 minutes after unplugging: Battery 0: Discharging, 85%, 01:13:38 remaining

    Results of cat /sys/class/power_supply/BAT0/uevent:

    w/ A/C adapter: POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Full POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=12136000 POWER_SUPPLY_CURRENT_NOW=773000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1956000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER=

    immediately after unplugging: POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Discharging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=11886000 POWER_SUPPLY_CURRENT_NOW=773000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1937000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER=

    1 minute later: POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Discharging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=11728000 POWER_SUPPLY_CURRENT_NOW=1174000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1937000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER=

    2-3 minutes later: POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Discharging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=11583000 POWER_SUPPLY_CURRENT_NOW=1209000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000
POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1878000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER= 10 minutes later: POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Discharging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=11230000 POWER_SUPPLY_CURRENT_NOW=1239000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1644000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER= Results of upower -i /org/freedesktop/UPower/devices/battery_BAT0: w/ A/C adapter: native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:24:58 2012 (823 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: fully-charged energy: 21.1248 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 8.3484 W voltage: 12.173 V percentage: 100% capacity: 43.4667% technology: lithium-ion immediately after unplugging: native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:41:25 2012 (1 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: discharging energy: 20.9196 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 8.3484 W voltage: 11.86 V time to empty: 2.5 hours percentage: 99.0286% capacity: 43.4667% technology: lithium-ion History (charge): 1354023683 99.029 discharging 1 minute later: native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:42:31 2012 (17 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: discharging energy: 20.9196 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 13.5432 W voltage: 11.753 V time to empty: 1.5 hours percentage: 99.0286% capacity: 43.4667% technology: lithium-ion History (charge): 1354023683 99.029 discharging History (rate): 1354023751 13.543 discharging 2-3 minutes later: native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:45:06 2012 (20 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: discharging energy: 20.2824 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 13.7484 W voltage: 11.545 V time to empty: 1.5 hours percentage: 96.0123% capacity: 43.4667% technology: lithium-ion History (charge): 1354023906 96.012 discharging 1354023844 97.035 discharging History (rate): 1354023906 13.748 discharging 1354023875 12.992 discharging 1354023844 13.284 discharging 10 minutes later: native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:54:24 2012 (28 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: discharging energy: 18.1764 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 13.2948 W voltage: 11.268 V time to 
empty: 1.4 hours percentage: 86.0429% capacity: 43.4667% technology: lithium-ion History (charge): 1354024433 86.043 discharging History (rate): 1354024464 13.295 discharging 1354024433 13.662 discharging 1354024402 13.781 discharging

    I noticed that between the second and third readings (0 and 1 minutes after unplugging), while the battery still reports 99% charge and the estimate drops from 2:30 hrs to 1:30 hrs, the energy usage goes from 8.34 W to 13.54 W and the current_now value increases. But shouldn't it be using less energy on battery, since the screen is much dimmer and it's in power-saving mode? (Or is that normal behavior?) It also seems to drain more quickly than predicted, especially with the 1-1.25 hour drop in the first minute of being unplugged, which seems odd. What really concerns me is that Ubuntu 12.10 may not be properly managing the battery (given the sudden change in estimated life from 2:30 to 1:30 or 1:15 within a minute of unplugging), and that a new battery may quickly die under Ubuntu 12.10. I'd greatly appreciate any advice/suggestions on what to do, and especially whether there's a way to get back the 1-1.5 hrs of battery life that were suddenly lost when changing from WinXP to Ubuntu 12.10. Thanks :)
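    A quick sanity check on the quoted upower figures shows where both estimates come from: the time-to-empty value is simply the remaining energy divided by the current discharge rate.

        time to empty = energy / energy-rate
        immediately after unplugging: 20.9196 Wh / 8.3484 W  = 2.51 hours
        1 minute later:               20.9196 Wh / 13.5432 W = 1.54 hours

    So the drop in the estimate follows directly from the measured power draw roughly doubling in the first minute, not from the charge reading changing. The same output also shows heavy battery wear: energy-full / energy-full-design = 21.1248 / 48.6 = 43.5%, which matches the reported "capacity: 43.4667%" and means the pack holds less than half its design charge.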

    Read the article

  • No More NCrunch For Me

    - by Steve Wilkes
    When I opened up Visual Studio this morning, I was greeted with this little popup: NCrunch is a Visual Studio add-in which runs your tests while you work so you know if and when you've broken anything, as well as providing coverage indicators in the IDE and coverage metrics on demand. It recently went commercial (which I thought was fair enough), and time is running out for the free version I've been using for the last couple of months. From my experiences using NCrunch I'm going to let it expire, and go about my business without it. Here's why. Before I start, let me say that I think NCrunch is a good product, which is to say it's had a positive impact on my programming. I've used it to help test-drive a library I'm making right from the start of the project, and especially at the beginning it was very useful to have it run all my tests whenever I made a change. The first problem is that while that was cool to start with, it’s recently become a bit of a chore. Problems Running Tests NCrunch has two 'engine modes' in which it can run tests for you - it can run all your tests when you make a change, or it can figure out which tests were impacted and only run those. Unfortunately, it became clear pretty early on that that second option (which is marked as 'experimental') wasn't really working for me, so I had to have it run everything. With a smallish number of tests and while I was adding new features that was great, but I've now got 445 tests (still not exactly loads) and am more in a 'clean and tidy' mode where I know that a change I'm making will probably only affect a particular subset of the tests. With that in mind it's a bit of a drag sitting there after I make a change and having to wait for NCrunch to run everything. I could disable it and manually run the tests I know are impacted, but then what's the point of having NCrunch? If the 'impacted only' engine mode worked well this problem would go away, but that's not what I found. Secondly, what's wrong with this picture? I've got 445 tests, and NCrunch has queued 455 tests to run. So it's queued duplicate tests - in this quickly-screenshotted case 10, but I've seen the total queue get up over 600. If I'm already itchy waiting for it to run all my tests against a change I know only affects a few, I'm even itchier waiting for it to run a lot of them twice. Problems With Code Coverage NCrunch marks each line of code with a dot to say if it's covered by tests - a black dot says the line isn't covered, a red dot says it's covered but at least one of the covering tests is failing, and a green dot means all the covering tests pass. It also calculates coverage statistics for you. Unfortunately, there's a couple of flaws in the coverage. Firstly, it doesn't support ExcludeFromCodeCoverage attributes. This feature has been requested and I expect will be included in a later release, but right now it doesn't. So this: ...is counted as a non-covered line, and drags your coverage statistics down. Hmph. As well as that, coverage of certain types of code is missed. This: ...is definitely covered. I am 100% absolutely certain it is, by several tests. NCrunch doesn't pick it up, down go my coverage statistics. I've had NCrunch find genuinely uncovered code which I've been able to remove, and that's great, but what's the coverage percentage on this project? Umm... I don't know. Conclusion None of these are major, tool-crippling problems, and I expect NCrunch to get much better in future releases. 
The current version has some great features, like this: ...that's a line of code with a failing test covering it, and NCrunch can run that failing test and take me to that line exquisitely easily. That's awesome! I'd happily pay for a tool that can do that. But here's the thing: NCrunch (currently) costs $159 (about £100) for a personal licence and $289 (about £180) for a commercial one. I'm not sure which one I'd need, as my project is a personal one which I'm intending to open-source, but I'm a professional, self-employed developer; in any case, that seems like a lot of money for an imperfect tool. If it did everything it's advertised to do more or less perfectly I'd consider it, but it doesn't. So no more NCrunch for me.
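    For context, the coverage attribute the post complains about is the standard one from System.Diagnostics.CodeAnalysis. A minimal sketch of the kind of member involved; the class and method body here are my own illustration, not the author's actual code:

        using System.Diagnostics.CodeAnalysis;

        public class Widget
        {
            // Explicitly excluded from coverage analysis, yet the version of
            // NCrunch described in the post still counted such lines as
            // uncovered, dragging the coverage statistics down.
            [ExcludeFromCodeCoverage]
            public override string ToString()
            {
                return "Widget";
            }
        }

    Tools that honour the attribute leave the marked member out of both the covered and coverable line counts, which is why ignoring it skews the percentages.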

    Read the article

  • Is it feasible and useful to auto-generate some unit test code?

    - by skiwi
    Earlier today I came up with an idea, based upon a particular real use case, which I would like to have checked for feasibility and usefulness. This question features a fair chunk of Java code, but it can be applied to all languages running inside a VM, and maybe even outside. While there is real code, it uses nothing language-specific, so please read it mostly as pseudo code.

    The idea: make unit testing less cumbersome by adding some ways to autogenerate code based on human interaction with the codebase. I understand this goes against the principle of TDD, but I don't think anyone has ever proved that doing TDD is better than first creating the code and then the tests immediately thereafter. This may even be adapted to fit into TDD, but that is not my current goal. To show how it is intended to be used, I'll copy one of my classes here, for which I need to make unit tests.

        public class PutMonsterOnFieldAction implements PlayerAction {
            private final int handCardIndex;
            private final int fieldMonsterIndex;

            public PutMonsterOnFieldAction(final int handCardIndex, final int fieldMonsterIndex) {
                this.handCardIndex = Arguments.requirePositiveOrZero(handCardIndex, "handCardIndex");
                this.fieldMonsterIndex = Arguments.requirePositiveOrZero(fieldMonsterIndex, "fieldCardIndex");
            }

            @Override
            public boolean isActionAllowed(final Player player) {
                Objects.requireNonNull(player, "player");
                Hand hand = player.getHand();
                Field field = player.getField();
                if (handCardIndex >= hand.getCapacity()) {
                    return false;
                }
                if (fieldMonsterIndex >= field.getMonsterCapacity()) {
                    return false;
                }
                if (field.hasMonster(fieldMonsterIndex)) {
                    return false;
                }
                if (!(hand.get(handCardIndex) instanceof MonsterCard)) {
                    return false;
                }
                return true;
            }

            @Override
            public void performAction(final Player player) {
                Objects.requireNonNull(player);
                if (!isActionAllowed(player)) {
                    throw new PlayerActionNotAllowedException();
                }
                Hand hand = player.getHand();
                Field field = player.getField();
                field.setMonster(fieldMonsterIndex, (MonsterCard)hand.play(handCardIndex));
            }
        }

    We can observe the need for the following tests: a constructor test with valid input, constructor tests with invalid inputs, an isActionAllowed test with valid input, isActionAllowed tests with invalid inputs, a performAction test with valid input, and performAction tests with invalid inputs.

    My idea mainly focuses on the isActionAllowed tests with invalid inputs. Writing these tests is not fun: you need to set up a number of conditions and check that the method really returns false. This extends to performAction, where an exception needs to be thrown in that case. The goal of my idea is to generate those tests by indicating (hopefully through the IDE's GUI) that you want to generate tests based on a specific branch.

    The implementation by example: the user clicks on "Generate code for branch if (handCardIndex >= hand.getCapacity())". Now the tool needs to find a case where that condition holds. (I haven't added the relevant code, as that may ultimately clutter the post.) To invalidate the branch, the tool needs to find a handCardIndex and hand.getCapacity() such that the condition >= holds. It needs to construct a Player with a Hand that has a capacity of at least 1. It notices that the private int capacity of Hand needs to be at least 1, and it searches for ways to set it to 1. Fortunately it finds a constructor that takes the capacity as an argument, and it uses 1 for this.
    Some more work needs to be done to successfully construct a Player instance, involving the creation of objects whose constraints can be seen by inspecting the source code. The tool has now found the Hand with the least capacity possible and is able to construct it. To invalidate the branch, it will need to set handCardIndex = 1. It constructs the test and asserts the result to be false (the returned value of the branch).

    What does the tool need to work? In order to function properly, it will need the ability to scan through all source code (including JDK code) to figure out all constraints. Optionally this could be done through the javadoc, but that is not always used to indicate all constraints. It could also do some trial and error, but that pretty much stops working if you cannot attach source code to compiled classes. Then it needs some basic knowledge of what the primitive types are, including arrays. And it needs to be able to construct some form of "modification trees": the tool knows that it needs to change a certain variable to a different value in order to get the correct test case, hence it will need to list all possible ways to change it, without using reflection obviously. What this tool will not replace is the need to create tailored unit tests that exercise all kinds of conditions when a certain method actually works; it is purely to be used to test methods when they invalidate constraints.

    My questions: Is creating such a tool feasible? Would it ever work, or are there some obvious problems? Would such a tool be useful, and is it even useful to automatically generate these test cases at all? Could it be extended to do even more useful things? Does, by chance, such a project already exist, so that I would be reinventing the wheel? If it is not proven useful but still possible to make such a thing, I will still consider it for fun. If it's considered useful, then I might make an open source project for it, depending on the time. For more background information about the Player and Hand classes used in my example, please refer to this repository. At the time of writing, PutMonsterOnFieldAction has not been uploaded to the repo yet, but this will be done once I'm done with the unit tests.
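    To make the intended output concrete, here is a sketch of the kind of test such a tool might emit for the branch "if (handCardIndex >= hand.getCapacity())". The question's code is Java, but the sketch below is written as a C#/MSTest test purely for illustration; the Player, Hand and Field constructors are assumptions derived from the question's description (a Hand constructor taking a capacity, and so on), not a real API.

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class PutMonsterOnFieldActionGeneratedTests
        {
            // Generated to invalidate "handCardIndex >= hand.getCapacity()":
            // the tool builds the smallest constructible Hand (capacity 1)
            // and picks handCardIndex = 1 so that 1 >= 1 holds.
            [TestMethod]
            public void IsActionAllowed_HandCardIndexAtCapacity_ReturnsFalse()
            {
                var player = new Player(new Hand(1), new Field(1));
                var action = new PutMonsterOnFieldAction(1, 0);

                Assert.IsFalse(action.IsActionAllowed(player));
            }
        }

    The essential point is that everything above the assert (the minimal object graph) is what the tool would have to derive from the constraints, while the assert itself falls directly out of the selected branch's return value.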

    Read the article

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer [closed]

    - by Superbest
    I don't have much formal training in programming and I have learned most things by looking up solutions on the internet to practical problems I have. There are some areas which I think would be valuable to learn, but which ended up being both difficult to learn and easy to avoid learning for a self-taught programmer. Unit testing is one of them. Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles, so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium sized programs (development time of weeks or months, and mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources). I have found this tutorial, which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer and don't know how to answer, such as:

    - What parts of my application, and what sorts of things, aren't necessarily worth testing?
    - How fine grained should my tests be? Should they test every method and property separately, or work with a larger scope?
    - What is a good naming convention for test methods? (Since apparently the name of the method is the only way I will be able to tell, from a glance at the test results table, what works in my program and what doesn't.)
    - Is it bad to have many asserts in one test method? Apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why a test is failing.
    - If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly-trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder now?
    - If my program works with a large body of data and execution takes minutes or more, should my tests use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset or how to simulate?
    - Closely related to the previous point: if a class is such that its main operation happens in a state the program arrives at after some involved operations (say, a class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably ending up testing that class and all the other code that brings it to that state along with it?
    - In general, what kind of approach should I use for test initialization? (Hopefully that is the correct term; I mean preparing classes for testing by filling them in with appropriate data.)
    - How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are these even good ideas?
    - Is there anything like design patterns concerning testing specifically?
    I guess the main themes in what I'd like to learn more about are (1) what are the overarching principles that should be followed (or at least considered) in every testing effort, and (2) what are popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored - not a universally correct rule, but it has been quite helpful, since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly I've found ReSharper's suggestions on member naming style and other things to be quite helpful in keeping my codebases sane. I see many resources, both online and in print, that talk about testing in the context of large applications (years of work, 10s of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real world problems. Speaking of software development in general, advice given with the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors.

    Summary: So my question is: what are some resources to learn about unit testing for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory and then apply systematically, by repeatedly asking myself "is this test following the bible of testing closely enough?" and amending discrepancies when it doesn't.
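    Two of the sub-questions above (test naming and asserts-per-test) have well-known rules of thumb that a short sketch can illustrate: name tests as MethodUnderTest_Scenario_ExpectedResult, and keep one logical assertion per test so a failure report is self-explanatory. A minimal MSTest example, with a hypothetical Calculator class included for completeness:

        using System;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public class Calculator
        {
            public int Divide(int a, int b) { return a / b; }
        }

        [TestClass]
        public class CalculatorTests
        {
            // The name alone tells you what broke when this shows up
            // in the test results table.
            [TestMethod]
            public void Divide_TenByTwo_ReturnsFive()
            {
                var calculator = new Calculator();

                Assert.AreEqual(5, calculator.Divide(10, 2));
            }

            // One logical behaviour per test; the expected-exception
            // attribute keeps the failure message unambiguous.
            [TestMethod]
            [ExpectedException(typeof(DivideByZeroException))]
            public void Divide_ByZero_Throws()
            {
                new Calculator().Divide(10, 0);
            }
        }

    This is only one common convention, not the definitive answer the question asks for, but it shows how a naming pattern plus focused asserts address the "which test failed and why" complaint directly.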

    Read the article

  • Exiting a reboot loop

    - by user12617035
    If you're in a situation where the system is panic'ing during boot, you can use # boot net -s to regain control of your system. In my case, I'd added some diagnostic code to a (PCI) driver (that is used to boot the root filesystem). There was a bug in the driver, and each time during boot, the bug occurred, and so caused the system to panic: ... 000000000180b950 genunix:vfs_mountroot+60 (800, 200, 0, 185d400, 1883000, 18aec00) %l0-3: 0000000000001770 0000000000000640 0000000001814000 00000000000008fc %l4-7: 0000000001833c00 00000000018b1000 0000000000000600 0000000000000200 000000000180ba10 genunix:main+98 (18141a0, 1013800, 18362c0, 18ab800, 180e000, 1814000) %l0-3: 0000000070002000 0000000000000001 000000000180c000 000000000180e000 %l4-7: 0000000000000001 0000000001074800 0000000000000060 0000000000000000 skipping system dump - no dump device configured rebooting... If you're logged in via the console, you can send a BREAK sequence in order to gain control of the firmware's (OBP's) prompt. Enter Ctrl-Shift-[ in order to get the TELNET prompt. Once telnet has control, enter this: telnet> send brk You'll be presented with OBP's prompt: ok You then enter the following in order to boot into single-user mode via the network: ok boot net -s Note that booting from the network under Solaris will implicitly cause the system to be INSTALLED with whatever software had last been configured to be installed. However, we are using boot net -s as a "handle" with which to get at the Solaris prompt. Once at that prompt, we can perform actions as root that will let us back out our buggy driver (ok... MY buggy driver :-)) ...and replace it with the original, non-buggy driver. Entering the boot command caused the following output, as well as left us at the Solaris prompt (in single-user-mode): Sun Blade 1500, No Keyboard Copyright 1998-2004 Sun Microsystems, Inc. All rights reserved. OpenBoot 4.16.4, 1024 MB memory installed, Serial #53463393. Ethernet address 0:3:ba:2f:c9:61, Host ID: 832fc961. Rebooting with command: boot net -s Boot device: /pci@1f,700000/network@2 File and args: -s 1000 Mbps FDX Link up Timeout waiting for ARP/RARP packet Timeout waiting for ARP/RARP packet 4000 1000 Mbps FDX Link up Requesting Internet address for 0:3:ba:2f:c9:61 SunOS Release 5.10 Version Generic_118833-17 64-bit Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms. Booting to milestone "milestone/single-user:default". Configuring devices. Using RPC Bootparams for network configuration information. Attempting to configure interface bge0... Configured interface bge0 Requesting System Maintenance Mode SINGLE USER MODE # Our goal is to now move to the directory containing the buggy driver and replace it with the original driver (that we had saved away before ever loading our buggy driver! :-) However, since we booted from the network, the root filesystem ("/") is NOT mounted on one of our local disks. It is mounted on an NFS filesystem exported by our install server. To verify this, enter the following command: # mount | head -1 / on my-server:/export/install/media/s10u2/solarisdvd.s10s_u2dvd/latest/Solaris_10/Tools/Boot remote/read/write/setuid/devices/dev=4ac0001 on Wed Dec 31 16:00:00 1969 As a result, we have to create a temporary mount point and then mount the local disk onto that mount point: # mkdir /tmp/mnt # mount /dev/dsk/c0t0d0s0 /tmp/mnt Note that your system will not necessarily have had its root filesystem on "c0t0d0s0". 
This is something that you should also have recorded before you ever loaded your.. er... "my" buggy driver! :-) One can find the local disk mounted under the root filesystem by entering: # df -k / Filesystem kbytes used avail capacity Mounted on /dev/dsk/c0t0d0s0 76703839 4035535 71901266 6% / To continue with our example, we can now move to the directory of the buggy driver in order to replace it with the original driver. Note that /tmp/mnt is prefixed to the path where we'd "normally" find the driver: # cd /tmp/mnt/platform/sun4u/kernel/drv/sparcv9 # ls -l pci* -rw-r--r-- 1 root root 288504 Dec 6 15:38 pcisch -rw-r--r-- 1 root root 288504 Dec 6 15:38 pcisch.aar -rwxr-xr-x 1 root sys 211616 Jun 8 2006 pcisch.orig # cp -p pcisch.orig pcisch We can now synchronize any in-memory filesystem data structures with those on disk... and then reboot. The system will then boot correctly... as expected: # sync;sync # reboot syncing file systems... done Sun Blade 1500, No Keyboard Copyright 1998-2004 Sun Microsystems, Inc. All rights reserved. OpenBoot 4.16.4, 1024 MB memory installed, Serial #xxxxxxxx. Ethernet address 0:3:ba:2f:c9:61, Host ID: yyyyyyyy. Rebooting with command: boot Boot device: /pci@1e,600000/ide@d/disk@0,0:a File and args: SunOS Release 5.10 Version Generic_118833-17 64-bit Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms. Hostname: my-host NIS domain name is my-campus.Central.Sun.COM my-host console login: ...so that's how it's done! Of course, the easier way is to never write a buggy driver... but.. then.. we all "have an eraser on the end of each of our pencils"... don't we ? :-) "...thank you... and good night..."

    Read the article

  • Problems upgrading VB.Net 2008 project into VS2010

    - by Brett Rigby
    Hi there, I have been upgrading several different VS2008 projects to VS2010 and have found a problem with VB.Net projects when they are converted. Once converted, the .vbproj files have changed from this in VS2008:

        <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
          <DebugSymbols>true</DebugSymbols>
          <DebugType>full</DebugType>
          <DefineDebug>true</DefineDebug>
          <DefineTrace>true</DefineTrace>
          <OutputPath>bin\Debug\</OutputPath>
          <DocumentationFile>CustomerManager.xml</DocumentationFile>
          <WarningsAsErrors>41999,42016,42017,42018,42019,42020,42021,42022,42032,42036</WarningsAsErrors>
        </PropertyGroup>

    To this in VS2010:

        <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
          <DebugSymbols>true</DebugSymbols>
          <DebugType>full</DebugType>
          <DefineDebug>true</DefineDebug>
          <DefineTrace>true</DefineTrace>
          <OutputPath>bin\Debug\</OutputPath>
          <DocumentationFile>CustomerManager.xml</DocumentationFile>
          <NoWarn>42353,42354,42355</NoWarn>
          <WarningsAsErrors>41999,42016,42017,42018,42019,42020,42021,42022,42032,42036</WarningsAsErrors>
        </PropertyGroup>

    The main difference is that in the VS2010 version, the 42353,42354,42355 value has been added. Inside the IDE, this manifests itself as the following setting in the Project Properties | Compile section: "Function returning intrinsic value type without return value" = None. This isn't a problem when building the code inside Visual Studio 2010, but when trying to build it through our continuous integration scripts, it fails with the following errors:

        [msbuild] vbc : Command line error BC2026: warning number '42353' for the option 'nowarn' is either not configurable or not valid
        [msbuild] vbc : Command line error BC2026: warning number '42354' for the option 'nowarn' is either not configurable or not valid
        [msbuild] vbc : Command line error BC2026: warning number '42355' for the option 'nowarn' is either not configurable or not valid

    I couldn't find anything on Google for these messages, which is strange, and I am trying to find out why this is happening. Any suggestions as to why Visual Studio 2010's conversion wizard is doing this?

    Read the article

  • Why is code quality not popular?

    - by Peter Kofler
    I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience, actions that help code quality are hardly ever implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality, with development processes and such, is much broader and not the scope of this question.) Code quality does not seem popular. Some examples from my experience include: Probably every Java developer knows JUnit, and almost all languages have xUnit frameworks, but in all companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests. Static code analysis is often played around with in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored. Conference speakers and magazines talk a lot about EJB3.1, OSGi, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, or how some nasty beast of legacy code was brought under test... (I have not attended many conferences, and it probably looks different for conferences on agile topics, as unit testing and CI and such have a higher value there.) So why is code quality so unpopular/considered boring?

    EDIT: Thank you for your answers. Most of them concern unit testing (and that has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming, or enforce reviews of critical code.
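    As an illustration of the "ignored warnings" point (the snippet is mine, not from the post), the kind of potential null dereference that static analysis tools flag, and that teams routinely leave unaddressed, looks like this in C#:

        public class Order
        {
            public string Notes { get; set; }
        }

        public static class OrderReports
        {
            public static int DescriptionLength(Order order)
            {
                // A static analyzer warns that FindDescription may return null,
                // so this dereference is a potential NullReferenceException.
                string description = FindDescription(order);
                return description.Length;
            }

            private static string FindDescription(Order order)
            {
                // Returns null when nothing is found, which callers must handle.
                return order != null ? order.Notes : null;
            }
        }

    Treating such warnings as build errors, or at least triaging them, is exactly the sort of cheap quality action the post says rarely gets adopted.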

    Read the article

  • WMI/VBS/HTML System Information Script

    - by Methical
    Hey guys; havin' a problem with this code here; can't seem to work out what's goin' wrong with it. All other variables seem to print fine in the HTML output, but I get an error that relates to the cputype variable. I get the following error:

    C:\Users\Methical\Desktop\sysinfo.vbs(235,1) Microsoft VBScript runtime error: Invalid procedure call or argument

    I think it has somethin' to do with this line here: fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>CPU</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & cputype & "</i></td></tr>" If I delete this line, the script runs and outputs with no errors. Here is the full code below: Dim strComputer, objWMIService, propValue, objItem Dim strUserName, strPassword, colItems, SWBemlocator ' This section queries for the workstation to be scanned. UserName = "" Password = "" strComputer = "127.1.1.1" ImgDir = "C:\Scripts\images\" 'Sets up the connections and objects to be used throughout the script. Set SWBemlocator = CreateObject("WbemScripting.SWbemLocator") Set objWMIService = SWBemlocator.ConnectServer(,"root\CIMV2",strUserName,strPassword) 'This determines the current date and time of the PC being scanned. Set colItems = objWMIService.ExecQuery("SELECT * FROM Win32_LocalTime", "WQL", wbemFlagReturnImmediately + wbemFlagForwardOnly) For Each objItem in colItems If objItem.Minute < 10 Then theMinutes = "0" & objItem.Minute Else theMinutes = objItem.Minute End If If objItem.Second < 10 Then theSeconds = "0" & objItem.Second Else theSeconds = objItem.Second End If DateTime = objItem.Month & "/" & objItem.Day & "/" & objItem.Year & " - " & objItem.Hour & ":" & theMinutes & ":" & theSeconds Next 'Gets some information about the Operating System including Service Pack level. Set colItems = objWMIService.ExecQuery("Select * from Win32_OperatingSystem",,48) For Each objItem in colItems WKID = objItem.CSName WKOS = objItem.Caption CSD = objItem.CSDVersion Architecture = objItem.OSArchitecture SysDir = objItem.SystemDirectory SysDrive = objItem.SystemDrive WinDir = objItem.WindowsDirectory ServicePack = objItem.ServicePackMajorVersion & "." & objItem.ServicePackMinorVersion Next 'This section returns the Time Zone Set colItems = objWMIService.ExecQuery("Select * from Win32_TimeZone") For Each objItem in colItems Zone = objItem.Description Next 'This section displays the Shadow Storage information Set colItems = objWMIService.ExecQuery("Select * from Win32_ShadowStorage") For Each objItem in colItems Allocated = int((objItem.AllocatedSpace/1024)/1024+1) UsedSpace = int((objItem.UsedSpace/1024)/1024+1) MaxSpace = int((objItem.MaxSpace/1024)/1024+1) Next 'This section returns the InstallDate of the OS Set objSWbemDateTime = _ CreateObject("WbemScripting.SWbemDateTime") Set colOperatingSystems = _ objWMIService.ExecQuery _ ("Select * from Win32_OperatingSystem") For Each objOperatingSystem _ in colOperatingSystems objSWbemDateTime.Value = _ objOperatingSystem.InstallDate InstallDate = _ objSWbemDateTime.GetVarDate(False) Next 'This section returns the Video card and current resolution. Set colItems = objWMIService.ExecQuery("Select * from Win32_DisplayConfiguration",,48) For Each objItem in colItems VideoCard = objItem.DeviceName Resolution = objItem.PelsWidth & " x " & objItem.PelsHeight & " x " & objItem.BitsPerPel & " bits" Next 'This section returns the Video card memory.
Set objWMIService = GetObject("winmgmts:root\cimv2") Set colItems = objWMIService.ExecQuery ("Select * from Win32_VideoController") For Each objItem in colItems VideoMemory = objItem.AdapterRAM/1024/1024 Next 'This returns various system information including current logged on user, domain, memory, manufacture and model. Set colItems = objWMIService.ExecQuery("Select * from Win32_ComputerSystem",,48) For Each objItem in colItems UserName = objItem.UserName Domain = objItem.Domain TotalMemory = int((objItem.TotalPhysicalMemory/1024)/1024+1) Manufacturer = objItem.Manufacturer Model = objItem.Model SysType = objItem.SystemType Next 'This determines the total hard drive space and free hard drive space. Set colItems = objWMIService.ExecQuery("Select * from Win32_LogicalDisk Where Name='C:'",,48) For Each objItem in colItems FreeHDSpace = Fix(((objItem.FreeSpace/1024)/1024)/1024) TotalHDSpace = Fix(((objItem.Size/1024)/1024)/1024) Next 'This section returns the default printer and printer port. Set colItems = objWMIService.ExecQuery("SELECT * FROM Win32_Printer where Default=True", "WQL", wbemFlagReturnImmediately + wbemFlagForwardOnly) For Each objItem in colItems Printer = objItem.Name PortName = objItem.PortName Next 'This returns the CPU information. Set colItems = objWMIService.ExecQuery("SELECT * FROM Win32_Processor", "WQL", wbemFlagReturnImmediately + wbemFlagForwardOnly) For Each objItem in colItems CPUDesc = LTrim(objItem.Name) Next '// CPU Info For each objCPU in GetObject("winmgmts:{impersonationLevel=impersonate}\\" & strComputer & "\root\cimv2").InstancesOf("Win32_Processor") Select Case objCPU.Family Case 2 cputype = "Unknown" Case 11 cputype = "Pentium brand" Case 12 cputype = "Pentium Pro" Case 13 cputype = "Pentium II" Case 14 cputype = "Pentium processor with MMX technology" Case 15 cputype = "Celeron " Case 16 cputype = "Pentium II Xeon" Case 17 cputype = "Pentium III" Case 28 cputype = "AMD Athlon Processor Family" Case 29 cputype = "AMD Duron Processor" Case 30 cputype = "AMD2900 Family" Case 31 cputype = "K6-2+" Case 130 cputype = "Itanium Processor" Case 176 cputype = "Pentium III Xeon" Case 177 cputype = "Pentium III Processor with Intel SpeedStep Technology" Case 178 cputype = "Pentium 4" Case 179 cputype = "Intel Xeon" Case 181 cputype = "Intel Xeon processor MP" Case 182 cputype = "AMD AthlonXP Family" Case 183 cputype = "AMD AthlonMP Family" Case 184 cputype = "Intel Itanium 2" Case 185 cputype = "AMD Opteron? Family" End Select Next 'This returns the current uptime (time since last reboot) of the system. Set colOperatingSystems = objWMIService.ExecQuery ("Select * from Win32_OperatingSystem") For Each objOS in colOperatingSystems dtmBootup = objOS.LastBootUpTime dtmLastBootupTime = WMIDateStringToDate(dtmBootup) dtmSystemUptime = DateDiff("h", dtmLastBootUpTime, Now) Uptime = dtmSystemUptime Next Function WMIDateStringToDate(dtmBootup) WMIDateStringToDate = CDate(Mid(dtmBootup, 5, 2) & "/" & Mid(dtmBootup, 7, 2) & "/" & Left(dtmBootup, 4) & " " & Mid (dtmBootup, 9, 2) & ":" & Mid(dtmBootup, 11, 2) & ":" & Mid(dtmBootup,13, 2)) End Function dim objFSO Set objFSO = CreateObject("Scripting.FileSystemObject") ' -- The heart of the create file script ----------------------- ' -- Creates the file using the value of strFile on Line 11 ' -------------------------------------------------------------- Set fileOutput = objFSO.CreateTextFile( "x.html", true ) 'Set fileOutput = objExplorer.Document 'This is the code for the web page to be displayed. 
fileOutput.WriteLine "<html>" fileOutput.WriteLine " <head>" fileOutput.WriteLine " <title>System Information for '" & WKID & "' </title>" fileOutput.WriteLine " </head>" fileOutput.WriteLine " <body bgcolor='#FFFFFF' text='#000000' link='#0000FF' vlink='000099' alink='#00FF00'>" fileOutput.WriteLine " <center>" fileOutput.WriteLine " <h1>System Information for " & WKID & "</h1>" fileOutput.WriteLine " <table border='0' cellspacing='1' cellpadding='1' width='95%'>" fileOutput.WriteLine " <tr><td background='" & ImgDir & "blue_spacer.gif'>" fileOutput.WriteLine " <table border='0' cellspacing='0' cellpadding='0' width='100%'>" fileOutput.WriteLine " <tr><td>" fileOutput.WriteLine " <table border='0' cellspacing='0' cellpadding='0' width='100%'>" fileOutput.WriteLine " <tr>" fileOutput.WriteLine " <td width='5%' align='left' valign='middle' background='" & ImgDir & "blue_spacer.gif'><img src='" & ImgDir & "write.gif'></td>" fileOutput.WriteLine " <td width='95%' align='left' valign='middle' background='" & ImgDir & "blue_spacer.gif'> <font color='#FFFFFF' size='5'>WKInfo - </font><font color='#FFFFFF' size='3'>General information on the Workstation.</font></td>" fileOutput.WriteLine " </tr>" fileOutput.WriteLine " <tr><td colspan='2' bgcolor='#FFFFFF'>" fileOutput.WriteLine " <TABLE width='100%' cellspacing='0' cellpadding='2' border='1' bordercolor='#c0c0c0' bordercolordark='#ffffff' bordercolorlight='#c0c0c0'>" fileOutput.WriteLine" <tr height=2><td height=10 align=center bgcolor=midnightblue colspan=3></td></tr>" fileOutput.WriteLine " <TR><TD align='center' bgcolor='#d0d0d0' colspan='2'><b><h3>Date and Time</h3></b></TD></TR>" fileOutput.WriteLine" <tr height=2><td height=10 align=center bgcolor=midnightblue colspan=3></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Date/Time</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & DateTime & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>System Uptime</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & Uptime & " hours</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Time Zone</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & Zone & " </i></td></tr>" fileOutput.WriteLine" <tr height=2><td height=10 align=center bgcolor=midnightblue colspan=3></td></tr>" fileOutput.WriteLine " <TR><TD align='center' bgcolor='#d0d0d0' colspan='2'><b><h3>General Computer Information</h3></b></TD></TR>" fileOutput.WriteLine" <tr height=2><td height=10 align=center bgcolor=midnightblue colspan=3></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Manufacturer</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & Manufacturer & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Model</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & Model & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>System Based</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & SysType & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Operating System</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & WKOS & " " & CSD & " " & Architecture & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Operating System Install Date</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & InstallDate & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' 
bgcolor='#e0e0e0'>UserName</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & UserName & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Workstation Name</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & WKID & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Domain</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & Domain & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>System Drive</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & SysDrive & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>System Directory</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & SysDir & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Windows Directory</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & WinDir & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>ShadowStorage Allocated Space</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & Allocated & " MB</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>ShadowStorage Used Space</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & UsedSpace & " MB</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>ShadowStorage Max Space</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & MaxSpace & " MB</i></td></tr>" fileOutput.WriteLine" <tr height=2><td height=10 align=center bgcolor=midnightblue colspan=3></td></tr>" fileOutput.WriteLine " <TR><TD align='center' bgcolor='#d0d0d0' colspan='2'><b><h3>General Hardware Information</h3></b></TD></TR>" fileOutput.WriteLine" <tr height=2><td height=10 align=center bgcolor=midnightblue colspan=3></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>CPU</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & cputype & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Memory</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & TotalMemory & " MB</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Total HDD Space</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & TotalHDSpace & " GB</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Free HDD Space</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & FreeHDSpace & " GB</i></td></tr>" fileOutput.WriteLine" <tr height=2><td height=10 align=center bgcolor=midnightblue colspan=3></td></tr>" fileOutput.WriteLine " <TR><TD align='center' bgcolor='#d0d0d0' colspan='2'><b><h3>General Video Card Information</h3></b></TD></TR>" fileOutput.WriteLine" <tr height=2><td height=10 align=center bgcolor=midnightblue colspan=3></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Video Card</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & VideoCard & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Resolution</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & Resolution & "</i></td></tr>" fileOutput.WriteLine " <TR><TD width='30%' align='left' bgcolor='#e0e0e0'>Memory</TD><td width='70%' bgcolor=#f0f0f0 align=left><i>" & VideoMemory & " MB</i></td></tr>" 'This section lists all the current services and their status. 
fileOutput.WriteLine " <TR><TD align='center' bgcolor='#d0d0d0' colspan='2'><b><h3>Current Service Information</h3></b></TD></TR>" fileOutput.WriteLine " <tr><td colspan='2' bgcolor='#f0f0f0'>" fileOutput.WriteLine " <TABLE width='100%' cellspacing='0' cellpadding='2' border='1' bordercolor='#c0c0c0' bordercolordark='#ffffff' bordercolorlight='#c0c0c0'>" fileOutput.WriteLine " <TR><TD width='70%' align='center' bgcolor='#e0e0e0'><b>Service Name</b></td><TD width='30%' align='center' bgcolor='#e0e0e0'><b>Service State</b></td><tr>" Set colRunningServices = objWMIService.ExecQuery("Select * from Win32_Service") For Each objService in colRunningServices fileOutput.WriteLine " <TR><TD align='left' bgcolor='#f0f0f0'>" & objService.DisplayName & "</TD><td bgcolor=#f0f0f0 align=center><i>" & objService.State & "</i></td></tr>" wscript.echo " <TR><TD align='left' bgcolor='#f0f0f0'>" & objService.DisplayName & "</TD><td bgcolor=#f0f0f0 align=center><i>" & objService.State & "</i></td></tr>" Next fileOutput.WriteLine " </table>" fileOutput.WriteLine " </td></tr>" 'This section lists all the current running processes and some information. fileOutput.WriteLine " <TR><TD align='center' bgcolor='#d0d0d0' colspan='2'><b><h3>Current Process Information</h3></b></TD></TR>" fileOutput.WriteLine " <tr><td colspan='2' bgcolor='#f0f0f0'>" fileOutput.WriteLine " <TABLE width='100%' cellspacing='0' cellpadding='2' border='1' bordercolor='#c0c0c0' bordercolordark='#ffffff' bordercolorlight='#c0c0c0'>" fileOutput.WriteLine " <TR><TD width='10%' align='center' bgcolor='#e0e0e0'><b>PID</b></td><TD width='35%' align='center' bgcolor='#e0e0e0'><b>Process Name</b></td><TD width='40%' align='center' bgcolor='#e0e0e0'><b>Owner</b></td><TD width='15%' align='center' bgcolor='#e0e0e0'><b>Memory</b></td></tr>" Set colProcessList = objWMIService.ExecQuery("Select * from Win32_Process") For Each objProcess in colProcessList colProperties = objProcess.GetOwner(strNameOfUser,strUserDomain) fileOutput.WriteLine " <TR><TD align='center' bgcolor='#f0f0f0'>" & objProcess.Handle & "</td><TD align='center' bgcolor='#f0f0f0'>" & objProcess.Name & "</td><TD align='center' bgcolor='#f0f0f0'>" & strUserDomain & "\" & strNameOfUser & "</td><TD align='center' bgcolor='#f0f0f0'>" & objProcess.WorkingSetSize/1024 & " kb</td><tr>" Next fileOutput.WriteLine " </table>" fileOutput.WriteLine " </td></tr>" 'This section lists all the currently installed software on the machine. 
fileOutput.WriteLine " <TR><TD align='center' bgcolor='#d0d0d0' colspan='2'><b><i>Installed Software</i></b></TD></TR>" fileOutput.WriteLine " <tr><td colspan='2' bgcolor='#f0f0f0'>" Set colSoftware = objWMIService.ExecQuery ("Select * from Win32_Product") For Each objSoftware in colSoftware fileOutput.WriteLine" <TABLE width='100%' cellspacing='0' cellpadding='2' border='1' bordercolor='#c0c0c0' bordercolordark='#ffffff' bordercolorlight='#c0c0c0'>" fileOutput.WriteLine" <tr><td width=30% align=center bgcolor='#e0e0e0'><b>Name</b></td><td width=30% align=center bgcolor='#e0e0e0'><b>Vendor</b></td><td width=30% align=center bgcolor='#e0e0e0'><b>Version</b></td></tr>" fileOutput.WriteLine" <tr><td align=center bgcolor=#f0f0f0>" & objSoftware.Name & "</td><td align=center bgcolor=#f0f0f0>" & objSoftware.Vendor & "</td><td align=center bgcolor=#f0f0f0>" & objSoftware.Version & "</td></tr>" fileOutput.WriteLine" <tr height=2><td height=10 align=center bgcolor=midnightblue colspan=3></td></tr>" fileOutput.WriteLine" </table>" Next fileOutput.WriteLine " </td></tr>" fileOutput.WriteLine " </table>" fileOutput.WriteLine " </td></tr>" fileOutput.WriteLine " </table>" fileOutput.WriteLine " </td></tr>" fileOutput.WriteLine " </table>" fileOutput.WriteLine " </td></tr>" fileOutput.WriteLine " </table>" fileOutput.WriteLine " <p><small></small></p>" fileOutput.WriteLine " </center>" fileOutput.WriteLine " </body>" fileOutput.WriteLine "<html>" fileOutput.close WScript.Quit

    Read the article

  • How to set up a Hibernate project using annotations compliant with JPA 2.0

    - by phmr
    I would like to set up a Hibernate project; for this I use the latest Hibernate, 3.5-Final. BTW, my IDE is NetBeans. The problem is that each time I run the application, it seems to start from a fresh database, whatever DB backend I use (I tried HSQLDB and SQLite). Here is my persistence.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
          <persistence-unit name="PMMPU" transaction-type="RESOURCE_LOCAL">
            <provider>org.hibernate.ejb.HibernatePersistence</provider>
            <class>model.Extrait</class>
            <class>model.Mot</class>
            <class>model.Prefixe</class>
            <class>model.Suffixe</class>
            <class>model.Texte</class>
            <properties>
              <property name="hibernate.dialect" value="dialect.SQLiteDialect"/>
              <property name="hibernate.connection.username" value=""/>
              <property name="hibernate.connection.driver_class" value="org.sqlite.JDBC"/>
              <property name="hibernate.connection.password" value=""/>
              <property name="hibernate.connection.url" value="jdbc:sqlite:test.db"/>
              <property name="hibernate.cache.provider_class" value="org.hibernate.cache.NoCacheProvider"/>
              <property name="hibernate.hbm2ddl.auto" value="update"/>
            </properties>
          </persistence-unit>
        </persistence>

    I tried changing the hibernate.hbm2ddl.auto value. I have a HibernateUtil class which takes care of creating the emf & em:

        public class HibernateUtil {
            private static EntityManagerFactory emf = null;
            private static EntityManager em = null;

            public static EntityManagerFactory getEmf() {
                if (emf == null)
                    emf = Persistence.createEntityManagerFactory("PMMPU");
                return emf;
            }

            public static EntityManager getEm() {
                if (em == null)
                    em = getEmf().createEntityManager();
                return em;
            }
        }

    What did I do wrong?

    edit1: further research with MySQL led me to think that the problem is related to the SQLite & HSQLDB interaction with Hibernate 3.5

    Read the article

  • glassfish v3.0 hangs: no app is ever deployed and no error is ever shown

    - by Samuel Lopez
    I have a web app that uses JSF 2.0 with RichFaces and PrimeFaces, Hibernate and Java, and I use NetBeans 7.1.2 as the IDE. When I run the app, the GlassFish server is started and the log shows this:

    Launching GlassFish on Felix platform
    Información: Running GlassFish Version: GlassFish Server Open Source Edition 3.1.2 (build 23)
    Información: Grizzly Framework 1.9.46 started in: 20ms - bound to [0.0.0.0:4848]
    Información: Grizzly Framework 1.9.46 started in: 32ms - bound to [0.0.0.0:8181]
    Información: Grizzly Framework 1.9.46 started in: 59ms - bound to [0.0.0.0:8080]
    Información: Grizzly Framework 1.9.46 started in: 32ms - bound to [0.0.0.0:3700]
    Información: Grizzly Framework 1.9.46 started in: 21ms - bound to [0.0.0.0:7676]
    Información: Registered org.glassfish.ha.store.adapter.cache.ShoalBackingStoreProxy for persistence-type = replicated in BackingStoreFactoryRegistry
    Información: SEC1002: Security Manager is OFF.
    Información: SEC1010: Entering Security Startup Service
    Información: SEC1143: Loading policy provider com.sun.enterprise.security.provider.PolicyWrapper.
    Información: SEC1115: Realm [admin-realm] of classtype [com.sun.enterprise.security.auth.realm.file.FileRealm] successfully created.
    Información: SEC1115: Realm [file] of classtype [com.sun.enterprise.security.auth.realm.file.FileRealm] successfully created.
    Información: SEC1115: Realm [certificate] of classtype [com.sun.enterprise.security.auth.realm.certificate.CertificateRealm] successfully created.
    Información: SEC1011: Security Service(s) Started Successfully
    Información: WEB0169: Created HTTP listener [http-listener-1] on host/port [0.0.0.0:8080]
    Información: WEB0169: Created HTTP listener [http-listener-2] on host/port [0.0.0.0:8181]
    Información: WEB0169: Created HTTP listener [admin-listener] on host/port [0.0.0.0:4848]
    Información: WEB0171: Created virtual server [server]
    Información: WEB0171: Created virtual server [__asadmin]
    Información: WEB0172: Virtual server [server] loaded default web module []
    Información: Inicializando Mojarra 2.1.6 (SNAPSHOT 20111206) para el contexto '/test'
    Información: Hibernate Validator 4.2.0.Final
    Información: WEB0671: Loading application [test] at [/test]
    Información: CORE10010: Loading application test done in 4,885 ms
    Información: GlassFish Server Open Source Edition 3.1.2 (23) startup time : Felix (1,848ms), startup services(5,600ms), total(7,448ms)
    Información: JMX005: JMXStartupService had Started JMXConnector on JMXService URL service:jmx:rmi://SJ007:8686/jndi/rmi://SJ007:8686/jmxrmi
    Información: WEB0169: Created HTTP listener [http-listener-1] on host/port [0.0.0.0:8080]
    Información: Grizzly Framework 1.9.46 started in: 14ms - bound to [0.0.0.0:8080]
    Información: WEB0169: Created HTTP listener [http-listener-2] on host/port [0.0.0.0:8181]
    Información: Grizzly Framework 1.9.46 started in: 12ms - bound to [0.0.0.0:8181]

    But right there it hangs: the deploy bar keeps running, but no more actions are shown and nothing else is logged; it just stays there until I stop the deploy. Is there any other error log for debugging the GlassFish server? Any thoughts? I have re-installed GlassFish and NetBeans, but it all seems the same. I think this started happening after I had to force-restart my computer with NetBeans still open and the app deployed, but it's hard to know for sure if this was the real catalyst. Any thoughts or help is appreciated, thanks. Is it an app error? If so, why are no errors shown in the log?

    Read the article

  • Getting XText to work

    - by Calmarius
    I know you don't like helping others with their homework, but I have to write an Xtext grammar, write sample code that matches that grammar, and compile it to an HTML file. The lecturer showed us the steps and everything worked for him... He said, "It's so simple it will be ten minutes' work for you," and I believed that. However, at home almost nothing works as expected. And of course there are no more lectures to go to; only the exam awaits me, where I have to show what I've done in order to pass. Moreover, the e-mail I sent him bounced back from the mailer daemon...

    I got Xtext along with the Eclipse IDE from the Xtext website, unpacked it, and followed the steps in the official tutorial to get the default project template working. The tutorial is here: http://wiki.eclipse.org/Xtext/GettingStarted

    Now I'm at the step "Model". It says to open "MyModel.mydsl". I do that, but the editor does not open. It said: "Could not open the editor: The editor class could not be instantiated. This usually indicates a missing no-arg constructor or that the editor's class name was mistyped in plugin.xml." Since everything is generated, the error message did not help me... There was an option to look at the stack trace (it was a mile long), and at the top of it was this exception: java.lang.IllegalStateException: The bundle has not yet been activated. Make sure the Manifest.MF contains 'Bundle-ActivationPolicy: lazy'. I opened MANIFEST.MF and Bundle-ActivationPolicy: lazy was set... I googled for a solution, but to no avail.

    It drove me nuts and I gave up. I have no experience with Eclipse, Java, or Xtext; I just want to do my homework and forget about it all until I need it again... Does anyone have experience with Xtext? Any help is appreciated.

    ps: I will keep at it too, and I might resolve the problem within a few hours. But right now I am at a loss.
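    For reference, the artifacts the Getting Started wizard generates are tiny. A sketch of the tutorial's default template grammar (MyDsl.xtext) and a matching model, using the tutorial's default package name org.xtext.example.mydsl, so adjust if your project was named differently:

      grammar org.xtext.example.mydsl.MyDsl with org.eclipse.xtext.common.Terminals

      generate myDsl "http://www.xtext.org/example/mydsl/MyDsl"

      // a model is any number of greetings
      Model:
          greetings+=Greeting*;

      // each greeting is the keyword 'Hello', an identifier, and '!'
      Greeting:
          'Hello' name=ID '!';

    A MyModel.mydsl that conforms to this grammar can then be a single line such as "Hello Xtext!"; if the generated editor refuses to open even for that, the problem is in the plug-in setup rather than in the grammar itself.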

    Read the article

  • OpenCV 2.4.2 on Matlab 2012b (Windows 7)

    - by Maik Xhani
    Hello, I am trying to use OpenCV 2.4.2 in MATLAB 2012b. I have tried the following:

    - downloaded OpenCV 2.4.2
    - ran CMake on the opencv folder, using the Visual Studio 10 and Visual Studio 10 Win64 compilers
    - built the Debug and Release versions with Visual Studio, first without any other option and then with D_SCL_SECURE=1 specified for every project
    - changed MATLAB's mexopts.bat, adding new lines referring to the library and include directories for the Visual Studio 10 compiler (see the bottom of this post for the mexopts.bat content)
    - tried to compile a simple file that only includes an OpenCV header, and all went well
    - tried to compile something that actually uses OpenCV calls, and got errors (a minimal test source for this step is sketched below)

    I used the mexopencv library, and when I tried to compile something I got this error:

      cv.make
      mex -largeArrayDims -D_SECURE_SCL=1 -Iinclude -I"C:\OpenCV\build\include" -L"C:\OpenCV\build\x64\vc10\lib" -lopencv_calib3d242 -lopencv_contrib242 -lopencv_core242 -lopencv_features2d242 -lopencv_flann242 -lopencv_gpu242 -lopencv_haartraining_engine -lopencv_highgui242 -lopencv_imgproc242 -lopencv_legacy242 -lopencv_ml242 -lopencv_nonfree242 -lopencv_objdetect242 -lopencv_photo242 -lopencv_stitching242 -lopencv_ts242 -lopencv_video242 -lopencv_videostab242 src+cv\CamShift.cpp lib\MxArray.obj -output +cv\CamShift
      CamShift.cpp
      C:\Program Files\MATLAB\R2012b\extern\include\tmwtypes.h(821) : warning C4091: 'typedef ': ignored on left of 'wchar_t' when no variable is declared
      c:\program files\matlab\r2012b\extern\include\matrix.h(319) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int

    The content of my mexopts.bat is:

      @echo off
      rem MSVC100OPTS.BAT
      rem
      rem    Compile and link options used for building MEX-files
      rem    using the Microsoft Visual C++ compiler version 10.0
      rem
      rem    $Revision: 1.1.6.4.2.1 $  $Date: 2012/07/12 13:53:59 $
      rem    Copyright 2007-2009 The MathWorks, Inc.
      rem
      rem    StorageVersion: 1.0
      rem    C++keyFileName: MSVC100OPTS.BAT
      rem    C++keyName: Microsoft Visual C++ 2010
      rem    C++keyManufacturer: Microsoft
      rem    C++keyVersion: 10.0
      rem    C++keyLanguage: C++
      rem    C++keyLinkerName: Microsoft Visual C++ 2010
      rem    C++keyLinkerVersion: 10.0
      rem
      rem ********************************************************************
      rem General parameters
      rem ********************************************************************
      set MATLAB=%MATLAB%
      set VSINSTALLDIR=c:\Program Files (x86)\Microsoft Visual Studio 10.0
      set VCINSTALLDIR=%VSINSTALLDIR%\VC
      set OPENCVDIR=C:\OpenCV
      rem In this case, LINKERDIR is being used to specify the location of the SDK
      set LINKERDIR=c:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\
      set PATH=%VCINSTALLDIR%\bin\amd64;%VCINSTALLDIR%\bin;%VCINSTALLDIR%\VCPackages;%VSINSTALLDIR%\Common7\IDE;%VSINSTALLDIR%\Common7\Tools;%LINKERDIR%\bin\x64;%LINKERDIR%\bin;%MATLAB_BIN%;%PATH%
      set INCLUDE=%OPENCVDIR%\build\include;%VCINSTALLDIR%\INCLUDE;%VCINSTALLDIR%\ATLMFC\INCLUDE;%LINKERDIR%\include;%INCLUDE%
      set LIB=%OPENCVDIR%\build\x64\vc10\lib;%VCINSTALLDIR%\LIB\amd64;%VCINSTALLDIR%\ATLMFC\LIB\amd64;%LINKERDIR%\lib\x64;%MATLAB%\extern\lib\win64;%LIB%
      set MW_TARGET_ARCH=win64
      rem ********************************************************************
      rem Compiler parameters
      rem ********************************************************************
      set COMPILER=cl
      set COMPFLAGS=/c /GR /W3 /EHs /D_CRT_SECURE_NO_DEPRECATE /D_SCL_SECURE_NO_DEPRECATE /D_SECURE_SCL=0 /DMATLAB_MEX_FILE /nologo /MD
      set OPTIMFLAGS=/O2 /Oy- /DNDEBUG
      set DEBUGFLAGS=/Z7
      set NAME_OBJECT=/Fo
      rem ********************************************************************
      rem Linker parameters
      rem ********************************************************************
      set LIBLOC=%MATLAB%\extern\lib\win64\microsoft
      set LINKER=link
      set LINKFLAGS=/dll /export:%ENTRYPOINT% /LIBPATH:"%LIBLOC%" libmx.lib libmex.lib libmat.lib /MACHINE:X64 kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib opencv_calib3d242.lib opencv_contrib242.lib opencv_core242.lib opencv_features2d242.lib opencv_flann242.lib opencv_gpu242.lib opencv_haartraining_engine.lib opencv_imgproc242.lib opencv_highgui242.lib opencv_legacy242.lib opencv_ml242.lib opencv_nonfree242.lib opencv_objdetect242.lib opencv_photo242.lib opencv_stitching242.lib opencv_ts242.lib opencv_video242.lib opencv_videostab242.lib /nologo /manifest /incremental:NO /implib:"%LIB_NAME%.x" /MAP:"%OUTDIR%%MEX_NAME%%MEX_EXT%.map"
      set LINKOPTIMFLAGS=
      set LINKDEBUGFLAGS=/debug /PDB:"%OUTDIR%%MEX_NAME%%MEX_EXT%.pdb"
      set LINK_FILE=
      set LINK_LIB=
      set NAME_OUTPUT=/out:"%OUTDIR%%MEX_NAME%%MEX_EXT%"
      set RSP_FILE_INDICATOR=@
      rem ********************************************************************
      rem Resource compiler parameters
      rem ********************************************************************
      set RC_COMPILER=rc /fo "%OUTDIR%mexversion.res"
      set RC_LINKER=
      set POSTLINK_CMDS=del "%LIB_NAME%.x" "%LIB_NAME%.exp"
      set POSTLINK_CMDS1=mt -outputresource:"%OUTDIR%%MEX_NAME%%MEX_EXT%;2" -manifest "%OUTDIR%%MEX_NAME%%MEX_EXT%.manifest"
      set POSTLINK_CMDS2=del "%OUTDIR%%MEX_NAME%%MEX_EXT%.manifest"
      set POSTLINK_CMDS3=del "%OUTDIR%%MEX_NAME%%MEX_EXT%.map"
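    Since a source file that merely includes an OpenCV header compiles fine, the smallest useful test is a mex file that actually calls into OpenCV, which separates include/link-path problems from the header clash shown in the error above. A minimal sketch (the file name test_cv.cpp and the printed message are illustrative, not part of mexopencv):

      // test_cv.cpp -- minimal MEX entry point that exercises OpenCV
      #include "mex.h"
      #include <opencv2/core/core.hpp>

      // the fixed entry point every MEX-file must export
      void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
      {
          // build a 3x3 double-precision identity matrix with OpenCV
          cv::Mat eye = cv::Mat::eye(3, 3, CV_64F);

          // report back through MATLAB's console
          mexPrintf("OpenCV ok: %d x %d matrix\n", eye.rows, eye.cols);
      }

    Compiled from the MATLAB prompt with "mex test_cv.cpp" (after "mex -setup" has selected the Visual C++ 2010 options file above), this either runs cleanly or reproduces the same C4430 error from matrix.h, which would confirm the problem lies in the MATLAB/Visual C++ header interaction rather than in mexopencv itself.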

    Read the article
