Search Results

Search found 24508 results on 981 pages for 'joel test'.


  • I can't uninstall Git

    - by Tom
I was researching Git, so I downloaded the Windows version to test it out on a repository on GitHub. After about 30 minutes I couldn't work out how to use it, so I decided I probably wouldn't need a distributed repository, as our projects aren't that big, and went back to what I know - SVN. I uninstalled (I thought) all the Git-related stuff I'd put on my PC, but now have an irritating problem where opening any folder gives me an error message saying:

        Hello [ERROR] Could not find git path

    As you can imagine, this is a real pain. Does anyone have any ideas on how to fix it?


  • How to install OpenOffice 3.1 in Headless Mode?

    - by Geo
I need to set up OpenOffice on a Linux box that will never have X installed. Every time I run the setup program of the OpenOffice installer, it complains that the system does not have an X terminal. I am using OOo_3.1.0_LinuxIntel_install_wJRE_en-US.tar. I have done this headless install for version 2.4, but we are having some performance issues and would like to test the 3.1 version. Plain rpm -i *.rpm does not work either, since it also looks for libgnome. We are trying to install the system on CentOS 5.1. Any ideas are welcome.
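
    For what it's worth, the workaround that worked for 2.4-era headless installs usually carries over; a hedged sketch (the extracted directory name and package globs are from memory of the 3.1 layout, so adjust to what the tarball actually contains):

        tar -xf OOo_3.1.0_LinuxIntel_install_wJRE_en-US.tar
        cd OOO310_*/RPMS
        # install the core RPMs only; --nodeps skips the libgnome check,
        # and the gnome/kde integration packages are simply left out
        rpm -ivh --nodeps ooobasis3.1-*.rpm openoffice.org3*.rpm
        # run headless, listening for UNO connections from your application
        /opt/openoffice.org3/program/soffice -headless -nofirststartwizard \
            -accept="socket,host=127.0.0.1,port=8100;urp;" &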


  • Exim4 delivery to parent domain.

    - by bruor
I've set up an Ubuntu Server 9.10 system with exim4 as a relay for e-mail on my network. I've told exim that it is part of a subdomain: sub.domain.com. I can test and deliver messages fine to my gmail accounts, but I cannot get exim4 to send messages to [email protected]. The error in the logs shows that exim thinks it should be delivering messages for domain.com to localhost instead of to the actual MX for domain.com. Is there an easy way to modify the debconf update-exim4.conf.conf so that it has the relay_to_domains capability?
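
    For reference, a hedged sketch of the relevant settings in /etc/exim4/update-exim4.conf.conf (the dc_* variable names are the real debconf keys; the values here are placeholders for this setup). The usual culprit is domain.com showing up in dc_other_hostnames, which makes exim treat the parent domain as local instead of routing it to its MX:

        dc_eximconfig_configtype='internet'
        dc_other_hostnames='sub.domain.com'   # must NOT include domain.com
        dc_relay_domains=''                   # list only domains you relay for
        # then regenerate the config and reload:
        #   update-exim4.conf && /etc/init.d/exim4 reload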


  • Modular programming is the method of programming small tasks or sub-programs

Modular programming is the method of programming small tasks or sub-programs that can be arranged in multiple variations to produce the desired results. This methodology is great for preventing errors, because each task executes a specific process and can be debugged individually or within a larger program when combined with other tasks or sub-programs. C# is a great example of how to implement modular programming because it allows functions, methods, classes and objects to be used to create smaller sub-programs. A program can be built from smaller pieces of code, which saves development time and reduces the chance of errors, because it is easier to test a small class or function for a simple solution than to test a full program that has layers and layers of small programs working together.

    Yes, it is possible to write the same program using modular and non-modular programming, but I do not recommend it. Non-modular programs tend to contain a lot of spaghetti code, which can be a pain to develop, not to mention debug, especially if you did not write the code. In addition, in my experience they seem to have a lot more hidden bugs, which waste debugging and development time. The modular programming methodology should be used over non-modular whenever possible because of its use of small components. These small components allow business logic to be reused and are easier to maintain. From the user's viewpoint, they cannot really tell whether the code is modular or not on today's computers.
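
    As an illustration of the idea (a made-up sketch, not tied to any particular project), here is a small C# program assembled from independently testable modules:

        using System;

        // Each class does one small job and can be unit-tested on its own.
        public static class TemperatureConverter
        {
            public static double ToFahrenheit(double celsius) =>
                celsius * 9.0 / 5.0 + 32.0;
        }

        public static class ReportFormatter
        {
            public static string Format(double celsius, double fahrenheit) =>
                $"{celsius:F1} C = {fahrenheit:F1} F";
        }

        public static class Program
        {
            // The full program is just a composition of the small modules above.
            public static void Main()
            {
                double celsius = 21.5;
                double fahrenheit = TemperatureConverter.ToFahrenheit(celsius);
                Console.WriteLine(ReportFormatter.Format(celsius, fahrenheit));
            }
        }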


  • Can't shrink Windows Boot NTFS disk: ERROR(5): Could not map attribute 0x80 in inode, Input/output error

    - by arcyqwerty
Ubuntu 12.04 LTS, all updates current as of 7/3/2012

        gksudo gparted
        Shrink /dev/sda2 from 367GB to 307GB

        GParted 0.11.0 --enable-libparted-dmraid
        Libparted 2.3

        Shrink /dev/sda2 from 367.00 GiB to 307.00 GiB  00:32:57  ( ERROR )

        calibrate /dev/sda2  00:00:00  ( SUCCESS )
            path: /dev/sda2
            start: 20,484,096
            end: 790,142,975
            size: 769,658,880 (367.00 GiB)

        check file system on /dev/sda2 for errors and (if possible) fix them  00:00:53  ( SUCCESS )
            ntfsresize -P -i -f -v /dev/sda2
            ntfsresize v2012.1.15AR.1 (libntfs-3g)
            Device name        : /dev/sda2
            NTFS volume version: 3.1
            Cluster size       : 4096 bytes
            Current volume size: 394065338880 bytes (394066 MB)
            Current device size: 394065346560 bytes (394066 MB)
            Checking for bad sectors ...
            Checking filesystem consistency ...
            Accounting clusters ...
            Space in use       : 327950 MB (83.2%)
            Collecting resizing constraints ...
            Estimating smallest shrunken size supported ...
            File feature    Last used at    By inode
            $MFT            : 389998 MB     0
            Multi-Record    : 394061 MB     386464
            $MFTMirr        : 314823 MB     1
            Compressed      : 394064 MB     1019521
            Sparse          : 330887 MB     752454
            Ordinary        : 393297 MB     706060
            You might resize at 327949758464 bytes or 327950 MB (freeing 66116 MB).
            Please make a test run using both the -n and -s options before real resizing!

        shrink file system  00:32:04  ( ERROR )
        run simulation  00:32:04  ( ERROR )
            ntfsresize -P --force --force /dev/sda2 -s 329640837119 --no-action
            ntfsresize v2012.1.15AR.1 (libntfs-3g)
            Device name        : /dev/sda2
            NTFS volume version: 3.1
            Cluster size       : 4096 bytes
            Current volume size: 394065338880 bytes (394066 MB)
            Current device size: 394065346560 bytes (394066 MB)
            New volume size    : 329640829440 bytes (329641 MB)
            Checking filesystem consistency ...
            Accounting clusters ...
            Space in use       : 327950 MB (83.2%)
            Collecting resizing constraints ...
            Needed relocations : 13300525 (54479 MB)
            Schedule chkdsk for NTFS consistency check at Windows boot time ...
            Resetting $LogFile ... (this might take a while)
            Relocating needed data ...
            Updating $BadClust file ...
            Updating $Bitmap file ...
            ERROR(5): Could not map attribute 0x80 in inode 1667593: Input/output error

    Windows has run chkdsk successfully (on boot) several times now.


  • You Need BRM When You have EBS – and Even When You Don’t!

    - by bwalstra
Here is a list of criteria to test your business systems - Oracle E-Business Suite (EBS) or otherwise - that support your lines of digital business; if you score low, you need Oracle Billing and Revenue Management (BRM).

    Functions:
    - Scalability
    - High Availability (99.999%)
    - Performance
    - Extensibility (e.g. APIs, Tools)
    - Upgradability
    - Maintenance
    - Security
    - Standards Compliance
    - Regulatory Compliance (e.g. SOX)
    - User Experience
    - Implementation Complexity

    Features:
    - Customer Management
    - Real-Time Service Authorization
    - Pricing/Promotions Flexibility
    - Subscriptions
    - Usage Rating and Pricing
    - Real-Time Balance Mgmt.
    - Non-Currency Resources
    - Billing & Invoicing
    - A/R & G/L
    - Payments & Collections
    - Revenue Assurance

    Integration with Key Enterprise Applications:
    - Reporting
    - Business Intelligence
    - Order & Service Mgmt (OSM)
    - Siebel CRM
    - E-Business Suite
    - On-/Off-line Mediation
    - Payment Processing
    - Taxation
    - Royalties & Settlements
    - Operations Management
    - Disaster Recovery

    Overall Evaluation:
    - Implementation
    - Configuration
    - Extensibility
    - Maintenance
    - Upgradability
    - Functional Richness
    - Feature Richness
    - Usability
    - OOB Integrations
    - Operations Management
    - Leveraging Oracle Technology
    - Overall Fit for Purpose

    You need Oracle BRM:
    - Built for high-volume transaction processing
    - Monetizes any service or event based on any metric
    - Supports high-volume usage rating, pricing and promotions
    - Provides real-time charging, service authorization and balance management
    - Supports any account structure (e.g. corporate hierarchies etc.)
    - Scales from low volumes to extremely high volumes of transactions (e.g. billions of trxn per hour)
    - Exposes every single function via APIs (e.g. Java, C/C++, PERL, COM, Web Services, JCA)

    Immediate Business Benefits of BRM:
    - Improved business agility and performance: supports the flexibility, innovation, and customer-centricity required for current and future business models
    - Faster time to market for new products and services: supports a 360 view of the customer in real time - products can be launched to targeted customers at a record-breaking pace
    - Streamlined deployment and operation: productized integrations, standards-based APIs, and OOB enablement lower deployment and maintenance costs
    - Extensible and scalable solution: minimizes risk - initial phase deployed rapidly; solution extended and scaled seamlessly per business requirements

    Key Considerations:
    - Productized integration with key Oracle applications: lower integration risks and cost; efficient order-to-cash process
    - Engineered solution - certification on the Exa platform: Exadata tested at PayPal in the re-platforming project; optimal performance of Oracle assets on Oracle hardware
    - Productized solution in Rapid Offer Design and Order Delivery: fast offer design and implementation; significantly shorter order cycle time
    - Productized integration with Oracle Enterprise Manager: visibility into system operability for optimal up time


  • What hypervisors support non-homogeneous clusters?

    - by edude05
I've been using Citrix XenServer for a while on a few machines that don't support hardware virtualization, as a test for various small servers. I recently experimented with moving the PV VMs between machines, but XenServer gives me errors roughly saying I need homogeneous hardware for this to work. Because of this I haven't been able to set up XenMotion or any of the nice features that come with server pooling in XenServer. I'm considering moving away from XenServer; however, I can't seem to find a hypervisor that explicitly supports non-homogeneous clusters. On a side note, we do have a few identically configured Dell 1950s that haven't had any VM solution set up on them yet, so if we can find a solution that also allows us to move PVs to those, that would be great. Non-free solutions are OK as well. What hypervisor will allow this? Thanks!
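
    For what it's worth, KVM/libvirt can usually live-migrate between mixed hosts if every guest is pinned to a lowest-common-denominator CPU model, so the guest never depends on host-specific CPU features; a hedged sketch of the relevant domain XML (the model name is only an example):

        <!-- in each guest's libvirt domain XML: present a baseline CPU so
             migration works between non-identical hosts -->
        <cpu match='exact'>
          <model>qemu64</model>
        </cpu>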


  • Where is the SQL Azure Development Environment

    - by BuckWoody
Recently I posted an entry explaining that you can develop in Windows Azure without having to connect to the main service on the Internet, using the Software Development Kit (SDK), which installs two emulators - one for compute and the other for storage. That brought up the question of the same kind of thing for SQL Azure. The short answer is that there isn't one. While we'll keep making the development experience for all versions of SQL Server, including SQL Azure, easier to write against, you can simply treat SQL Azure as another edition of SQL Server. For instance, many of us use the SQL Server Developer Edition - which in versions up to 2008 is actually the Enterprise Edition - to develop our code. We might write that code against all kinds of environments, from SQL Express through Enterprise Edition. We know which features work on a certain edition, what T-SQL it supports and so on, and develop accordingly. We then test on the actual platform to ensure the code runs as expected. You can simply fold SQL Azure into that same development process. When you're ready to deploy, if you're using SQL Server Management Studio 2008 R2 or higher, you can script out the database when you're done as a SQL Azure script (with change notifications where needed) by selecting the right "Engine Type" on the scripting panel. (Thanks to David Robinson for pointing this out and my co-worker Rick Shahid for the screen-shot - saved me firing up a VM this morning!) Will all this change? Will SSMS, "Data Dude" and other tools change to include SQL Azure? Well, I don't have a specific roadmap for those tools, but we're making big investments in Windows Azure and SQL Azure, so I can say that as time goes on, it will get easier. For now, make sure you know what features are and are not included in SQL Azure, and what T-SQL is supported. Here are a couple of references to help:

    General Guidelines and Limitations: http://msdn.microsoft.com/en-us/library/ee336245.aspx
    Transact-SQL Supported by SQL Azure: http://msdn.microsoft.com/en-us/library/ee336250.aspx
    SQL Azure Learning Plan: http://blogs.msdn.com/b/buckwoody/archive/2010/12/13/windows-azure-learning-plan-sql-azure.aspx


  • Improving the performance of a db import process

    - by mmr
I have a program in Microsoft Access that processes text and also inserts data into a MySQL database. This operation takes 30 minutes or less to finish. I translated it into VB.NET and it takes 2 hours to finish. The program goes like this: a text file contains individual swipes from a corresponding person - their id, the time and date of the swipe on the machine, and an indicator of whether it is a time-in or a time-out. I process this text, segregate the information, and insert the time-in and time-out per row. I also check for double occurrences in the database. After checking, I simply merge the time-in and time-out of the corresponding person into one row. This process takes 2 hours in VB.NET, considering I have a comparison table that contains 600,000+ rows. Now, I have read on the internet that Python is best at text processing, and I have already run a test, but I have doubts about its database performance. What do you think is the best programming language for this kind of problem? How can I speed up the process? My first idea was using Python instead of VB.NET, but since people here on SO are telling me this most probably won't help, I am searching for different solutions.
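
    Whatever the language, the usual first win is to stop inserting row by row. A rough Python sketch with MySQLdb (the driver choice, table layout and parse_swipe helper are assumptions for illustration) that batches the parsed swipes:

        import MySQLdb  # assumed driver; any DB-API module works the same way

        conn = MySQLdb.connect(host="localhost", user="app",
                               passwd="secret", db="attendance")
        cur = conn.cursor()

        # (person_id, swipe_date, time_in, time_out) tuples from the text file
        rows = [parse_swipe(line) for line in open("swipes.txt")]

        # one round trip per 1000 rows instead of one per row
        for i in range(0, len(rows), 1000):
            cur.executemany(
                "INSERT INTO swipe (person_id, swipe_date, time_in, time_out) "
                "VALUES (%s, %s, %s, %s)",
                rows[i:i + 1000])
        conn.commit()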


  • TechEd 2012: Office, SharePoint And More Office

    - by Tim Murphy
I haven't spent any time looking at Office 365 up to this point. I met Donovan Follette on the flight down from Chicago, and I also got to spend some time discussing the product offerings with him at the TechExpo, which sealed my decision to attend this session. The main actor of his presentation is the BCS - Business Connectivity Services. He explained that while this feature has existed in on-site SharePoint, it is a valuable new addition to Office 365 SharePoint Online. If you aren't familiar with the BCS, it allows you to leverage non-SharePoint enterprise data sources from SharePoint. The greatest benefactor is the end users, who can leverage the data using a variety of Office products and more. The one thing I haven't shaken is my skepticism of the use of SharePoint Designer, which Donovan used to create a WCF service. It is mostly my tendency to try to create solutions that can be managed through the whole application life cycle; in the past, migrating through test environments has been near impossible with anything other than content created by SharePoint Designer. There is a lot of end-user power here. The biggest consideration I think you need to examine when reaching from your enterprise LOB data stores out to an online service and back is that you are going to take a performance hit. This means that you have to be very aware of how you configure these integrated self-serve solutions. As a rule, make sure you are using the right tool for the right situation. I appreciated that he showed both no-code and code solutions for the consumer of the LOB data. I came out of this session much better informed about the possibilities around this product. del.icio.us Tags: Office 365,SharePoint Online,TechEd,TechEd 2012


  • Game Changer Appliance for SMBs Powered by Oracle Linux

    - by Zeynep Koch
In the November 28th CRN article "Review: Thumbs-Up On Oracle Database Appliance", Edward F. Moltzen mentions that "The Test Center likes this appliance (Oracle Database Appliance), for the performance and for the strong security offered by the underlying Oracle Linux in the box. It's more than a solid offering for the SMB space; it's potentially a game-changer as data and security needs race to keep up with the oncoming generations of technology."

    The Oracle Database Appliance is a new way to take advantage of the world's most popular database - Oracle Database 11g - in a single, easy-to-deploy and manage system. It's a complete package of software, server, storage, and networking that's engineered for simplicity, saving time and money by simplifying deployment, maintenance, and support of database workloads. All hardware and software components are supported by a single vendor - Oracle - and offer customers unique pay-as-you-grow software licensing to quickly scale from 2 processor cores to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades. It is:

    - Simple: complete plug-and-go hardware and software
    - Reliable: advanced management features and single-vendor support
    - Affordable: pay-as-you-grow platform for small database consolidation

    The Oracle Database Appliance is a 4U rack-mountable system pre-installed with Oracle Linux and the Oracle appliance manager software. Redundancy is built into all components, and the Oracle appliance manager software reduces the risk and complexity of deploying highly available databases. It's perfect for consolidating OLTP and data warehousing databases up to 4 terabytes in size, making it ideal for midsize companies or departmental systems.

    Read more about Oracle's Database Appliance
    Read more about Oracle Linux


  • Trace flags - TF 1117

    - by Damian
I had a session about trace flags this year at the SQL Day 2014 conference, held in Wroclaw at the end of April. The session topic is important to most DBAs, and the reason I chose it is that I sometimes forget about various trace flags :). So I decided to prepare a presentation, but I think it is a good idea to write posts about trace flags, too. Let's start then - today I will describe TF 1117. I assume that we all know how to set up a TF using startup parameters, the registry, the session, or the query level. I will always note whether a trace flag is local or global, to make sure we know how to use it.

    Why do we need this trace flag? Let's create a test database first. This is a quite ordinary database: it has two data files (4 MB each) and a 1 MB log file. The data files can expand by 1 MB and the log file grows by 10%:

        USE [master]
        GO
        CREATE DATABASE [TF1117]
        ON PRIMARY
        ( NAME = N'TF1117',
          FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117.mdf',
          SIZE = 4096KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ),
        ( NAME = N'TF1117_1',
          FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_1.ndf',
          SIZE = 4096KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
        LOG ON
        ( NAME = N'TF1117_log',
          FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_log.ldf',
          SIZE = 1024KB, MAXSIZE = 2048GB, FILEGROWTH = 10% )
        GO

    Without TF 1117 turned on, the data files do not all grow at once. When the first file is full, SQL Server expands it, but the other file is not expanded until it too is full. Why is that so important? The SQL Server proportional fill algorithm directs new extent allocations to the file with the most available space, so new extents will keep being written to the file that was just expanded. When TF 1117 is enabled, it causes all files in a filegroup to auto-grow by their specified increment. That means all files keep the same percentage of free space, so we retain the benefit of evenly distributed IO. TF 1117 is a global flag, so it affects all databases on the instance. Of course, if a filegroup contains only one file, the TF has no effect on it.

    Now let's do a simple test. First I create a table in which every row fills a single page; the definition is pretty simple, with two integer columns and one fixed-size character column of 8000 bytes:

        create table TF1117Tab
        (
            col1 int,
            col2 int,
            col3 char (8000)
        )
        go

    Now I load some data into the table to make sure that one of the data files must grow:

        declare @i int
        select @i = 1
        while (@i < 800)
        begin
            insert into TF1117Tab values (@i, @i + 1000, 'hello')
            select @i = @i + 1
        end

    I can check the actual file sizes in the sys.database_files DMV:

        SELECT name, (size*8)/1024 'Size in MB'
        FROM sys.database_files
        GO

    As you can see, only the first data file was expanded; the other still has its initial size:

        name                  Size in MB
        --------------------- -----------
        TF1117                5
        TF1117_log            1
        TF1117_1              4

    There are also other ways of looking at file autogrow events. One possibility is to create an Extended Events session; another is to look into the default trace file:

        DECLARE @path NVARCHAR(260);

        SELECT @path = REVERSE(SUBSTRING(REVERSE([path]),
               CHARINDEX('\', REVERSE([path])), 260)) + N'log.trc'
        FROM sys.traces
        WHERE is_default = 1;

        SELECT DatabaseName,
               [FileName],
               SPID,
               Duration,
               StartTime,
               EndTime,
               FileType = CASE EventClass
                               WHEN 92 THEN 'Data'
                               WHEN 93 THEN 'Log'
                          END
        FROM sys.fn_trace_gettable(@path, DEFAULT)
        WHERE EventClass IN (92, 93)
          AND StartTime > '2014-07-12'
          AND DatabaseName = N'TF1117'
        ORDER BY StartTime DESC;

    After running the query I can see that the file was expanded, and how long the operation took, which might be useful from a performance perspective.

    Now it's time to turn on trace flag 1117:

        DBCC TRACEON(1117)

    I dropped the database, recreated it, ran the same load again and observed the results. After loading the records I see that both files were evenly expanded:

        name                  Size in MB
        --------------------- -----------
        TF1117                5
        TF1117_log            1
        TF1117_1              5

    I also found the corresponding information in the default trace. The query returned three rows; the last one belongs to my first experiment, when the TF was turned off. The other two rows show that the first file was expanded by 1 MB and, right after that operation, the second file was expanded too. This is what this TF is all about.
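
    One caveat worth adding to the demo above: since TF 1117 only makes sense instance-wide, it is normally enabled with the global (-1) argument or as a startup parameter rather than with the bare session form used in the post:

        DBCC TRACEON (1117, -1);  -- enable globally on the running instance
        DBCC TRACESTATUS (1117);  -- verify that Global = 1
        -- add -T1117 to the SQL Server startup parameters to survive restarts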


  • Can I provision half a core as a virtual CPU?

    - by ramdaz
I am a virtualization newbie, so please advise on these questions. Note that using commercial VM software like Citrix or VMware is not an option for me. I have at my disposal a couple of 2x quad-core servers with 32 GB RAM, and I need to create 16 VMs on each server to test some web applications. Can I provision half a core as a virtual CPU for each VM? To the best of my knowledge I can't do so on Xen; is it possible on KVM or some other free, open-source VM solution? If it's not possible to assign half a core, how do I ensure that uniform processing power is available to all VMs? Since the job is to create separate instances for hosting 16 web apps on one physical server, do you recommend setting up a private cloud using Ubuntu Enterprise Cloud as a better option? And is there an HA solution under KVM, like Remus for Xen?
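
    As a sketch of what KVM via libvirt offers here (the element names are real libvirt cputune settings; the numbers are illustrative, and the quota/period cap needs a reasonably recent libvirt and kernel): you cannot hand a guest literally half a core, but you can either weight all VMs equally or hard-cap each one at 50% of one core:

        <!-- in each guest's libvirt domain XML -->
        <cputune>
          <shares>1024</shares>    <!-- equal relative weight for all 16 VMs -->
          <period>100000</period>  <!-- 100 ms scheduling period -->
          <quota>50000</quota>     <!-- 50 ms of CPU per period = half a core -->
        </cputune>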


  • DFS replication initial step problem

    - by vn
Heya, I just set up DFS on my network and it's working fine. Now I'm trying to set up DFS-R on a test folder, but at the end of the procedure (all went fine: selected my 2 folders, primary folder, replication topology and such) I get this error message (roughly translated from French):

        Unable to define security on the replicated folder.
        The shared administration folder doesn't exist.

    I'm also wondering if there is any security required on the folders to replicate so that DFS-R can access them. I was trying to add SYSTEM to the security settings, but it won't find it/allow me. The folder has many, many files and folders on the primary DFS pointer, but none on the second, which I just created with much the same rights. Note that the primary DFS pointer is on a 2008 server, while the DFS service and the secondary DFS pointer are on 2008 R2. Any help is very appreciated, thanks.


  • CodePlex Daily Summary for Tuesday, May 11, 2010

New Projects
    - ASP.NET MVC Extensions: ASP.NET MVC Extensions is developed on top of ASP.NET MVC extensibility points, which allows your IoC container to rule everywhere.
    - Best practices in .NET: NMA is a collection of knowledge that I learned from my co-workers and the Internet. It's built on Domain Driven Design theories. I used StructureMap,...
    - BioRider: Project participant of the 1st National Award for Interoperability
    - BSoft: that's the project for bsoft
    - ClosedXML - The easy way to OpenXML: ClosedXML makes it easier for developers to create OpenXML files for Excel 2007. It provides a nice object-oriented way to manipulate the files (si...
    - Dragon Master: A tool for all D&D masters that need to create dragon NPCs
    - Facturator - Create invoices easy and fast: Windows Forms application for creating invoices based on a Word template, including a simple workflow for sent, paid and finished invoices and a q...
    - FreeEPG: An Australian EPG using the Freeview online guide. All Freeview regions are supported and data can be exported in either XMLTV or Microsoft Media ...
    - GreedyRSS: Convert everything to an RSS feed
    - HByte: In honor of the Heisenberg uncertainty principle. This is an implementation of a type which has byte semantics, but whose value and location cannot...
    - JSON RPC 2.0 - Javascript/.NET Implementation: JSON RPC 2.0 - Javascript/.NET Implementation - NOT READY YET
    - Mjollnir - Supplemental Library for BCL: Mjollnir is a supplemental library for the BCL. Mjollnir is compatible with .NET Framework 4 (or maybe later).
    - Money Spinner: MoneySpinner
    - MSPY 2010 Open Extended Dictionary Building Tool: A tool to create an Open Extended Dictionary for Microsoft Pinyin IME 2010; it is developed in C#
    - Sharepoint Data Store: This is a small library that lets SharePoint developers store and manage custom application information in SharePoint.
    - SharePointSlim: We are going to use it in conjunction with the PowerSlim project
    - SSTA: A Tool to Compare SQL Database Schema Versions
    - tbeasy: tbeasy

    New Releases
    - 8085 Microprocessor simulator: 8085 Instruction Set Simulator with source code: 8085 instruction set simulator with Windows installer plus complete source code and examples
    - AutoArchive: Site Template...: Slightly off topic, but take a look!
    - BioRider: Uptiva Dreams IT Entry: National Interoperability Award - TEAM: Uptiva Dreams IT, PROJECT: BioRide...
    - C# Developer Utility Library: ScrimpNet.Core Library May 2010: Initial upload of project library. Contains only source files. Recommend adding extracted files to your project as a project reference.
    - Coot: Beta 1: To install the screen saver: extract the contents of the zip file (all three files) to C:\Windows\System32, go to screen saver properties, select ...
    - Deploy Workflow Manager: Deploy Workflow Manager v1: Recommend you test in your development environment first before implementing in production. Criteria to run the workflow is assumed to be inclu...
    - Expression Encoder 3 Visual Basic Samples: Encoder 3 VB Samples: Zip file contains the Encoder 3 samples written in Visual Basic.
    - FreeEPG: Debug Release: Initial release. Run the application from the command line for options. If you choose to run the client as a Windows service, then you will need t...
    - FSharpChess: Alpha Release 0.197: This is just the latest version of the binaries. Note that there are two executables: FSharpChess includes a UI partially written in C#; FSharpCh...
    - GreedyRSS: GreedyRSS V2: Compared with V1: improved plugin architecture; replaced the XML configuration file with a database; exposed part of the functionality via Web Services
    - Headspring Labs: ASP.NET MVC 2 tips and tricks (code): Contains an example app that demonstrates techniques in the Tips & Tricks PowerPoint
    - Headspring Labs: ASP.NET MVC tips and tricks PPT: PowerPoint file for the ASP.NET MVC tips and tricks talk.
    - HouseFly controls: HouseFly controls beta 1.0.0.0: HouseFly controls release 1.0.0.0 beta
    - iTuner - The iTunes Companion: iTuner 1.2.3782: This production release of iTuner 1.2 allows you to synchronize one or more iTunes playlists to a USB MP3 player. It also provides the ability to ...
    - JSON RPC 2.0 - Javascript/.NET Implementation: v0.7: Protocol implemented. Most of the extra features implemented.
    - MapWindow6: MapWindow 6.0 msi May 10, 2010: This version fixes a reproject bug where false_easting from .prj files was not being correctly converted into meters when the projection was in feet.
    - Mouse Zoom - Visual Studio Extension: MouseZoom 1.7: Version 1.7 fixes a bug that can cause the keys to stick when leaving focus (e.g. opening a VS dialog box).
    - Mouse Zoom - Visual Studio Extension: MouseZoom 1.8: Overrides mouse wheel scroll functionality to always scroll 25% no matter what zoom level you are on (by default, scrolling with the mouse wheel bec...
    - MSPY 2010 Open Extended Dictionary Building Tool: 20100511build: First release. 1. Installation: after you download it to your local disk, create a new folder and unzip it to the new folder; that's it. 2. Run ...
    - MSPY 2010 Open Extended Dictionary Building Tool: 20100511build2: First release. 1. Installation: after you download it to your local disk, create a new folder and unzip it to the new folder; that's it. 2. Run ...
    - Multiwfn: multiwfn1.3.2: multiwfn1.3.2
    - NASA Space Shuttle TV Schedule Transfer to Outlook Calendar: NASA Space Shuttle TV Schedule Release v1.4.132.1: Warning! There is a problem with the latest version of the program and rev 0 of the STS-132 mission schedule. I am looking into the problem when the yea...
    - NASA Space Shuttle TV Schedule Transfer to Outlook Calendar: NASA Space Shuttle TV Schedule Release v1.4.132.2: This release fixes a problem with rev 0 of the STS-132 mission schedule in recognizing the year of launch. NASA changed the year of launch to a fo...
    - Object/Relational Mapper & Code Generator in Net 2.0 for Relational & XML Schema: 2.8: Switched compilation options to always run in 32-bit mode, to ensure it can connect to MS Access & MS Excel on 64-bit machines. Updated parameterised...
    - Open NFSe: OpenNFSe-Salvador v1.0.0: Update of OpenNFSe-Salvador for the new schema used by the city of Salvador.
    - OpenSLIM: OpenSLIM-v373b0-20100509-0: Here is the list of new additions and improvements that this new major release incorporates: systems decommissioning management; improvements on...
    - Pcap.Net: Pcap.Net 0.6.0 (44468): Pcap.Net - May 2010 release. Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It features almost all WinPcap features and includes ...
    - PowerShell Community Extensions: 2.0 Production: PowerShell Community Extensions 2.0 release notes, May 10, 2010. The primary purpose of the Pscx 2.0 release is to convert from the previous approach...
    - Scrum Sprint Monitor: v1.0.0.47921 (.NET 4-TFS 2010): What is new in this release? Minor version over 1.0.0.47911 introducing CEIP (Customer Experience Improvement Program). This is taking advantage of...
    - Sharepoint Data Store: 1.0: First release
    - SPVisualDev - SharePoint Developer Tool: Version 2.2.0: Visual Studio 2010 is now supported. Note that this is only intended to be used for MOSS 2007 / WSS 3.0 development and not for SP 2010. Package SP...
    - SQL Server PowerShell Extensions: 2.2.1 Production: Release 2.2 re-implements SQLPSX as PowerShell version 2.0 modules. SQLPSX consists of 9 modules with 133 advanced functions, 2 cmdlets and 7 scri...
    - VCC: Latest build, v2.1.30510.0: Automatic drop of latest build
    - VCC: Latest build, v2.1.30510.1: Automatic drop of latest build
    - VCC: Latest build, v2.1.30510.2: Automatic drop of latest build
    - VolumeMaster: Volume Master 2.0 Beta: First release of VolumeMaster. If you're running Windows Vista / 7 you can have a handy volume OSD and control your volume with the following sh...
    - WabbitStudio Z80 Software Tools: SPASM2 32-Bit: A test release for SPASM2.
    - Web Camera Shooter: 1.0.0.1: Video capturing: Touchless SDK -> AForge.Video. Main window shown in taskbar and not topmost. Native images generated for all assemblies. Sm...

    Most Popular Projects
    - WBFS Manager
    - Rawr
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - patterns & practices – Enterprise Library
    - Microsoft SQL Server Community & Samples
    - ASP.NET
    - PHPExcel

    Most Active Projects
    - patterns & practices – Enterprise Library
    - Mirror Testing System
    - The Information Literacy Education Learning Environment (ILE)
    - Rawr
    - Caliburn: An Application Framework for WPF and Silverlight
    - white
    - BlogEngine.NET
    - TweetSharp
    - jQuery Library for SharePoint Web Services
    - Ionics Isapi Rewrite Filter


  • What, if anything, to do about bow-shaped burndowns?

    - by Karl Bielefeldt
    I've started to notice a recurring pattern to our team's burndown charts, which I call a "bowstring" pattern. The ideal line is the "string" and the actual line starts out relatively flat, then curves down to meet the target like a bow. My theory on why they look like this is that toward the beginning of the story, we are doing a lot of debugging or exploratory work that is difficult to estimate remaining work for. Sometimes it even goes up a little as we discover a task is more difficult once we get into it. Then we get into implementation and test which is more predictable, hence the curving down graph. Note I'm not talking about a big scale like BDUF, just the natural short-term constraint that you have to find the bug before you can fix it, coupled with the fact that stories are most likely to start toward the beginning of a two-week iteration. Is this a common occurrence among scrum teams? Do people see it as a problem? If so, what is the root cause and some techniques to deal with it?


  • How to separate Hyper-V Private network from the External network

    - by Ron Ratzlaff
    I am setting up a virtual test lab and I configured a domain controller VM running Windows 2008 R2 on my Hyper-V 2008 R2 server. I needed to download and install updates on it so I added an External NIC adapter and got that done. However, systems on my actual real physical domain were pulling IPs from this server and that was a big oopsy on my part so I immediately removed the External NIC adapter until I could find out how to go about keeping the Private and the External separate. If someone from the Server Fault community can help with this since I am pretty new to this, I would be very grateful. Thanks everyone.


  • BDD (Behavior-Driven Development) tools for .Net

    - by tikrimi
For several years I have used TDD (Test-Driven Development) to produce code, and I no longer plan to work without it. The use of TDD significantly increases code quality, but it does not guarantee that the code corresponds to the requirements specification (writing the "right code" with BDD, as opposed to writing the "code right" with TDD). Dan North described the foundations of BDD (Behavior-Driven Development) in an article published in 2006. In that article he introduces the "Given When Then" formalism, which is used in all tools dedicated to BDD. This is a short list of open source BDD tools that you can use with .NET, with an example of the formalism after the list:

    - SpecFlow: Here you can find an article in MSDN Magazine and 2 webcasts (http://channel9.msdn.com/posts/ASPNET-MVC-With-Community-Tools-Part-2-Spec-Flow-and-WatiN and http://channel9.msdn.com/posts/ASPNET-MVC-With-Community-Tools-Part-3-More-Spec-Flow-and-WatiN) published on Channel 9.
    - NSpec: This is certainly the most used project. There are many examples on the web.
    - StoryQ: This project is hosted on CodePlex. This small project is very simple to implement and very useful.
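
    For readers new to the formalism, this is roughly what a feature file for such tools looks like (a made-up example; each line maps to a step definition in your test code):

        Feature: Account withdrawal
          Scenario: Withdrawing less than the balance
            Given an account with a balance of 100
            When the user withdraws 40
            Then the remaining balance should be 60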


  • Thinktecture.IdentityModel: WRAP and SWT Support

    - by Your DisplayName here!
The latest drop of Thinktecture.IdentityModel contains some helpers for the Web Resource Authorization Protocol (WRAP) and Simple Web Tokens (SWT).

    WRAP
    The WrapClient class is a helper to request SWT tokens via WRAP. It supports issuer/key, SWT and SAML input credentials, e.g.:

        var client = new WrapClient(wrapEp);
        var swt = client.Issue(issuerName, issuerKey, scope);

    All Issue overloads return a SimpleWebToken type, which brings me to the next helper class.

    SWT
    The SimpleWebToken class wraps a SWT token. It combines a number of features:

    - conversion between string format and CLR type representation
    - creation of SWT tokens
    - validation of SWT tokens
    - projection of a SWT token as an IClaimsIdentity
    - helpers to embed a SWT token in headers and query strings

    The following sample code generates a SWT token using the helper class:

        private static string CreateSwtToken()
        {
            var signingKey = "wA…";
            var audience = "http://websample";
            var issuer = "http://self";

            var token = new SimpleWebToken(
                issuer, audience, Convert.FromBase64String(signingKey));

            token.AddClaim(ClaimTypes.Name, "dominick");
            token.AddClaim(ClaimTypes.Role, "Users");
            token.AddClaim(ClaimTypes.Role, "Administrators");
            token.AddClaim("simple", "test");

            return token.ToString();
        }


  • Disable Memory Modules In BIOS for Testing Purposes (Optimize Nehalem/Gulftown Memory Performance)

    - by Bob
    I recently acquired an HP Z800 with two Intel Xeon X5650 (Gulftown) 6 core processors. The person that configured the system chose 16GB (8 x 2GB DDR3-1333). I'm assuming this person was unaware these processors have 3 memory channels and to optimize memory performance one should choose memory in multiples of three. Based on this information, I have a question: By entering the BIOS, can I disable the bank on each processor that has the single memory module? If so, will this have any adverse effects or behave differently than physically removing the modules? I ask due to the fact that I prefer to store the extra memory in the system if it truly behaves as if the memory is not even there. Also, I see this as an opportunity to test 12GB vs. 16GB to see if there is a noticeable difference. Note: According to http://www.delltechcenter.com/page/04-08-2009+-+Nehalem+and+Memory+Configurations?t=anon, the current configuration reduces the overall data transfer speed to 1066 and in addition, the memory bandwidth goes down by about 23%.


  • Capistrano deploying to different servers with different authentication methods

    - by marimaf
I need to deploy to 2 different servers, and these 2 servers have different authentication methods (one is my university's server and the other is an Amazon Web Services (AWS) instance). I already have Capistrano running for my university's server, but I don't know how to add the deployment to AWS, since for that one I need to add ssh options, for example to use the .pem file, like this:

        ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
        ssh_options[:forward_agent] = true

    I have browsed Stack Overflow and no post mentions how to deal with different authentication methods (this and this). I found a post that talks about 2 different keys, but that one refers to a server and a git repository, both using different pem files, which is not my case. I also got to this tutorial, but couldn't find what I need. I don't know if this is relevant to what I am asking: I am working on a Rails app with Ruby 1.9.2p290 and Rails 3.0.10, and I am using an svn repository. Any help is welcome. Thanks a lot
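
    One common approach (a sketch assuming Capistrano 2 with the multistage extension from the capistrano-ext gem; hostnames and roles are placeholders) is to give each server its own stage, so each stage file carries its own ssh_options:

        # config/deploy.rb
        require 'capistrano/ext/multistage'
        set :stages, %w(university aws)
        set :default_stage, 'university'

        # config/deploy/university.rb -- password/agent auth as today
        server 'app.university.example', :app, :web, :db, :primary => true

        # config/deploy/aws.rb -- key-based auth with the .pem file
        server 'ec2-xx-xx-xx-xx.compute.amazonaws.com', :app, :web, :db, :primary => true
        ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
        ssh_options[:forward_agent] = true

    Deploys then become "cap university deploy" and "cap aws deploy".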


  • DB2 UDF Permissions

    - by WernerCD
I have a custom function that I'm working on... and the problem I'm having is simple: permissions. Example function:

        drop function circle_area
        go
        CREATE FUNCTION circle_area (radius FLOAT)
        RETURNS FLOAT
        LANGUAGE SQL
        BEGIN
            DECLARE pi FLOAT DEFAULT 3.14;
            DECLARE area FLOAT;
            SET area = pi * radius * radius;
            RETURN area;
        END
        GO

    If I then log out of my "admin" account and log into a test account, I get a "Not authorized" error when I try to run something like "SELECT circle_area(foo) FROM library.bar". I can log into iSeries Navigator, navigate to the schema's function permissions, and change the permission for PUBLIC from Exclude to All - bam, it works. How do I grant permission to all, either in the CREATE FUNCTION or afterwards?
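
    To answer the "after" half with SQL instead of iSeries Navigator, a hedged sketch (the plain form assumes circle_area is not overloaded in its schema; otherwise name the signature):

        GRANT EXECUTE ON FUNCTION circle_area TO PUBLIC;
        -- or, if the name is overloaded, pin down the exact signature:
        GRANT EXECUTE ON FUNCTION circle_area(FLOAT) TO PUBLIC;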


  • Is It Possible To Recover A Partial LVM Logical Volume?

    - by Terry Wang
Background: it is an Ubuntu 12.04 VirtualBox VM with 5 virtual HDDs (VDI). NOTE: this is just a test VM, so it was not well planned ahead:

    - ubuntu.vdi for / (/dev/mapper/ubuntu-root AKA /dev/ubuntu/root) and /home (/dev/mapper/ubuntu-home)
    - weblogic.vdi - /dev/sdb (mounted on /bea for weblogic and other stuff)
    - btrfs1.vdi - /dev/sdc (part of a btrfs -m raid1 -d raid1 configuration)
    - btrfs2.vdi - /dev/sdd (part of a btrfs -m raid1 -d raid1 configuration)
    - more.vdi - /dev/sde (added this virtual HDD because / ran out of inodes and it wasn't easy to figure out what to delete to free up inodes, so I just added the new virtual HDD, created a PV, added it to the existing volume group ubuntu, and grew the root logical volume to work around the inode issue -_-)

    What happened? Last Friday, before finishing up, I wanted to free up some disk space on that box. For some reason I thought more.vdi was useless and tried to detach it from the VM; I then clicked Delete (should have clicked Keep Files, damn!) by mistake when detaching. Unfortunately I didn't have a backup of it. All too late.

    What I have tried: I tried to undelete the vdi files (using testdisk and photorec), but it took too long and recovered heaps of .vdi files that I didn't want (huge, filled the disk, damn!). I finally gave up. Fortunately most of the data is on the separate ext4 partition and the btrfs volumes. Out of curiosity, I still tried to mount the logical volumes to see if it is possible to at least recover /var and /etc. I used the System Rescue CD to boot and activate the volume groups, and got:

        Couldn't find device with uuid xxxx.
        Refusing activation of the partial LV root. Use --partial to override.
        1 logical volume(s) in volume group "ubuntu" now active.

    I was able to mount the home LV but not the root LV. I am wondering if it is possible to access the root LV any more. Under the bonnet, data on LV root (/) was striped onto more.vdi (the lost PV), so I know it's almost impossible to recover fully. But I am still curious about how system administrators/DevOps guys deal with this sort of situation ;-) Thanks in advance.
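
    Following the hint in the error message itself, a cautious sketch of how far you can usually get (read-only; anything that lived on the lost PV will surface as I/O errors, and the mount can still fail outright if key filesystem metadata sat on the missing disk):

        vgchange -ay --partial ubuntu           # activate what remains of the VG
        mkdir -p /mnt/root
        mount -o ro /dev/ubuntu/root /mnt/root  # salvage whatever is readable
        # copy out the intact parts, logging the holes:
        rsync -a /mnt/root/etc /mnt/root/var /backup/ 2>rescue-errors.log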


  • High CPU usage compared to WinXP. Common apps and actions use 100% CPU cycles

    - by Jopower
I'm running Lubuntu 14.04.1 LTS. The PC is a 2004 HP ze4200 laptop: 1.8 GHz Celeron M with 1 GB RAM and an 80 GB drive. It was running fine on WinXP SP3, and I wiped the drive to test Lubuntu 14.04 LTS. No anti-virus is installed yet. I enabled the CPU resource monitor to see how various programs drag the OS. Using Firefox 31 online right now, I see basic actions like opening a new tab and scrolling down a page using 100% CPU time, occasionally for 10-30 seconds. In fact some pretty basic apps like Leafpad hit 100% for a second. The Lubuntu Software Center locks things up at 100% for 10 seconds. Just typing here shows a 60-80% spike per character. Running the mouse around the screen for 10 seconds results in a sustained 100% load during that time. Right now, if I let the PC rest, just idling Firefox and not doing anything with it, CPU use bounces between 20-40% all by itself. WinXP idles at 2-10%, and it's considered bad for it to stay above 20%... something odd must be happening. Sure, XP gives similarly high CPU cycles under load, but it's not locking and slogging like this. Lubuntu is supposed to be a light OS, and by memory usage it is, which makes me happy since this is an old PC maxed out on memory upgrades. However, being used to doing some tuning and wary of abnormalities going on in the background, the CPU use indicates things going on that I want to know about, and perhaps apply a tweak or two. Recommendations are appreciated. And this 300-point "new tags" restriction bites!


  • Email sent from server with rDNS & SPF being blocked by Hotmail

    - by Canadaka
    I have been unable to send email to users on hotmail or other Microsoft email servers for some time. Its been a major headache trying to find out why and how to fix the issue. The emails being sent that are blocked from my domain canadaka.net. I use Google Aps to host my regular email serverice for my @canadaka.net email addresses. I can sent email from my desktop or gmail to a hotmail without any problem. But any email sent from my server on behalf of canadaka.net is blocked, not even arriving in the junk email. The IP that the emails are being sent from is the same IP that my site is hosted on: 66.199.162.177 This IP is new to me since August 2010, I had a different IP for the previous 3-4 years. This IP is not on any credible spam lists http://www.anti-abuse.org/multi-rbl-check-results/?host=66.199.162.177 The one list spamcannibal.org my IP is listed on seems to be out of my control, says "no reverse DNS, MX host should have rDNS - RFC1912 2.1". But since I use Google for my email hosting, I don't have control over setting up RDNS for all the MX records. I do have Reverse DNS setup for my IP though, it resolves to "mail.canadaka.net". I have signed up for SNDS and was approved. My ip says "All of the specified IPs have normal status." Sender Score: 100 https://www.senderscore.org/lookup.php?lookup=66.199.162.177&ipLookup.x=55&ipLookup.y=14 My Mcafee threat level seems fine I have a TXT SPF record setup, I am currently using xname.org as my DNS, and they don't have a field for SPF, but their FAQ says to add the SPF info as a TXT entry. v=spf1 a include:_spf.google.com ~all Some "SPF checking" tools ive used detect that my domain has a valid SPF, but others don't. Like Microsoft's SPF wizard, i think this is because its specifically looking for an SPF record and not in the TXT. "No SPF Record Found. A and MX Records Available". From my home I can run "nslookup -type=TXT canadaka.net" and it returns: Server: google-public-dns-a.google.com Address: 8.8.8.8 Non-authoritative answer: canadaka.net text = "v=spf1 a include:_spf.google.com ~all" One strange thing I found is i'm unable to ping hotmail.com or msn.com or do a "telnet mail.hotmail.com 25". I am able to ping gmail.com and many other domains I tried. I tried changing my DNS servers to Google's Public DNS and did a ipconfig /flushdns but that had no effect. I am however able to connect with telnet to mx1.hotmail.com This is what the email headers look like when I send to a Google email server and I receive the email with no troubles. You can see that SPF is passing. 
Delivered-To: [email protected] Received: by 10.146.168.12 with SMTP id q12cs91243yae; Sun, 27 Feb 2011 18:01:49 -0800 (PST) Received: by 10.43.48.7 with SMTP id uu7mr4292541icb.68.1298858509242; Sun, 27 Feb 2011 18:01:49 -0800 (PST) Return-Path: Received: from canadaka.net ([66.199.162.177]) by mx.google.com with ESMTP id uh9si8493137icb.127.2011.02.27.18.01.45; Sun, 27 Feb 2011 18:01:48 -0800 (PST) Received-SPF: pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) client-ip=66.199.162.177; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) [email protected] Message-Id: <[email protected] Received: from coruscant ([127.0.0.1]:12907) by canadaka.net with [XMail 1.27 ESMTP Server] id for from ; Sun, 27 Feb 2011 18:01:29 -0800 Date: Sun, 27 Feb 2011 18:01:29 -0800 Subject: Test To: [email protected] From: XXXX Reply-To: [email protected] X-Mailer: PHP/5.2.13 I can send to gmail and other email services fine. I don't know what i'm doing wrong! UPDATE 1 I have been removed from hotmails IP block and am now able to send emails to hotmail, but they are all going directly to the JUNK folder. UPDATE 2 I used Telnet to send a test message to port25.com, seems my SPF is not being detected. Result: neutral (SPF-Result: None) canadaka.net. SPF (no records) canadaka.net. TXT (no records) I do have a TXT record, its been there for years, I did change it a week ago. Other sites that allow you to check your SPF detect it, but some others like Microsofts Wizard doesn't. This iw what my SPF record in my xname.org DNS file looks like: canadaka.net. 86400 IN TXT "v=spf1 a include:_spf.google.com ~all" I did have a nameserver as my 4th option that doens't have the TXT records since it doens't support it. So I removed it from the list and instead added wtfdns.com as my 4th adn 5th nameservers, which does support TXT.

