Search Results

Search found 28325 results on 1133 pages for 'test cases'.


  • Has anyone used these database publish tools in Visual Studio 2012/2013?

    - by punkouter
    I am trying to figure out if they are actually useful... I have a DBProject, and it would be nice if, when I publish from my DEV to my TEST, it would automatically copy the schema and data from DEV to TEST... is that what this tab in Visual Studio is for? On PROD I would never want to copy the data over, but maybe I would change the schema and want to move that schema to PROD... is that what this tab is for too? For now we are using WEB DEPLOY but none of these database options... We are manually using DATA COMPARE to sync changes on the various databases/DBProject... Any advice?
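
    For context, the Publish tab in an SSDT database project is a front end for schema deployment: it diffs the project's compiled .dacpac against a target database and applies (or scripts out) the schema changes, but it does not copy table data, so something like DATA COMPARE would still be needed for that part. A minimal sketch of the command-line equivalent (server, database and file names are hypothetical):

        SqlPackage.exe /Action:Publish ^
            /SourceFile:"bin\Release\WidgetDb.dacpac" ^
            /TargetServerName:"TESTSQL01" ^
            /TargetDatabaseName:"WidgetDb"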

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor fills that gap by offering 5x better single-thread performance than previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first-hand whether this new processor lives up to its promises. Several tests were performed, mainly focused on:
    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads
    The tests consisted of reading large amounts of data --tens of gigabytes--, processing it and writing it back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.
    Multi-thread: Testing throughput on the Oracle T4-1
    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1GB file containing 12.5M lines with the following structure:

        customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
        1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
        2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
        3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
        4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
        […]

    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache, a.k.a. the L2ARC: once the cache is hot, there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck; having a good IO subsystem is thus key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, performance is almost constant, with just a slow decline. For the database load tests, only the best IO configuration --using the external storage devices-- was used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.
    Single-thread: Testing single-thread performance
    Seven different tests were performed on both servers.
    Since only one thread, and thus only one file, was read, no IO bottleneck was involved; all data was served from the ZFS cache. The seven tests were:
    - Read File → Filter → Write File: read a file, filter the data, and write the filtered data to a new file. The filter is set on the “Status” column: only lines with status set to “A” are selected. This limits each output file to about 500 MB.
    - Read File → Load Database Table: read a file, insert into a single Oracle table.
    - Average: read a file, compute the average of a numeric column, write the result to a new file.
    - Division & Square Root: read a file, perform a division and square root on a numeric column, write the resulting data to a new file.
    - Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
    - Transform: read a file, transform it, and write the resulting data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    - Sort: read a file, sort a numeric and an alphanumeric column, write the resulting data to a new file.
    The following table presents the final results of the tests. The throughput unit is thousands of lines processed per second (K lines/second). Improvement is the percentage of improvement between the T5140 and the T4-1.

        Test                    T4-1 (Time s.)  T5140 (Time s.)  Improvement  T4-1 (Throughput)  T5140 (Throughput)
        Read/Filter/Write       125             806              645%         100                16
        Read/Load Database      195             1111             570%         64                 11
        Average                 96              557              580%         130                22
        Division & Square Root  161             1054             655%         78                 12
        Oracle DB Dump          164             945              576%         76                 13
        Transform               159             1124             707%         79                 11
        Sort                    251             1336             532%         50                 9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way towards filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application! "As described in this benchmark, Talend Enterprise Data Integration has overperformed on the T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row-by-row insertion into an Oracle DB is faster, with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.
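
    The Read → Filter → Write scenario is easy to reproduce if you want a baseline on your own hardware. Talend generates Java for this under the hood, but the workload itself is trivial to approximate; here is a minimal Python sketch of the same shape (the file names are hypothetical):

        import csv

        # Read the semicolon-delimited input, keep only rows whose Cust_Status
        # field is "A", and write them to a new file -- the same shape as the
        # benchmark's filter test.
        with open("customers_in.csv", newline="") as src, \
             open("customers_out.csv", "w", newline="") as dst:
            reader = csv.reader(src, delimiter=";")
            writer = csv.writer(dst, delimiter=";")
            for row in reader:
                if row[7] == "A":  # Cust_Status is the 8th field in the layout above
                    writer.writerow(row)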

    Read the article

  • Integration with Multiple Versions of BizTalk HL7 Accelerator Schemas

    - by Paul Petrov
    Microsoft BizTalk Accelerator for HL7 comes with multiple versions of the HL7 implementation. One of the typical integration tasks is to receive one format and transmit another. For example, system A works with HL7 v2.4 messages, system B with v2.3, and system C with v2.2, and system A is exchanging messages with B and C. The logical solution is to create schemas in separate namespaces for each system and assign maps on the send ports. A schematic diagram of the messaging solution is shown below:
    Nothing is complex about that conceptually. On the implementation level, though, things can get nasty because of the elaborate nature of HL7 schemas and the sheer number of message types involved. If you try to implement the maps directly in the BizTalk Map Editor, you quickly get buried by thousands of links between subfields of HL7 segments. Since the task is repetitive (HL7 segments are reused between message types), it's natural to take advantage of that modular structure and reduce the amount of work through reuse. Here's where it makes sense to switch from the visual map editor to plain old XSLT. The implementation is done in three steps.
    First, create XSL templates that map the segments of one version to another. This can be done using the BizTalk Map Editor, subsequently copying and modifying the generated XSL code to create one xsl:template per segment. Group all segments for a format mapping in one XSL file (we call it SegmentTemplates.xsl). Here is how the template for the PID segment (Patient Identification) looks:

        <xsl:template name="PID">
          <PID_PatientIdentification>
            <xsl:if test="PID_PatientIdentification/PID_1_SetIdPatientId">
              <PID_1_SetIdPid>
                <xsl:value-of select="PID_PatientIdentification/PID_1_SetIdPatientId/text()" />
              </PID_1_SetIdPid>
            </xsl:if>
            <xsl:for-each select="PID_PatientIdentification/PID_2_PatientIdExternalId">
              <PID_2_PatientId>
                <xsl:if test="CX_0_Id">
                  <CX_0_Id>
                    <xsl:value-of select="CX_0_Id/text()" />
                  </CX_0_Id>
                </xsl:if>
                <xsl:if test="CX_1_CheckDigit">
                  <CX_1_CheckDigitSt>
                    <xsl:value-of select="CX_1_CheckDigit/text()" />
                  </CX_1_CheckDigitSt>
                </xsl:if>
                <xsl:if test="CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed">
                  <CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed>
                    <xsl:value-of select="CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed/text()" />
                  </CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed>
                </xsl:if>
                . . . // skipped for brevity

    This is the most tedious and time-consuming part. Templates need to be created only for those segments that are actually used in the message interchange. Once this is done, the rest goes much more easily. The next step is to create a message-type-specific XSL that references (imports) the segment templates XSL file. Inside this file, simply call the segment templates in the appropriate places.
    For example, the beginning of the mapping XSL for the ADT_A01 message would look like this:

        <xsl:import href="SegmentTemplates_23_to_24.xslt" />
        <xsl:output omit-xml-declaration="yes" method="xml" version="1.0" />

        <xsl:template match="/">
          <xsl:apply-templates select="s0:ADT_A01_23_GLO_DEF" />
        </xsl:template>

        <xsl:template match="s0:ADT_A01_23_GLO_DEF">
          <ns0:ADT_A01_24_GLO_DEF>
            <xsl:call-template name="EVN" />
            <xsl:call-template name="PID" />
            <xsl:for-each select="PD1_PatientDemographic">
              <xsl:call-template name="PD1" />
            </xsl:for-each>
            <xsl:call-template name="PV1" />
            <xsl:for-each select="PV2_PatientVisitAdditionalInformation">
              <xsl:call-template name="PV2" />
            </xsl:for-each>

    This code simply calls each segment template directly for required singular elements, and inside a for-each loop for optional or repeating elements. Lastly, create a BizTalk map (.btm) that references the message-type-specific XSL. It is essentially an empty map with its Custom XSL Path property set to the appropriate XSL file. In the end, you have one segment templates file that is referenced by many message-type-specific XSL files, which are in turn used by BizTalk maps. Once all the segment templates are created they are widely reusable, and the rest of the work is very simple and clean.
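
    Note that the snippets above omit the stylesheet root element. For the message-type XSL to compile, it needs a root that declares the s0: and ns0: prefixes used for the source and target schemas; a minimal skeleton might look like this (the namespace URIs here are hypothetical and must match your generated HL7 schemas):

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:s0="http://microsoft.com/HealthCare/HL7/2X/2.3"
            xmlns:ns0="http://microsoft.com/HealthCare/HL7/2X/2.4">
          <!-- shared per-segment templates (PID, EVN, PV1, ...) -->
          <xsl:import href="SegmentTemplates_23_to_24.xslt" />
          <xsl:output omit-xml-declaration="yes" method="xml" version="1.0" />
          <!-- the message-type templates shown above go here -->
        </xsl:stylesheet>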

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed, the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle.
    If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.
    1. My CI environment isn't an exact copy of my production environment. It would be in a perfect world, and this is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms it might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted.
    2. It's not just about the storage requirements; it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I'm just not going to get the feedback quickly enough to react.
    So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore effectively mounts the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on the virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements other than the trivially small virtual log and data files (see the illustration below). The benefit is magnified the more databases you mount against the same backup file. This technique could be used to provide a large development team with a full development instance of a large production database. It is also incredibly easy to set up: once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool.
        RESTORE DATABASE WidgetProduction_virtual
        FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
        WITH
            MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
            MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
            NORECOVERY, STATS=1, REPLACE
        GO
        RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY
        GO

    Note the only change from what you would do normally is the naming of the .vmdf and .vldf files (the second RESTORE simply brings the same database online with RECOVERY). SQL Virtual Restore intercepts this by monitoring the extensions and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8GB production database, its corresponding backup file, and the .vldf and .vmdf files, which represent the only additional storage used for the new database following the virtual restore.
    The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and the virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial.

    Read the article

  • Best development architecture for a small team of programmers ( WAMP Stack )

    - by Tio
    Hi all.. I'm in the first month of work at a new company, and after I met the two programmers and asked how projects are organized inside the company, they simply shrugged their shoulders and said that nothing is organized. I think my jaw hit the ground right there. (I know some of you think I should quit, but I'm in a privileged position: I'm the most experienced there, so there's room for me to grow inside the company, and I'm taking the high road.) So I talked to the IT guy and one of the programmers, and maybe this week I'm going to get a server all to myself to start organizing things.
    I've used various architectures in my previous work experiences. In one, I was developing on a server on the network (no source control, of course). In another, I was developing on my local computer, with no server on the network, just source control. And at home I have a mix of the two: everything I code is on a server on the network, I have those folders under source control, and I also have a no-ip account configured on that server so I can access it from everywhere and show clients anything. For me, this last solution (the one I have at home) is the best:
    - Network server with WAMP stack.
    - The server has a public IP so we can access it by domain name, with a subdomain for each project.
    - Everybody works directly on the network server.
    I think the problem arises when two or more people want to work on the same project; in that case the only way to do it is by using source control and local repositories. This is great, but I think it makes development a lot more complicated. In the example I gave, to make a change to the code I would simply need to open the file in my favorite editor, make the change, alter the database, check the changes into source control, and presto, all done. Using local repositories, I would have to get the latest version, run the scripts on the local database to update it, alter the file, alter the database, check the changes in to the network server, update the database on the network server, see if everything is running well on the network server, and presto, all done. To me this seems overcomplicated for a change to a simple PHP page. I could share the database between local development and the network server; that sure would help. Maybe the best way to do this is simply:
    - Network server with WAMP stack (a test server, so to speak), publicly accessible through the web.
    - LAMP stack on every developer's computer (minus the database).
    We develop locally, test, then check the changes into the test server, and presto. What do you think? Maybe I should start doing this at home.. Thanks and best regards...
    Edit: I'm sorry, I made a mistake and switched WAMP with LAMP, sorry about that..
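
    For the subdomain-per-project idea above, the usual Apache approach is one VirtualHost per project. A minimal sketch (the domain and paths are hypothetical):

        <VirtualHost *:80>
            ServerName projectx.dev.example.com
            DocumentRoot /var/www/projectx
            <Directory /var/www/projectx>
                AllowOverride All
            </Directory>
        </VirtualHost>

    Each project then gets its own DocumentRoot, and the test server stays reachable by a memorable name.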

    Read the article

  • Top 5 Developer Enabling Nuggets in MySQL 5.6

    - by Rob Young
    MySQL 5.6 is truly a better MySQL and reflects Oracle's commitment to the evolution of the most popular and widely used open source database on the planet. The feature-complete 5.6 release candidate was announced at MySQL Connect in late September, and the production-ready, generally available ("GA") product should be available in early 2013. While the messaging around 5.6 has focused mainly on mass appeal, advanced topics like performance/scale, high availability, and self-healing replication clusters, MySQL 5.6 also provides many developer-friendly nuggets designed to enable those who are building the next generation of web-based and embedded applications and services. Boiling the 5.6 feature set down to a smaller set of simple, easy-to-use goodies designed with developer agility in mind, these things deserve a quick look:
    Subquery Optimizations
    Using semi-JOINs and late materialization, the MySQL 5.6 optimizer delivers greatly improved subquery performance. Specifically, the optimizer is now more efficient in handling subqueries in the FROM clause; materialization of subqueries in the FROM clause is now postponed until their contents are needed during execution. Additionally, the optimizer may add an index to derived tables during execution to speed up row retrieval. Internal tests run using DBT-3 benchmark Query #13, shown below, demonstrate an order-of-magnitude improvement in execution times (from days to seconds) over previous versions.

        select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
        from customer, orders, lineitem
        where o_orderkey in (
                select l_orderkey
                from lineitem
                group by l_orderkey
                having sum(l_quantity) > 313
            )
            and c_custkey = o_custkey
            and o_orderkey = l_orderkey
        group by c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice
        order by o_totalprice desc, o_orderdate
        LIMIT 100;

    What does this mean for developers? For starters, simplified subqueries can now be coded instead of complex joins for cross-table lookups:

        SELECT title FROM film
        WHERE film_id IN (SELECT film_id FROM film_actor GROUP BY film_id HAVING count(*) > 12);

    And even more importantly, subqueries embedded in packaged applications no longer need to be rewritten into joins. This is good news both for ISVs and for their customers who have access to the underlying queries and who have spent development cycles writing, testing and maintaining their own versions of rewritten queries across updated versions of a packaged app. The details are in the MySQL 5.6 docs.
    Online DDL Operations
    Today's web-based applications are designed to rapidly evolve and adapt to meet business and revenue-generation requirements. As a result, development SLAs are now most often measured in minutes rather than days or weeks. For example, when an application must quickly support new product lines or new products within existing product lines, the backend database schema must adapt in kind, most commonly while the application remains available for normal business operations. MySQL 5.6 supports this level of online schema flexibility and agility by providing the following new ALTER TABLE online DDL syntax additions:
    - CREATE INDEX
    - DROP INDEX
    - Change AUTO_INCREMENT value for a column
    - ADD/DROP FOREIGN KEY
    - Rename COLUMN
    - Change ROW FORMAT, KEY_BLOCK_SIZE for a table
    - Change COLUMN NULL, NOT_NULL
    - Add, drop, reorder COLUMN
    Again, the details are in the MySQL 5.6 docs.
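
    As a small illustration of the online DDL additions, 5.6 also lets you state your expectations explicitly with the ALGORITHM and LOCK clauses, so the statement fails fast if it cannot be done in place; a sketch against a hypothetical orders table:

        ALTER TABLE orders
            ADD INDEX idx_customer (customer_id),
            ALGORITHM=INPLACE, LOCK=NONE;
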
    Key-value access to InnoDB via Memcached API
    Many of the next generation of web, cloud, social and mobile applications require fast operations against simple key/value pairs. At the same time, they must retain the ability to run complex queries against the same data, as well as ensure the data is protected with ACID guarantees. With the new NoSQL API for InnoDB, developers have all the benefits of a transactional RDBMS coupled with the performance capabilities of a key/value store. MySQL 5.6 provides simple key-value interaction with InnoDB data via the familiar Memcached API. Implemented via a new Memcached daemon plug-in to mysqld, the Memcached protocol is mapped directly to the native InnoDB API, enabling developers to use existing Memcached clients to bypass the expense of query parsing and go directly to InnoDB data for lookups and transaction-compliant updates. The API makes it possible to re-use standard Memcached libraries and clients, while extending Memcached functionality by integrating a persistent, crash-safe, transactional database back-end. So does this option provide a performance benefit over SQL? Internal performance benchmarks using a customized Java application and test harness show some very promising results, with a 9x improvement in overall throughput for SET/INSERT operations. You can follow the InnoDB team blog for the methodology, implementation and internal test cases that generated these results, and the MySQL docs cover how to get started with the Memcached API to InnoDB.
    New Instrumentation in Performance Schema
    The MySQL Performance Schema was introduced in MySQL 5.5 and is designed to provide point-in-time metrics for key performance indicators. MySQL 5.6 improves the Performance Schema in answer to the most common DBA and developer problems. New instrumentation includes:
    - Statements/Stages: What are my most resource-intensive queries? Where do they spend time?
    - Table/Index I/O, Table Locks: Which application tables/indexes cause the most load or contention?
    - Users/Hosts/Accounts: Which application users, hosts and accounts are consuming the most resources?
    - Network I/O: What is the network load like? How long do sessions idle?
    - Summaries: Aggregated statistics grouped by statement, thread, user, host, account or object.
    The MySQL 5.6 Performance Schema is now enabled by default in the my.cnf file with optimized and auto-tuned settings that minimize overhead (< 5%, but mileage will vary), so using the Performance Schema on a production server to monitor the most common application use cases is less of an issue. In addition, new atomic levels of instrumentation enable the capture of granular levels of resource consumption by users, hosts, accounts, applications, etc. for billing and chargeback purposes in cloud computing environments. The MySQL docs are an excellent resource for all that is available and all that can be done with the 5.6 Performance Schema.
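
    To make the Statements instrumentation concrete, here is one way to ask the Performance Schema for the most expensive statement patterns (a sketch; the digest summary table is part of 5.6, and the LIMIT is arbitrary):

        SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT
        FROM performance_schema.events_statements_summary_by_digest
        ORDER BY SUM_TIMER_WAIT DESC
        LIMIT 5;
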
    Better Condition Handling - GET DIAGNOSTICS
    MySQL 5.6 enables developers to easily check for error conditions and code for exceptions by introducing the new MySQL Diagnostics Area and the corresponding GET DIAGNOSTICS interface command. The Diagnostics Area can be populated via multiple options and provides two kinds of information:
    - Statement: the affected row count and the number of conditions that occurred
    - Condition: the error codes and messages for all conditions returned by a previous operation
    The new GET DIAGNOSTICS command provides a standard interface into the Diagnostics Area and can be used via the CLI or from within application code to easily retrieve and handle the results of the most recent statement execution. An example of how it is used:

        mysql> DROP TABLE test.no_such_table;
        ERROR 1051 (42S02): Unknown table 'test.no_such_table'
        mysql> GET DIAGNOSTICS CONDITION 1
            -> @p1 = RETURNED_SQLSTATE, @p2 = MESSAGE_TEXT;
        mysql> SELECT @p1, @p2;
        +-------+------------------------------------+
        | @p1   | @p2                                |
        +-------+------------------------------------+
        | 42S02 | Unknown table 'test.no_such_table' |
        +-------+------------------------------------+

    Options for leveraging the MySQL Diagnostics Area and GET DIAGNOSTICS are detailed in the MySQL docs. While the above is a summary of some of the key developer-enabling 5.6 features, it is by no means exhaustive. You can dig deeper into what MySQL 5.6 has to offer by reading the developer zone article or checking out "What's New in MySQL 5.6" in the MySQL docs.
    BONUS ALERT! If you are developing on Windows or are considering MySQL as an alternative to SQL Server for your next project, application or shipping product, you should check out the MySQL Installer for Windows. The installer includes the MySQL 5.6 RC database, all drivers, Visual Studio and Excel plugins, tray monitor and development tools, all in a single download and GUI installer.
    So what are your next steps?
    - Register for the Dec. 13 "MySQL 5.6: Building the Next Generation of Web-Based Applications and Services" live web event. Hurry! Seats are limited.
    - Download the MySQL 5.6 Release Candidate (look under the Development Releases tab)
    - Provide feedback at http://bugs.mysql.com/
    - Join the developer discussion on the MySQL Forums
    - Explore all MySQL Products and Developer Tools
    As always, thanks for your continued support of MySQL!

    Read the article

  • How and when to use UNIT testing properly

    - by Zebs
    I am an iOS developer. I have read about unit testing and how it is used to test specific pieces of your code. A very quick example has to do with processing JSON data into a database: the unit test reads a file from the project bundle and executes the method that is in charge of processing the JSON data. But I don't get how this is different from actually running the app and testing against the server. So my question might be a bit general, but I honestly don't understand the proper use of unit testing, or even how it is useful; I hope the experienced programmers that surf around StackOverflow can help me. Any help is very much appreciated!
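
    To make the bundle-fixture example concrete: the point of the test described above is that it exercises only the parsing code, with a known input, no network and no live server, so a failure points straight at the parser rather than at connectivity or server state. A minimal sketch using XCTest (the OrderParser class, its method, and the fixture name are all hypothetical):

        #import <XCTest/XCTest.h>
        #import "OrderParser.h" // hypothetical class under test

        @interface OrderParserTests : XCTestCase
        @end

        @implementation OrderParserTests
        - (void)testParsesKnownJSONFixture {
            // Load a JSON file that ships with the test bundle -- no server involved.
            NSBundle *bundle = [NSBundle bundleForClass:[self class]];
            NSURL *url = [bundle URLForResource:@"order_fixture" withExtension:@"json"];
            NSData *data = [NSData dataWithContentsOfURL:url];
            XCTAssertNotNil(data);

            // Run only the code under test and assert on a known expected value.
            NSError *error = nil;
            Order *order = [OrderParser parseOrderFromData:data error:&error]; // hypothetical API
            XCTAssertNil(error);
            XCTAssertEqualObjects(order.customerName, @"Ada");
        }
        @end

    Running against the real server tests the server, the network and the parser all at once; the fixture test isolates one unit, runs in milliseconds, and keeps failing deterministically until the parser bug is fixed.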

    Read the article

  • CodePlex Daily Summary for Saturday, June 09, 2012

    Popular Releases

    - ARCBots API Program: ARCBots API Program v3 STABLE: Stable Release, you can find the update notes in the source code.
    - Microsoft SQL Server Product Samples: Database: AdventureWorks Sample Reports 2008 R2: AdventureWorks Sample Reports 2008 R2.zip contains several reports, including Sales Reason Comparisons SQL2008R2.rdl, which uses Adventure Works DW 2008R2 as a data source reference. For more information, go to the Sales Reason Comparisons report.
    - RULI Chain Code Image Generator: RULI Chain Code BW Image Generator v. 0.4: Added features: 3x3 bitmask support, 7x7 bitmask support, app icon; added some refactoring for later library creation.
    - The Chronicles of Asku: Alpha Test v1.1: Welcome to the Chronicles of Asku alpha test. The current state of the game is 2 tomb floors, level 10 cap, and almost all core systems in the game, excluding a store that you buy armor in. This is just a test for downloading the game, and severe bug hunts... INSTRUCTIONS: When you download the folder, right click within the folder and select EXTRACT ALL. Run Setup.exe and follow any on-screen instructions. DO NOT TRY TO LOAD A CHARACTER IF YOU HAVE NEVER SAVED ONE BEFORE. Patch 1.1 -Added a...
    - Json.NET: Json.NET 4.5 Release 7: Fix - Fixed Metro build to pass Windows Application Certification Kit on Windows 8 Release Preview; Fix - Fixed Metro build error caused by an anonymous type; Fix - Fixed ItemConverter not being used when serializing dictionaries; Fix - Fixed an incorrect object being passed to the Error event when serializing dictionaries; Fix - Fixed decimal properties not being correctly ignored with DefaultValueHandling.
    - LINQ Extensions Library: 1.0.3.0: New to release 1.0.3.0: Combinatronics: Combinations (unique), Combinations (with repetition), Permutations (unique), Permutations (with repetition); Convert jagged arrays to fixed multidimensional arrays; Convert fixed multidimensional arrays to jagged arrays; ElementAtMax, ElementAtMin, ElementAtAverage. New set of array extensions (1.0.2.8): Rotate, Flip, Resize (maintaining data), Split, Fuse, Replace. Append and Prepend extensions (1.0.2.7). IndexOf extensions (1.0.2.7). Ne...
    - Audio Pitch & Shift: Audio Pitch And Shift 4.5.0: Added Instruments tab for modules; open folder content feature; some bug fixes.
    - Python Tools for Visual Studio: 1.5 Beta 1: We're pleased to announce the release of Python Tools for Visual Studio 1.5 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including: • Supports CPython, IronPython, Jython and PyPy • Python editor with advanced member, signature intellisense and refactoring • Code navigation: “Find all refs”, goto definition, and object browser • Local and remote debugging •...
    - Circuit Diagram: Circuit Diagram 2.0 Beta 1: New in this release: Automatically flip components when placing; delete components using keyboard delete key; resize document; document properties window; print document; recent files list; confirm when exiting with unsaved changes; thumbnail previews in Windows Explorer for CDDX files; show shortcut keys in toolbox; highlight selected item in toolbox; zoom using mouse scroll wheel while holding down ctrl key; plugin support for: custom export formats, custom import formats, open...
    - Umbraco CMS: Umbraco CMS 5.2 Beta: The future of Umbraco: v5 represents the future architecture of Umbraco, so please be aware that while it's technically superior to v4 it's not yet on a par feature- or performance-wise. What's new? For full details see our http://progress.umbraco.org task tracking page showing all items complete for 5.2. In a nutshell: Package Builder, Starter Kits, Dynamic Extension Methods, Querying / IsHelpers, friendly alt template URLs, localization, various bug fixes / performance enhancements. Gett...
    - JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0.5: JayData is a unified data access library for JavaScript developers to query and update data from different sources like WebSQL, IndexedDB, OData, Facebook or YQL. See it in action in this 6-minute video. New features in JayData 1.0.5: http://jaydata.org/blog/jaydata-1.0.5-is-here-with-authentication-support-and-more http://jaydata.org/blog/release-notes Sencha Touch 2 module (read-only): this module can be used to bind data retrieved by JayData to a Sencha Touch 2 generated user interface. (exam...
    - 32feet.NET: 3.5: This version changes the 32feet.NET library (both desktop and NETCF) to use .NET Framework version 3.5. Previously we compiled for .NET v2.0. There are no code changes from our version 3.4; see the 3.4 release for more information. Changes due to compiling for .NET 3.5: applications should be changed to use NET/NETCF v3.5; removal of class InTheHand.Net.Bluetooth.AsyncCompletedEventArgs, which we provided on NETCF. We now just use the standard .NET System.ComponentModel.AsyncCompletedEvent...
    - DotNetNuke® Links: 06.02.01: Added new DNN 6.2.0 beta social feature "friends"; bugfixes.
    - Application Architecture Guidelines: Application Architecture Guidelines 3.0.7: 3.0.7
    - Jolt Environment: Jolt v2 Stable: Many new features. Follow development here for more information: http://www.rune-server.org/runescape-development/rs-503-client-server/projects/298763-jolt-environment-v2.html Setup instructions in download.
    - SharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.5): New features: multilingual support; max users property in Standings Web Part; games time zone change (UTC +1). Bug fix - Version 1.4 locking problem http://euro2012.codeplex.com/discussions/358262; bug fix - Field Title not found (v.1.3) German SP http://euro2012.codeplex.com/discussions/358189#post844228; bug fix - Access is denied for users with contribute rights; bug fix - installing on non-English versions of SharePoint; bug fix - Title Rules. Installing SharePoint Euro 2012 Predictor: SharePoint E...
    - myManga: myManga v1.0.0.4: ChangeLog: Updating from previous version: extract contents of Release - myManga v1.0.0.4.zip to the previous version's folder. Replaces: myManga.exe, BakaBox.dll, CoreMangaClasses.dll, Manga.dll, Plugins/MangaReader.manga.dll, Plugins/MangaFox.manga.dll, Plugins/MangaHere.manga.dll, Plugins/MangaPanda.manga.dll
    - MVVM Light Toolkit: V4RC (binaries only) including Windows 8 RP: This package contains all the latest DLLs for MVVM Light V4 RC. It includes the DLLs for Windows 8 Release Preview. An updated NuGet package is also available at http://nuget.org/packages/MvvmLightLibsPreview
    - ExtAspNet: ExtAspNet v3.1.7: +2012-06-03 v3.1.7 -?????????BUG,??????RadioButtonList?,AJAX????????BUG(swtseaman、????)。 +?Grid?BoundField、HyperLinkField、LinkButtonField、WindowField??HtmlEncode?HtmlEncodeFormatString(TiDi)。 -HtmlEncode?HtmlEncodeFormatString??????true,??????HTML????????。 -??????Asp.Net??GridView?BoundField?????????。 -http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.boundfield.htmlencode -?Grid?HyperLinkField、WindowField??UrlEncode??,????URL??(???true)。 -?????????????,?????????????...
    - LiveChat Starter Kit: LCSK v1.5.2: New features: visitor location (City - Country) from geo-location; pass configuration via javascript for the chat box; new visitor identification (no more using the IP address as visitor identification). To update from 1.5.1: run the /src/1.5.2-sql-updates.txt SQL script to update your database tables. If you have it installed via NuGet, simply update your package and the file will be included so you can run the update script. New installation: the easiest way to add LCSK to your app is by...

    New Projects

    - AdvanceWars: Advance Wars for .NET
    - ARCBots API Program: This is a simple API program designed for quick use of the ARCBots API.
    - AutoUpdaterdotNET: AutoUpdater.NET is a class library that allows .NET developers to easily add auto-update functionality to their projects.
    - C++ AMP Conformance Test Suite: C++ AMP Conformance Test Suite contains a set of tests to aid in verifying compiler, library, and runtime behaviors as specified in the C++ AMP open specification.
    - DynamicObjectProxy: DOP (Dynamic Object Proxy) is a library that makes it possible for any method of any object to be intercepted by a dynamic proxy. By intercepting a method, you can decorate the object, altering or retrieving information about its behavior. Extremely useful for logs, transaction checks, etc.
    - EAIP: ????????
    - Flatland: Artificial life, or A-Life, is a broad and ever-emerging field that has found applications in almost any field: economics, medicine, traffic planning, shopping habits, patterns in music. And it is evident that work with modelling logical life in artificial environments is a necessity for the future software developer. Abbott's novella "Flatland: A Romance of Many Dimensions" describes the two-dimensional world "Flatland", where life is geometrical shapes that have human-like emotions, but...
    - GameAudioSystem: Simple audio game system written with OpenAL to be used with Ogre3D.
    - HyperionSS2P - Simple scan to pdf solution: Simple scan-to-PDF solution.
    - Issue Impala: Issue Impala is a powerful and elegant issue tracker.
    - KoekyGL Wrapper: A .NET wrapper for OpenGL aimed at making it easy to use OpenGL.
    - Marik Sample Project: This is a sample by Kit
    - MongoDB Managment Client: Development MongoDB web client. HTML 5, jQuery. One-page web application. When Windows 8 Metro is released, then a Metro-style application too.
    - MongoDB.Dynamic: MongoDB.Dynamic is a personal project that I started when I was developing my first application targeting MongoDB as DBMS, using the "official driver" MongoDB.Driver, supported by 10gen. The objective is to provide a lightweight library with some interesting features that speed up development for desktop/web applications that access MongoDB databases. MongoDB.Dynamic is oriented to interfaces. You don't need to create concrete classes of your entities; all you need to do is set up your...
    - My Google Workspace: An easy way to create document searches at Google and read your emails, and much more
    - qzgfjava: git????,?android??“??”???
    - SP Sticky Notes: SP Sticky Notes allows your users to add sticky notes to a page on your SharePoint site.
    - Weather3: It's a Metro-style app
    - XNA Scumm: This is a rewrite of the ScummVM engine using XNA. ScummVM is an engine that runs old-school LucasArts graphical adventure games. It is written completely in C# and the first version will run on PC and Xbox 360. A Windows Phone version will probably follow. Of course, you will need to own the original games in order to use it. I will start my work with The Secret of Monkey Island, more specifically the VGA CD version. Monkey Island 2: LeChuck's Revenge and Indiana Jones 4 and Fate of Atl...
    - YoG Community Game: This project is a game being developed by several members of the Yogscast Community Forums. It is a top-down shooter, based around the protagonist 'Joe', and his adventures through TV shows every day.

    Read the article

  • Returning a mock object from a mock object

    - by Songo
    I'm trying to return an object when mocking a parser class. This is the test code, using PHPUnit 3.7:

        // set up the result object that I want returned from the call to the parse method
        $parserResult = new ParserResult();
        $parserResult->setSegment('some string');

        // set up the stub Parser object
        $stubParser = $this->getMock('Parser');
        $stubParser->expects($this->any())
                   ->method('parse')
                   ->will($this->returnValue($parserResult));

        // inject the stub into my client class
        $fileWriter = new FileWriter($stubParser);
        $output = $fileWriter->writeStringToFile();

    Inside my writeStringToFile() method I'm using $parserResult like this:

        public function writeStringToFile()
        {
            // Some code...
            $parserResult = $this->parser->parse();
            $segment = $parserResult->getSegment(); // that's why I set the segment in the test
        }

    Should I mock ParserResult in the first place, so that the mock returns a mock? Is it good design for mocks to return mocks? Is there a better approach to do all this?

    Read the article

  • Getting PHP to work with apache to run .php files through browser [closed]

    - by Kevin Duke
    I have a VPS running Debian 5.0 (I think) and I would like to get it to run PHP files. I was told it needed to be configured with Apache, so I tried entering the command apt-get install apache2 php5 libapache2-mod-php5, but there was no change. Console output: http://pastebin.com/sVMWq6mA This is everything in my /etc/apache2/mods-enabled: http://img35.imageshack.us/img35/6474/modsb.jpg My webserver can be accessed here: http://206.217.223.136/test/ In my test.php file I have the code <?php phpinfo(); ?> but instead of displaying the page, the browser tries to download it. How can I fix this?
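
    When Apache serves a .php file as a download instead of executing it, the usual cause on Debian is that the php5 module never got enabled. A sketch of the standard checks (assuming the apt-get install above completed successfully):

        sudo a2enmod php5                 # enable the module shipped by libapache2-mod-php5
        sudo /etc/init.d/apache2 restart
        apache2ctl -M | grep php          # the loaded-module list should now include php5_module

    If the module is already enabled and loaded, the next thing to look at is whether something in the vhost or .htaccess overrides the handler for .php files.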

    Read the article

  • Showing schema.org aggregate rating in Google rich snippets

    - by nickh
    After adding schema.org microdata markup for reviews and aggregate ratings, I expected review and rating information to show up in rich snippets. Unfortunately, neither is being shown. Google's Structured Data Testing Tool finds the microdata, and there are no errors or warnings on the page. Any idea what's wrong with the microdata markup?
    Example 1:
    Live Page: http://www.shelflife.net/ljn-thundercats-series-3/bengali
    Google Test: http://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fwww.shelflife.net%2Fljn-thundercats-series-3%2Fbengali
    Example 2:
    Live Page: http://www.shelflife.net/star-wars-mighty-muggs/asajj-ventress
    Google Test: http://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fwww.shelflife.net%2Fstar-wars-mighty-muggs%2Fasajj-ventress
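
    For comparison, a minimal Product block with an AggregateRating, as schema.org's microdata vocabulary defines it, looks like this (the name and numbers are hypothetical); diffing a stripped-down block like this against the live pages can help isolate the offending markup:

        <div itemscope itemtype="http://schema.org/Product">
          <span itemprop="name">Bengali</span>
          <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
            rated <span itemprop="ratingValue">4.5</span>/5 based on
            <span itemprop="reviewCount">11</span> reviews
          </div>
        </div>

    Note that Google can also withhold rich snippets for policy reasons (for example, markup on content not visible to users) even when the markup itself validates.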

    Read the article

  • Direct3D9 application won't write to depth buffer

    - by DeadMG
    I've got an application written in D3D9 which will not write any values to the depth buffer, resulting in incorrect values for the depth test. Things I've checked so far:
    - D3DRS_ZENABLE, set to TRUE
    - D3DRS_ZWRITEENABLE, set to TRUE
    - D3DRS_ZFUNC, set to D3DCMP_LESSEQUAL
    - The depth buffer is definitely bound to the pipeline at the relevant time
    - The depth buffer was correctly cleared before use
    I've used PIX to confirm that all of these things occurred as expected. For example, if I clear the depth buffer to 0 instead of 1, then correctly nothing is drawn, and PIX confirms that all the pixels failed the depth test. But I've also used PIX to confirm that my submitted geometry does not write to the depth buffer and so is not correctly rendered. Any other suggestions?
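
    For reference, the checklist above corresponds to this kind of D3D9 setup (a sketch; device is assumed to be a valid IDirect3DDevice9*):

        // Depth-test states the post says were verified
        device->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
        device->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
        device->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);

        // Clear colour and depth before drawing; depth cleared to 1.0f
        device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                      D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);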

    Read the article

  • Testing on Device Other Than the Known Brand Question (Local and Imported Phone Question)

    - by David Dimalanta
    I have a question. When testing a device using Eclipse, it's easy to install and add device software for the specific brands commonly used in game testing, like Samsung, Google, T-Mobile, and HTC, according to the Android Developers website. If I'm using other brands that run Android to test the program via Eclipse (i.e. MyPhone, Starmobile), what should I look for to download in order to enable testing on phones from brands other than the well-known, commonly used ones: the model number, or simply the brand? Here are some examples of these brands, beyond the brands we know that run Android: Starmobile Engage 7 (http://www.lazada.com.ph/Starmobile-Engage-7-Android-40-4GB-with-Wi-Fi-Black-Starmobile-Mercury-B201-COMBO-39833.html/) My|Phone A898 Duo (http://www.myphone.com.ph/#!a898-duo/c1yt) Also, take note that I'm a Filipino programmer working in the Philippines to test our local smartphones with the created Android game or app. I hope you can understand me; thanks for your help.

    Read the article

  • How should I log time spent on multiple tasks?

    - by xenoterracide
    In Joel's blog on evidence-based scheduling he suggests making estimates based on the smallest unit of work and logging extra work back to the original task. The problem I'm now experiencing is that I'll have "create object A", with a subtask "method A", which creates object B, and "test all of the above". I create tasks for each of these, which seems to be resulting in ok-ish estimates (I need practice), but when I go to log work I find that I worked on 4 tasks at once, because I tweak method A, find a bug in the test, and refactor object B, all while coding. How should I go about logging this work? Should I say I spent, for example, 2 hours on each of the 4 tasks I worked on in the 8-hour day?

    Read the article

  • permanently load module

    - by Radu
    I have a Compaq Presario CQ-61 320SQ, and I am using Ubuntu 10.04 because after updating to 10.10 my mouse and touchpad wouldn't work, the network wouldn't work, sound wouldn't work... (I managed to fix most of it after almost a month of googling, but not all; my 2 desktops have no problem with 10.10), so I decided to switch back to 10.04, where I have a problem: my broadband speed is very low because of the kernel module r8169. I downloaded the good module, r8101, and every time the computer boots an rc.local entry fixes this. Question: can I load the module permanently from a specific location? I heard about /etc/modules, but there I need the module name, whereas I have to load it from a specific path (where is the default path for that?). Thank you.
    So I studied the script: it creates the file r8101.ko in /lib/modules/`uname -r`/kernel/drivers/net, so I think as long as nobody deletes that file, and I don't update the kernel, maybe adding r8101 to /etc/modules will work, and adding r8169 to the blacklist... I will give it a try.
    EDIT2: So I added r8101 to /etc/modules and blacklist r8169 to /etc/modprobe.d/blacklist.conf. It still uses the old module; lsmod prints:

        radu@adu:~$ lsmod | grep r8
        r8101                  67626  0
        r8169                  34108  0
        mii                     4381  1 r8169

    EDIT: the module is loaded using this script that came with it:

        #!/bin/sh
        # invoke insmod with all arguments we got
        # and use a pathname, as insmod doesn't look in . by default
        TARGET_PATH=/lib/modules/`uname -r`/kernel/drivers/net
        echo
        echo "Check old driver and unload it."
        check=`lsmod | grep r8169`
        if [ "$check" != "" ]; then
            echo "rmmod r8169"
            /sbin/rmmod r8169
        fi
        check=`lsmod | grep r8101`
        if [ "$check" != "" ]; then
            echo "rmmod r8101"
            /sbin/rmmod r8101
        fi
        echo "Build the module and install"
        echo "-------------------------------" >> log.txt
        date 1>>log.txt
        make all 1>>log.txt || exit 1
        module=`ls src/*.ko`
        module=${module#src/}
        module=${module%.ko}
        if [ "$module" = "" ]; then
            echo "No driver exists!!!"
            exit 1
        elif [ "$module" != "r8169" ]; then
            if test -e $TARGET_PATH/r8169.ko ; then
                echo "Backup r8169.ko"
                if test -e $TARGET_PATH/r8169.bak ; then
                    i=0
                    while test -e $TARGET_PATH/r8169.bak$i
                    do
                        i=$(($i+1))
                    done
                    echo "rename r8169.ko to r8169.bak$i"
                    mv $TARGET_PATH/r8169.ko $TARGET_PATH/r8169.bak$i
                else
                    echo "rename r8169.ko to r8169.bak"
                    mv $TARGET_PATH/r8169.ko $TARGET_PATH/r8169.bak
                fi
            fi
        fi
        echo "Depending module. Please wait."
        depmod -a
        echo "load module $module"
        modprobe $module
        echo "Completed."
        exit 0
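
    One detail that often trips this up: if r8169 is also baked into the initramfs, it can be loaded at boot before the blacklist in /etc/modprobe.d is honoured. A sketch of the usual sequence to make the switch stick (run after r8101.ko is in place under /lib/modules):

        echo "blacklist r8169" | sudo tee /etc/modprobe.d/blacklist-r8169.conf
        sudo depmod -a            # rebuild the module dependency maps so r8101 is resolvable
        sudo update-initramfs -u  # regenerate the initramfs so r8169 is not loaded at boot

    After a reboot, lsmod should then show r8101 without r8169.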

    Read the article

  • Error mounting VirtualBox shared folders in an Ubuntu guest

    - by skaz
    I have Ubuntu 10 as the guest OS on a Windows 7 machine. I have been trying to set up shares through VirtualBox, but nothing is working. First, I create the share in VirtualBox and point it to a Windows folder. Then I try to mount the drive in Linux, but I keep getting:

        /sbin/mount.vboxsf: mounting failed with the error: Protocol error

    I have read so many solutions to this, but none seem to work. I have tried:
    - Using the mount.vboxsf syntax
    - Reinstalling the VBox additions
    - Rebooting
    - Enabling and trying as the root account
    I made a share called "Test" in VBox Shared Folders. Then I made a directory in Ubuntu named "test2". Then I tried to execute this command:

        sudo mount -t vboxsf Test /mnt/test2

    Any other ideas?
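
    One thing worth trying once the share name and Guest Additions version are confirmed good: mount with explicit ownership options, which rules out permission-mapping problems (a sketch; uid/gid 1000 assumes the first desktop user):

        sudo mkdir -p /mnt/test2
        sudo mount -t vboxsf -o uid=1000,gid=1000 Test /mnt/test2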

    Read the article

  • Juju bootstrap, install

    - by Robert G.
    I would like to test MAAS + Juju + OpenStack (I followed the documentation on maas.ubuntu.org). I have already made a test environment:
    - 1 MAAS server, which will also run Juju
    - 10 KVM servers for OpenStack
    The KVM servers are already in the "ready" state in MAAS. I would like to set up Juju, but I could not get it working, which is driving me crazy. My environments.yaml:

        environments:
          maassrv:
            type: maas
            maas-server: 'http://${192.168.1.116}/MAAS/'
            maas-oauth: 'my-key-from-maas'
            authorized-keys-path: /root/.ssh/id_rsa.pub
            admin-secret: 1234
            default-series: trusty

    When I run juju status -e maassrv:

        ERROR Unable to connect to environment "maassrv".
        Please check your credentials or use 'juju bootstrap' to create a new environment.
        Error details: environment "maassrv" not found

    OK, fair enough, so I should run juju bootstrap -e maassrv:

        ERROR environment "maassrv" not found

    When I run the command without the -e switch:

        error: no environment specified

    So I am stuck here; I have already added the required SSH keys to MAAS too. I have run out of ideas why it isn't working.

    Read the article

  • How to pass GET parameters to rewritten URL?

    - by Jakobud
    I have an .htaccess rewrite rule like this:

        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteRule ^search/(.*)$ search.php?q=$1

    What this does is, if someone visits http://www.mysite.com/search/test, the URI that is really processed is http://www.mysite.com/search.php?q=test. Now, if I try to pass an extra random GET parameter to my rewritten URL, the parameter is ignored. So if I visit http://www.mysite.com/search/whatever?extra=true the parameter extra is ignored; it doesn't seem to get passed at all. Can this problem be fixed? If so, how?
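
    For what it's worth, mod_rewrite has a flag for exactly this situation: QSA (query string append) merges the original query string into the rewritten one instead of discarding it. The same rule with the flag added:

        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteRule ^search/(.*)$ search.php?q=$1 [QSA,L]

    With QSA, a request for /search/whatever?extra=true should reach PHP as search.php?q=whatever&extra=true.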

    Read the article

  • CodePlex Daily Summary for Monday, March 26, 2012

    Popular Releases

    - Quick Performance Monitor: Version 1.8.1: Added option to set the main window to be 'Always On Top'. Use the context (right-click) menu on the graph to toggle.
    - .Net Rest API for Kayako Fusion 4: kayako_rest_api_2012.03.26: Added ability to search for users via organisation/email. This is much quicker than getting all users then filtering.
    - GeoMedia PostGIS data server: PostGIS GDO 1.0.1.1: This is a new version of the GeoMedia PostGIS data server which supports user rights. It means that only those feature classes which the current user has rights to select are visible in GeoMedia. Issues fixed in this release: fixed a problem where the gdo.gfeaturesbase table was visible in GeoMedia (to hide this table, run the previous version of Database Utilities and uncheck this table in the feature classes list, then load the new release); fixed a problem where the coordinate system list has not...
    - Silverlight 4 & 5 Persian DatePicker: Silverlight 4 and 5 Persian DatePicker: Added Silverlight 5 support.
    - Y.Music: Y.Music v.1.0: ?????? ?????? ?????????. ????????: ????? ???? ????, ??????? ?????? ? ??? - Beta.
    - Asp.NET Url Router: v1.0: Build for .NET 2.0 and .NET 4.0.
    - menu4web: menu4web 0.0.3: menu4web 0.0.3
    - ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OSM 2.0 Final: This release installs the ArcGIS Editor for OSM Server Component and/or the ArcGIS Editor for OSM Desktop components. The Desktop tools allow you to download data from the OpenStreetMap servers and store it locally in a geodatabase. You can then use the familiar editing environment of ArcGIS Desktop to create, modify, or delete data. Once you are done editing, you can post the edit changes back to OSM to make them available to all OSM users. The Server Component allows you to quickly create...
    - Craig's Utility Library: Craig's Utility Library 3.1: This update adds about 60 new extension methods, a couple of new classes, and a number of fixes, including: Additions: added DateSpan class; added GenericDelimited class. Random additions: added a static, thread-friendly version of Random.Next called ThreadSafeNext. AOP Manager additions: added Destroy function to AOPManager (clears out all data so the system can be recreated; really only useful for testing...). ORM additions: added PagedCommand and PageCount functions to ObjectBaseClass (same as M...
    - XNA Electric Effect: Jason Electric Effect v1.1: The library now includes 3 effect types: Line, Bezier, and CatmullRom, providing different looks and feels.
    - DotSpatial: DotSpatial 1.1: This is a minor release; see the changes in the issue tracker. Minimal -- includes the DotSpatial core and essential extensions. Extended -- includes debugging symbols and additional extensions. Just want to run the software? An end-user (non-programmer) version is available, branded as MapWindow. Want to add your own feature? Develop a plugin using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components are available as NuGet pa...
    - Change default Share-site group SharePoint Online (Office 365): Change default Share-site group SharePoint Online: By default, when we share a site collection or site with external users, SharePoint Online shows the default SharePoint groups, which are Visitors and Members. By using this feature, you will get a link which you can use to customize the default groups to your custom groups and other default groups.
    - Microsoft All-In-One Code Framework - a centralized code sample library: C++, .NET Coding Guideline: Microsoft All-In-One Code Framework Coding Guideline. This document describes the coding style guideline for native C++ and .NET (C# and VB.NET) programming used by the Microsoft All-In-One Code Framework project team.
    - Working with Social Data: Tag Cloud Customization: http://swatipoint.blogspot.com/2011/10/sharepoint-2010-social-featurestagging.html
    - WebDAV for WHS: Version 1.0.67: Added: check whether the Remote Web Access is turned on or not; Added: check for Add-In updates.
    - Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (March 2012) for .NET 4.0: The March release of Phalanger 3.0 significantly enhances performance, adds new features and fixes many issues. See the following for the list of main improvements. New features: Phalanger Tools installable for Visual Studio 2011 Beta; "filter" extension with several of the most-used filters implemented; DomDocument HTML parser, loadHTML() method; mail() PHP-compatible function; PHP 5.4 T_CALLABLE token; PHP 5.4 "callable" type hint; PCRE: UTF32 characters in range support; configuration supports <c...
    - Nearforums - ASP.NET MVC forum engine: Nearforums v8.0: Version 8.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: internationalization; custom authentication provider; access control list for forums and threads. Webdeploy package checksum: abc62990189cf0d488ef915d4a55e4b14169bc01. Visit the Roadmap for more details.
    - BIDS Helper: BIDS Helper 1.6: This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new: Analysis Services Tabular: Smart Diff, Tabular Actions Editor, Tabular HideMemberIf, Tabular Pre-Build ...
    - Json.NET: Json.NET 4.5 Release 1: New feature - Windows 8 Metro build; new feature - JsonTextReader automatically reads ISO strings as dates; new feature - added DateFormatHandling to control whether dates are written in the MS format or ISO format, with ISO as the default; new feature - added DateTimeZoneHandling to control reading and writing DateTime time zone details; new feature - added async serialize/deserialize methods to JsonConvert; new feature - added Path to JsonReader/JsonWriter/ErrorContext and exceptions w...

    New Projects

    - ASIVeste: No description available
    - Author-it Sync Headings Plug-in: Author-it plug-in that allows the user to synchronize the Print, Help, and Web headings with the Description for each selected topic.
    - BlogEngine.Web: BlogEngine.Web is BlogEngine.Net converted to use the Web Application Project model (WAP).
    - Code Writer Helper: A quick solution to help code generator writers.
    - CodeUITest: Practise CodeUI automation.
    - DAX Studio: Excel add-in for PowerPivot and Analysis Services Tabular projects that will include an Object Browser, query editing and execution, formula and measure editing, syntax highlighting, integrated tracing and query execution breakdowns.
    - Fated: Fated is an isometric-viewed, tile-based tactical RPG developed in C# using XNA, to be deployed to Xbox. This includes a character generation core, graphics engine, and storyline parser.
    - iSufe???: “iSufe???”??????????????????????????????。???????????、????、?????????,??????????????。??,????iOS/Android/WAP???????,???????????????。????GPLv2??,?????????????。
    - Kinect test project: Basic project for my Kinect test application
    - LoLTimers: LoLTimers by Christian Schubert 2012, Version 1.0.0.0. This is a small app that lets you keep track of the most important creep camp cooldowns. Developed in Visual Studio, C#.
    - London Priority Security Services Ltd: LPSS - London Priority Security Services Ltd
    - NMCNPM: Code for the NMCNPM team
    - Office 365 Anonymous Access Manager Sandbox Solution: The sandbox solution enables you to manage anonymous access to lists on Office 365. It allows setting read, modify and add rights. Additionally, the configuration page adds the necessary events to be able to use moderation when anonymous users are creating a list item. The second feature in the solution enables anonymous access on blog sites; it allows anonymous users to comment on a blog.
    - Office 365 Google Analytics: This sandbox solution enables Google Analytics everywhere in your site collection. This allows you to use Google Analytics reporting on all your Office 365 sites.
    - Office 365 Mobile Access Enables for Public Sites & Blogs: This sandbox solution enables mobile access on Office 365 sites.
    - OwnMFCSolution: MFC test solution.
    - People Data Generator: Need to load a bunch of test data to represent people (e.g. name, address, phone, etc.)? Wish it looked realistic? People Data Generator is what you need. Features: realistic names; realistic addresses, using real towns and postal codes; realistic phone numbers and emails; very extensible.
    - Proventi: With this program you can manage your company's inventory. The program will initially be used within the mini-company Proventi. It is written in VB.Net and uses SQL Server CE for data storage.
    - qCommerce: ??????????? ???????, ???????????? ??? ????? qSoftware
    - QuanLyOTo: C# course project: a car garage management system.
    - RoyaSoft.ir Resources: I use this project for my personal web site :)
    - SGPF: The team does not have anything to declare here!
    - SharePoint 2010 Autocomplete Lookup Field: Autocomplete Lookup Field allows type-ahead functionality while entering lookup values in list items.
    - Sharing Photos using SignalR: An MVC application using SignalR that can be used to share photos between friends and get realtime updates. A user connected to the website can upload a photo which will be automatically broadcast to all clients connected at that point.
    - Sistema Hoteleiro: Sistema Hoteleiro is my final assignment for the .NET Environment Application Architecture course in the 4th class of the graduate specialization program in Distributed Systems Architecture offered by the Instituto de Educação Continuada of the Pontifícia Universidade Católica de M...
    - Software Revolution: This project is the core information site of a company named Software Revolution, which provides software solutions.
    - tgryi: tgyri
    - vbWSUS: Really decide which updates to install and when, from a centralized server, globally or per host: installation schedule; updates to install; email results; configure extra Windows Update parameters. Works with a WSUS server or Windows Update from Microsoft. See README.txt for more information! :) The current official website is http://sourceforge.net/projects/vbwsus/
    - XamlCombine: Combines multiple XAML resource dictionaries into one. Replaces DynamicResources with StaticResources and sorts them in order of usage.
    - XNA Shader-free Linear Burn effect: Sample demonstrating a Linear Burn effect in XNA without using custom shaders.
    - zhCms: zhCms
    - zhtest: my test project

    Read the article

  • Python C API return more than one value / object without building a tuple [migrated]

    - by Grisu
    I have the following problem: I have written a C extension for Python (2.7/3.2) to interface with a self-written software library. Unfortunately, I need to return two values from the function, where the last one is optional. In Python I tried

        def func(x, y):
            return x + y, x - y

        test = func(13, 4)

    but test is a tuple. If I write test1, test2 = func(13, 4), I get both values separately. Is there a way to return only one value without unpacking the tuple, i.e. so that the second (..., third, ..., fourth) value gets neglected? And if such a solution exists, what does it look like for the C API? Because return Py_BuildValue("ii", x+y, x-y); results in a tuple as well.
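    For context, a Python function (C-implemented or not) always returns exactly one object, so a two-value return is a tuple by construction; the caller decides what to discard. A minimal sketch, with plain Python standing in for the extension function:

        def func(x, y):
            # stands in for the C extension function; it returns a 2-tuple
            return x + y, x - y

        test = func(13, 4)[0]    # keep only the first value by indexing
        test, _ = func(13, 4)    # or unpack and discard the second by convention

    The same constraint holds in the C API: a function can return only a single PyObject*, so the tuple built by Py_BuildValue("ii", ...) is the standard way to carry two values; returning a dict or a custom object instead still hands the caller exactly one object to pick apart.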

    Read the article

  • Landed Cost Management Integration with OPM Financials

    - by Robert Story
    Upcoming Webcast

    Title: Landed Cost Management Integration with OPM Financials
    Date: April 21, 2010
    Time: 11:00 am EDT, 9:00 am PDT, 8:00 am MDT
    Product Family: EBS: Process Manufacturing

    Summary: This one-hour session will present a setup overview and detailed steps for a test case, and is recommended for functional users who are using the OPM Financials module with an actual costing method. Topics will include:

    - Overview of Landed Cost Management functionality
    - Setup steps and a test case
    - Some technical considerations
    - Documentation and other reference materials available

    A short live demonstration (only if applicable) and a question-and-answer period will be included. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are:

    1) Support for multiple table mapping
    2) A background thread to auto-commit long-running transactions
    3) Binlog performance enhancements

    Let's go over each of these features one by one. In the last section, we will go over a couple of internally performed performance tests.

    Support multiple table mapping

    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Being able to access different tables at run time, with different mappings for different connections, is thus a very desirable feature, and in this GA release both are possible. We will discuss the key concepts and the key steps for using this feature.

    1) The "mapping name" in the "get" and "set" commands

    To allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once a table is registered, InnoDB Memcached will be able to look it up when it is referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, users include the "registration name" in their get or set commands, in the form "get @@new_mapping_name.key"; the prefix "@@" is required to signal a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter, "." by default. So the syntax is:

        get [@@mapping_name.]key_name
        set [@@mapping_name.]key_name

    or

        get @@mapping_name
        set @@mapping_name

    Here is an example. Let's set up three entries in the "containers" table. The first maps to the InnoDB table "test/demo_test" with mapping name "setup_1":

        INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":

        INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
        INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user issues:

        get @@setup_3.key_a

    (this also outputs the value corresponding to key "key_a"), or simply:

        get @@setup_3

    Once this is done, the connection is switched to the "my_database/my_demo" table until another table mapping switch is requested, so it can continue to issue regular commands such as:

        get key_b
        set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).
    2) Delimiter

    For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option to the "config_options" system table, named "table_map_delimiter":

        INSERT INTO config_options VALUES("table_map_delimiter", ".");

    So if a user wants a different delimiter, they can change it in the config_options table.

    3) Default mapping

    Once we have multiple table mappings, there must always be a "default" mapping. For this, we decided that if a mapping named "default" exists, it is chosen as the default mapping; otherwise, the first row of the containers table is chosen. Please note that user tables can be repeated in the "containers" table (for example, if the user wants to access different columns of the same table in different settings), as long as they use different mapping/configure names in the first column, which is enforced by a unique index.

    4) The bind command

    In addition, we also extended the protocol with a bind command. Its usage is fairly straightforward: to switch to the "setup_3" mapping above, you simply issue:

        bind setup_3

    This switches the connection's InnoDB table to "my_database/my_demo".

    In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables; the client-side sketch below shows the flow end to end.
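    As a minimal sketch of that flow, here is what a client session could look like using the python-memcached package; the host/port, key names, and values are illustrative assumptions, and it presumes the three "containers" rows above are already registered and the Memcached plugin is listening on its default port:

        import memcache  # pip install python-memcached

        # Assumed endpoint: the InnoDB Memcached plugin on the default port 11211.
        mc = memcache.Client(["127.0.0.1:11211"])

        # Write through mapping "setup_1" (table test/demo_test).
        mc.set("@@setup_1.key_a", "value_a")

        # Switch this connection to mapping "setup_3" (table my_database/my_demo)
        # and read key_a from that table in a single command.
        print(mc.get("@@setup_3.key_a"))

        # Plain commands now stay bound to my_database/my_demo
        # until the next @@-prefixed command or an explicit bind.
        mc.set("key_c", "value_c")
        print(mc.get("key_b"))

    Note that because the binding is per connection rather than global, a pool of client connections can hold different table bindings at the same time.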
    Background thread to auto-commit long-running transactions

    This feature is related to the "batch" concept discussed in earlier blogs. The "batch" feature allows us to batch read and write operations and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance, but it also comes with some disadvantages: for example, you cannot view "uncommitted" operations from the SQL end unless you set the transaction isolation level to read-uncommitted, and row locks are held for extended periods of time, which can reduce concurrency.

    To deal with this, we introduced a background thread that auto-commits transactions if they have been idle for a certain amount of time (5 seconds by default). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If such a transaction has been idle longer than the limit and is not being used, it is committed. The limit is configurable by changing "innodb_api_bk_commit_interval"; its default value is 5 seconds, its minimum is 1 second, and its maximum is 1073741824 seconds.

    With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to large numbers. This also reduces the number of locks held by long-running transactions, further increasing concurrency.

    Enhancement in binlog performance

    As you may know, binlog operations are not done by the InnoDB storage engine; rather, they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to always set the "batch" size to 1 when binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and there was quite a bit of overhead in creating and destroying these constructs that bogged performance down.

    With this release, we made the necessary changes to keep the MySQL constructs for as long as they remain valid for a particular connection, so there are no longer repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as it does with binlog turned off, it is much better than the previous release, and we are continuing to optimize this area to improve performance as much as possible.

    Performance study

    Amerandra of our System QA team conducted performance studies of queries through our InnoDB Memcached connection and through the plain SQL end, with some interesting results. The tests were conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB of memory, two 4-core Intel Xeon 2.0 GHz X86_64 CPUs, and two RAID disks (1027 GB, 733.9 GB). The results are described in the following tables.

    Table 1: Performance comparison on set operations

        Connections | 5.6.7-RC Memcached plugin, QPS (memcached-threads=8***) | 5.6.7-RC* set, QPS** | X faster
        8           | 30,000                                                  | 5,600                | 5.36
        32          | 59,000                                                  | 13,000               | 4.54
        128         | 68,000                                                  | 8,000                | 8.50
        512         | 63,000                                                  | 6,800                | 9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation as implemented in InnoDB Memcached involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does, the "value" field of the matching row (by key) is updated. In the SQL comparison above, it is a precompiled stored procedure, and the query just executes that procedure.
    *** Added "--daemon_memcached_option=-t8" (the default is 4 threads).

    So with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on get operations

        Connections | 5.6.7-RC Memcached plugin, QPS (memcached-threads=8) | 5.6.7-RC get, QPS | X faster
        8           | 42,000                                               | 27,000            | 1.56
        32          | 101,000                                              | 55,000            | 1.83
        128         | 117,000                                              | 52,000            | 2.25
        512         | 109,000                                              | 52,000            | 2.10

    With the "get" query (i.e., the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary

    In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We now also provide a background commit thread for long-running idle transactions, so users can configure large batch writes/reads without worrying about rows being locked for too long or uncommitted data remaining invisible. We also greatly enhanced performance when the binlog is enabled. We will continue working in both the performance and functionality areas to make InnoDB Memcached a good showcase for our InnoDB APIs.

    Jimmy Yang, September 29, 2012

    Read the article

  • Oracle FLEXCUBE delivers 'Bank-in-a-Box' with Oracle Database Appliance

    - by Margaret Hamburger
    This is another great example of how Oracle Database Appliance simplifies the deployment of high-availability database solutions, making it easy for Oracle Partners and ISVs to deliver value-added solutions to customers on a simple, reliable, and affordable database platform. Oracle FLEXCUBE Universal Banking recently announced that it runs on Oracle Database Appliance X3-2 to deliver mid-size banks a compelling bank-in-a-box solution. With this certification, banks can benefit from a low-IT-footprint, high-performance, full-scale banking technology that is engineered to support end-to-end business requirements. In a recent performance test of Oracle FLEXCUBE Universal Banking on Oracle Database Appliance X3-2, the system managed more than 2.6 million online transactions in 60 minutes. This equated to roughly 744 transactions per second, with an average response time of 156 milliseconds for 98 percent of the transactions. Likewise, the solution completed end-of-month batch processing for 10 million customer accounts in 123 minutes during the performance test. Learn more about Oracle Database Appliance Solution-in-a-Box.

    Read the article

  • Download speed is 0.12 Mbps when tested against servers in the U.S., but 0.76 Mbps against local servers. Normal? [closed]

    - by Graviton
    Feeling that my ISP is cheating me out of my money (my subscription package is 1 Mbps), I ran a speed test on my internet connection using www.speedtest.net. I tested the connection speed against two servers, one local (Malaysia) and one in the U.S. I found that while the upload speed remained constant, the download speed differed: 0.76 Mbps for the server in Malaysia and only 0.12 Mbps for the server in the U.S. I called the ISP, and they blamed it on intercontinental signal loss. But how can the speed differ by that much? If it really does, then we should always take the advertised broadband speed with a grain of salt, because it is not the speed we actually get. No?

    Read the article

  • GA UA codes for testing site - set up

    - by Drew
    Does anyone know the process for using a live GA UA code to test a site in development? I.e., I have a live site with a GA UA code attached (e.g. UA-123456) that tracks live traffic data. I've been told there is a way to produce another code, associated with the primary code, to use on the testing version of my live site; e.g. the test code could be UA-123457. Can anyone shed some light on this? If it's not possible, should I just set up a completely separate GA account for my testing site?
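    Whatever Google Analytics itself offers here, one common pattern worth illustrating is to choose the tracking ID at render time based on the deployment environment, so the development site never reports into the live property. A minimal server-side sketch in Python; the APP_ENV variable and the UA-123457 test ID are assumptions for illustration:

        import os

        # UA-123456 is the live property from the question; UA-123457 is a
        # hypothetical stand-in for whatever test property or account you create.
        GA_TRACKING_IDS = {
            "production": "UA-123456",
            "development": "UA-123457",
        }

        def ga_tracking_id():
            # APP_ENV is an assumed deployment variable; default to the test ID
            # so a misconfigured box never pollutes the live analytics data.
            env = os.environ.get("APP_ENV", "development")
            return GA_TRACKING_IDS.get(env, GA_TRACKING_IDS["development"])

        # Example: inject the result into the page template wherever the GA
        # snippet expects the UA code.
        print(ga_tracking_id())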

    Read the article

< Previous Page | 221 222 223 224 225 226 227 228 229 230 231 232  | Next Page >