Search Results

Search found 13501 results on 541 pages for 'dimensional model'.


  • Secure Coding Practices in .NET

    - by SoftwareSecurity
    Thanks to everyone who helped pack the room at the Fox Valley Day of .NET. This presentation was designed to help developers understand why secure coding is important, what areas to focus on, and additional resources. You can find the slides here.

    Remember to understand what you are really trying to protect within your application. This needs to be a conversation between the application owner, developer and architect. Understand what data (or asset) needs to be protected. This could be passwords, credit cards, or Social Security Numbers. It may also be business-specific information, like business-confidential data. Performing a Risk and Privacy Assessment & Threat Model on your applications, even in a small way, can help you organize this process.

    These are the areas to pay attention to when coding:

    - Authentication & Authorization
    - Logging & Auditing
    - Event Handling
    - Session and State Management
    - Encryption

    Links requested:

    - Slides

    Books:

    - The Security Development Lifecycle: SDL: A Process for Developing Demonstrably More Secure Software
    - Threat Modeling
    - Writing Secure Code
    - The Web Application Hacker's Handbook
    - Secure Programming with Static Analysis

    Other Resources:

    - OWASP
    - OWASP Top 10
    - OWASP WebScarab
    - OWASP WebGoat
    - Internet Storm Center
    - Web Application Security Consortium

    Events:

    - OWASP AppSec 2011 in Minneapolis
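    Picking up the Encryption bullet above: a minimal sketch (mine, not from the talk) of one common practice, storing passwords as salted PBKDF2 hashes via the framework's Rfc2898DeriveBytes class rather than keeping them in plain text:

        using System.Security.Cryptography;

        class PasswordHasher
        {
            // Derives a 32-byte hash from the password using a random
            // 16-byte salt and 10,000 PBKDF2 iterations; store both parts.
            public static void Hash(string password, out byte[] salt, out byte[] hash)
            {
                salt = new byte[16];
                using (var rng = new RNGCryptoServiceProvider())
                    rng.GetBytes(salt);
                using (var kdf = new Rfc2898DeriveBytes(password, salt, 10000))
                    hash = kdf.GetBytes(32);
            }
        }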


  • Rant - Why is Windows Azure not available in Africa?

    - by Allan Rwakatungu
    Yesterday at the .NET user group meeting in Kampala, Uganda I gave a talk on cloud computing with Windows Azure (details will be in my next blog post). The guys were excited. Without owning their own infrastructure, and at low cost, they can build scalable, highly available applications. Not quite. Azure accounts are only available to people in particular countries - none from Africa. I attended PDC in 2008 when Microsoft unleashed Windows Azure. One of the case studies to show the benefits of cloud computing was a project in Africa for an education service in Ethiopia. The point they were making was that the cloud was perfect for scenarios where computing infrastructure is not sophisticated, like Ethiopia. Perfect, I thought. So I got my beta account from PDC and started playing around in the cloud. Then Azure went live, my beta account no longer worked, and I can't pay because I am from Uganda. Microsoft, this sucks. I don't know the reasons for Microsoft doing this, but I'm sure we can work something out. We in Africa need the cloud more than anybody else in the world. Setting up data centers that are highly scalable and available is not an option our startups have. But we also can't pay for cloud computing with Microsoft. Microsoft, we know we are a tiny, insignificant market for a company your size, but excluding us only continues to widen the digital divide. Microsoft, how about a reseller model for cloud computing? Instead of trying to deal directly with each client, you have local partners who help you sell and bill your cloud services. I think that would lead to Windows Azure being available in Africa. I can help you resell in Uganda.


  • What are the common mistakes in 'tailored Scrum approaches'?

    - by Clark Gable
    I have seen this before. Management wants to be agile and be scrummified, but does not want to step out of the status quo. My latest observation is no different; here, Scrum is 'tailored' to the organization, specifically into a weird many-people process. A diagram shows the different participants. I am putting together a document listing why this will not work. Here are the obvious ones:

    1. There are product owner agents (an obvious WTF), who report to the product owner: this dilutes decision-making capability.
    2. There is a role that looks similar to a manager in the traditional approach, a development manager: an obvious attempt at a command-and-control model.
    3. The ScrumMaster's role includes collecting timesheets, which are used to track progress instead of burndown charts: detrimental to agile's efforts to build teams of motivated individuals.

    Leaving aside the question "how would you convince the management?", my question is more, "what else do you see as failures in this or similar 'tailored Scrum approaches'?"

    EDIT: The diagram might use a few more details:

    1. The development manager is not part of the development team, and has not very clearly defined responsibilities, except: developer performance assessment, recruitment, etc.
    2. There are more than two teams (each with ScrumMaster + development manager + dev team) with the same product owner for all teams!


  • How do you manage a complexity jump?

    - by glenatron
    It seems an infrequent but common experience that sometimes you're working on a project and suddenly something turns up unexpectedly, throws a massive spanner in the works and ramps up the complexity a whole lot. For example, I was working on an application that talked to SOAP services on various other machines. I whipped up a prototype that worked fine, then went on to develop a regular front end and generally get everything up and running in a nice, fairly simple and easy-to-follow fashion. It worked great until we started testing across a wider network, and suddenly pages started timing out as the latency of the connections and the time required to perform calculations on remote machines resulted in timed-out requests to the SOAP services. It turned out that we needed to change the architecture to spin requests out onto their own threads and cache the returned data so it could be updated progressively in the background, rather than performing calculations on a request-by-request basis.

    The details of that scenario are not too important - indeed it's not a great example, as it was quite foreseeable and people who have written a lot of apps of this type for this type of environment might have anticipated it - except that it illustrates how one can start with a simple premise and model and suddenly face an escalation of complexity well into the development of the project. What strategies do you have for dealing with these types of functional changes whose need arises - often as a result of environmental factors rather than specification change - later in the development process or as a result of testing? How do you balance avoiding the premature optimisation / YAGNI / over-engineering risks of designing a solution that mitigates possible but not necessarily probable issues, against developing a simpler and easier solution that is likely to be as effective but doesn't incorporate preparedness for every possible eventuality?
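    For what it's worth, the fix described above (spin the slow calls onto background threads and serve progressively updated cached results) can be sketched in a few lines of C#. This is purely illustrative, not the asker's code; the fetch delegate stands in for the real SOAP call:

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class ProgressiveCache<TKey, TValue>
        {
            private readonly ConcurrentDictionary<TKey, TValue> cache =
                new ConcurrentDictionary<TKey, TValue>();
            private readonly Func<TKey, TValue> fetch;

            public ProgressiveCache(Func<TKey, TValue> fetch) { this.fetch = fetch; }

            // Answers immediately from the cache (false if nothing cached yet)
            // and refreshes the entry on a background thread either way, so
            // the page never blocks on the remote calculation.
            public bool TryGet(TKey key, out TValue value)
            {
                Task.Factory.StartNew(() => cache[key] = fetch(key));
                return cache.TryGetValue(key, out value);
            }
        }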


  • Building Enterprise Smartphone App – Part 4: Application Development Considerations

    - by Tim Murphy
    This is the final part in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Application Development Considerations

    Now we get to the actual building of your solutions. What are the skills and resources that will be needed in order to develop a smartphone application in the enterprise?

    Language Knowledge

    One of the first things you need to consider when deciding on a platform is which language you either have the most in-house skill with or can most easily acquire. If you already have developers who know Java or C# you may want to use either Android or Windows Phone. You should also take into consideration the market availability of developers. If your key developer leaves, how easy is it to find a knowledgeable replacement?

    A second consideration when it comes to programming languages is the qualities exposed by the languages of a particular platform. How well does that development language and its associated frameworks support things like security and access to the features of the smartphone hardware? This will play into your overall cost of ownership if you have to create this infrastructure on your own.

    Manage Limited Resources

    Everything is limited on a smartphone: battery, memory, processing power, network bandwidth. When developing your applications you will have to keep your footprint as small as possible in every way. This means not running unnecessary processes in the background that will drain the battery, or pulling more data over the airwaves than you have to. You also want to keep your on-device data in as compact a format as possible.

    Mobile Design Patterns

    There are a number of design patterns that have either come to life because of smartphone development or have been adapted for this use. The main pattern in the Windows Phone environment is MVVM (Model-View-ViewModel); a minimal sketch appears at the end of this post. This is great for overall application structure and separation of concerns. The fun part is trying to keep that separation as pure as possible. Many of the other patterns may or may not have strict definitions, but some that you need to be concerned with are push notification, asynchronous communication and offline data storage.

    Real estate is limited on smartphones and even tablets. You are also limited in the type of controls that can be represented in the UI. This means rethinking how you modularize your application. Typing is also much harder to do, so you want to reduce it as much as possible. This leads to UI patterns. While not what we would traditionally think of as design patterns, the guidance each platform has for UI design is critical to the success of your application. If users find the application difficult to navigate they will not use it.

    Development Process

    Because of the differences in development tools required, test devices, and certification and deployment processes, your teams will need to learn new ways of working together. This will include the need to integrate service contracts of back-end systems with mobile applications. You will also want to make sure that you present consistency across different access points to corporate data. Your web site may have more functionality than your smartphone application, but it should have a consistent core set of functionality. This all requires greater communication between sub-teams of your developers.

    Testing Process

    Testing of smartphone apps has a lot more to do with what happens when you lose connectivity, or if the user navigates away from your application. There are a lot more opportunities for the user or the device to perform disruptive acts. This should be your main testing concentration aside from the main business requirements. You will need to do things like setting the phone to airplane mode and seeing what the application does in order to weed out any gaps in your handling of communication interruptions.

    Need For Outside Experts

    Since this is a development area that is new to most companies, the need for experts is a lot greater. Whether these are consultants, vendor representatives or just development community forums, you will need to establish expert contacts. Nothing is more dangerous for your project timelines than a lack of knowledge. Make sure you know who to call to avoid lengthy delays in your project because of knowledge gaps.

    Security

    Security has to be a major concern for enterprise applications. You aren't dealing with just someone's game standings. You are dealing with a company's intellectual property and competitive advantage. As such, you need to start by limiting access to the application itself. Once the user is in the app, you need to ensure that the data is secure at all times. This includes both local storage and across the wire. This means if a platform doesn't natively support encryption for these functions, you will need to find alternatives to secure your data. You also need to keep secret (encryption) keys obfuscated, or locked away outside of the application. People can disassemble the code otherwise and break your encryption.

    Offline Capabilities

    As we discussed earlier, one of your biggest concerns is not having connectivity. Because of this, a good portion of your code may be dedicated to handling loss-of-connection and reconnection situations. What do you do if you lose the network? Back up all your transactions and store any supporting data so that operations can continue offline. In order to support this you will need to determine the available flat file or local database capabilities of the platform. Any failed transactions will need to support a retry mechanism, whether it is automatic or user initiated. This also includes your services, since they will need to be able to roll back partially completed transactions. Whatever you do, don't ignore this area when you are designing your system.

    Deployment

    Each platform has different deployment capabilities. Some are more suited to enterprise situations than others. Apple's approach is probably the most mature at the moment. Prior to the current generation of smartphone platforms it would have been Windows CE. Windows Phone 7 has the limitation that the app has to be distributed through the same network as public-facing applications. You mark them as private, which means that they are only accessible by a direct URL. Unfortunately this does not make them undiscoverable (although it is very difficult). This will change with Windows Phone 8, where companies will be able to certify their own applications and distribute them. Given this, Windows Phone applications need to be more diligent with application access in order to keep them restricted to the company's employees. My understanding of the Android deployment scheme is that it is much less standardized than either iOS or Windows Phone. Someone would have to confirm or deny that for me though, since I have not yet put the time into researching this platform further. Given my limited exposure to the iOS and Android platforms I have not been able to confirm this, but there are varying degrees of user involvement to install and keep applications updated. At one extreme the user just goes to a website to do the install, and in other cases they may need to download files and perform steps to install them.

    Future

    Bluetooth: Today we use Bluetooth for keyboards, mice and headsets. In the future it could be used to interrogate car computers or manufacturing systems, or possibly retail machines, by service techs. This would open smartphones to greater use as almost a Star Trek tricorder. It would get you all your data, as well as letting the phone act as a universal remote for just about any device or machine.

    Better corporation-controlled deployment: At least in the Windows Phone world, the upcoming release of Windows Phone 8 will include a private certification and deployment option that is currently not available with Windows Phone 7 (Mango). We currently have to run the apps through the Marketplace certification process and use a targeted distribution method.

    Platform-independent approaches: HTML5 and JavaScript with web services have become a popular topic lately, not only for creating flexible web sites, but also for creating cross-platform mobile applications. I'm not yet convinced that this lowest-common-denominator approach is viable in most cases, but it does have its place and seems to be growing. Be sure to keep an eye on it.

    Summary

    From my perspective, enterprise smartphone applications can offer a great competitive advantage to many companies. They are not cheap to build and should be approached cautiously. Understand the factors I have outlined in this series, do your due diligence, and see if there is a portion of your business that can benefit from the mobile experience.

    del.icio.us Tags: Architecture,Smartphones,Windows Phone,iOS,Android
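    As promised under Mobile Design Patterns, here is a minimal MVVM sketch (an illustrative example of mine, not code from the talk): the view data-binds to properties on a view model, which raises change notifications so the UI updates itself:

        using System.ComponentModel;

        public class CustomerViewModel : INotifyPropertyChanged
        {
            private string name;

            // The view binds to this property; setting it notifies the view.
            public string Name
            {
                get { return name; }
                set
                {
                    if (name == value) return;
                    name = value;
                    OnPropertyChanged("Name");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }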


  • CodePlex Daily Summary for Thursday, November 29, 2012

    CodePlex Daily Summary for Thursday, November 29, 2012

    Popular Releases

    JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.5: What's new in JayData 1.2.5. For detailed release notes check the release notes. Handlebars template engine support: implement data manager applications with JayData using Handlebars.js for templating; include JayDataModules/handlebars.js and begin typing the mustaches :) Blog posts: Handlebars templates in JayData; Handlebars helpers and model driven commanding in JayData. Easy JayStorm cloud data management: manage cloud data using the same syntax and data management concept just like any other data ...

    nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.70: Highlight features & improvements: • Performance optimization. • Search engine optimization. ID-less URLs for products, categories, and manufacturers. • Added ACL support (access control list) on products and categories. • Minify and bundle JavaScript files. • Allow a store owner to decide which billing/shipping address fields are enabled/disabled/required (like it's already done for the registration page). • Moved to MVC 4 (.NET 4.5 is required). • Now Visual Studio 2012 is required to work ...

    SQL Server Partition Management: Partition Management Release 3.0: Release 3.0 adds support for SQL Server 2012 and is backward compatible with SQL Server 2008 and 2005. The release consists of: • A Readme file • The Executable • The source code (Visual Studio project) Enhancements include: support for Columnstore indexes in SQL Server 2012; ability to create TSQL scripts for staging table and index creation operations; full support for global date and time formats, locale independent; support for binary partitioning column types; fixes to is...

    PDF Library: PDFLib v2.0: Release notes: this new version includes many bug fixes and adds support for stream objects and cross-reference object streams. New feature: extract images from the PDF.

    MCEBuddy 2.x: MCEBuddy 2.3.10: Critical update to 2.3.9. Changelog for 2.3.10 (32bit and 64bit): 1. AsfBin executable missing from build 2. Removed extra references from build to avoid conflict 3. Showanalyzer installation now checked on remote engine machine. Changelog for 2.3.9 (32bit and 64bit): 1. Added support for WTV output profile 2. Added support for minimizing MCEBuddy to the system tray 3. Added support for custom archive folder 4. Added support to disable subdirectory monitoring 5. Added support for better TS fil...

    DotNetNuke® Community Edition CMS: 07.00.00: Major highlights: fixed issue that caused profiles of deleted users to be available; removed the postback after checkboxes are selected in Page Settings > Taxonomy; implemented the functionality required to edit security role names and social group names; fixed JavaScript error when using a ";" semicolon as a profile property; fixed issue when using DateTime properties in profiles; fixed viewstate error when using Facebook authentication in conjunction with "require valid profile fo...

    CODE Framework: 4.0.21128.0: See change notes in the documentation section for details on what's new.

    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.76: Fixed a typo in ObjectLiteralProperty.IsConstant that caused all object literals to be treated like they were constants, and possibly moved around in the code when they shouldn't be.

    Kooboo CMS: Kooboo CMS 3.3.0: New features: Dropdown/Radio/Checkbox Lists no longer reference the userkey; instead they refer to the UUID field for input value. You can now delete, export, import content from database in the site settings. Labels can now be imported and exported. You can now set the required password strength and maximum number of incorrect login attempts. Child sites can inherit plugins from their parent sites. The view parameter can be changed through the page_context.current value. Addition of c...

    Distributed Publish/Subscribe (Pub/Sub) Event System: Distributed Pub Sub Event System Version 3.0: Important: Wsp 3.0 is NOT backward compatible with Wsp 2.1. Prerequisites: you need to install the Microsoft Visual C++ 2010 Redistributable Package; you can find it at x64 http://www.microsoft.com/download/en/details.aspx?id=14632 and x86 http://www.microsoft.com/download/en/details.aspx?id=5555. Wsp now uses Rx (Reactive Extensions) and .NET 4.0. 3.0 enhancements: changed the topology from a hierarchy to peer-to-peer groups; this should provide much greater scalability and more fault-resi...

    datajs - JavaScript Library for data-centric web applications: datajs version 1.1.0: datajs is a cross-browser and UI agnostic JavaScript library that enables data-centric web applications with the following features: OData client that enables CRUD operations including batching and metadata support using both ATOM and JSON payloads. Single store abstraction that provides a common API on top of HTML5 local storage technologies. Data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes...

    Team Foundation Server Administration Tool: 2.2: TFS Administration Tool 2.2 supports the Team Foundation Server 2012 Object Model. Visual Studio 2012 or Team Explorer 2012 must be installed before you can install this tool. You can download and install Team Explorer 2012 from http://aka.ms/TeamExplorer2012. There are no functional changes between the previous release (2.1) and this release.

    Coding Guidelines for C# 3.0, C# 4.0 and C# 5.0: Coding Guidelines for CSharp 3.0, 4.0 and 5.0: See Change History for a detailed list of modifications.

    Math.NET Numerics: Math.NET Numerics v2.3.0: Portable Library Build: adds support for WP8 (.NET 4.0 and higher, SL5, WP8 and .NET for Windows Store apps). New: portable build also for F# extensions (.NET 4.5, SL5 and .NET for Windows Store apps). NuGet: portable builds are now included in the main packages, no more need for special portable packages. Linear Algebra: continued major storage rework, in this release focusing on vectors (previous release was on matrices); thin QR decomposition (in addition to existing full QR); static Cr...

    ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.1: +2012-11-25 v3.2.1 (the remaining release notes are in Chinese and arrived garbled in the original feed)

    VidCoder: 1.4.9 Beta: Updated HandBrake core to SVN 5079. Fixed crashes when encoding DVDs with title gaps.

    ZXing.Net: ZXing.Net 0.10.0.0: On the way to a release 1.0 the API should be stable now with this version. Sync with rev. 2521 of the Java version. Windows Phone 8 assemblies. Improvements and fixes.

    BlackJumboDog: Ver5.7.3: 2012.11.24 Ver5.7.3 (the release notes are in Japanese and arrived garbled in the original feed)

    Liberty: v3.4.3.0 Release 23rd November 2012: Change Log: Added: -H4 A dialog which gives further instructions when attempting to open a "Halo 4 Data" file -H4 Added a short note to the weapon editor stating that dropping your weapons will cap their ammo -Reach Edit the world's gravity -Reach Fine invincibility controls in the object editor -Reach Edit object velocity -Reach Change the teams of AI bipeds and vehicles -Reach Enable/disable fall damage on the biped editor screen -Reach Make AIs deaf and/or blind in the objec...

    Umbraco CMS: Umbraco 4.11.1: NuGet; NuGet Blog. Read the release blog post for 4.11.0. Read the release blog post for 4.11.1. What's new: 50 bugfixes (see the issue tracker for a complete list). Read the documentation for the MVC bits. Breaking changes: GetPropertyValue now returns an object, not a string (only affects upgrades from 4.10.x to 4.11.0). Note: if you need Courier, use the release candidate (as of build 26). The code editor has been greatly improved, but is sometimes problematic in Internet Explorer 9 and lower. Pr...

    New Projects

    Convection Game API: A basic game API written in C# using XNA 4.0.
    CS^2 (Casaba's Simple Code Scanner): Casaba's simple code scanner is a tool for managing greps to be run over a source tree and correlating those greps to issues for bug generation.
    CSparse.NET: A Concise Sparse Matrix Package for .NET.
    Document Generation Utility: This is about extracting different XML entities, wrapping them up with jumbled legal English alphabets, and producing outputs as per the file format defined in settings.
    DuinoExplorer: File manager, http, remote, copy & paste.
    GenomeOS: An experimental x86 object-oriented operating system programmed in C/C++ and x86 Intel assembly.
    Hush.Project: Database.
    Invi: NA.
    LibreTimeTracker Client: Free alternative to the original TimeTracker Client.
    MO Virtual Router: Virtual Router for Windows 8.
    My Word Game Publisher: This is a full web application which is developed in .NET 2.0 using C#, XML, MS SQL, ASPX.
    MyWGP XmlDataValidator: This is designed and developed (in Silverlight 2, C#) to validate the requirements of data in XML files: the type & size of data, and the requirement status. It allows a user to choose the type of data, enter min & max size, and check the requirement status for data elements.
    OMX_AL_test_environment: OMX application layer simulation/testing environment.
    Orchard Coverflow: An Orchard CMS module that provides a way to create iTunes-like coverflow displays out of your Media items.
    SharePoint standart list form javascript utility (SPListFormUtility): SPListFormUtility is a small JavaScript library that helps control the appearance and behavior of standard SharePoint list forms. SharePoint 2010 and 2013 support.
    Shuttle Core: Shuttle Core is a project that contains cross-cutting libraries for use in .NET software development.
    The Prism architecture lack for the Interactions!: The Prism architecture lack for the Interactions! (Silverlight) The defect opens popups more than once and creates memory leaks. Example of the lack here.
    YouCast: YouCast (from YouTube and Podcast) allows you to subscribe to video feeds on YouTube* as podcasts in any standard podcatcher like Zune PC, iTunes and so forth.
    ZEFIT: A time-tracking tool, built as a fourth-year project of an IT apprenticeship at the GIBB in Bern.
    (unnamed project): A Windows Phone application; the name and description arrived garbled in the original feed.


  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month's T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don't write about LobsterPot client horror stories, so I'm writing about a situation that a fellow MVP friend asked me about recently instead.

    The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what's changed. But of course, this isn't always possible. And it's much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you've just deleted the wrong data source. Unfortunately, Reporting Services lets you delete data sources without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend's colleague. So, suddenly there's a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn't the hard part (that's just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again.

    I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don't – they're catalogue items just like the reports. Let's have a look at the structure of these two tables (although if you're reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn't seem to be a Books Online page for this information, sorry about that. I'm also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn't seem to have been any changes here for quite a while.

    dbo.Catalog

    The Primary Key is ItemID. It's a uniqueidentifier. I'm not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what's what. But foreign keys are for that too…

    Path, Name and ParentID tell you where in the folder structure the item lives. Path isn't actually required – you could've done recursive queries to get there. But as that would be quite painful, I'm more than happy for the Path column to be there. Path contains the Name as well, incidentally.

    Type tells you what kind of item it is. Some examples are 1 for a folder and 2 for a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it).

    Content is an image field, remembering that image doesn't necessarily store images – these days we'd rather use varbinary(max), but even in SQL Server 2012, this field is still image. It stores the actual item definition in binary form, whether it's actually an image, a report, whatever.

    LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID.

    Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn't be a separate table, but I guess that's just the way it goes. This field gets changed when the default parameters get changed in Report Manager.

    There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they're not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I'm not sure. They're in dbo.DataSource instead.

    dbo.DataSource

    The Primary Key is DSID. Yes, it's a uniqueidentifier...

    ItemID is a foreign key reference back to dbo.Catalog.

    Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question.

    Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You'd think this should be enforced by foreign key, but it's not. It does allow NULLs though.

    Flags is an int, and I'll come back to this.

    When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you'd be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedure that many people write, the kind of thing that means they don't need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn't quite do that. If it deleted everything on a cascading delete, we'd've lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don't want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure:

        UPDATE [DataSource]
        SET
            [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
            [Link] = NULL
        FROM
            [Catalog] AS C
            INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
        WHERE
            (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there's no semi-colon on the end (but I'd rather they fix the ntext and image types first), and don't get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what's going on with the Flags field.

    What I'd LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that's in the shared data source, the one that's about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I'd say that's probably still slightly better than LOSING THE INFORMATION COMPLETELY. Sorry, rant over. I should log a Connect item – I'll put that on my todo list.

    So it sets the Link field to NULL, and marks the Flags to tell you they're broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the '2' bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we've just used. I'd also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source having been broken in the past.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you'd be forgetting something.

    So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you're changing, just in case you need to revert something later after testing (doing it all in a transaction won't help, because you'll just lock out the table, stopping you from testing anything).

        UPDATE ds
        SET [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /* Insert your own GUID */
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there's no reason why 400-odd broken reports need to be quite the nightmare that it could be. Really, it should be less than five minutes.

    @rob_farley


  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem while trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you really do is add rows to a partition, so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition in which you add rows belongs to a different table.

    Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies a SQL syntax to read new rows, otherwise the original query specified for the partition will be used, and it could generate duplicated data if you don't have a dynamic behavior on the SQL side. If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out of ProcessAdd in case you are using a Batch command with more than one Process command (which is the reason why you want to use a single batch: to run multiple process operations in parallel!). If you use AMO (Analysis Management Objects) you will find that this combination is not supported; even though you don't get a syntax error compiling the code, you might obtain this error at execution time:

    The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only.

    If this is happening to you, the best solution I've found is manipulating the XMLA code generated by AMO, moving the Binding nodes into the right place. A more detailed description of the issue and the code required to send a correct XMLA batch to Analysis Services is available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can be used also if you have the same problem in a Multidimensional model.
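    Not the article's actual code, but a minimal AMO sketch of the capture-then-fix approach described above; the server, database and binding details are hypothetical placeholders:

        using System;
        using Microsoft.AnalysisServices; // AMO

        class ParallelProcessAddSketch
        {
            static void Main()
            {
                Server server = new Server();
                server.Connect(@"localhost\TABULAR");           // hypothetical instance name
                Database db = server.Databases["MyTabularDb"];  // hypothetical database name

                server.CaptureXml = true; // record XMLA commands instead of executing them
                foreach (Cube cube in db.Cubes)
                    foreach (MeasureGroup mg in cube.MeasureGroups)
                        foreach (Partition p in mg.Partitions)
                            p.Process(ProcessType.ProcessAdd); // real code would supply a QueryBinding here
                server.CaptureXml = false;

                // server.CaptureLog now holds one Process command per partition.
                // Wrap them in a single <Batch>, move any out-of-line <Bindings>
                // nodes up to the Batch level (the XML manipulation the article
                // describes), then send the corrected batch with server.Execute(...).

                server.Disconnect();
            }
        }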


  • Can't shutdown Ubuntu (Wubi Installation)

    - by zsgre3nd3s7
    I downloaded Ubuntu 12.04 using Wubi. The installation went correctly and everything was working fine until I tried to shut down. I clicked shutdown and Ubuntu started shutting down, but as soon as I saw the Ubuntu logo with blank dots under it, it froze. I had to perform a hard shutdown. After booting my computer back up and going into Ubuntu, I tried shutting it down again, but this time it took me to a black screen with lots and lots of log output, and after a little while it stopped writing anything. I was able to input characters using the keyboard and everything, but it never shut down. I had to perform a hard shutdown again. Now it always gives me the Ubuntu logo and freezes. What should I do? I know hard shutdowns are bad and want to avoid them. Is there any way to make shutdowns work? I tried a reboot and it also froze on the Ubuntu logo.

    Sony VAIO Model E SVE17115FDB laptop. Graphics card - AMD RADEON HD 7650M (and it installed correctly in Ubuntu). BIOS - H2O BIOS. Processor - Intel i7-3612QM.

    Edit: I only installed the AMD/ATI proprietary drivers (fglrx), not the AMD/ATI post-release drivers, because they keep showing an error message. Here is jockey.log.

    Edit 2: Here is the log that I mentioned before that appeared on my screen. It appeared after I tried reinstalling my AMD driver but failed, so I reinstalled the other one. Sorry for the quality; I took those pictures with my phone.


  • Recommended design pattern to handle multiple compression algorithms for a class hierarchy

    - by sgorozco
    For all you OOD experts: what would be the recommended way to model the following scenario? I have a class hierarchy similar to the following one:

        class Base { ... }
        class Derived1 : Base { ... }
        class Derived2 : Base { ... }
        ...

    Next, I would like to implement different compression/decompression engines for this hierarchy. (I already have code for several strategies that best handle different cases, like file compression, network stream compression, legacy system compression, etc.) I would like the compression strategy to be pluggable and chosen at runtime; however, I'm not sure how to handle the class hierarchy. Currently I have a tightly-coupled design that looks like this:

        interface ICompressor
        {
            byte[] Compress(Base instance);
        }

        class Strategy1Compressor : ICompressor
        {
            public byte[] Compress(Base instance)
            {
                // Common compression guts for Base class
                ...
                if( instance is Derived1 )
                {
                    // Compression guts for Derived1 class
                }
                if( instance is Derived2 )
                {
                    // Compression guts for Derived2 class
                }
                // Additional compression logic to handle other class derivations
                ...
            }
        }

    As it is, whenever I add a new derived class inheriting from Base, I have to modify all compression strategies to take this new class into account. Is there a design pattern that allows me to decouple this, and allows me to easily introduce more classes into the Base hierarchy and/or additional compression strategies?
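    For illustration only (my sketch, not part of the question): one way to loosen the coupling is for each strategy to keep a per-type handler registry, so supporting a new derived class means adding one registration instead of editing every is-check. The Base/Derived1/Derived2 stubs below stand in for the real hierarchy:

        using System;
        using System.Collections.Generic;

        class Base { }
        class Derived1 : Base { }
        class Derived2 : Base { }

        interface ICompressor
        {
            byte[] Compress(Base instance);
        }

        class Strategy1Compressor : ICompressor
        {
            // Maps a concrete type to the routine that knows how to compress it.
            private readonly Dictionary<Type, Func<Base, byte[]>> handlers =
                new Dictionary<Type, Func<Base, byte[]>>();

            public Strategy1Compressor()
            {
                handlers[typeof(Derived1)] = x => new byte[0]; // Derived1-specific guts go here
                handlers[typeof(Derived2)] = x => new byte[0]; // Derived2-specific guts go here
            }

            public byte[] Compress(Base instance)
            {
                Func<Base, byte[]> handler;
                if (handlers.TryGetValue(instance.GetType(), out handler))
                    return handler(instance);
                throw new NotSupportedException(instance.GetType().Name);
            }
        }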


  • How can I reduce the amount of time it takes to fully regression test an application ready for release?

    - by DrLazer
    An app I work on is being developed with a modified version of Scrum. If you are not familiar with Scrum, it's just an alternative approach to a more traditional waterfall model, where a series of features are worked on for a set amount of time known as a sprint. The app is written in C# and makes use of WPF. We use Visual C# 2010 Express edition as an IDE.

    If we work on a sprint and add in a few new features, but do not plan to release until a further sprint is complete, then regression testing is not an issue as such. We just test the new features and give the app a good once-over. However, if a release is planned that our customers can download, a full regression test is factored in. In the past this wasn't a big deal; it took 3 or 4 days and the devs simply fixed up any bugs found in the regression phase. But now, as the app is getting larger and larger and incorporating more and more features, the regression is spanning out for weeks.

    I am interested in any methods that people know of or use that can decrease this time. At the moment the only ideas I have are to either start writing unit tests, which I have never fully tried out in a commercial environment, or to research the possibility of any UI automation APIs or tools that would allow me to write a program to perform a series of batch tests. I know literally nothing about the possibilities of UI automation, so any information would be valuable. I don't know that much about unit testing either. How complicated can the tests be? Is it possible to get unit tests to use the UI? Are there any other methods I should consider? Thanks for reading, and for any advice in advance.

    Edit: Thanks for the information. Does anybody know of any alternatives to what has been mentioned so far (NUnit, RhinoMocks and CodedUI)?
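    To make the unit-testing option concrete, here is what a minimal NUnit test looks like; the PriceCalculator class is a hypothetical example of mine, not from the app in question. Tests like this exercise logic beneath the UI, which is why they are cheap to run on every build, while driving the WPF UI itself needs a UI-automation tool such as CodedUI instead:

        using NUnit.Framework;

        public class PriceCalculator
        {
            // Applies a 10% discount to orders over 100.
            public decimal Total(decimal amount, bool discount)
            {
                return discount && amount > 100m ? amount * 0.9m : amount;
            }
        }

        [TestFixture]
        public class PriceCalculatorTests
        {
            [Test]
            public void Total_AppliesTenPercentDiscount_Over100()
            {
                var calc = new PriceCalculator();
                Assert.AreEqual(180m, calc.Total(200m, true));
            }
        }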


  • Priority Manager – Part 1: Laying out the plan

    - by Patrick Liekhus
    Now that we have shown the EDMX with XPO/XAF and how to use SpecFlow and BDD to run EasyTest scripts, let's put it all together and show the evolution of a project using all the tools combined. I have a simple project that I use to track my priorities throughout the day. It uses some of Stephen Covey's principles from The 7 Habits of Highly Effective People. The idea is to write down all your priorities the night before and rank them. This way, when you get started tomorrow, you will have your list of priorities. It's not that new things won't appear tomorrow and reprioritize your list, but at least now you can track them. My idea is to create a project that will allow you to manage your list from your desktop, a web browser or your mobile device. This way your list is never too far away. I will lay out the data model and the additional concepts as time progresses. My goal is to show the power of all of these tools combined, and I thought the best way would be to build a project in sequence. I have had this idea for quite some time, so let's get it completed with the outline below.

    Here is the outline of the series of posts coming in the near future:

    Part 2 – Modeling the Business Objects
    Part 3 – Changing XAF Default Properties
    Part 4 – Advanced Settings within Liekhus EDMX/XAF Tool
    Part 5 – Custom Business Rules
    Part 6 – Unit Testing Our Implementation
    Part 7 – Behavior Driven Development (BDD) and SpecFlow Tests
    Part 8 – Using the Windows Application
    Part 9 – Using the Web Application
    Part 10 – Exposing OData from our Project
    Part 11 – Consuming OData with Excel PowerPivot
    Part 12 – Consuming OData with iOS
    Part 13 – Consuming OData with Android
    Part 14 – What's Next

    I hope this helps outline what to expect. I anticipate that I will have additional topics mixed in there, but I plan on getting this outline completed within the next several weeks. Thanks


  • CQRS applicability when some commands need to block the UI

    - by regularfry
    I am working on an app which I would dearly love to transition from a fairly traditional layered architecture to CQRS, for a number of reasons, not least of which is that having a robust event log will make a couple of feature requests I can see barrelling towards me trivial to accommodate. Now, I have a conceptual problem: of around 40 commands the user can initiate, there are three which the user needs to be sure have successfully completed before the UI lets them do anything else. Everything else fits into the "submit a request, query for success later" model, except for these three commands. How is this handled in CQRS-land? Do I separate the three blocking commands into effectively a third service, so I have Commands, Queries, and BlockingCommands? Do I have a two-stage event processor with an in-request blocking first stage which only gets used for the blocking commands? Does the existence of these three commands mean that the whole idea of applying CQRS is invalid? Should I just pretend they aren't blocking and poll for success in the UI? I'm sure this must come up on other projects; how is it usually handled?
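    For what it's worth, the "pretend they aren't blocking and poll for success" option can be sketched simply: the command goes through the normal pipeline, and the UI waits on the query side until the result materializes or a timeout hits. Purely illustrative, with hypothetical delegates:

        using System;
        using System.Threading;

        static class CommandGateway
        {
            // Sends the command, then polls the read model until it reflects
            // the change or the timeout expires. Returns false on timeout.
            public static bool SendAndWait(Action sendCommand,
                                           Func<bool> readModelShowsResult,
                                           TimeSpan timeout)
            {
                sendCommand();
                DateTime deadline = DateTime.UtcNow + timeout;
                while (DateTime.UtcNow < deadline)
                {
                    if (readModelShowsResult())
                        return true;
                    Thread.Sleep(100); // poll interval
                }
                return false;
            }
        }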


  • An experiment: unlimited free trial

    - by Alex.Davies
    The .NET Demon team have just implemented an experiment that is quite a break from Red Gate's normal business model. Instead of the tool expiring after the trial period, it now continues to work, but with a new message that appears after the tool has saved you a certain amount of time. The rationale is that a user that stops using .NET Demon because the trial expired isn't doing anyone any good. We'd much rather people continue using it forever, as long as everyone that finds it useful and can afford it still pays for it. Hopefully the message appearing is annoying enough to achieve that, but not enough for people to uninstall it. It's true that many companies have tried it before with mixed results, but we have a secret weapon.

    The perfect nag message?

    The neat thing for .NET Demon is that we can easily measure exactly how much time .NET Demon has saved you, in terms of unnecessary project builds that Visual Studio would have done. When you press F5, the message shows you the time saved, and then makes you wait a shorter time before starting your application. Confronted with the truth about how amazing .NET Demon is, who can do anything but buy it? The real secret, though, is that while you wait, .NET Demon gives you entertainment, in the form of a picture of a cute kitten. I've only had time to embed one kitten so far, but the eventual aim is for a random different kitten to appear each time. The psychological health benefits of a dose of kittens in the daily life of the developer are obvious. My only concern is that people will complain after paying for .NET Demon that the kittens are gone.


  • A New Native Silverlight 4 Rich Text Editor Coming Up

    The eagerly awaited release of Silverlight 4.0 is now a fact, and we have great news to share with you. Here at Telerik we are going to have a new addition to our Silverlight suite: a brand-new native Silverlight 4.0 rich text box. RadRichTextBox offers MS Word-like text editing and formatting capabilities which come with unmatched performance, paged and flow layout. The new control utilizes UI virtualization and recycling, an easy-to-use API for accessing/modifying document and layout structure, and more. A CTP of RadRichTextBox is going to be released with the upcoming RadControls for Silverlight 2010.Q1 SP1. The official version is expected to be part of the Q2 2010 release. To better illustrate some of the new features, let's see a short example of the document model in pure XAML. As we said above, the structure of the document is like the documents in WPF. In the ...


  • Tools for game script / storyboard

    - by Pietro Polsinelli
    I am searching for a tool that will help in writing a game script. By "script" I mean the text core of a storyboard - without the drawing drafts, which may or may not be there (yet). What I'm thinking of will let me write a piece of text of the script, define a simplified workflow from that step, and then define the text of the next steps, and so on. Searching online, I found Inform http://inform7.com/ ("A Design System for Interactive Fiction Based on Natural Language"), which in theory is exactly what I am searching for, but trying to use it, it has this model of a space (a dungeon, a library) where you are picking up objects and exploring them. In my case I am designing more a Sims-like game; the flow is entirely different. Considering non-specific software, mind-mapping tools miss the linearity of the process. What I am writing is a directed graph - simply a workflow, but I want to design it in a way that is more text-based than workflow-based. So what I'm doing now is using a text editor, which I'll transform directly into code. Any suggestions?
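    Purely as an illustration of the data shape described above (my sketch, not an existing tool): a directed graph of script steps, each carrying its text plus named transitions to the next steps:

        using System.Collections.Generic;

        class ScriptNode
        {
            public string Id;
            public string Text; // the script fragment for this step

            // Choice label -> next step; several entries make the graph branch.
            public Dictionary<string, ScriptNode> Next =
                new Dictionary<string, ScriptNode>();
        }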


  • How to show or direct a business analyst to do data modelling?

    - by AaronLS
    Our business analysts pushed hard to collect data through a spreadsheet. I am the programmer responsible for importing that data. Usually when they push hard for something like this, I never know how well it will work out until a few weeks later, when I have time assigned to work on the task of programming the import of the data. I have tried to do as much as possible along the way: named ranges, data validations, etc. But I usually don't have time to take a detailed look at all the data and compare it to the destination in the database to determine how well it matches up. A lot of times there will be a little table of items that I somehow have to relate to something else in the database, but there are no natural or business keys present that would allow me to do so. I make the best of this, trying to write something that can compare strings and make a best guess at it, and then go through the effort of creating interfaces for a user to match the imported data to the destination. I feel like if the business analysts were actually creating a data model, they would be forced to think about these relationships, and would have an appreciation for the need for natural or business keys to be part of the spreadsheet for the purposes of smoothly importing the data. The closest they come to business analysis is a big flat list of fields, and that would be fine if it were like any other data dictionary and included data types and relationships, but it isn't. They are just a bunch of names. There is no indication of what type of data they might hold, and it is up to me to guess. When I have pushed for more detail, they say that it is just busy work. How can I explain the importance of data modelling? How can I tell them what it is and how to do it? It feels impossible, because they don't have an appreciation for its importance. They do, however, usually have an interest in helping out in whatever way they can; it's just that this in particular has never gotten a motivated response.
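    As an aside, the "compare strings and make a best guess" step mentioned above is typically some edit-distance measure; a minimal sketch (mine, purely illustrative):

        using System;

        static class FuzzyMatch
        {
            // Levenshtein distance: the number of single-character inserts,
            // deletes and substitutions needed to turn a into b. A lower
            // distance means a better candidate match for an imported row.
            public static int Levenshtein(string a, string b)
            {
                var d = new int[a.Length + 1, b.Length + 1];
                for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
                for (int j = 0; j <= b.Length; j++) d[0, j] = j;
                for (int i = 1; i <= a.Length; i++)
                    for (int j = 1; j <= b.Length; j++)
                    {
                        int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
                        d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                                           d[i - 1, j - 1] + cost);
                    }
                return d[a.Length, b.Length];
            }
        }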


  • CodePlex Daily Summary for Thursday, October 31, 2013

    CodePlex Daily Summary for Thursday, October 31, 2013

    Popular Releases

    TCPEssentials: TCPEssentials 1.0.0.1: Changed the name of method SplitArray to Split. Standardization is important...

    Community Forums NNTP bridge: Community Forums NNTP Bridge V54 (LiveConnect): This is the first release which can be used with the new LiveConnect authentication. Fixes the problem that the authentication will not work after 1 hour. Also a logfile will now be stored in "%AppData%\Community\CommunityForumsNNTPServer". If you have any problems please feel free to send me the file "LogFile.txt".

    WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.9 binaries: Fixed issue with ICollectionView containing null values (AutoFilter issue).

    Community TFS Build Extensions: October 2013: The October 2013 release contains Scripts, a new addition to our delivery. These are a growing library of PowerShell scripts to use with VS2013. See our documentation for more on scripting. Includes: VS2010 Activities (target .NET 4.0); VS2012 Activities (target .NET 4.5); VS2013 Activities (target .NET 4.5.1); Community TFS Build Manager VS2012; Community TFS Build Manager VS2013. The Community TFS Build Managers for VS2010, 2012 and 2013 can also be found in the Visual Studio Gallery where upda...

    SuperSocket, an extensible socket server framework: SuperSocket 1.6 stable: Changes included in this release: process level isolation; SuperSocket ServerManager (includes server and client); connect to client from server side initiatively; client certificate validation; new configuration attributes "textEncoding", "defaultCulture", and "storeLocation" (certificate node); many bug fixes http://docs.supersocket.net/v1-6/en-US/New-Features-and-Breaking-Changes

    BarbaTunnel: BarbaTunnel 8.1: Check Version History for more information about this release.

    Mugen MVVM Toolkit: Mugen MVVM Toolkit 2.1: v2.1: Added the 'Should' class instead of the 'Validate' class. The 'Validate' class is now obsolete. Added 'Toolkit.Annotations' to support the Mugen MVVM Toolkit ReSharper plugin. Updated JetBrains annotations within the project. Added the 'GlobalSettings.DefaultActivationPolicy' property to represent the default activation policy. Removed the 'GetSettings' method from the 'ViewModelBase' class. Instead of it, the 'GlobalSettings.DefaultViewModelSettings' property is used. Updated...

    NAudio: NAudio 1.7: Full release notes available at http://mark-dot-net.blogspot.co.uk/2013/10/naudio-17-release-notes.html

    Layered Architecture Solution Guidance (LASG): LASG 1.0.1.0 for Visual Studio 2012: PRE-REQUISITES: Open GAX (please install the Oct 4, 2012 version); Microsoft® System CLR Types for Microsoft® SQL Server® 2012; Microsoft® SQL Server® 2012 Shared Management Objects; Microsoft Enterprise Library 6.0 (for the generated code); Windows Azure SDK (for layered cloud applications); Silverlight 5 SDK (for Silverlight applications). THE RELEASE: This release only works on Visual Studio 2012. Known issue: if you choose the Database project, the solution unfolding time will be slow....

    DirectX Tool Kit: October 2013: October 28, 2013: updated for Visual Studio 2013 and Windows 8.1 SDK RTM; added DGSLEffect, DGSLEffectFactory, VertexPositionNormalTangentColorTexture, and VertexPositionNormalTangentColorTextureSkinning; model loading and effect factories support loading skinned models; MakeSpriteFont now has a smooth vs. sharp antialiasing option: /sharp; model loading from CMOs now handles UV transforms for texture coordinates; a number of small fixes for EffectFactory; minor code and project cleanup ...

    ExtJS based ASP.NET Controls: FineUI v4.0beta1: +2013-10-28 v4.0 beta1 (the remaining release notes are in Chinese and arrived garbled in the original feed)

    CODE Framework: 4.0.31028.0: See change notes in the documentation section for details on what's new. Note: if you download the class reference help file, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hide all content.

    VidCoder: 1.5.10 Beta: Broke out all the encoder-specific passthrough options into their own dropdown. This should make what they do a bit more clear and clean up the codec list a bit. Updated HandBrake core to SVN 5855.

    Indent Guides for Visual Studio: Indent Guides v14: Important: this release has a separate download for Visual Studio 2010; the first link is for VS 2012 and later. Changed in v14: improved performance when scrolling and editing; fixed potential crash when ReSharper is installed; fixed highlight of guides split around pragmas in C++/C#; restored VS 2010 support as a separate download. Changed in v13: added page width guide lines; added guide highlighting options; fixed guides appearing over collapsed blocks; fixed guides not...

    ASP.net MVC Awesome - jQuery Ajax Helpers: 3.5.3 (mvc5): Version 3.5.3: support for MVC5. Version 3.5.2: fix for setting single value to multivalue controls; datepicker min/max date offset fix; html encoding for keys fix; enable Column.ClientFormatFunc to be a function call that will return a function. Version 3.5.1: fixed html attributes rendering; fixed loading animation rendering; css improvements. Version 3.5: autosize for all popups (can be turned off by calling in js...

    Media Companion: Media Companion MC3.585b: IMDB plot scraping fixed. New: Movie - Rename Folder using Movie Set, option to move ignored articles to end of Movie Set, only for folder renaming. Fixed: Media Companion - if using profiles, config files would blow up in size due to some settings duplicating; Ignore Article of "An" was cutting off the last character of the movie title; if rescraping the title, the sort title changed depending on the 'Move article to end of Sort Title' setting; Movie - if changing Poster source order, list would beco...

    MoreTerra (Terraria World Viewer): MoreTerra 1.11.4: Release 1.11.4. Compatibility: updated to add the new tiles/walls in 1.2.1.

    Emptycanvas: Emptycanvas v21: Several lights are now possible.

    PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.0.7: This is a bug fix release, containing some important fixes! Fixed issue where Session 0 was not detected correctly, resulting in issues when attempting to display a UI when none was allowed. Fixed Installation Prompt and Installation Restart Prompt appearing when deploy mode was non-interactive or silent. Fixed issue where the defer prompt is displayed after force-closing multiple applications. Fixed issue executing blocked app execution dialog from UNC path (executed instead from local tempo...

    BlackJumboDog: Ver5.9.7: 2013.10.24 Ver5.9.7 (the release notes are in Japanese and arrived garbled in the original feed)

    New Projects

    BF4Rcon.NET: BF4Rcon.NET is a .NET library for Battlefield 4's remote server administration interface.
    Command Processor: Command processor built on the "Command Query Responsibility" principle.
    Dqyt.sk: A solution for the sk project.
    Edit Visual Studio LoadTest Results' Name and Description: A simple add-in for Visual Studio 2012 and Visual Studio 2010 which enables a LoadTest user to give a custom name and description to their load tests.
    Grauers Set Metadata Properties: Use this to automatically set ISSUE, APPROVED BY and APPROVED DATE.
    IkeCode WP7.1+ Toolkit: IkeCode WP7.1+ Toolkit is an initiative to create methods that facilitate the development of applications.
    MongoDAO: MongoDB C#.
    NB_Store Payment On Delivery Gateway: NB_Store Payment On Delivery Gateway allows you to add "Payment On Delivery" to your NB_Store webshop on DNN.
    occupie: // to do
    Office and PDF API for .NET Components: As an independent .NET component, Spire.Office doesn't need Microsoft Office to be installed on the system. It supports C# Word, C# Excel and C# PDF.
    SharePoint Workflow Manager | Bulk Start or Terminate workflows: SharePoint Workflow Manager allows you to bulk start or terminate SharePoint workflows on all items in a list.
    System App: The System App is a sleek, smart and open-source IT mapping and monitoring tool by Zalando.
    TCPEssentials: TCPEssentials is a C# library that helps simplify the task of managing TCP connections, and also gives you a few tools that may be useful.
    Wangls: (description garbled in the original feed)
    WindowSMART: WindowSMART 2013 and Home Server SMART 2013 are powerful hard disk and SSD health monitoring, reporting and alerting tools for Windows.

    Read the article

  • How to get a Toshiba L505 to boot USB or CD

    - by ShroudedCloud
    OK, I have a Toshiba L505 (not sure of the extended model number) that picked up a virus on Windows and will no longer boot into it, so I'm trying to revive it with Ubuntu. The problem is, when I tell it to boot a USB image of 12.04 32- or 64-bit, 13.04 64-bit, or elementary Luna 32-bit, it gives me a screen with a copyright from 2000 for Intel, invariably spits out a "media not recognized" type of error, and then says PXE-ROM is exiting. Well, that's annoying. So I went in with a CD (12.04 x86_64, having tried 32-bit in the past as well). Boot menu, select, it starts running; I see the Ubuntu loading screen and it's going well... until it isn't. Again, invariably, it fails. The CD drive ceases spinning at around the same time in each attempt, and then the laptop stops doing everything altogether (at least, everything spins down and it goes quiet). As far as I can tell, it has nothing to do with which function is being loaded from the CD at the time (because that part varies). I'd love to be able to boot from USB (since that will be all but required going forward), but getting the CD to work would be wondrous too. Does anyone have any ideas of where I can go from here to try to fix this? My friends and I have turned up nothing.

    Read the article

  • Webcam doesn't work in Ubuntu 12.10

    - by Kzhi
    I have a Gembird CAM68UT. On Ubuntu 12.10 it shows a black screen in Cheese and guvcview. I tested it in Windows 7, where it works fine. Here's what I found: it is a UVC-compliant camera, which I checked on the site: 18ec:3299 USB 2.0 PC Camera (model number QC3231), ArkMicro. The webcam is reported by lsusb:

        Bus 001 Device 004: ID 18ec:3299 Arkmicro Technologies Inc.

    Here is the output of dmesg | tail:

        uvcvideo: Found UVC 1.00 device USB2.0 PC CAMERA (18ec:3299)
        uvcvideo: UVC non compliance - GET_DEF(PROBE) not supported. Enabling workaround.
        input: USB2.0 PC CAMERA as /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.5/1-1.5:1.0/input/input17
        usbcore: registered new interface driver uvcvideo
        USB Video Class driver (1.1.1)

    When I run Cheese (or guvcview), this is what I get in the terminal:

        libv4l2: error turning on stream: No space left on device
        (cheese:11797): cheese-WARNING **: Internal data flow error.

    I tried it on different USB ports with the same results. The webcam's microphone works; I can record audio with it. Guys, any thoughts on what can be done to make it work?

    Read the article

  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing a Maya animation into Unity. I set up a simple cylinder with two bones and an IK handle, and made a simple animation where the cylinder bends and returns to straight over 24 frames. Following that, I selected everything and baked it all: the bones, the IK, the animation (by selecting everything in the graph editor), and even the cylinder. I saved the scene and then selected all and exported as FBX with animation and bake checked. In Unity, the preview shows the animation after import, and when I load the model into the scene and press play (after assigning the controller), I can see the animation too. But when I try to script it and control the animation, nothing happens. As a test, I tried the following in the Update method:

        if (animation.isPlaying)
            Debug.Log("Animation Works");
        else
            Debug.Log("Animation not working");

    Neither message is ever logged. My animation is called "bend", so just as a test I did the following, and nothing happens:

        animation.Play("bend");

    Based on my steps, can you please advise: am I missing something? Do I need to add the controller, or is that an unnecessary step? Did I make a mistake on the Maya side or the Unity side? Thanks for the help.
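    One thing worth checking (offered as a guess, not a confirmed answer): the animation property only drives Unity's legacy Animation component, so the imported clip has to be marked Legacy in the model's rig import settings, and an Animator controller is then unnecessary. A minimal C# sketch under that assumption; the component lookup, the clip name "bend", and the warning text are illustrative:

        using UnityEngine;

        // Sketch (assumes the legacy Animation system): plays a clip by name
        // and warns if the component or clip is missing, which would explain
        // Play("bend") silently doing nothing.
        public class BendPlayer : MonoBehaviour
        {
            void Start()
            {
                Animation anim = GetComponent<Animation>();
                if (anim == null || anim.GetClip("bend") == null)
                {
                    Debug.LogWarning("Missing Animation component or 'bend' clip");
                    return;
                }
                anim.Play("bend");
            }
        }

    If the rig is instead imported as Generic or Humanoid, the clip belongs to the newer Animator system, and playback would go through an AnimatorController state rather than Animation.Play.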

    Read the article

  • Oracle OpenWorld Session: “Business Driven Development with BPM: Lessons from the Real World”

    - by Ajay Khanna
    One of the key values that BPM promises is "business empowerment". The people closest to the processes, who participate in them every day, are the ones who know the most about them. These are the people who run day-to-day operations, triage customer issues, and envision improvements and innovations. It is therefore imperative that when a company decides to use BPM technology to automate its business processes, business people take the driver's seat: BPM is not an IT-only project. The Oracle BPM Suite has been designed with this core tenet of BPM, business empowerment, in mind. The result is the business-user-centered design of Process Composer, which lets business users document their processes, analyze them using simulation, create web forms, specify business rules, and even run them in testing mode using the process player to see whether the designed process meets their needs. This does not mean that IT has no role in the process. In fact, the Oracle BPM Suite has made it very easy for business and IT to collaborate: the same process can be shared among business and IT stakeholders, and each can collaborate to create model-driven, process-based executable applications. A process may need to integrate with multiple systems via various mechanisms, and IT leads the system and data integration effort. IT also helps fine-tune the performance of process applications and ensures that the deployment of a process application meets scalability and failover standards. In this session, we saw Harish Gaur and Satya Narayanan from Oracle demonstrate the roles business and IT play in BPM projects and how the Oracle BPM Suite enables business-IT collaboration to design and automate process-based applications. They also discussed real-life customer stories. Some key takeaways from this session:
    - There are no IT projects, only business initiatives requiring IT support.
    - Identify high-impact, critical processes: they give a better BPM ROI.
    - Identify key metrics to measure process performance.
    - Align business with the IT layer.

    Read the article

  • backface culling error (in world space)

    - by acrilige
    I am writing a simple software renderer. My pipeline has a backface culling stage, but it looks like it has an error (see picture). I perform culling right after the world transformation (is that correct?). (I can't insert a picture in the post because I don't have enough points, so I just uploaded it (cube model): http://imageshack.us/photo/my-images/705/bcerror.png/)

        Vector3F view_dir(0.0f, 0.0f, 1.0f);
        std::vector<Triangle> to_remove;
        for (Triangle &t : m_triangles)
        {
            Vector4F e1 = t.v2 - t.v1;
            Vector4F e2 = t.v3 - t.v1;
            // Face normal from the cross product of the two edges
            Vector3F normal(
                e1.y * e2.z - e1.z * e2.y,
                e1.z * e2.x - e1.x * e2.z,
                e1.x * e2.y - e1.y * e2.x
            );
            normal.Normalize();
            float dot = Dot(view_dir, normal);
            if (dot <= 0)
                to_remove.push_back(t);
        }
        for (Triangle &t : to_remove)
            m_triangles.erase(std::remove(m_triangles.begin(), m_triangles.end(), t), m_triangles.end());

    The camera sits at the origin and points into the screen (right-handed). What is the reason? For a better explanation, I uploaded a picture with cube rotation screenshots: http://imageshack.us/photo/my-images/842/bcmove.png/ UPDATED: The error occurs only when the triangle has a non-zero offset from the origin. UPDATED 2: If I do backface culling in clip space (after transforming all vertices with the view and projection matrices) and just check the z coordinate of the triangle normal, it works perfectly... Can I perform culling RIGHT BEFORE the view/projection transforms? In that case it looks like culling would not depend on the projection, and that's not right?.. UPDATED 3: I found the answer and will post it in two hours, again because of my lack of reputation.
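    For anyone hitting the same wall, the usual explanation (a hedged note, not necessarily the answer the poster found) is that a constant view direction is only valid for an orthographic projection. With a perspective camera the view vector is different for every triangle, which is exactly why the error grows with the triangle's offset from the origin: cull against the vector from the camera to the triangle instead. A minimal C# sketch of the corrected world-space test follows; the question's code is C++, but the math carries over unchanged, and the Vec3 type here is a stand-in:

        // Sketch: perspective-correct backface culling in world space.
        struct Vec3
        {
            public float X, Y, Z;
            public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
            public static Vec3 operator -(Vec3 a, Vec3 b) => new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
            public static float Dot(Vec3 a, Vec3 b) => a.X * b.X + a.Y * b.Y + a.Z * b.Z;
            public static Vec3 Cross(Vec3 a, Vec3 b) =>
                new Vec3(a.Y * b.Z - a.Z * b.Y, a.Z * b.X - a.X * b.Z, a.X * b.Y - a.Y * b.X);
        }

        static class Culling
        {
            // The per-triangle view vector (camera -> triangle) replaces the
            // fixed view_dir; with the camera at the origin this reduces to
            // the triangle position itself.
            public static bool IsBackFacing(Vec3 v1, Vec3 v2, Vec3 v3, Vec3 cameraPos)
            {
                Vec3 normal = Vec3.Cross(v2 - v1, v3 - v1);
                Vec3 toTriangle = v1 - cameraPos;
                return Vec3.Dot(normal, toTriangle) >= 0; // sign depends on winding/handedness
            }
        }

    The comparison's sign still depends on the winding order and handedness conventions, so it may need flipping for a particular renderer.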

    Read the article

  • Generated 3d tree meshes

    - by Jari Komppa
    I did not find a question along these lines yet; correct me if I'm wrong. Trees (and flora in general) are common in games. Due to their nature, they are a good candidate for procedural generation. There's SpeedTree, of course, if you can afford it; as far as I can tell, it doesn't provide the possibility of generating your tree meshes at runtime. Then there's SnappyTree, an online WebGL-based tree generator built on proctree.js, which is some ~500 lines of JavaScript. One could use either of the above (or some other tree generator I haven't stumbled upon) to create a few dozen tree meshes beforehand, or model them from scratch in a 3D modeller, and then randomly mirror/scale them for a few more variants. But I'd rather have a free, linkable tree mesh generator. Possible solutions:
    - Port proctree.js to C++ and deal with the open-source license (it doesn't seem to be GPL, so this could be doable; the author may also be willing to co-operate to make the license even more free).
    - Roll my own based on L-systems (see the sketch below).
    - Don't bother, and just use offline-generated trees.
    - Use some other method I haven't found yet.
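    To make the L-systems option concrete, here is a minimal C# sketch of the string-rewriting core (an illustration only, not a full mesh generator; the rule is a textbook bracketed L-system). The expanded string is then walked by a 3D "turtle" that emits a branch segment for F, rotates for + and -, and pushes/pops its transform for [ and ]; varying the rules, angles and iteration count per tree yields the variants the question asks about.

        using System;
        using System.Collections.Generic;
        using System.Text;

        // Minimal L-system rewriter: expands an axiom by applying
        // production rules for a fixed number of iterations.
        static class LSystem
        {
            public static string Expand(string axiom, Dictionary<char, string> rules, int iterations)
            {
                string current = axiom;
                for (int i = 0; i < iterations; i++)
                {
                    var next = new StringBuilder();
                    foreach (char c in current)
                        next.Append(rules.TryGetValue(c, out var replacement) ? replacement : c.ToString());
                    current = next.ToString();
                }
                return current;
            }

            static void Main()
            {
                // A classic bushy-tree rule; tweak the rule, branch angle and
                // iteration count per tree to get cheap variation.
                var rules = new Dictionary<char, string> { ['F'] = "FF+[+F-F-F]-[-F+F+F]" };
                Console.WriteLine(Expand("F", rules, 2));
            }
        }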

    Read the article

  • Entity Framework and distributed Systems

    - by Dirk Beckmann
    I need some help, or maybe just a hint in the right direction. I've got a system that is separated into two applications: an existing VB.NET desktop client using Entity Framework 5 with the code-first approach, and an ASP.NET Web API client in C# that is about to be refactored. It should be possible to deliver OData. The system and the data model are still evolving, so migrations will happen at undefined intervals. I'm now struggling with how to manage database access on the Web API side. My favoured approach would be to use Entity Framework on both systems, but I'm running into trouble when creating new migrations. Two solutions I've thought about:
    - Shared data-access DLL. The first idea was to separate the data access layer into its own project and reference it from each of the systems. The context would be the same as long as the DLL is up to date in each system; this way, both solutions would be able to create a migration. The main problem is that it is much more complicated to update the Web API system than the client with its ClickOnce updates, and not every migration is important for the Web API. This would cause more update trouble and out-of-sync libraries. (A sketch of this layout follows below.)
    - Database-first on the Web API. The second idea was just to use the database-first approach on the Web API side, but it seems that all annotations are lost on each model update.
    Other solutions involving stored procedures have been discarded because of missing OData support and maintainability concerns. Has anyone run into the same conflicts, or does anyone have advice on how such a problem can be solved?
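    For what it's worth, the shared-DLL option usually takes the shape sketched below (all class and connection-string names are hypothetical; this is one plausible layout, not a prescription): a single class library holds the entities, the DbContext and the migrations, and each host application merely supplies its own connection string.

        using System.Data.Entity; // Entity Framework 5/6

        // Shared data-access assembly, referenced by both the VB.NET desktop
        // client and the ASP.NET Web API project, so there is exactly one
        // code-first model and one migrations history to keep in sync.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class ShopContext : DbContext
        {
            // Each host app defines the "ShopDb" connection string in its
            // own configuration file.
            public ShopContext() : base("name=ShopDb") { }

            public DbSet<Customer> Customers { get; set; }
        }

    The update-skew concern can be softened by letting each host migrate the database to the model it was compiled against at startup, e.g. Database.SetInitializer(new MigrateDatabaseToLatestVersion<ShopContext, Configuration>()), where Configuration is the migrations configuration class in the shared assembly; whether automatic migration at startup fits the deployment policy is a separate judgment call.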

    Read the article

< Previous Page | 359 360 361 362 363 364 365 366 367 368 369 370  | Next Page >