Search Results

Search found 4671 results on 187 pages for 'universal binary'.


  • Building a Data Mart with Pentaho Data Integration Video Review by Diethard Steiner, Packt Publishing

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2014/06/01/building-a-data-mart-with-pentaho-data-integration-video-review-again.aspx The Building a Data Mart with Pentaho Data Integration video by Diethard Steiner from Packt Publishing is more than just a course on how to use Pentaho Data Integration; it also implements and uses the principles of Data Warehousing (I even heard the name of Ralph Kimball in the video). Indeed, a viewer should be familiar with concepts such as the Star Schema and Slowly Changing Dimension types, so before watching this course I suggest skimming through the Data Warehouse concepts (if unfamiliar) or, even better, reading Ralph Kimball's excellent The Data Warehouse Toolkit. By the way, the author expands beyond Pentaho alone to MySQL and MonetDB, which is real icing on the cake! I would even suggest the course should be named 'Building a Data Warehouse with Pentaho'. To successfully complete the course one needs to know some Linux (Ubuntu is used in the course), the vi editor and the Bash shell, though similar requirements would apply on Windows as well. Additionally, knowing some basic SQL would not hurt. As I said, MonetDB is used several times in this course; it seems no more complex than, say, MySQL, but based on what I have read it is very well suited to fast querying of large data volumes thanks to its column store (vertical data storage). I don't see what else could be a barrier; the material is very digestible. On this note, I must add that the author does not cover how to acquire the software, so here is what I found may help: for Pentaho, the free Community Edition is more than anyone needs to learn it, or even to run a POC. MonetDB can be downloaded (builds exist for both Linux and Windows) from http://goo.gl/FYxMy0 (see the appropriate link on the left). The author seems to be using Eclipse to run SQL code; one can get it from http://goo.gl/5CcuN. To create or otherwise edit database entities and schemas, one can use a universal tool called SQuirreL; get it from http://squirrel-sql.sourceforge.net. Next, I must confess Diethard is very knowledgeable in what he does and beyond. There is some accent to get used to, especially if one's mother tongue is English, but I got over it within a few chapters. I liked the rate at which the material is presented; it made me feel I got value for every second. My impressions are: Pentaho is an awesome ETL offering and very much worth learning (I am an ETL fan and a heavy user of SSIS); MonetDB is nice, and it tickles my fancy to get to know it better; and Data Warehousing with the traditional tools still rocks, despite all the Big Data offerings (Hive, Sqoop, Pig on Hadoop). Chapters 2 to 6 were the most fun for me, with chapter 8 being the most difficult. In closing, I highly recommend this video to anyone who needs to grasp Pentaho concepts quickly; likewise, the course is very well suited for any developer on a "supposed to be done yesterday" type of project. It is aimed at beginner to intermediate level ETL/DW developers. To learn more about Data Warehousing and Pentaho, I recommend the 5-star Pentaho Data Integration 4 Cookbook. Enjoy it! Disclaimer: I received this video from the publisher for the purpose of a public review.

    Read the article

  • How to create a PPA for C++ program?

    - by piotr
    My question: I have a C++/gtkmm project created with NetBeans. How do I make a package for a PPA from this? I have created the target file structure (*.desktop file, icon file, UI glade files). The binary goes to /opt/extras.ubuntu.com/myagenda/bin/myagenda. There is also a folder of glade files that must go to /opt/extras.ubuntu.com/myagenda/bin/myagenda/ui. The desktop file goes to /usr/share/applications/myagenda.desktop. The icon goes to /usr/share/icons/hicolor/scalable/apps/myagenda.svg. As you can see, there are really only a few files. Now, how do I manage all of this to create a package on a PPA that knows where and how to put these files?

        +-- opt
        |   +-- extras.ubuntu.com
        |       +-- myagenda
        |           +-- bin
        |           |   +-- myagenda
        |           +-- ui
        |               +-- item_btn_delete.png
        |               +-- item_btn_edit.png
        |               +-- myagenda.png
        |               +-- myagenda.svg
        |               +-- reminder.png
        |               +-- ui.glade
        +-- usr
            +-- share
                +-- applications
                |   +-- myagenda.desktop
                +-- icons
                    +-- hicolor
                        +-- scalable
                            +-- apps
                                +-- myagenda.svg

    Update: I created an install file in the debian directory with these targets:

        data/myagenda /opt/extras.ubuntu/com/myagenda/bin
        data/ui/* /opt/extras.ubuntu/com/myagenda/ui
        data/myagenda.desktop /usr/share/applications
        data/myagenda.svg /usr/share/icons/hicolor/scalable/apps

    After dpkg-buildpackage it builds, but for the amd64 architecture. Now I am trying to change that to i386.
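
    Since the goal is a PPA, the usual route is to upload a source package and let Launchpad's builders produce the binaries for each architecture. Below is only a hedged sketch of that flow; the PPA name, package version and changes-file name are placeholders rather than values from the question, and it assumes the debian/ directory above is already in place.

        # In debian/control, let the builders choose the architecture rather than
        # hard-coding amd64:
        #   Architecture: any

        # Build a signed source package from the project root
        debuild -S -sa

        # Upload it to the PPA; Launchpad then builds amd64 and i386 for you
        dput ppa:your-launchpad-id/myagenda ../myagenda_0.1-1_source.changes

        # For a purely local test build targeting i386 (needs an i386 toolchain or
        # chroot), dpkg-buildpackage accepts a host-architecture flag:
        dpkg-buildpackage -us -uc -ai386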

    Read the article

  • Project structure: where to put business logic

    - by Mister Smith
    First of all, I'm not asking where business logic belongs. That has been asked before, and most answers I've read agree that it belongs in the model: Where to put business logic in MVC design? How much business logic should be allowed to exist in the controller layer? How accurate is "Business logic should be in a service, not in a model"? Why put the business logic in the model? What happens when I have multiple types of storage? However, people disagree about how this logic should be distributed across classes. There seem to be three major currents of thought: fat model with business logic inside entity classes; anemic model and business logic in "Service" classes; it depends. I find all of them problematic. The first option is what most Fowlerites stick to. The problem with a fat model is that sometimes a business logic function is not related to only one class and instead uses a bunch of other classes. If, for example, we are developing a web store, there should be a function that calculates an order's total. We could think of putting this function inside the Order class, but what actually happens is that the logic needs to use different classes, not only data contained in the Order class, but also in the User class, the Session class, and maybe the Tax class, Country class, or Giftcard, Payment, etc. Some of these classes could be composed inside the Order class, but some others not. Sorry if the example is not very good, but I hope you understand what I mean. Putting such a function inside the Order class would break the single responsibility principle, adding unnecessary dependencies. The business logic would be scattered across entity classes, making it hard to find. The second option is the one I usually follow, but after many projects I'm still in doubt about how to name the class or classes holding the business logic. In my company we usually develop apps with offline capabilities. The user is able to perform entire transactions offline, so all validation and business rules should be implemented in the client, and then there's usually a background thread that syncs with the server. So we usually have the following classes/packages in every project: Data model (DTOs), Data Access Layer (Persistence), and Web Services layer (usually one class per WS, and one method per WS method). Now for the business logic, what is the standard approach? A single class holding all the logic? Multiple classes? (If so, what criteria are used to distribute the logic across them?) And how should we name them? FooManager? FooService? (I know the last one is common, but in our case it is bad naming because the WS layer usually has classes named FooWebService.) The third option is probably the right one, but it is also devoid of any useful information. To sum up: I don't like the first approach, but I accept that I might have been unable to fully understand the Zen of it. So if you advocate fat models as the only and universal solution, you are welcome to post links explaining how to do it the right way. I'd like to know the standard design and naming conventions for the second approach in OO languages, class names and package structure in particular. It would also be helpful if you could include links to open source projects showing how it is done. Thanks in advance.

    Read the article

  • UEFI Dual-Boot - Ubuntu 12.04.3 + Windows 8.1 (One GPT HDD)

    - by swafbrother
    Hello, I'm having trouble setting up a dual boot (Ubuntu 12.04 LTS and Windows 8.1) on my ASUS K55VM laptop's 500 GB hard drive. I was mostly following tutorials, but at some point something went wrong. Up to now, I have followed these steps: I formatted my HDD as GPT. I clean-installed Windows 8.1. I didn't prevent Windows from choosing the partitions to use, and it created these partitions: a Recovery partition (sda1), an EFI System Partition (sda2), a Microsoft Reserved Partition (sda3), and a Windows data partition or C drive (sda4). I shrank the Windows data partition via Windows' Disk Management. I made a bootable USB stick with Ubuntu 12.04 LTS from the ISO, using Universal USB Installer. I created these partitions for Ubuntu: a boot partition mounted at /boot (sda5), a root partition mounted at / (sda6), and a swap partition (sda7). In "Device for boot loader installation" I chose /dev/sda. Then, when I rebooted, it went straight into Ubuntu. So I installed Boot-Repair and clicked on Recommended Repair. It automatically did its job without asking for anything. I rebooted and GRUB showed up, with a lot of options. At this point I had a decent dual-boot setup; Ubuntu and both Windows entries worked fine: Ubuntu, Windows Boot UEFI Loader, and Windows UEFI bkpbootmgfw.efi. I executed this command: sudo grub-install --force /dev/sda5. Then I tried to make Windows 8.1's Boot Manager the main boot manager, so that I could choose which OS to boot into from a menu. I downloaded EasyBCD on Windows. It showed 2 Ubuntu entries and 1 Windows entry. I went into the BCD Deployment tab and clicked on Write MBR. At this point, I went into the BIOS and made Windows Boot Manager the first boot option. When I rebooted, I got a black screen with the message "efidisk read error", and then (I guess) it switched to the next boot option, which is Ubuntu, resulting in GRUB showing up. From GRUB, the Ubuntu entry works and so do both Windows entries. If I choose Ubuntu, it boots into Ubuntu normally. But if I choose Windows, it goes into Windows' boot manager. In Windows' boot manager, a menu shows up: Ubuntu, Ubuntu, Windows 8.1. If I choose Windows, it boots into Windows without any problem. If I choose Ubuntu, it boots back into GRUB as described above. Here's my boot info summary: http://paste.ubuntu.com/6698171/ Windows Boot Manager is clearly not working as expected; I can't boot into it directly and I can't boot into it from the BIOS either (efidisk read error again). If I want to boot into Windows I need to boot into GRUB first, which is the opposite of what I wanted. I need help at this point. What is the best thing I can do? Is there a more reliable and/or simpler way of accomplishing a satisfying dual boot for this situation? Can someone provide a way to go back to the state I had right after running Boot-Repair, when I had a more efficient dual-boot setup? If only I could undo what I did with EasyBCD and skip Windows' Boot Menu... Can someone provide a way to fix this mess? Thanks in advance, and sorry for the length of this; I wanted to be exhaustive.
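
    On a UEFI system the firmware boot order is what decides whether the Windows Boot Manager or GRUB comes up first, and it can be inspected and changed from within Ubuntu. The commands below are not from the question and the entry numbers are placeholders; this is only a sketch, assuming the efibootmgr package is installed and the firmware actually exposes both entries.

        # List the firmware boot entries and the current BootOrder
        sudo efibootmgr -v

        # Example only: put the Windows Boot Manager entry (say Boot0000) ahead of
        # the ubuntu entry (say Boot0001); substitute the numbers printed above
        sudo efibootmgr -o 0000,0001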

    Read the article

  • Bridging Two Worlds: Big Data and Enterprise Data

    - by Dain C. Hansen
    The big data world is all in vogue in today’s IT conversations. It’s a world of volume, velocity, variety – tantalizing us with its untapped potential. It’s a world of transformational, game-changing technologies that have already begun to alter the information management landscape. One of the reasons that big data is so compelling is that it’s a universal challenge that impacts every one of us. Whether it is healthcare, financial, manufacturing, government, or retail - big data presents a pressing problem for many industries: how can so much information be processed so quickly to deliver the ‘bigger’ picture? With big data we’re tapping into new information that didn’t exist before: social data, weblogs, sensor data, complex content, and more. What also makes big data revolutionary is that it turns traditional information architecture on its head, putting into question commonly accepted notions of where and how data should be aggregated, processed, analyzed, and stored. This is where Hadoop and NoSQL come in – new technologies which solve new problems for managing unstructured data. And now for some worst practices that I'd recommend you please not follow: Worst Practice Lesson 1: Throw away everything that you already know about data management and data integration tools, and start completely over. One shouldn’t forget what’s already running in today’s IT. Today’s Business Analytics, Data Warehouses, Business Applications (ERP, CRM, SCM, HCM), and even many social, mobile, and cloud applications still rely almost exclusively on structured data – or what we’d like to call enterprise data. This dilemma is what today’s IT leaders are up against: what are the best ways to bridge enterprise data with big data? And what are the best strategies for dealing with the complexities of these two unique worlds? Worst Practice Lesson 2: Throw away all of your existing business applications … because they don’t run on big data yet. Bridging the two worlds of big data and enterprise data means considering solutions that are complete, based on emerging Hadoop technologies (as well as traditional ones), and are poised for success through integrated design tools and integrated platforms that connect to your existing business applications and support real-time analytics. Leveraging these types of best practices translates to improved productivity, lowered TCO, IT optimization, and better business insights. Worst Practice Lesson 3: Separate out [and keep separate] your big data sandboxes from all the current enterprise IT systems. Don’t mix sand among playgrounds. We didn't tell you that you wouldn't get dirty doing this. Correlation between the two worlds is key.
The real advantage to analyzing big data comes when you can correlate it with the existing data in your data warehouse or your current applications to make sense of the larger patterns. If you have not followed these worst practices 1-3 then you qualify for the first step of our journey: bridging the two worlds of enterprise data and big data. Over the next several weeks we’ll be discussing this topic along with several others around big data as it relates to data integration. We welcome you to join us in the conversation by following us on twitter on #BridgingBigData or download our latest white paper and resource kit: Big Data and Enterprise Data: Bridging Two Worlds.

    Read the article

  • CodePlex Daily Summary for Friday, November 01, 2013

    CodePlex Daily Summary for Friday, November 01, 2013Popular ReleasesFamily Tree Analyzer: Version 3.0.3.0-beta test3: Count Isle of Man as England & Wales for Census Lost Cousins text was preventing drag n drop of files Lost Cousins no Countries form wont open more than one copy Changed sort order of Lost Cousins buttons to match report which matches Lost Cousins website UK Census date verification is only applied to UK facts Counts duplicate Lost Cousins facts Added report for Lost Cousins facts but no census fact Added Census Duplicate and Census Missing Location reports BEF & AFT residence facts were coun...uComponents: uComponents v6.0.0: This release of uComponents will compile against and support the new API in Umbraco v6.1.0. What's new in uComponents v6.0.0? New features / Resolved issuesThe following workitems have been implemented and/or resolved: 14781 14805 14808 14818 14854 14827 14868 14859 14790 14853 14790 DataType Grid 14788 14810 Drag & Drop support for rows Support for 11 new datatypes (to a total of 20 datatypes): Color Picker Dropdown Checklist Enum Checboxlist Enum Dropdownl...Local History for Visual Studio: Local History - 1.0: New - Support for Visual Studio 2013SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution: SmartStore.NET 1.2.1: New FeaturesAdded option Limit to current basket subtotal to HadSpentAmount discount rule Items in product lists can be labelled as NEW for a configurable period of time Product templates can optionally display a discount sign when discounts were applied Added the ability to set multiple favicons depending on stores and/or themes Plugin management: multiple plugins can now be (un)installed in one go Added a field for the HTML body id to store entity (Developer) New property 'Extra...Community Forums NNTP bridge: Community Forums NNTP Bridge V54 (LiveConnect): This is the first release which can be used with the new LiveConnect authentication. Fixes the problem that the authentication will not work after 1 hour. Also a logfile will now be stored in "%AppData%\Community\CommunityForumsNNTPServer". If you have any problems please feel free to sent me the file "LogFile.txt".WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.9 binaries: Fixed issue with ICollectionView containg null values (AutoFilter issue)Community TFS Build Extensions: October 2013: The October 2013 release contains Scripts - a new addition to our delivery. These are a growing library of PowerShell scripts to use with VS2013. See our documentation for more on scripting. VS2010 Activities(target .NET 4.0) VS2012 Activities (target .NET 4.5) VS2013 Activities (target .NET 4.5.1) Community TFS Build Manager VS2012 Community TFS Build Manager VS2013 The Community TFS Build Managers for VS2010, 2012 and 2013 can also be found in the Visual Studio Gallery where upda...SuperSocket, an extensible socket server framework: SuperSocket 1.6 stable: Changes included in this release: Process level isolation SuperSocket ServerManager (include server and client) Connect to client from server side initiatively Client certificate validation New configuration attributes "textEncoding", "defaultCulture", and "storeLocation" (certificate node) Many bug fixes http://docs.supersocket.net/v1-6/en-US/New-Features-and-Breaking-ChangesBarbaTunnel: BarbaTunnel 8.1: Check Version History for more information about this release.Collections2: Collections2 v.1.1: Collection2 v.1.1 is the 1st developer and user release of the Collections2 library. 
The library contains 3 classes TwoWayDictionary<S,T>, TwoWayDictionary<T>, and TwoWayDictException. Binaries are available for .NET 2.0-3.5 (Binary Collections2NET2.0 v.1.1.1.1) and .NET 4.0 (Binary Collections2NET4 v.1.1.1.1). Also available are a source code zip file (Source Collections2NET4 v.1.1.1.1.zip) containing a solution file, and, example console and library project files for Visual Studio 2010. ...Mugen MVVM Toolkit: Mugen MVVM Toolkit 2.1: v 2.1 Added the 'Should' class instead of the 'Validate' class. The 'Validate' class is now obsolete. Added 'Toolkit.Annotations' to support the Mugen MVVM Toolkit ReSharper plugin. Updated JetBrains annotations within the project. Added the 'GlobalSettings.DefaultActivationPolicy' property to represent the default activation policy. Removed the 'GetSettings' method from the 'ViewModelBase' class. Instead of it, the 'GlobalSettings.DefaultViewModelSettings' property is used. Updated...NAudio: NAudio 1.7: full release notes available at http://mark-dot-net.blogspot.co.uk/2013/10/naudio-17-release-notes.htmlDirectX Tool Kit: October 2013: October 28, 2013 Updated for Visual Studio 2013 and Windows 8.1 SDK RTM Added DGSLEffect, DGSLEffectFactory, VertexPositionNormalTangentColorTexture, and VertexPositionNormalTangentColorTextureSkinning Model loading and effect factories support loading skinned models MakeSpriteFont now has a smooth vs. sharp antialiasing option: /sharp Model loading from CMOs now handles UV transforms for texture coordinates A number of small fixes for EffectFactory Minor code and project cleanup ...ExtJS based ASP.NET Controls: FineUI v4.0beta1: +2013-10-28 v4.0 beta1 +?????Collapsed???????????????。 -????:window/group_panel.aspx??,???????,???????,?????????。 +??????SelectedNodeIDArray???????????????。 -????:tree/checkbox/tree_checkall.aspx??,?????,?????,????????????。 -??TimerPicker???????(????、????ing)。 -??????????????????????(???)。 -?????????????,??type=text/css(??~`)。 -MsgTarget???MessageTarget,???None。 -FormOffsetRight?????20px??5px。 -?Web.config?PageManager??FormLabelAlign???。 -ToolbarPosition??Left/Right。 -??Web.conf...CODE Framework: 4.0.31028.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.VidCoder: 1.5.10 Beta: Broke out all the encoder-specific passthrough options into their own dropdown. This should make what they do a bit more clear and clean up the codec list a bit. Updated HandBrake core to SVN 5855.Indent Guides for Visual Studio: Indent Guides v14: ImportantThis release has a separate download for Visual Studio 2010. The first link is for VS 2012 and later. 
Version History Changed in v14 Improved performance when scrolling and editing Fixed potential crash when Resharper is installed Fixed highlight of guides split around pragmas in C++/C# Restored VS 2010 support as a separate download Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not...ASP.net MVC Awesome - jQuery Ajax Helpers: 3.5.3 (mvc5): version 3.5.3 - support for mvc5 version 3.5.2 - fix for setting single value to multivalue controls - datepicker min max date offset fix - html encoding for keys fix - enable Column.ClientFormatFunc to be a function call that will return a function version 3.5.1 ========================== - fixed html attributes rendering - fixed loading animation rendering - css improvements version 3.5 ========================== - autosize for all popups ( can be turned off by calling in js...Media Companion: Media Companion MC3.585b: IMDB plot scraping Fixed. New* Movie - Rename Folder using Movie Set, option to move ignored articles to end of Movie Set, only for folder renaming. Fixed* Media Companion - Fixed if using profiles, config files would blown up in size due to some settings duplicating. * Ignore Article of An was cutting of last character of movie title. * If Rescraping title, sort title changed depending on 'Move article to end of Sort Title' setting. * Movie - If changing Poster source order, list would beco...Custom Controls for TFS Work Item Tracking: 1.2.0.0: This is a maintenance release that adds support for TFS 2013. The control pack contains the following work item custom controls: MultiValueControl: a ComboBox to accept and show multiple values for a field by showing a list of checkboxes. More details here: MultiValue Control ScreenshotControl: a simple control (button) to capture a screenshot as a work item attachment. More details here: Screenshot controls. AttachmentsControl: this control cab be used as an alternative to the standar...New Projects3DChemFold: TODOAcc Oct2013 Liq2 MVC3EF5: Proyecto hecho en mvc3 con Entity Framework 5 para liquidación de sueldosAssociated List Box Control: A server side control combining list boxes and buttons to move items between lists. This has custom list boxes developed from a user control in ASP.Net website.AutoSPCUInstaller: Install the SharePoint Cumulative Update without pain. 
Use the AutoSPCUInstaller for a clean an easy installation process.Catalano Secure Delete: Delete your privacy data permanently.Duplica: Duplica is console (Command Prompt) tool aimed to duplicate existing file into a desired destination.High performance xll add-ins for Excel: A modern C++ interface to the Microsoft Excel SDK.Home Server SMART Classic: Home Server SMART is a hard disk and SSD health monitoring, reporting and alerting tool for Windows Home Server.Message Locked Encryption Library for .NET: A MLE (Message Locked Encryption) library developed for the .NET FrameworkMuServer: Web server mainly for streaming media to web browsing capable devices.MVA: Code examples from the book Programming in C# Exam Ref 70-483.NETCommon: This is only common library for NETonioncry: onion makes the aliens cry :(sfast: cms、??????、??????SPWrap - TileBoard Webpart: SPWrap Suite: Contains web parts that are wrapping existing libraries and features into reusable & fully configurable SharePoint components.TFS Build Launcher: A single command line utility to launch your TFS builds, allowing you to pass values into build process parameters.UsefulTools: A collection of useful extension methods and utility classes for .NET in general and WPFVirtual Book Libreria Online: Proyecto Libreria entity frame 5.0XML File Editor: XML Editor

    Read the article

  • What build tools do not depend on java (or Ruby)?

    - by Mohamed Meligy
    I'm wondering which generic build tools out there include their binary runtimes and do not depend on another environment that isn't shipped with them. For example, ANT requires Java, Rake requires Ruby, etc. It would also be great to hear about target-platform-agnostic tools, where I'd just give whatever command for building, whatever command for testing, etc., and could then define my artifacts in CI or so. I would see something like that being useful for building .NET projects (say, on both Windows .NET and Mono), and Node JS projects especially. I do not want to install Java and/or Ruby if what I want is a .NET build or a Node JS build. This is a bit of a general-awareness question, not an exact problem I'm facing; that's why it's here and not on StackOverflow. Update: To explain a bit more, what I'm after is the build script that would run MSBuild for compiling, for example (in .NET; maybe several Node/NPM commands for Node, etc.), and then handle the rest of the build/test steps, instead of setting all of these up in MSBuild (again, in the .NET case; I'm also wondering whether there is an equivalent story in Node).
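
    Since the ask boils down to "a driver that just runs build and test commands", one low-tech, dependency-free option is a plain shell script checked in next to the project and invoked by CI. The sketch below is only an illustration under the assumption of a Mono/.NET solution plus a Node package living in the same repository; the solution name and targets are placeholders, not prescribed by any tool.

        #!/bin/bash
        set -e  # stop on the first failing step

        # Compile the .NET solution (msbuild on Windows, xbuild under Mono)
        xbuild MySolution.sln /p:Configuration=Release

        # Install dependencies and run the tests for the Node JS part
        npm install
        npm test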

    Read the article

  • Improving WIF’s Claims-based Authorization - Part 1

    - by Your DisplayName here!
    As mentioned in my last post, I made several additions to WIF’s built-in authorization infrastructure to make it more flexible and easier to use. The foundation for all this work is that you have to be able to directly call the registered ClaimsAuthorizationManager. The following snippet is the universal way to get to the WIF configuration that is currently in effect:

        public static ServiceConfiguration ServiceConfiguration
        {
            get
            {
                if (OperationContext.Current == null)
                {
                    // no WCF
                    return FederatedAuthentication.ServiceConfiguration;
                }

                // search message property
                if (OperationContext.Current.IncomingMessageProperties.ContainsKey("ServiceConfiguration"))
                {
                    var configuration = OperationContext.Current.IncomingMessageProperties["ServiceConfiguration"] as ServiceConfiguration;
                    if (configuration != null)
                    {
                        return configuration;
                    }
                }

                // return configuration from configuration file
                return new ServiceConfiguration();
            }
        }

    From here you can grab ServiceConfiguration.ClaimsAuthorizationManager, which gives you direct access to the CheckAccess method (and thus control over claim types and values). I then created the following wrapper methods:

        public static bool CheckAccess(string resource, string action)
        {
            return CheckAccess(resource, action, Thread.CurrentPrincipal as IClaimsPrincipal);
        }

        public static bool CheckAccess(string resource, string action, IClaimsPrincipal principal)
        {
            var context = new AuthorizationContext(principal, resource, action);
            return AuthorizationManager.CheckAccess(context);
        }

        public static bool CheckAccess(Collection<Claim> actions, Collection<Claim> resources)
        {
            return CheckAccess(new AuthorizationContext(
                Thread.CurrentPrincipal.AsClaimsPrincipal(), resources, actions));
        }

        public static bool CheckAccess(AuthorizationContext context)
        {
            return AuthorizationManager.CheckAccess(context);
        }

    I also created the same set of methods, but called DemandAccess. They internally use CheckAccess and throw a SecurityException when false is returned. All the code is part of Thinktecture.IdentityModel on Codeplex – or via NuGet (Install-Package Thinktecture.IdentityModel).

    Read the article

  • How to create (via installer script) a task that will install my bash script so it runs on DE startup?

    - by MountainX
    I've been reading for the last couple of hours about Upstart, .xinitrc, .xsessions, rc.local, /etc/init.d/, /etc/xdg/autostart, @reboot in crontab and so many other things that I'm totally confused! Here is my bash script. It should start after the desktop environment is started and it should continue to run at all times until logout/shutdown. It should start again on reboot. Any time the DE is running, it should run.

        #!/bin/bash
        while true; do
            if [[ -s ~/.updateNotification.txt ]]; then
                read MSG < ~/.updateNotification.txt
                kdialog --title 'The software has been updated' --msgbox "$MSG"
                cat /dev/null > ~/.updateNotification.txt
            fi
            sleep 3600
        done
        exit 0

    I know zero about using Upstart, but I understand that Upstart is one way to handle this. I'll consider other approaches, but most of the things I've been reading about are too complex for me. Furthermore, I can't figure out which approach will meet my requirements (which I'll detail below). There are two steps in my question: how to automatically start the script above, as described above; and how to "install" that Upstart task via a bash script (i.e., my "installer"). I assume (or hope) that step 2 is almost trivial once I understand step 1. I have to support all flavors of Ubuntu desktops, therefore the kdialog call above will be replaced. I'm considering easybashgui for this. (Or I could use zenity on GNOME DEs.) My requirements are: The setup process (installation) must be done via a bash script. I cannot use the GUI method described in the Ubuntu doc AddingProgramToSessionStartup, for example; I must be able to script/automate the setup (install) process using bash. Currently, it is as simple as having the bash installer script copy the above script into /home/$USER/.kde/Autostart/ The setup process must be universal across Ubuntu derivatives including Unity, KDE and GNOME desktops. The same setup script (installer) should run on Linux Mint, Kubuntu, Xubuntu (basically any flavor of Ubuntu and major derivatives such as Linux Mint). For example, we cannot continue to put a script file in /home/$USER/.kde/Autostart/ because that exists only on KDE. The above script should work on each of the limited flavors we use, hence our interest in using easybashgui instead of kdialog or zenity. See below. The installed monitoring script should only be started after the desktop is started, since it will display a GUI message to the user if an update is found. The monitoring script (above) should run without root privileges, of course, but the installer (bash script) can be run as root. I'm not a real developer or a sysadmin. This is a part-time volunteer thing for me, so it needs to be easy/simple. I can write bash scripts and I can program a little, but I know nothing about Upstart or systemd, for example. And, unfortunately, my job doesn't give me time to become an expert on init systems or much of anything else related to development and sysadmin. So I have to stick with simple solutions. The easybashgui version of the script might look like this:

        #!/bin/bash
        source easybashgui
        while true; do
            if [[ -s ~/.updateNotification.txt ]]; then
                read MSG < ~/.updateNotification.txt
                message "$MSG"
                cat /dev/null > ~/.updateNotification.txt
            fi
            sleep 3600
        done
        exit 0
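
    One desktop-agnostic way to cover the "start after the DE, on every flavor" requirement is the XDG autostart mechanism: a .desktop file dropped into /etc/xdg/autostart is picked up by KDE, GNOME, Unity and Xfce sessions alike, and the program it launches runs as the logged-in user rather than root. The snippet below is only a sketch of what the installer could do; the file names and paths are placeholders, not taken from the question, and it assumes the monitoring script above ships alongside the installer as update-monitor.sh.

        # Run as root from the installer script.

        # Install the monitoring script somewhere on every user's PATH
        install -m 755 update-monitor.sh /usr/local/bin/update-monitor

        # Register it as an XDG autostart entry so each desktop session launches it
        cat > /etc/xdg/autostart/update-monitor.desktop <<'EOF'
        [Desktop Entry]
        Type=Application
        Name=Update Notification Monitor
        Exec=/usr/local/bin/update-monitor
        X-GNOME-Autostart-enabled=true
        EOF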

    Read the article

  • MIX 2010 Covert Operations Day 4

    - by GeekAgilistMercenary
    The Microsoft Azure Cloud is looking pretty solid compared to just a few months ago.  The storage mechanisms in the cloud now are blobs, drives, tables, and queues.  Also, not to forget, is SQL Azure.  I won’t dive too much into that, as most will know what SQL Server is, and SQL Azure is pretty much just a hosted SQL Server instance. The blobs are generally geared toward holding binary type data, images and those types of things.  The tables are huge key value type stores.  The drives are VHD, which are virtual hard drives.  The queues are just queues used for workflow and also to store messages back and forth in a queue. These methods are accessible via REST, which makes application development against the storage services extremely easy.  This is a big plus point as REST services are a preferred way to connect and interact with data storage.  It also sets up Silverlight as a prime framework to utilize services. Day 4 I pretty much dedicated to reviewing these cloud services and finishing up work related development.  With that, I'm wrapping up my MIX 2010 blog coverage.  Now back to your regularly scheduled programming. Original entry.

    Read the article

  • #altnetseattle – REST Services

    - by GeekAgilistMercenary
    Below are the notes I made in the REST Architecture Session I helped kick off with Andrew. RSS, ATOM, and such needed for better discovery.  i.e. there still is a need for some type of discovery. Difficult is modeling behaviors in a RESTful way.  ??  Invoking some type of state against an object.  For instance in the case of a POST vs. a GET.  The GET is easy, comes back as is, but what about a POST, which often changes some state or something. Challenge is doing multiple workflows with stateful workflows.  How does batch work.  Maybe model the batch as a resource. Frameworks aren’t particularly part of REST, REST is REST.  But point argued that REST is modeled, or part of modeling a state machine of some sort… ? Nothing is 100% reliable w/ REST – comparisons drawn with TCP/IP.  Sufficient probability is made however for the communications, but the idea of a possible failure has to be built into the usage model of REST. Ruby on Rails / RESTfully, and others used.  What were their issues, what do they do.  ATOM feeds, object serialized, using LINQ to XML w/ this.  No state machine libraries. Idempotent areas around REST and single change POST changes are inherent in the architecture. REST – one of the constrained languages is for the interaction w/ the system.  Limiting what can be done on the resources.  - disagreement, there is no agreed upon REST verbs. Sam Ruby – RESTful services.  Expanded the verbs within REST/HTTP pushes you off the web.  Of the existing verbs POST leaves the most up for debate. Robert Reem used Factory to deal with the POST to handle the new state.  The POST identifying what it just did by the return. Different states are put into POST, so that new prospective verbs, without creating verbs for REST/HTTP can be used to advantage without breaking universal clients. Biggest issue with REST services is their lack of state, yet it is also one of their biggest strengths.  What happens is that the client takes up the often onerous task of handling all state, state machines, and other extraneous resource management.  All the GETs, POSTs, DELETEs, INSERTs get all pushed into abstraction.  My 2 cents is that this in a way ends up pushing a huge proprietary burden onto the REST services often removing the point of REST to be simple and to the point. WADL does provide discovery and some state control (sort of?) Statement made, "WADL" isn't needed.  The JSON, XML, or other client side returned data handles this. I then applied the law of 2 feet rule for myself and headed to finish up these notes, post to the Wiki, and figure out what I was going to do next.  For the original Wiki entry check it out here. I will be adding more to this post with a subsequent post.  Please do feel free to post your thoughts and ideas about this, as I am sure everyone in the session will have more for elaboration.

    Read the article

  • How can I better implement A star algorithm with a very large set of nodes?

    - by Stephen
    I'm making a game with nodejs in which many enemies must converge on the player as the player moves around a relatively open space (right now it is an open field with few obstacles, but eventually there may be some small buildings in the field with 1 or 2 rooms). It's a multiplayer game using websockets, so the server needs to keep track of enemies and players. I found this javascript A* library which I've modified to be used on the server as a nodejs module. The library utilizes a Binary Heap to track the nodes for the algorithm, so it should be pretty fast (and indeed, with a small grid, say 100x100 it is lightning fast). The problem is that my game is not really tile-based. As the player moves around the map, he is moving on a more or less 1-to-1 per-pixel coordinate system (the player can move in 8 directions, 1 or 2 pixels at a time). In preliminary tests, on an 800x600 field, the path-finding can take anywhere from 400 to 1000 ms. Multiply that by 10 enemies and the game starts to get pretty choppy. I have already set it up so that each enemy will only do a path-finding call once per second or even as slow as once every 2 seconds (they have to keep updating their path because the players can move freely). But even with this long interval, there are noticeable lag spikes or chops every couple of seconds as the enemies update their paths. I'm willing to approach the problem of path-finding differently, if there's another option. I'm assuming that the real problem is the enormous grid (800x600). It also occurs to me that maybe the large arrays are to blame, as I've read that V8 has trouble with large arrays.

    Read the article

  • How should I evaluate the Database Solution for Large Data Application

    - by GµårÐïåñ
    Background: I have been tasked to write an application that will be a combination of document and inventory management in VB.NET. It will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing, printing, and possibly OCR to make them searchable, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside). Help please: I need help with understanding how to evaluate my database options. My concern is finding a database solution that will not become unstable due to size restrictions, record limitations, or performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. Now I can pretty much eliminate Access right off the bat as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit and, again, scalability. So I believe that leaves me with MS SQL, SQLite and MySQL (note, I am open to alternatives). And this is where I need help in understanding how to evaluate those databases. The goal is that the data is all in one place (a single file), which will make backup and portability easier. For small-volume usage, pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy, large-volume usage as well. Another consideration is interoperability with .NET and the stability of such code, to avoid errors and memory leaks. How should I evaluate my database options for this scenario?

    Read the article

  • Java Spotlight Episode 89: Geoff Morton on Java Embedded

    - by Roger Brinkley
    Interview with Geoff Morton, Group Vice President, Worldwide Java Sales at Oracle , on Java embedded. Joining us this week on the Java All Star Developer Panel are Dalibor Topic, Java Free and Open Source Software Ambassador and Arun Gupta, Java EE Guy. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link:  Java Spotlight Podcast in iTunes. Show Notes News EclipseLink 2.4 Hands-on FREE GlassFish Course NetBeans IDE 7.2 RC1 Hamish Morrison: OpenJDK Haiku port: quarter term report Proposed Update to the OpenJDK Web Site Terms of Use JavaOne Embedded Oracle Java ME Embedded Client (OJEC) 1.1 release on OTN New Videos Understanding the JVM and Low Latency Applications 55 New Things in Java 7 - Concurrency Events July 5, Java Forum, Stuttgart, Germany Jul 12, Java EE 6 workshop at Mindtree, Bangalore Jul 13-14, IndicThreads, Delhi July 30-August 1, JVM Language Summit, Santa Clara Feature InterviewGeoff Morton is the Group Vice President, Worldwide Java Sales at Oracle. Mail Bag What’s Cool Duke’s Choice Awards decision is going on Java Champions Facebook Page Joe Darcy: Moving monarchs and dragons: migrating the JDK bugs to JIRA Mike Duigou: Updated Lambda Binary Drops Mark Reinhold: Mercurial "jcheck" extension now available

    Read the article

  • apt-get does not work with proxy

    - by tommyk
    For the command sudo apt-get update I get the following error: W: Failed to fetch http://ch.archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/binary-i386/Packages.gz 407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. ) I am running Ubuntu 10.10 installed on Windows XP using VirtualBox. For internet connections I am using a proxy server with authentication. I tried to use the gnome-network-proxy tool to set the proxy settings system-wide. After that, /etc/environment was updated with an http_proxy variable in the format http://my_proxy:port/; there was no authentication data. I checked this with Firefox. The browser asked me for a login and password and everything worked fine. That was unfortunately not the case for apt-get. I have also tried to do as described here. Unfortunately it does not work. Could it somehow be related to the fact that the proxy is in a Windows domain? Any ideas? EDIT: My proxy name is http-proxy. Is '-' a special character here?
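
    For what it's worth, sudo typically resets the environment, so an http_proxy value set in /etc/environment may never reach apt-get; apt has its own proxy setting under /etc/apt/apt.conf.d/. Below is a minimal sketch; the host, port, username and password are placeholders for your own values, and if the ISA Server only accepts NTLM/domain authentication a local helper proxy such as cntlm is usually needed in front of apt instead.

        # Tell apt explicitly which proxy to use, credentials included
        sudo tee /etc/apt/apt.conf.d/95proxy > /dev/null <<'EOF'
        Acquire::http::Proxy "http://username:password@http-proxy:8080/";
        EOF

        sudo apt-get update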

    Read the article

  • Retrieve the full ASP.NET Form Buffer as a String

    - by Rick Strahl
    Did it again today: for logging purposes I needed to capture the full Request.Form data as a string, and while it’s pretty easy to retrieve the buffer, it always takes me a few minutes to remember how to do it. So I finally wrote a small helper function to accomplish this, since it comes up rather frequently, especially in debugging scenarios or in the immediate window. Here’s the quick function to get the form buffer as a string:

        /// <summary>
        /// Returns the content of the POST buffer as string
        /// </summary>
        /// <returns></returns>
        public static string FormBufferToString()
        {
            HttpRequest Request = HttpContext.Current.Request;

            if (Request.TotalBytes > 0)
                return Encoding.Default.GetString(Request.BinaryRead(Request.TotalBytes));

            return string.Empty;
        }

    Clearly a simple task, but handy to have in your library for reuse. You probably don’t want to call this if you have a massive inbound form buffer, or if the data you’re retrieving is binary. It’s probably a good idea to check the inbound content type before calling this function, with something like this:

        var formBuffer = string.Empty;

        if (Request.ContentType.StartsWith("text/") ||
            Request.ContentType == "application/x-www-form-urlencoded")
        {
            formBuffer = FormBufferToString();
        }

    to ensure you’re working only on content types you can actually view as text. Now if I can only remember the name of this function in my library – it’s part of the static WebUtils class in the West Wind Web Toolkit if you want to check out a number of other useful web helper functions. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET

    Read the article

  • Style bits vs. Separate bool's

    - by peterchen
    My main platform (WinAPI) still heavily uses bits for control styles etc. (example). When introducing custom controls, I'm permanently wondering whether to follow that style or rather use individual bools. Let's pit them against each other:

        enum EMyCtrlStyles
        {
            mcsUseFileIcon = 1,
            mcsTruncateFileName = 2,
            mcsUseShellContextMenu = 4,
        };

        void SetStyle(DWORD mcsStyle);
        void ModifyStyle(DWORD mcsRemove, DWORD mcsAdd);
        DWORD GetStyle() const;
        ...
        ctrl.SetStyle(mcsUseFileIcon | mcsUseShellContextMenu);

    vs.

        CMyCtrl & SetUseFileIcon(bool enable = true);
        bool GetUseFileIcon() const;
        CMyCtrl & SetTruncateFileName(bool enable = true);
        bool GetTruncateFileName() const;
        CMyCtrl & SetUseShellContextMenu(bool enable = true);
        bool GetUseShellContextMenu() const;

        ctrl.SetUseFileIcon().SetUseShellContextMenu();

    As I see it, pro style bits:
    - Consistent with the platform
    - Less library code (without gaining complexity), fewer places to modify when adding a new style
    - Less caller code (without losing notable readability)
    - Easier to use in some scenarios (e.g. remembering / transferring settings)
    - Binary API remains stable if new style bits are introduced

    Now, the first and the last are minor in most cases. Pro individual booleans:
    - IntelliSense and refactoring tools reduce the "less typing" effort
    - Single-purpose entities
    - More literate code (as in "flows more like a sentence")
    - No change of paradigm for non-bool properties

    These sound more modern, but they are also "soft" advantages. I must admit the "platform consistency" argument is much more enticing than I could justify, and less code without losing much quality is a nice bonus. 1. What do you prefer? Subjectively, for writing the library, or for writing client code? 2. Any (semi-) objective statements, studies, etc.?

    Read the article

  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
    Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds if not thousands of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.   Oracle's Master Data Management solution for location data is the Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site and location specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.   Let's take a look at the location entities the Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfield, service center, inventory site, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets; project site, area, well, basin, pipelines, critical infrastructure, offshore platform, compressor station, gas station, etc. Any site can be classified into multiple hierarchies, like organizational hierarchy, operational hierarchy, geographic hierarchy, divisional hierarchies and so on. Any site can also be associated to multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting, analysis, organize daily work, etc. Hierarchies can also be used to model entities which are structured or non-structured collections of nodes, like for example routes, pipelines and more. The User Defined Attribute Framework provides the needed infrastructure to add single row attributes groups like well base attributes (well IDs, well type, well structure and key characterizing measures, and more) and well geometry, and multi row attribute groups like well applications, permits, production data, activities, operations, logs, treatments, tests, drills, treatments, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partner's sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships. Site Hub can store any type of unstructured data associated to a site. This could be stored directly or on an external content management solution, like Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file such as: CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos and more. For any site entity, Site Hub can associate all the related assets and equipments at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time. 
Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can just be associated to sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub. Site locations (addresses or geographical coordinates) are also managed with out-of-the-box address geo-coding capabilities coupled with Google Maps integration to deliver powerful mapping capabilities and spatial data analysis. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site location specific information, like for example cadastral, ownership, jurisdictional, geological, seismic and more, and any site-centric area specific information, like for example economical, political, risk, weather, logistic, traffic information and more. Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context  

    Read the article

  • How Do I Search For Struct Items In A Vector? [migrated]

    - by Vladimir Marenus
    I'm attempting to create an inventory system using a vector implementation, but I seem to be having some trouble. I'm running into issues using a struct I made. NOTE: This isn't actually game code; this is a separate solution I am using to test my knowledge of vectors and structs!

        struct aItem
        {
            string itemName;
            int damage;
        };

        int main()
        {
            aItem healingPotion;
            healingPotion.itemName = "Healing Potion";
            healingPotion.damage = 6;

            aItem fireballPotion;
            fireballPotion.itemName = "Potion of Fiery Balls";
            fireballPotion.damage = -2;

            vector<aItem> inventory;
            inventory.push_back(healingPotion);
            inventory.push_back(healingPotion);
            inventory.push_back(healingPotion);
            inventory.push_back(fireballPotion);

            if(find(inventory.begin(), inventory.end(), fireballPotion) != inventory.end())
            {
                cout << "Found";
            }

            system("PAUSE");
            return 0;
        }

    The preceding code gives me the following error:

        1>c:\program files (x86)\microsoft visual studio 11.0\vc\include\xutility(3186): error C2678: binary '==' : no operator found which takes a left-hand operand of type 'aItem' (or there is no acceptable conversion)

    There is more to the error; if you need it, please let me know. I bet it's something small and silly, but I've been thumping at it for over two hours. Thanks in advance!

    Read the article

  • CodePlex Daily Summary for Saturday, October 20, 2012

    CodePlex Daily Summary for Saturday, October 20, 2012

    Popular Releases

    - P1 Port monitoring with Netduino Plus: V0.3. New release with new features. This V0.3 release was made public on the 20th of October 2012 (third public version). A lot of work on better code; some parts are documented in the code. S0 pulse counter logic added for logging use or production of electricity. Sends data to PV Output for production of electricity. Extra fields for COSM.com for more information. Ability to enable or disable certain functionality before deploying to Netduino Plus.
    - MCEBuddy 2.x: MCEBuddy 2.3.3. 1. MCEBuddy now supports PIPE (2.2.15 style) and the newer remote TCP communication. This is to solve problems with faulty Ceton network drivers and some issues with older systems related to load. When using LOCALHOST, MCEBuddy uses PIPE communication; otherwise it uses TCP-based communication. 2. UPnP is now disabled by default since it interferes with some TV tuner cards (CETON) that represent themselves as network devices (bad drivers), and also as a security measure to avoid external connection...
    - Pulse: Pulse 0.6.2.0. This is a roll-up of a bunch of features I've been working on for a while. I wasn't planning to release it, but Wallbase.cc is broken in any version earlier than this! Fixed Wallbase.cc provider. Multi-provider options: you can now specify multiple input providers with different search options. Providers have little icons now. More! Check it out and report any bugs! Having issues? Check the Known Errors page for solutions to commonly encountered problems. If you don't see the ...
    - Orchard Project: Orchard 1.6 RC. This is the Release Candidate version of Orchard 1.6. You should use this version to prepare your current developments for the upcoming final release, and report problems. Please read the release notes for Orchard 1.6 RC: http://docs.orchardproject.net/Documentation/Orchard-1-6-Release-Notes. Please do not post questions as reviews. Questions should be posted in the Discussions tab, where they will usually get promptly responded to. If you post a question as a review, you wil...
    - Rawr: Rawr 5.0.1. This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php. You can find the version notes at http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (NOT UPDATED YET FOR MOP): there is now a Rawr official addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
    - Yahoo! UI Library: YUI Compressor for .Net. Version 2.1.1.0 - Sartha (BugFix): reverted back the embedding of the 2x assemblies.
    - Visual Studio Team Foundation Server Branching and Merging Guide: v2.1 - Visual Studio 2012. Welcome to the Branching and Merging Guide. What is new? The version-control-specific discussions have been moved from the Branching and Merging Guide to the new Advanced Version Control Guide, and both guides have been ported to the new document style. See http://blogs.msdn.com/b/willy-peter_schaub/archive/2012/10/17/alm-rangers-raising-the-quality-bar-for-documentation-part-2.aspx for more information. Quality-Bar Details: Documentatio...
    - D3 Loot Tracker: 1.5.5. Compatible with 1.05.
    - Common Data Parameters Module: CommonParam0.3H. Common Param has now been updated for VS2012 and NUnit 2.6.1. Conformance with the latest StyleCop has been maintained. If you need a version for VS2010, please use the previous version.
    - ????: ???_V2.0.0: ?????????????
    - Write Once, Play Everywhere: MonoGame 3.0 (BETA). This is a beta release of the upcoming MonoGame 3.0. It contains an installer which will install a binary release of MonoGame on Windows boxes for the following platforms: Windows, Linux, Android and Windows 8. If you need to build for iOS or Mac you will need to get the source code at this time, as the installers for those platforms are not available yet. The installer will also install a set of project templates for Visual Studio 2010, 2012 and MonoDevelop. For those of you wish...
    - WPUtils: WPUtils 1.3. Blend SDK for Silverlight provides a HyperlinkAction which is missing in Blend SDK for Windows Phone. This release adds such an action, which makes use of WebBrowserTask to show a web page. You can also bind the hyperlink to your view model. NOTE: Windows Phone SDK 7.1 or higher is required.
    - Windawesome: Windawesome v1.4.1 x64. Fixed switching of applications across monitors. Changed window flashing API (fix your config files). Added NetworkMonitorWidget (thanks to weiwen). Any issues/recommendations/requests for future versions? This is the 64-bit version of the release; be sure to use it if you are on 64-bit Windows. Works with "Required DLLs v3".
    - CODE Framework: 4.0.21017.0. See the change log in the Documentation section for details.
    - Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.1. Adds support for .NET 4.0 to Magelia.Webstore.Client and StarterSite version 2.1.254.3. Scheduler Import & Export feature (for Professional and Enterprise Editions). UTC datetime and timezone support. .NET 4.5 and Visual Studio 2012 migration. Client Magelia global refactoring. Release of a NuGet package to help developers speed up development: http://nuget.org/packages/Magelia.Webstore.Client. Optimization of the data update mechanism (a.k.a. "burst"). Performance improvement of the d...
    - JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.2. JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like OData, MongoDB, WebSQL, SqLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6-minute video of a Sencha Touch 2 example app using JayData: Netflix browser. What's new in JayData 1.2.2: for detailed release notes check the release notes. Revitalized IndexedDB provider: now you c...
    - VFPX: FoxcodePlus. FoxcodePlus - Visual Studio-like extensions to Visual FoxPro IntelliSense.
    - Droid Explorer: Droid Explorer 0.8.8.8 Beta. Fixed the icon for packages on the desktop. Fixed the install dialog closing right when it starts. Removed the link to "set up the sdk for me", as this is no longer supported. Fixed a bug where the device selection dialog would show even if there was only one device connected. Fixed the toolbar having a "gap" between other toolbars. Removed main menu items that do not have any menus.
    - Fiskalizacija za developere: FiskalizacijaDev 1.0. The first version of this project, still marked as BETA - which means our tests have passed successfully :) However, since we do not make cash-register software ourselves, none of this has been tried under "real" conditions - every suggestion, remark or bug report is welcome. For all of that, please use Discussions or the Issue Tracker. At the moment the runtime binary is available as Any CPU for .NET version 2.0. Let us know if you also need builds for 32-bit/64-bit as well as for .N...
    - Squiggle - A free open source LAN Messenger: Squiggle 3.2 (Development). NOTE: This is a development release and not recommended for production use. This release is mainly for enabling extensibility and interoperability with other platforms. Support for plugins. Support for extensions. Communication layer and protocol are platform independent (ZeroMQ, ProtocolBuffers). Bug fixes. New /invite command. Edit the sent message. Disable update check. NOTE: This is a development release and not recommended for production use.

    New Projects

    - ADFS 2.0 Attribute Store for SharePoint: ADFS 2.0 Attribute Store for SharePoint
    - ALM Kickstarter: A simple Application Lifecycle Management kickstart application written using ASP.NET MVC.
    - Building iOS Apps with Team Foundation Server: An iPad sample project used for building iOS Xcode projects using a Team Foundation Server and the open source Jenkins build system.
    - gyk-note: WTF
    - Hello World Fkollike: Test project for fkollike
    - Innovacall ASP.net MVC 4 Azure Framework: Windows Azure version of the Innovacall ASP.net MVC 4 Framework.
    - Intercom.Net: Intercom.Net is a C# library that provides easy tracking of customers for integration with the intercom.io CRM application. No need to learn yet another tracking
    - josephproject1: creating my first project
    - Just4Test: The project is just created for test.
    - KBCruiser: KBCruiser helps developers search for references or answers according to different resource categories: Forum/Blog/KB/Engine/Code/Download
    - kp: store usernames/passwords from the command line. ** Semi-secure, but by no means robust enough for production usage ** Troy Hunt would be disgusted.
    - Neuron Operating System: Neuron Operating System
    - newelook: onesearch
    - Pedestrian Simulation: Pedestrian simulation
    - PTE: Using an Excel template to generate UI for prototyping.
    - RequestModel: RequestModel is a simple implementation of typed access to ASP.NET Web Forms request parameters
    - Sap: Sap is free, open-source web software that you can use to create a blog. Sap is made in ASP.NET MVC 4 (Razor). Sap is still pre-alpha and heavily in development
    - Sgart SharePoint 2010 Query Viewer: Windows application, based on the client object model of SharePoint 2010, that allows you to see the CAML query of a list view.
    - Silverlight SuperLauncher: The Silverlight SuperLauncher project is based on SL8.SL and Microsoft NESL.
    - SL8.SL: SL8.SL is a set of extensions for Silverlight, including MVVM, data validation, LINQ to SQL, custom data paging, OOB helpers etc. SL8.SL makes it easier to develop Silverlight apps
    - SPDeployRetract: Easily deploy and retract SharePoint solutions using PowerShell.
    - System Center Service Manager API: This API is for System Center Service Manager. It is written in C# and abstracts the more difficult aspects of Service Manager customization.
    - TestingConf Utilities: This framework is intended to help with testing and configuration, so it has some methods that will be used as utilities that developers may need.
    - The Veteran's Image Memory: This project has been built for educational purposes only. It is not our intention to offend anyone.
    - Viking Effect: Viking Effect is a Brazilian website for the nerd crowd. We will offer news, tales and the Viking Scream, our take on the podcast.
    - Weather Report Widget: WPF-based application for displaying temperature, wind direction and speed for the selected city.
    - WorldsCricket: WorldsCricket is a website which gives information on cricket history, the different cricket formats, the world's cricketers, international cricket teams etc...
    - XendApp - API: XendApp is an ecosystem for sending messages from a computer/application to a device. A device could be a phone or tablet, for example.

    Read the article

  • What is a widely accepted term for a string variable that would probably contain a file path and file name?

    - by Peter Turner
    For functions that need to index files in a directory and rename them FileName0001, FileName0002, etc., I often need to write a function that splits the file name from the file path and renames the file. When I put the file name and file path back together, I don't have a good name for the variable that contains both of them, and I usually just wind up concatenating them every time I want to use them (usually passing them to functions whose parameters are labeled either filename or filepath), so I never really know what I'm doing until I notice a lot of files being written in the same directory as my binaries. Anyway, what do I call a file name plus a file path? I don't want to call it File, because that usually means the binary information behind the file. I don't want to call it URI, because that usually implies I've got some sort of protocol, which I don't. I just want a good way to denote "c:\somedir\somedir\somedir\somefile.txt" so as to untangle the mess I've just realized I'm in. Please don't just list your personal preference. I think an excellent answer should cite its sources (as in, provide a link to a repository with a good example of the code being used as I described).
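
    For illustration only, a minimal Java sketch of the split-and-rejoin step the question describes, using the java.nio.file API; the sample directory, the file names, and the variable name combined are assumptions added here, not details from the question:

      import java.nio.file.Path;
      import java.nio.file.Paths;

      public class PathNamingSketch {
          public static void main(String[] args) {
              // Sample value written with forward slashes so the sketch also runs on a
              // non-Windows JVM; the question's "c:\somedir\...\somefile.txt" behaves
              // the same way on a Windows JVM.
              Path combined = Paths.get("c:/somedir/somedir/somedir/somefile.txt");

              // Split the combined value into its two parts.
              Path directory = combined.getParent();               // c:/somedir/somedir/somedir
              String fileName = combined.getFileName().toString(); // somefile.txt

              // Rejoin them under a new, numbered file name, as in the renaming scenario.
              Path renamed = directory.resolve("FileName0001.txt");
              System.out.println(fileName + " -> " + renamed);
          }
      }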

    Read the article

  • apt-get does not work with proxy

    - by tommyk
    For the command sudo apt-get update I get the following error: W: Failed to fetch http://ch.archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/binary-i386/Packages.gz 407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. ) I am running Ubuntu 10.10 installed on Windows XP using VirtualBox. For internet connections I am using a proxy server with authentication. I tried to use the gnome-network-proxy tool to set the proxy settings system-wide. After that, /etc/environment was updated with an http_proxy variable in the format http://my_proxy:port/, but there was no authentication data. I checked this with Firefox: the browser asked me for a login and password and everything worked fine. That was unfortunately not the case for apt-get. I have also tried to do as described here. Unfortunately it does not work. Could it somehow be related to the fact that the proxy is in a Windows domain? Any ideas? EDIT: My proxy name is http-proxy. Is '-' a special character here?
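
    A commonly suggested workaround, sketched here as an assumption rather than something from the question (the file name, USER, PASSWORD and 8080 are placeholders to replace), is to give apt its own proxy setting with the credentials embedded in the URL, since apt does not pick up the authentication data that Firefox prompts for:

      # /etc/apt/apt.conf.d/95proxy  (illustrative file name)
      # Replace USER, PASSWORD and 8080 with the real credentials and port.
      Acquire::http::Proxy "http://USER:PASSWORD@http-proxy:8080/";
      Acquire::https::Proxy "http://USER:PASSWORD@http-proxy:8080/";

    As for the edit: a hyphen is an ordinary character in a host name, so http-proxy by itself should not be the problem, though a password containing special characters may need URL-encoding.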

    Read the article

  • Fail to upgrade from 10.10 to 11.04

    - by Ana Solís
    I was using Natty for a while, but while updating to the new release there was a blackout, the upgrade was not able to finish, and Ubuntu failed to load after that. I thought: no worries, I have my files backed up and I still have the 10.10 CD I used to put Ubuntu on my computer in the first place. So I installed it again, with the plan of using the Update Manager to get myself to the current release... except I get this error: W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/natty/main/source/Sources.gz 404 Not Found , W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/natty/main/binary-amd64/Packages.gz 404 Not Found , E:Some index files failed to download, they have been ignored, or old ones used instead. My internet connection is just fine, seeing as I'm able to post this, but I don't know what else to do. I tried to download Quantal on another computer and put it on a DVD (since it won't fit on a CD...), and the stupid thing fails to load it; it skips it over and goes right back to Maverick... (not a faulty disc, it installed Ubuntu just fine on a friend's computer...)
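
    One possible workaround, added here only as a sketch and not something attempted in the question: the 404s come only from the extras.ubuntu.com repository, so temporarily disabling those entries often lets apt and the Update Manager continue; the file path below is the usual default and should be verified on the affected machine:

      # Comment out the extras.ubuntu.com entries, then refresh the package lists.
      sudo sed -i 's|^deb.*extras\.ubuntu\.com|# &|' /etc/apt/sources.list
      sudo apt-get update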

    Read the article

  • Sorting versus hashing

    - by Paul Siegel
    My problem is as follows. I have an array of n strings with m < n of them distinct. I want to create a one-to-one function which assigns each of the m distinct strings to the numbers 0 ... m-1. For example, if my strings are: Bob, Amy, Bob, Charlie, Amy then the function: Bob -> 0, Amy -> 1, Charlie -> 2 would meet my needs. I have thought of three possible approaches: Sort the list of strings, remove duplicates, and construct the function using a search algorithm. Create a hash table and check each string to see if it is already in the table before inserting it. Sort the list of strings, remove duplicates, and put the resulting list into a hash table. My code will be written in Java, and I will likely use standard Java algorithms: merge sort for sorting, binary search for searching, and whatever the standard Java hash table algorithm is. Question: Assume that after creating the function I will have to evaluate it on each of the n original strings. Which of the three approaches is fastest? Is there a better way? Part of the problem is that I don't really know what's going on "under the hood" in standard hashing algorithms. Any help would be appreciated.
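
    For concreteness, a minimal Java sketch of the second approach (a hash table, checking each string before inserting); the class and method names are made up for illustration:

      import java.util.HashMap;
      import java.util.Map;

      public class DistinctIndexer {
          // Builds the one-to-one mapping: each distinct string -> 0 .. m-1,
          // in order of first appearance.
          public static Map<String, Integer> buildIndex(String[] strings) {
              Map<String, Integer> index = new HashMap<>();
              for (String s : strings) {
                  // putIfAbsent inserts only when the string has not been seen yet;
                  // index.size() is the count of distinct strings seen so far.
                  index.putIfAbsent(s, index.size());
              }
              return index;
          }

          public static void main(String[] args) {
              String[] names = {"Bob", "Amy", "Bob", "Charlie", "Amy"};
              System.out.println(buildIndex(names)); // e.g. {Bob=0, Amy=1, Charlie=2}
          }
      }

    With the map built, evaluating the function on all n original strings is n expected constant-time lookups, which is typically what favors hashing over sort-plus-binary-search as n grows.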

    Read the article

  • Version control for game development - issues and solutions?

    - by Cyclops
    There are a lot of version control systems available, including open-source ones such as Subversion, Git, and Mercurial, plus commercial ones such as Perforce. How well do they support the process of game development? What are the issues with using a VCS for non-text files (binary files), large projects, and so on? What are the solutions to these problems, if any? To keep the answers organized, let's try one package per answer; update each package/answer with your results. Also, please list some brief details in your answer about whether your VCS is free or commercial, distributed versus centralized, etc. Update: Found a nice article comparing two of the VCS below - apparently, Git is MacGyver and Mercurial is Bond. Well, I'm glad that's settled... And the author has a nice quote at the end: It’s OK to proselytize to those who have not switched to a distributed VCS yet, but trying to convert a Git user to Mercurial (or vice-versa) is a waste of everyone’s time and energy. Especially since Git and Mercurial's real enemy is Subversion. Dang, it's a code-eat-code world out there in FOSS-land...
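
    As one concrete illustration of the binary-file issue (an assumption added here, not something from the question): Git lets you mark asset types as binary in a .gitattributes file so it never tries to diff or merge them, which avoids mangling them during merges, though it does not by itself solve the repository-size problem that large assets cause:

      # .gitattributes - treat common game assets as opaque binaries (example patterns)
      *.png binary
      *.fbx binary
      *.wav binary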

    Read the article

< Previous Page | 115 116 117 118 119 120 121 122 123 124 125 126  | Next Page >