Search Results

Search found 27890 results on 1116 pages for 'oracle retail documentation team'.


  • going from closed-source to open-source [closed]

    - by mspoerr
    I am thinking of releasing the source code of my application (freeware at the moment). It is written in C++ with Visual Studio 2008, and all the 3rd-party libs it uses are free or open-source and platform independent. The idea of releasing the source code is quite old, but until now I did not want to show the code because I am not sure it is well designed (I am not a professional developer). The application is growing, though, and help would be very welcome, but I want to keep control... What do I need to consider? Is there any best practice for this scenario? The code itself is one thing, but there is much more to decide: license, documentation, project settings, 3rd-party libs, hosting platform (SourceForge, other?)

    Read the article

  • F# for the C# Programmer

    - by mbcrump
    Are you a C# programmer who can't make it through a day without seeing or hearing someone mention F#? Today, I'm going to walk you through your first F# application and give you a brief introduction to the language. Sit back; this will only take about 20 minutes.

    Introduction

    Microsoft's F# programming language is a functional language for the .NET framework that was originally developed at Microsoft Research Cambridge by Don Syme. In October 2007, the senior vice president of the developer division at Microsoft announced that F# was being officially productized to become a fully supported .NET language, and professional developers were hired to create a team of around ten people to build the product version. In September 2008, Microsoft released the first Community Technology Preview (CTP), an official beta release, of the F# distribution. In December 2008, Microsoft announced that the success of this CTP had encouraged them to escalate F#, and it will now be shipped as one of the core languages in Visual Studio 2010, alongside C++, C# 4.0 and VB. The F# programming language incorporates many state-of-the-art features from programming language research and ossifies them in an industrial-strength implementation that promises to revolutionize interactive, parallel and concurrent programming.

    Advantages of F#

    F# is the world's first language to combine all of the following features:

    - Type inference: types are inferred by the compiler and generic definitions are created automatically.
    - Algebraic data types: a succinct way to represent trees.
    - Pattern matching: a comprehensible and efficient way to dissect data structures.
    - Active patterns: pattern matching over foreign data structures.
    - Interactive sessions: as easy to use as Python and Mathematica.
    - High performance JIT compilation to native code: as fast as C#.
    - Rich data structures: lists and arrays built into the language with syntactic support.
    - Functional programming: first-class functions and tail calls.
    - Expressive static type system: finds bugs during compilation and provides machine-verified documentation.
    - Sequence expressions: interrogate huge data sets efficiently.
    - Asynchronous workflows: syntactic support for monadic-style concurrent programming with cancellations.
    - Industrial-strength IDE support: multithreaded debugging, and graphical throwback of inferred types and documentation.
    - Commerce-friendly design and a viable commercial market.

    Let's try a short program in C#, then F#, to understand the differences.

    Using C#: Create a variable and output the value to the console window. Sample program:

        using System;

        namespace ConsoleApplication9
        {
            class Program
            {
                static void Main(string[] args)
                {
                    var a = 2;
                    Console.WriteLine(a);
                    Console.ReadLine();
                }
            }
        }

    A breeze, right? 14 lines of code. We could have condensed it a bit by removing the "using" statement and tossing the namespace, but this is the typical C# program.

    Using F#: Create a variable and output the value to the console window. To start, open Visual Studio 2010 or Visual Studio 2008. Note: if using VS2008, please download the SDK first before getting started. If you are using VS2010 then you are already set up and ready to go. So, click File -> New Project -> Other Languages -> Visual F# -> Windows -> F# Application. You will get the screen below. Go ahead and enter a name and click OK.
    Now, you will notice that the Solution Explorer contains the following: double-click Program.fs and enter the following information. Hit F5 and it should run successfully. Sample program:

        open System
        let a = 2
        Console.WriteLine a

    As shown below: Hmm, what? F# did the same thing in 3 lines of code.

    Show me the interactive evaluation that I keep hearing about.

    The F# development environment for Visual Studio 2010 provides two different modes of execution for F# code:

    - Batch compilation to a .NET executable or DLL. (This was accomplished above.)
    - Interactive evaluation. (Demo is below.)

    The interactive session provides a > prompt, requires a double semicolon ;; identifier at the end of a code snippet to force evaluation, and returns the names (if any) and types of resulting definitions and values. To access the F# prompt in VS2010, go to View -> Other Windows, then F# Interactive. Once you have the interactive window, type in the following expression: 2+3;; as shown in the screenshot below:

    I hope this guide helps you get started with the language. Please check out the following books for further information.

    F# Books for further reading

    Foundations of F#
    Author: Robert Pickering
    An introduction to functional programming with F#. Including many samples, this book walks through the features of the F# language and libraries, and covers many of the .NET Framework features which can be leveraged with F#.

    Functional Programming for the Real World: With Examples in F# and C#
    Authors: Tomas Petricek and Jon Skeet
    An introduction to functional programming for existing C# developers, written by Tomas Petricek and Jon Skeet. This book explains the core principles using both C# and F#, shows how to use functional ideas when designing .NET applications, and presents practical examples such as the design of a domain-specific language, development of multi-core applications and programming of reactive applications.

    Read the article

  • Maps for Business: Generating Valid Signatures

    This video shows developers how to generate signed requests to Google Maps for Business Web Services, such as the Geocoding API. It also points Maps for Business developers to pertinent documentation like developers.google.com and shows an example of the Maps for Business welcome letter. To get a response from the Maps for Business Web Services, developers need to know: how to find their Maps for Business client ID; how to include that client ID in requests; how to retrieve their Maps for Business digital signing key from the welcome materials; and how to use the key to generate a signature. The video walks developers through the signature generation steps using Python, and provides pointers to examples in other languages. From: GoogleDevelopers | Time: 10:42
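
    For orientation, here is a minimal C# sketch of the signing scheme the video walks through: HMAC-SHA1 over the request path and query string, using the url-safe base64 signing key from the welcome materials. The client ID, path and key below are hypothetical placeholders, not working credentials.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class UrlSigner
        {
            // Signs the path + query of a request with the url-safe
            // base64 signing key from the welcome letter.
            static string SignUrl(string pathAndQuery, string urlSafeKey)
            {
                // The key is "url-safe" base64: restore standard base64 first.
                byte[] key = Convert.FromBase64String(
                    urlSafeKey.Replace('-', '+').Replace('_', '/'));

                byte[] sig;
                using (var hmac = new HMACSHA1(key))
                    sig = hmac.ComputeHash(Encoding.ASCII.GetBytes(pathAndQuery));

                // The signature itself must be url-safe base64 as well.
                string urlSafeSig = Convert.ToBase64String(sig)
                    .Replace('+', '-').Replace('/', '_');
                return pathAndQuery + "&signature=" + urlSafeSig;
            }

            static void Main()
            {
                // Hypothetical client ID and key, for illustration only.
                string path = "/maps/api/geocode/json?address=Seattle&client=gme-example";
                Console.WriteLine(SignUrl(path, "dGVzdF9rZXk="));
            }
        }

    Note that only the path and query are signed (not the scheme and domain), and the client parameter must be present before signing.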

    Read the article

  • Expected sprint completion rate and load in scrum?

    - by bjarkef
    Recently at work there has been increased focus on completion rate and load on the developers in our sprints. By completion rate I mean: if we plan 20 user stories for a sprint, what percentage of those user stories are closed at the end of the sprint. And by load I mean: if we have a sprint with 3 developers of 60 hours each, i.e. 180 hours for the sprint, how many hours' worth of user stories do we schedule for the sprint. So I am really interested in others' experience with this; I guess this is something everybody working with Scrum deals with. My question is: what completion rate and load are expected/usual, and how is your team doing with respect to these parameters?

    Read the article

  • Rendering CV template with XeLaTex

    - by jacob
    1. Installed Kubuntu on Thursday.
    2. Installed LaTeX on my Kubuntu machine, using the full install.
    3. Compiled an old document and it worked fine.
    4. Downloaded a CV template from http://www.latextemplates.com/template/two-column-one-page-cv
    5. Compiled it, got this error:

       Fatal fontspec error: "cannot-use-pdftex"
       The fontspec package requires either XeTeX or LuaTeX to function.
       You must change your typesetting engine to, e.g., "xelatex" or "lualatex"
       instead of plain "latex" or "pdflatex". See the fontspec documentation
       for further information. For immediate help type H.

    6. Installed XeLaTeX using this guide: http://ledgersmb.org/faq/xelatex
    7. Installed texlive-xetex, which includes xelatex:

       apt-get install texlive-xetex
       apt-get install liblatex-{driver,encode,table}-perl
       apt-get install libtemplate-plugin-latex-perl

    8. Compiled the CV template again; it did not work.

    Related: No Xelatex in texlive 2012. Excuse me if my question is not clear enough, I'm new to Linux.

    Read the article

  • What's going on with INETA and the Regional Speakers Bureau?

    - by Chris Williams
    For those of you that have been waiting patiently (and not so patiently), I'm happy to say that we're very near completion on some changes/enhancements/improvements that will allow us to finally go live with the INETA Regional Speakers Bureau. I know quite a few of you have already registered, which is great (though some of you may need to come back and update your info), and we've had a few folks submit requests, mostly in a test capacity, but soon we'll be up and live. Here's how it breaks down. Be sure to read this, because things have changed a bit from when we initially announced it.

    1. The majority of our speaker/event funding is going into the Regional Speakers Bureau. The National Bureau still exists, but it's a good bit smaller than it was before, and it's not an "every group" benefit anymore. We'll be using the National Bureau as more of a strategic task force, targeting high impact events and areas that need some community building love from INETA. These will be identified and handled on a case by case basis, and may include more than just user group events.

    2. You're going to get more events per group, per year than you did before. Not only are we focusing more resources on this program, but we're also making a lot of efforts to use it more effectively. With the INETA Regional Speakers Bureau, you should be able to get 2-3 INETA speakers per year, on average. Not every geographical area will have exactly the same experience, but we're doing the best we can.

    3. It's not a farm team program for the National Bureau. Unsurprisingly, I managed to offend a number of people when I previously made the comment that the Regional Speakers Bureau program was a farm team or stepping stone to the National Bureau. It was a poor choice of words. Anyone can participate in the Regional Speakers Bureau, and I look forward to working with all of you.

    4. There is assistance for your efforts. The exact final details are still being hammered out, but expect it to look something like this (all distances listed are based on a round trip):

       Distances < 120 miles = $0
       121 miles - 240 miles = $50 (effectively 1 to 2 hours, each way)
       241 miles - 360 miles = $100 (effectively 2 to 3 hours, each way)
       361 miles - 480 miles = $200 (effectively 3 to 4 hours, each way)

       For those of you who travel a lot, we're working on a solution to handle group visits when you're away from home. These will (for now) be handled on a case by case basis.

    5. We're going to make it as easy as possible to work with the program. In order to do this, we need a few things from you. For speakers, that means your home address. It also means (maybe) filling out a simple 1-line expense report via the INETA website. For user groups, it means making sure your meeting address is up to date as well.

    6. Distances will be automatically calculated from your home of record to the user group event and back. We realize that this is not a perfect solution to every instance, but we're not paying you to speak at an event, and you won't be taxed on this money. It's simply some assistance to make your community efforts easier. Our way of saying thanks for everything you do.

    7. Sounds good so far, what's the catch? There's always a catch, right? In this case there are two of them:

       1) At this time, Microsoft employees are welcome to use the website to line up speaking engagements with user groups, but are not eligible for financial assistance.
       2) Anyone can register and use the website to line up speaking engagements with user groups, however you must receive and maintain a net score of 3+ positive ratings (we're implementing a thumbs up / thumbs down system) in order to receive financial assistance. These ratings are provided by the User Group leaders after the meeting has taken place.

    8. Involvement by the User Group leaders is a key factor in the success of this program. Your job isn't done once you request a speaker. After you've had your meeting, it's critical that you go back to the website and take a very small survey. Doing this ensures that the speaker gets rated (and compensated if eligible) and also ensures that you can make another request, since you won't be able to make a new request if you have an old one outstanding.

    9. What about Canada? We're definitely working on that. Unfortunately nothing new to report on that front, other than to say that we're trying.

    So... this is where things stand currently. We're working very quickly to get this in place and get speakers and groups together. If you have any questions, please leave a comment below and I'll answer them as quickly as possible. If I've forgotten anything, or if things change, I'll update it here.

    Thanks,
    Chris G. Williams
    INETA Board of Directors

    Read the article

  • Selling Visual Studio ALM

    - by Tarun Arora
    Introduction

    As a consultant I have been selling Application Lifecycle Management services using Visual Studio and Team Foundation Server. I've been contacted various times by friends working in organizations telling me that the ALM processes in their company were benchmarked when dinosaurs walked the earth. Most of these individuals already know the great features Microsoft ALM tools offer and are keen to start a conversation with the CIO but don't exactly know where to start. It is very important how you engage in your first conversation. If you start the conversation with 'There is this great tooling from Microsoft which offers amazing features to boost developer productivity, ...', from experience I can tell you the reply from your CIO would be 'I already know! Our existing landscape has a combination of bleeding edge open source and cutting edge licensed tools which already cover these features quite well; moreover, Microsoft products have a high licensing cost associated with them.' You will always find it harder to sell by feature. The trick is to highlight the gap in the existing processes & tools and then highlight the impact of these gaps on the overall development process; by then you will have captured enough attention to show off how the ALM tooling offered by Microsoft not only fills those gaps but offers great value adds to take their development practices to the next level.

    Rangers ALM Assessment Guide

    Image 1 – Welcome! First look at the Rangers ALM assessment guide

    Most organizations already have some processes in place to cover aspects of ALM. How do you go about proving that there isn't enough cover in place? This is where the Visual Studio ALM Rangers ALM Assessment guide can help. The ALM assessment guide is really a tool that helps you gather information about development practices and processes within a customer's environment. Several questionnaires are used to identify the current state of individual development lifecycle areas and decide on a desired state for those processes. It also presents guidance and roll-up summaries to help with recommendations moving forward. The ALM Rangers assessment guide can be downloaded from here.

    Image 2 – ALM Assessment guide divided into different functions of SDLC

    The assessment guide is divided into the different functions of the Software Development Lifecycle (listed below), which gives you the ability to assess how mature the company is in each area of the SDLC:

    - Architecture & Design
    - Requirement Engineering & UX
    - Development
    - Software Configuration Management
    - Governance
    - Deployment & Operations
    - Testing & Quality Assurance
    - Project Planning & Management

    Each section has a set of questions; fill in the assessment by selecting "Never/Sometimes/Always" from the Answer column in the question sheets. Each answer has a weightage in the overall score. Each question has a link next to it; clicking the link takes you to the Reference sheet, which gives you more details about the question along with a reason for "why you need to ask this question?", "other ways to phrase the question" and "what to expect as an answer from the customer". The trick is to engage the customer in a discussion. You need to probe a lot, listen to the customer and have a discussion with several team members, preferably without management, to ensure that you receive candid feedback.
    This reminds me of a funny incident: during an ALM review a customer told me that they had a sophisticated semi-automated application deployment process; further discussion revealed that deployment actually involved 72 manual configuration steps per production node. Such observations can be recorded in the Issue Brainstorming worksheet for further consideration later. It is also worth mentioning the different levels of ALM maturity to the customer. By default the desired state of ALM maturity is set to Standard; it is possible to set a desired state by area. You should strive for Advanced or Dynamic, and it always helps to explain the classification and its advantages.

    Image 3 – ALM levels by description

    The ALM assessment guide helps you arrive at a quantitative measure of the company's ALM maturity. The resultant graph, plotted on a spider's web, shows you the company's current state of ALM maturity and the desired state of ALM maturity. Further, since the results are classified by area, you can immediately spot the areas where the customer needs immediate help.

    Image 4 – The spider's web!

    The red cross icons are areas shouting out for immediate attention; the yellow exclamation icons are areas that need improvement. These icons are calculated from the difference between the current state of ALM maturity and the desired state of ALM maturity.

    Image 5 – Results by area

    Conclusion

    To conclude, the Rangers ALM assessment guide gives you the ability to:

    - Measure the customer's current ALM maturity level
    - Understand the ALM maturity level the customer desires to achieve
    - Capture a healthy list of issues the customer wants to brainstorm further

    Now, what's next? Download and get started with the Rangers ALM Assessment Guide. If you have successfully captured the three pieces of information listed above, you are in a great position to make recommendations on the identified areas, highlighting the benefits that the Visual Studio ALM tools would offer. In the next post I will be covering how to take the ALM assessment results as the base to actually convert your recommendation into a sale. Remember to subscribe to http://feeds.feedburner.com/TarunArora. I would love to hear your feedback! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to leave a comment.

    *** A special thanks goes out to fellow rangers Willy, Ethem and Philip for reviewing the blog post and providing valuable feedback. ***

    Read the article

  • Does Ubuntu generally post timely security updates?

    - by Jo Liss
    Concrete issue: the Oneiric nginx package is at version 1.0.5-1, released in July 2011 according to the changelog. The recent memory-disclosure vulnerability (advisory page, CVE-2012-1180, DSA-2434-1) isn't fixed in 1.0.5-1. If I'm not misreading the Ubuntu CVE page, all Ubuntu versions seem to ship a vulnerable nginx. Is this true? If so: I thought there was a security team at Canonical actively working on issues like this, so I expected to get a security update within a short timeframe (hours or days) through apt-get update. Is this expectation -- that keeping my packages up-to-date is enough to stop my server from having known vulnerabilities -- generally wrong? If so: what should I do to keep it secure? Reading the Ubuntu security notices wouldn't have helped in this case, as the nginx vulnerability was never posted there.

    Read the article

  • Google I/O 2012 - Powering Your Application's Data using Google Cloud Storage

    Speakers: Navneet Joneja, Nathan Herring. Since opening its doors to all developers at Google I/O last year, the Google Cloud Storage team has shipped several features that let you use Google Cloud Storage for a variety of advanced use cases. This session opens with a quick introduction to the product, then quickly shifts focus to implementing a variety of advanced applications using new features in Google Cloud Storage. For all I/O 2012 sessions, go to developers.google.com. From: GoogleDevelopers | Time: 58:32

    Read the article

  • Haskell web frameworks survey

    - by Phuc Nguyen
    There are several web frameworks for Haskell like Happstack, Snap, and Yesod, and probably a few more. In what aspects do they differ from each other? For example: features (e.g. server only, or also client scripting, easy support for different kinds of database) maturity (e.g. stability, documentation quality) scalability (e.g. performance, handy abstraction) main targets Also, what are examples of real-world sites / web apps using these frameworks? Many thanks.

    Read the article

  • What is a name for a job where you do system analysis, project management and data diagramming?

    - by David Archer
    In the last 4 months I've been able to manage a team and step away from the coding for a bit. I've been planning the system in full (both System Analysis and project managing, alongside action and data diagramming) writing the technical documentation, the code's architecture, keeping track of the other guys doing the actual coding, QA, bug reports and dealing with clients. I had to take two days' training on node.js just to see if it would be suitable for a project we were considering. Is there a name for this job? Project Manager and Systems Architect don't quite seem to have the same stuff, and IT manager seems way off. I only want to know so that I can get some qualification towards it and try to move into this kind of work full-time.

    Read the article

  • MVVM Light V4 preview 2 (BL0015) #mvvmlight

    - by Laurent Bugnion
    Over the past few weeks, I have worked hard on a few new features for MVVM Light V4. Here is a second early preview (consider this pre-alpha if you wish). The features are unit-tested, but I am now looking for feedback and there might be bugs!

    Bug correction: Messenger.CleanupList is now thread safe

    This was an annoying bug that is now corrected: in some circumstances, an exception could be thrown when the Messenger's recipients list was cleaned up (i.e. the "dead" instances were removed). The method is called now and then, and the exception was thrown apparently at random. In fact it was really a multi-threading issue, which is now corrected.

    Bug correction: AllowPartiallyTrustedCallers prevents EventToCommand from working

    This is a particularly annoying regression bug that was introduced in BL0014. In order to allow MVVM Light to work in XBAPs too, I added the AllowPartiallyTrustedCallers attribute to the assemblies. However, we just found out that this causes issues when using EventToCommand. In order to allow EventToCommand to continue working, I reverted to the previous state by removing the AllowPartiallyTrustedCallers attribute for now. I will work with my friends at Microsoft to try and find a solution. Stay tuned.

    Bug correction: XML documentation file is now generated in Release configuration

    The XML documentation file was not generated for the Release configuration. This was a simple flag in the project file that I had forgotten to set. This is corrected now.

    Applying EventToCommand to non-FrameworkElements

    This feature has been requested in order to be able to execute a command when a Storyboard is completed. I implemented this, but unfortunately found out that EventToCommand can only be added to Storyboards in Silverlight 3 and Silverlight 4, but not in WPF or on Windows Phone 7. This obviously limits the usefulness of this change, but I decided to publish it anyway, because it is pretty damn useful in Silverlight…

    Why not in WPF? In WPF, Storyboards added to a resource dictionary are frozen. This is a feature of WPF which allows certain objects to be optimized for performance: by freezing them, we enter a contract where we say "this object will not be modified anymore, so do your perf optimization on it without worrying too much". Unfortunately, adding a Trigger (such as EventTrigger) to an object in resources does not work if this object is frozen… and unfortunately, there is no way to tell WPF not to freeze the Storyboard in the resources… so there is no way around that (at least none I can see). In Silverlight, objects are not frozen, so an EventTrigger can be added without problems.

    Why not in WP7? In Windows Phone 7, there is a totally different issue: adding a Trigger can only be done on a FrameworkElement, which Storyboard is not. Here I think that we might see a change in a future version of the framework, so maybe this small trick will work in the future.

    Workaround? Since you cannot use EventToCommand on a Storyboard in WPF and in WP7, the workaround is pretty obvious: handle the Completed event in the code behind, and call the Command from there on the ViewModel. This object can be obtained by casting the DataContext to the ViewModel type. This means that the View needs to know about the ViewModel, but I never had issues with that anyway.

    New class: NotifyPropertyChanged

    Sometimes when you implement a model object (for example Customer), you would like to have it implement INotifyPropertyChanged, but without having all the frills of a ViewModelBase.
    A new class named NotifyPropertyChanged allows you to do that. This class is a simple implementation of INotifyPropertyChanged (with all the overloads of RaisePropertyChanged that were implemented in BL0014). In fact, ViewModelBase inherits NotifyPropertyChanged.

    ViewModelBase does not implement IDisposable anymore

    The IDisposable interface and the Dispose method had already been marked obsolete in the ViewModelBase class in V3. Now they have been removed. Note: by this, I do not mean that IDisposable is a bad interface, or that it shouldn't be used on viewmodels. On the contrary, I know that this interface is very useful in certain circumstances. However, I think that having it by default on every instance of ViewModelBase was sending the wrong message. This interface has a strong meaning in .NET: after Dispose has been executed, the instance should not be used anymore, and should be ready for garbage collection. What I really wanted to have on ViewModelBase was rather a simple cleanup method, something that can be executed now and then during runtime. This is fulfilled by the ICleanup interface and its Cleanup method. If your ViewModels need IDisposable, you can still use it! You will just have to implement the interface on the class itself, because it is not available on ViewModelBase anymore.

    What's next?

    I have a couple of exciting new features implemented already, but they need more testing before they go live… Just stay tuned, and by MIX11 (12-14 April 2011) we should see at least one major addition to the MVVM Light Toolkit, as well as another smaller feature which is pretty cool nonetheless. More about this later!

    Happy Coding,
    Laurent

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn
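
    To make the NotifyPropertyChanged idea above concrete, here is a minimal sketch of a model built on the new base class. It assumes NotifyPropertyChanged lives in the GalaSoft.MvvmLight namespace alongside ViewModelBase and exposes the RaisePropertyChanged overloads mentioned in the post; the Customer shape itself is hypothetical.

        using GalaSoft.MvvmLight; // assumed namespace for NotifyPropertyChanged

        // A plain model that wants change notification without the
        // viewmodel frills (messaging, design-time detection, etc.).
        public class Customer : NotifyPropertyChanged
        {
            private string _name;

            public string Name
            {
                get { return _name; }
                set
                {
                    _name = value;
                    RaisePropertyChanged("Name"); // overload inherited from the base class
                }
            }
        }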

    Read the article

  • Microsoft Generation 4 Datacenter using ITPACs

    - by Eric Nelson
    Microsoft is continuing to make significant investments in datacenter technology and is focused on solving issues such as long lead times, significant up-front costs and overcapacity. Enter the world of modular datacenters and ITPACs – IT Pre-Assembled Components. In simple terms, these are air handling and IT units which are pre-assembled (looking somewhat like a container) and then installed on concrete bases. Each unit can hold between 400 and 2,500 servers (which means many more virtual machines, depending on your density). Kevin Timmons, manager of the datacenter operations team, just posted a great piece digging into the detail, One Small Step for Microsoft's Cloud, Another Big Step for Sustainability, which includes a short video on how we build one of these ITPACs. You might also want to check out this video from the PDC.

    Read the article

  • What is PowerPivot?

    - by Enrique Lima
    Let's start with this: it is a great way to visualize data and transform it into information. It is intended to be a self-service business intelligence tool, provided as an add-in for a tool we already know … Microsoft Excel ... 2010. If you have been wondering where to go, what to do, and how to do it, I am including links that will help you on your way to a better understanding of the topic and the tool. (Links to TechNet documentation)

    - Introduction to PowerPivot
    - Walkthrough: Create your first PowerPivot Workbook
    - Get to know the UI
    - PowerPivot for Excel Central
    - Get some samples

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory.

    The problem with shared memory

    Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean that we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores, struggling with shared memory.

    Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model like actors, which deals with synchronisation issues for you. Then, the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory to allow your actors to pass messages to each other efficiently.

    At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual and quad core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (e.g. CAS) on a cell of memory representing the lock, so it's sure that no other core can possibly have that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores, in case they held the cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly.

    A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system.

    A sync-free data structure

    Some concurrent data structures are possible to write in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single producer, single consumer (SPSC) queue. It's easy to write a sync-free fixed size SPSC queue using a circular buffer*. Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept inspired by this Intel article which reuses the nodes by putting them in a second queue to send back to the producer.

    * In all these cases, you need to use memory barriers correctly, but these are local to a core, so don't have the same scalability problems as atomic memory operations.

    Performance tests

    I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways, this isn't a fair comparison, because both of these support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64 bit integers, and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here.
    Both concurrent collections perform better than the lock-based one as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did. So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6 core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting. As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine only using about 20% of available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to safely access the queue, they ended up spending most of the time notifying each other about cache lines that need invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which, after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment!

    Obviously, this benchmark isn't realistic because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores?

    Still to be solved

    The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
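
    As a concrete illustration of the circular-buffer idea mentioned above, here is a minimal C# sketch of a fixed-size SPSC queue. It is not the author's implementation: it leans on volatile reads and writes for the cross-core visibility the footnote alludes to, and it is only safe with exactly one producer thread and one consumer thread.

        // Fixed-size single-producer single-consumer queue over a circular buffer.
        // Safe only if exactly one thread enqueues and exactly one thread dequeues.
        public class SpscQueue<T>
        {
            private readonly T[] _buffer;
            private volatile int _head; // next slot to read; advanced only by the consumer
            private volatile int _tail; // next slot to write; advanced only by the producer

            public SpscQueue(int capacity)
            {
                _buffer = new T[capacity + 1]; // one slot sacrificed to tell full from empty
            }

            public bool TryEnqueue(T item)
            {
                int tail = _tail;
                int next = (tail + 1) % _buffer.Length;
                if (next == _head) return false;   // full
                _buffer[tail] = item;
                _tail = next;                      // volatile write publishes the item
                return true;
            }

            public bool TryDequeue(out T item)
            {
                int head = _head;
                if (head == _tail) { item = default(T); return false; } // empty
                item = _buffer[head];
                _head = (head + 1) % _buffer.Length; // volatile write frees the slot
                return true;
            }
        }

    Because each index is written by only one thread, no CAS is ever needed; the volatile accesses act as the core-local memory barriers the footnote describes.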

    Read the article

  • Ubuntu Server 14.04 Apache 2.4 TLSv1.2/TLSv1.1 support

    - by Jan
    Recently I upgraded my server from Ubuntu Server 12.04 to Ubuntu Server 14.04 by means of a new installation from scratch. Only one problem remains: in 12.04, Apache 2.2 with mod_ssl supported TLS versions 1.0, 1.1 and 1.2. After upgrading to 14.04 and Apache 2.4, Apache only supports TLS version 1.0; support for 1.1 and 1.2 is missing. I followed both the migration guide for 2.2 -> 2.4 (no changes to the mod_ssl settings) as well as the documentation of mod_ssl regarding the SSLProtocol configuration option. Both SSLProtocol TLSv1.2 TLSv1.1 TLSv1 and SSLProtocol TLSv1.2 do not work. Is there any way to convince Apache to support the new TLS versions as well? Update: the problem seems to be solved now without any change on my side. Apparently libssl was initially compiled without TLSv1.1/TLSv1.2 support.

    Read the article

  • Microsoft Codename Houston

    - by kaleidoscope
    In one of the final talks about SQL Azure on Day 3 of PDC09, David Robinson, Senior PM on the Azure team, announced a project codenamed 'Houston', which is basically a Silverlight equivalent of SQL Server Management Studio. The concept stems from SQL Azure being in the cloud: if the only way to interact with it is by installing SSMS locally, then it does not feel like a consistent story. From the limited preview, it only contains the basics, but it clearly lets you create tables, stored procedures and views, edit them, and even add data to tables in a grid view reminiscent of Microsoft Access. The UI was based around the standard ribbon bar, an object window on the left and a working pane on the right. As of now this tool is still pre-alpha, and it seems like a basic tool that will facilitate rapid database development in the cloud. When asked about general availability, no dates were given, but calendar 2010 was indicated as the target. More information can be found at: http://sqlfascination.com/2009/11/20/pdc-09-day-3-sql-azure-and-codename-houston-announcement/

    Tinu, O

    Read the article

  • Microsoft MVP for the year 2011

    - by vik20000in
    It's been three years in a row now: I am an MVP for the year 2011 as well. It feels so great to get the news that I have been made an MVP again. Very big thanks to the MVP team, my MVP lead and Microsoft for giving me the MVP award 3 years in a row. Also a great thanks to all the friends, family, developers and community members that have worked with me. Without your support this would not have been possible. I will try and continue the work in 2011 also.

    Vikram

    Read the article

  • Persisting settings without using Options dialog in Visual Studio

    - by Utkarsh Shigihalli
    Originally posted on: http://geekswithblogs.net/onlyutkarsh/archive/2013/11/02/persisting-settings-without-using-options-dialog-in-visual-studio.aspx

    In one of my previous blog posts we have seen persisting settings using Visual Studio's Options dialog. Visual Studio options have many advantages in automatically persisting user options for you. However, during our latest Team Rooms extension development, we decided to let our users set preferences directly from Team Explorer. The main reason was that we had only one simple option for the user, and we thought it was cumbersome for the user to go to the Tools –> Options dialog to change it. Another reason was that we wanted to highlight this setting to the user as soon as he starts using our extension.

    So if you are in such a scenario, where you do not want to use the VS Options window but still would like to persist the settings, this post will guide you through it. The Visual Studio SDK provides two ways to persist settings in your extensions. One is using DialogPage, as shown in my previous post. Another way is by implementing the IProfileManager interface, which I will explain in this post. Please note that the class implementing IProfileManager should be an independent class. This is because VS instantiates this class during Tools –> Import and Export Settings. IProfileManager provides 2 different sets of methods (4 methods in total) to persist the settings:

    - LoadSettingsFromXml and SaveSettingsToXml – implement these methods to persist settings to disk from the VS settings storage. VS will persist your settings along with other options to disk.
    - LoadSettingsFromStorage and SaveSettingsToStorage – implement these methods to persist settings to local storage, usually the registry. VS also calls the LoadSettingsFromStorage method when it is initializing the package.

    We are going to use the 2nd set of methods for this example. First, we create a separate class file called UserOptions.cs. Please note that we also need to implement IComponent, which can be done by inheriting Component along with IProfileManager.

        [ComVisible(true)]
        [Guid("XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX")]
        public class UserOptions : Component, IProfileManager
        {
            private const string SUBKEY_NAME = "TForVS2013";
            private const string TRAY_NOTIFICATIONS_STRING = "TrayNotifications";
            ...
        }

    Define the property so that it can be used to set and get from other classes.

        public bool TrayNotifications { get; set; }

    Implement the members of IProfileManager.

        public void LoadSettingsFromStorage()
        {
            RegistryKey reg = null;
            try
            {
                using (reg = Package.UserRegistryRoot.OpenSubKey(SUBKEY_NAME))
                {
                    if (reg != null)
                    {
                        // Key already exists, so just read this setting.
                        TrayNotifications = Convert.ToBoolean(reg.GetValue(TRAY_NOTIFICATIONS_STRING, true));
                    }
                }
            }
            catch (TeamRoomException exception)
            {
                // Fall back to the default value on failure.
                TrayNotifications = true;
                ExceptionReporting.Report(exception);
            }
            finally
            {
                if (reg != null)
                {
                    reg.Close();
                }
            }
        }

        public void LoadSettingsFromXml(IVsSettingsReader reader)
        {
            reader.ReadSettingBoolean(TRAY_NOTIFICATIONS_STRING, out _isTrayNotificationsEnabled);
            TrayNotifications = (_isTrayNotificationsEnabled == 1);
        }

        public void ResetSettings()
        {
        }

        public void SaveSettingsToStorage()
        {
            RegistryKey reg = null;
            try
            {
                using (reg = Package.UserRegistryRoot.OpenSubKey(SUBKEY_NAME, true))
                {
                    if (reg != null)
                    {
                        // Key already exists, so just update this setting.
                        reg.SetValue(TRAY_NOTIFICATIONS_STRING, TrayNotifications);
                    }
                    else
                    {
                        reg = Package.UserRegistryRoot.CreateSubKey(SUBKEY_NAME);
                        reg.SetValue(TRAY_NOTIFICATIONS_STRING, TrayNotifications);
                    }
                }
            }
            catch (TeamRoomException exception)
            {
                ExceptionReporting.Report(exception);
            }
            finally
            {
                if (reg != null)
                {
                    reg.Close();
                }
            }
        }

        public void SaveSettingsToXml(IVsSettingsWriter writer)
        {
            writer.WriteSettingBoolean(TRAY_NOTIFICATIONS_STRING, TrayNotifications ? 1 : 0);
        }

    Let me elaborate on the method implementations. The Package class provides the UserRegistryRoot property (which is HKCU\Software\Microsoft\VisualStudio\12.0 for VS2013), which can be used to create and read registry keys. So basically, in the methods above, I check whether the registry key exists already, and if not, I simply create it. Also, in case there is an exception, I return the default values. If the key already exists, I update the value. Note too that you need to make sure you close the key before exiting the method. Very simple, right?

    Accessing and saving settings is simple too. We just need to use the exposed property.

        UserOptions.TrayNotifications = true;
        UserOptions.SaveSettingsToStorage();

    Reading settings is as simple as reading a property.

        UserOptions.LoadSettingsFromStorage();
        var trayNotifications = UserOptions.TrayNotifications;

    Lastly, the most important step: we need to tell the Visual Studio shell that our package exposes options using the UserOptions class. For this we need to decorate our package class with the ProvideProfile attribute as below.

        [ProvideProfile(typeof(UserOptions), "TForVS2013", "TeamRooms", 110, 110, false, DescriptionResourceID = 401)]
        public sealed class TeamRooms : Microsoft.VisualStudio.Shell.Package
        {
            ...
        }

    That's it. If everything is alright, once you run the package you will also see your options appearing in the "Import and Export Settings" window, which allows you to export your options.

    Read the article

  • SQL Server 2012 RTM Cumulative Update #10 is available!

    - by AaronBertrand
    The SQL Server team has released CU #10 for SQL Server 2012 RTM.

    KB article: http://support.microsoft.com/kb/2891666
    Build # is 11.0.2420
    This build has 4 fixes

    For most customers, this cumulative update hardly justifies the download, never mind the patching and regression testing, at least IMHO. Of the four fixes, two involve SSAS, one involves SSRS, and one involves the Database Engine Tuning Advisor. Relevant for builds 11.0.2100 -> 11.0.2419. Do not attempt to install on SQL Server 2012 SP1 (any...(read more)

    Read the article

  • Relationship between TDD and Software Architecture/Design

    - by Christopher Francisco
    I'm new to TDD and have been reading the theory, since applying it is more complicated than it sounds when you're learning by yourself. As far as I know, the objective is to write a test case for each requirement and run the test so it fails (to prevent a false positive). Afterward, you should write the minimum amount of code that can pass the test, and then move on to the next one. That being said, is it true that you get fast development but the code itself suffers? This theory makes me think you are not considering things like abstraction, delegation of responsibilities, design patterns, architecture and so on, since you're just writing "the minimum amount of code that can pass the test". I know I'm probably wrong, because if this were true we'd have a lot of crappy developers with poor software architecture and documentation, so I'm asking for a guide here: what's the relationship between TDD and software architecture/design?
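
    As a minimal C# sketch of the cycle the question describes (not from the original post): first a failing test, then just enough code to make it pass. NUnit is assumed as the test framework, and the calculator and its discount rule are hypothetical.

        using NUnit.Framework;

        // Step 1 (red): a failing test written for the next requirement.
        [TestFixture]
        public class PriceCalculatorTests
        {
            [Test]
            public void Total_AppliesTenPercentDiscount_AtOrAboveOneHundred()
            {
                var calc = new PriceCalculator();
                Assert.AreEqual(90m, calc.Total(100m));
            }
        }

        // Step 2 (green): the minimum code that passes the test.
        public class PriceCalculator
        {
            public decimal Total(decimal amount)
            {
                return amount >= 100m ? amount * 0.9m : amount;
            }
        }

    Note that the question describes only the first two steps; the usual formulation of TDD adds a third, refactoring, where design concerns enter with the tests acting as a safety net.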

    Read the article

  • Wine Security - Improvement by second user account?

    - by F. K.
    Team, I'm considering installing Wine, but I'm still hesitant for security reasons. As far as I have found out, malicious code could reach ~/.wine and all my personal data with my user privileges, but not farther than that. So, would it be any safer to create a second user account on my machine and install Wine there? That way, the second user would only have read rights to my files. Is there a way to install Wine totally confined to that user, so that I can't execute .exe files from my original account? Thanks in advance! PS - I'm running Ubuntu 11.10 64-bit, if that matters.

    Read the article

  • Jupiter in Ubuntu 13.10 (Laptop Overheating)

    - by Daniel Pacheco
    I was wondering if Jupiter (an interface for display, power and device control) will work in Ubuntu 13.10, because my laptop (Toshiba Satellite C855D, AMD A6-4400M with Radeon HD Graphics, running Ubuntu 13.04 x64) keeps overheating. I tried some other tools, like laptop-mode-tools and TLP; none of those work, not at all. Jupiter was the only option, and it's supposedly discontinued. The version I'm using is maintained by the JoliCloud team, but they told me they're not sure if it will work with 13.10... If it doesn't work, I'm definitely not upgrading, since overheating is a major issue for me... Thanks in advance!

    Read the article

  • Collision library for bullet hell in Python

    - by darkfeline
    I am making a bullet hell game in Python and am looking for a suitable collision library, taking the following into consideration:

    - The library should do 2D polygon collision.
    - It should be very fast. As a bullet hell game, I expect to do collision checks between hundreds, likely thousands of objects every frame at a consistent 60fps.
    - Good documentation.
    - Permissive license (like MIT, not GPL).

    I am also considering writing my own library in C/C++ and wrapping it with Python ctypes in the event that no such library exists, though I do not have experience with collision detection algorithms, so I am not sure if this would be more trouble than it's worth. Could someone provide some guidance on this matter?

    Read the article

  • VirtualBox

    - by DesigningCode
    I was wanting to play around with something in a VM the other day. I was curious what was available for free, if anything, for Windows. I quickly came across VirtualBox (http://www.virtualbox.org/). Downloaded, installed. No problem! It works really nicely. It was commercial software (by Sun, now Oracle) that turned open source. In terms of a license, it says:

    In summary, the VirtualBox PUEL allows you to use VirtualBox free of charge for personal use or, alternatively, for product evaluation.

    An interesting feature it has is built-in RDP, which is useful if you have a guest OS that doesn't support RDP. Speaking of RDP….. which I will cover in my next blog post… I learnt something REALLY useful the other day.

    Read the article
