Search Results

Search found 14719 results on 589 pages for 'optimization level'.


  • MVVM Light V4 preview (BL0014) release notes

    - by Laurent Bugnion
    I just pushed to Codeplex an update to the MVVM Light source code. This is an early preview containing some of the features that I want to release later under version 4. If you find these features useful for your project, please download the source code and build the assemblies. I will greatly appreciate any issue report. This version is labeled "V4.0.0.0/BL0014". The "BL" string is an old habit that we used in my days at Siemens Building Technologies, called a "base level". Somehow I like this way of incrementing the "base level" independently of any other consideration (such as alpha, beta, CTP, RTM, etc.) and continue to use it to tag my software versions. In Microsoft parlance, you could say that this is an early CTP of MVVM Light V4.
    Caveat: The code is unit tested, but as we all know this does not mean that there are no bugs. This code has not yet been used in production. Again, your help in testing this is greatly appreciated, so please report all bugs to me!
    What's new? The following features have been implemented:
    Misc: Various "maintenance work". All WPF assemblies (that is, .NET35 and .NET4) now allow partially trusted callers. It means that you can use them in an XBAP in partial trust mode.
    Testing: Various test updates. Added Windows Phone 7 unit tests. Note: For Windows Phone 7, due to an issue in the unit test framework, not all tests can be executed. I had to isolate those tests for the moment. The error was reported to Microsoft.
    ViewModelBase: The constructor is now public to allow serialization (especially useful on the phone to tombstone the state). ViewModelBase.MessengerInstance now returns Messenger.Default unless it is set explicitly. Previously, MessengerInstance was returning null, which was complicating the code. Two new ways to raise the PropertyChanged event have been added. See below for details.
    Messenger: Updated the IMessenger interface with all public members from the Messenger class. Previously some members were missing. A new Unregister method is now available, allowing you to unregister a recipient for a given token.
    RelayCommand: RaiseCanExecuteChanged now acts the same in Windows Presentation Foundation as in Silverlight. In previous versions, I was relying on the CommandManager to raise the CanExecuteChanged event in WPF. However, it was found to be too unreliable, and a more direct way of raising the event was found preferable. See below for details.
    Raising the PropertyChanged event: A much requested update is now included: the ability to raise the PropertyChanged event in a viewmodel without using "magic strings". Personally, I don't see strings as a major issue, thanks to two features of the MVVM Light Toolkit: In the DEBUG configuration, every time the RaisePropertyChanged method is called, the name of the property is checked against all existing properties of the viewmodel. Should the property name be misspelled (because of a typo or refactoring), an exception is thrown, notifying the developer that something is wrong. To avoid impacting performance, this check is only made in the DEBUG configuration, but that should be enough to warn developers in case they miss a rename. The property name is defined as a public constant in the "mvvminpc" code snippet. This allows checking the property name from another class (for example if the PropertyChanged event is handled in the view). It also allows changing the property name in one place only.
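    As an illustration of the first safeguard, here is a minimal sketch of what a DEBUG-only property-name check might look like. The class, method name and exception choice are assumptions for illustration, not the toolkit's actual source.
```csharp
// Illustrative sketch only: a DEBUG-only guard that verifies a property name
// really exists on the current viewmodel before PropertyChanged is raised.
// Names and exception type are assumptions, not MVVM Light's actual code.
using System;
using System.ComponentModel;
using System.Diagnostics;

public abstract class ViewModelBaseSketch : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    [Conditional("DEBUG")]
    public void VerifyPropertyName(string propertyName)
    {
        // Reflection lookup: throws if the name does not match a public property,
        // which catches typos and missed renames at development time.
        if (GetType().GetProperty(propertyName) == null)
        {
            throw new ArgumentException("Property not found on this viewmodel: " + propertyName);
        }
    }

    protected virtual void RaisePropertyChanged(string propertyName)
    {
        VerifyPropertyName(propertyName); // compiled out of RELEASE builds
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}
```
    Because the check is marked [Conditional("DEBUG")], release builds skip it entirely, which is why the safeguard has no production performance cost.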
    However, these two safeguards didn't satisfy some of the users, who requested another way to raise the PropertyChanged event. In V4, you can now do the following:
    Using lambdas
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged(() => MyProperty);
            }
        }
    This raises the PropertyChanged event using a lambda expression instead of the property name. Light reflection is used to get the name. This supports IntelliSense and can easily be refactored. You can also broadcast a PropertyChangedMessage using the Messenger.Default instance with:
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                var oldValue = _myProperty;
                _myProperty = value;
                RaisePropertyChanged(() => MyProperty, oldValue, value, true);
            }
        }
    Using no arguments
    When the RaisePropertyChanged method is called within a setter, you can also omit the property name altogether. This will fail if executed outside of the setter, however. Also, to avoid confusion, there is no way to broadcast the PropertyChangedMessage using this syntax.
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged();
            }
        }
    The old way
    Of course the "old" way is still supported, without broadcast:
        public const string MyPropertyName = "MyProperty";
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged(MyPropertyName);
            }
        }
    And with broadcast:
        public const string MyPropertyName = "MyProperty";
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                var oldValue = _myProperty;
                _myProperty = value;
                RaisePropertyChanged(MyPropertyName, oldValue, value, true);
            }
        }
    Performance considerations
    It is well known that using reflection takes more time than using a string constant to get the property name. However, after measuring on all platforms, I found the differences to be very small. I will measure more and submit the results to the community for evaluation, because some of the results are actually surprising (for example, using the Messenger to broadcast a PropertyChangedMessage does not significantly increase the time taken to raise the PropertyChanged event and update the bindings). For now, I submit this code to you, and would be delighted to hear about your own results.
    Raising the CanExecuteChanged event manually
    In WPF, until now, the CanExecuteChanged event for a RelayCommand was raised automatically. Or rather, an attempt was made to raise it, using a feature that is only available in WPF called the CommandManager. This class monitors the UI and, when something occurs, queries the state of the CanExecute delegate for all the commands. However, this proved unreliable for the purposes of MVVM: since very often the value of the CanExecute delegate changes according to non-UI events (for example, something changing in the viewmodel or in the model), raising the CanExecuteChanged event manually is necessary. In Silverlight, the CommandManager does not exist, so we had to raise the event manually from the start. This proved more reliable, and I have now changed the WPF implementation of the RaiseCanExecuteChanged method to be exactly the same in WPF as in Silverlight.
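    To make that mechanism concrete, here is a minimal sketch of a command whose RaiseCanExecuteChanged invokes the event directly rather than going through the CommandManager. This is an illustration under assumptions, not MVVM Light's actual RelayCommand source.
```csharp
// Minimal sketch only: a command that raises CanExecuteChanged directly,
// independent of WPF's CommandManager. Not the actual MVVM Light implementation.
using System;
using System.Windows.Input;

public class SimpleRelayCommand : ICommand
{
    private readonly Action _execute;
    private readonly Func<bool> _canExecute;

    public SimpleRelayCommand(Action execute, Func<bool> canExecute = null)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute();
    }

    public void Execute(object parameter)
    {
        _execute();
    }

    // Called by the viewmodel whenever something that affects CanExecute changes,
    // whether or not the UI triggered that change.
    public void RaiseCanExecuteChanged()
    {
        var handler = CanExecuteChanged;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }
}
```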
    For instance, if a command must be enabled when a string property is set to a value other than null or an empty string, you can do:
        public MainViewModel()
        {
            MyTestCommand = new RelayCommand(
                () => DoSomething(),
                () => !string.IsNullOrEmpty(MyProperty));
        }

        public const string MyPropertyName = "MyProperty";
        private string _myProperty = string.Empty;
        public string MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged(MyPropertyName);
                MyTestCommand.RaiseCanExecuteChanged();
            }
        }
    Logo update
    I made a minor change to the logo: some people found the lack of the word "light" (as in MVVM Light Toolkit) confusing. I thought it was cool, because the feather suggests the idea of lightness; however, I can see the point. So I added the word "light" to the logo. Things should be quite clear now.
    What's next?
    This is only the first of a series of releases that will bring MVVM Light to V4. In the coming weeks, I will continue to add some much-requested features and correct some issues in the code. I will probably continue this practice of releasing the changes to the public as source code through Codeplex. I would be very interested to hear what you think of that, and to get feedback about the changes. Cheers, Laurent Bugnion (GalaSoft)

    Read the article

  • Quitting a small start-up where you are a primary developer?

    - by programmx10
    Just curious to hear from other people who may have been in similar situations. I work for a small startup (very small) where I am the main developer for a major part of the app they are building; the other dev they have does a different area of work than I do, so they couldn't take over my part. I've been with the company 5 months or so, but I am looking at going to a more stable company soon, because it's just getting to be too much stress, overtime and pressure for too little benefit, and I miss working with other developers who can help out on a project. The guy is happy with my work and I think I've helped them get pretty far, but I've realized I just don't like being this much "on the edge", as it's hard to tell what the direction of the company is going to be since it's so new. Also, even though I'm the main dev for the project, I would still only consider myself a mid-level dev and am selling myself as such for the new job search. Just to add more detail, I'm not a partner or anything in the company and this was never discussed, so I just work on a W2 (with no benefits, of course). I work at home, so that makes it easier to leave, I guess, but I don't want to just screw the guy over; I also don't want to be tied in for too long. Obviously I would plan to give 2 weeks notice at least, but should I give more? How should I bring up the subject? I know it's going to be a touchy thing to bring up. Any advice is appreciated.
    UPDATE: Thanks everyone for posting on this. I have now completed the process of accepting an offer with a larger company and quitting the startup. I have given 2 weeks notice and have offered to make myself available after that if needed. Basically it's a really small company at this point, so it would only be 1 dev that I would have to deal with... anyway, it looks like it may work out well as far as me maintaining a good relationship with the founder for future work together. I made it out to be more of a personal / lifestyle issue than about their flaws / shortcomings, which definitely seems to help in leaving on a good note.

    Read the article

  • What is the recommended MongoDB schema for this quiz-engine scenario?

    - by hughesdan
    I'm working on a quiz engine for learning a foreign language. The engine shows users four images simultaneously and then plays an audio file. The user has to match the audio to the correct image. Below is my MongoDB document structure. Each document consists of an image file reference and an array of references to audio files that match that image. To generate a quiz instance I select four documents at random, show the images and then play one audio file from the four documents at random. The next step in my application development is to decide on the best document schema for storing user guesses. There are several requirements to consider:
    I need to be able to report statistics at a user level (for example, total correct answers, total guesses, mean accuracy, etc.).
    I need to be able to query images based on the user's learning progress. For example, select 4 documents where guess count is 10 and accuracy is <= 0.50.
    The schema needs to be optimized for fast quiz generation.
    The schema must not cause future scaling issues vis-a-vis document size. Assume 1 million users who make an average of 1,000 guesses each.
    Given all of this as background information, what would be the recommended schema? For example, would you store each guess in the Image document, or perhaps in a User document (not shown), or in a new document collection created for logging guesses? Would you recommend logging the raw guess data, or would you pre-compute statistics by incrementing counters within the relevant document?
    Schema for Image Collection:
        _id: "505bcc7a45c978be24000005"
        date: 2012-09-21 02:10:02 UTC
        imageFileName: "BD3E134A-C7B3-4405-9004-ED573DF477FE-29879-0000395CF1091601"
        random: 0.26997075392864645
        user: "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
        audioFiles: [
            {
                audioFileName: "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2"
                user: "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
                audioLanguage: "English"
                date: 2012-09-22 01:15:04 UTC
            },
            {
                audioFileName: "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2"
                user: "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
                audioLanguage: "Spanish"
                date: 2012-09-22 01:17:04 UTC
            }
        ]
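    As a sketch of one possible shape for the guess data (purely illustrative, with hypothetical names, not a recommendation from the original question): a separate collection of raw guess events plus a pre-computed per-user, per-image counter document, expressed here as C# POCOs as they might be mapped by a MongoDB driver.
```csharp
// Illustrative sketch of one candidate guess schema; all class and field names
// are assumptions introduced for this example.
using System;

// Raw event log: one small document per guess, kept in its own collection
// so the Image documents never grow without bound.
public class GuessEvent
{
    public string Id { get; set; }            // ObjectId as a string
    public string UserId { get; set; }
    public string ImageId { get; set; }       // the image that was shown
    public string AudioFileName { get; set; } // the audio that was played
    public bool Correct { get; set; }
    public DateTime Date { get; set; }
}

// Pre-computed counters per (user, image), incremented on every guess.
// "Guess count == 10 and accuracy <= 0.50" then becomes a simple indexed
// query instead of an aggregation over the raw events.
public class UserImageStats
{
    public string Id { get; set; }            // e.g. userId + ":" + imageId
    public string UserId { get; set; }
    public string ImageId { get; set; }
    public int GuessCount { get; set; }
    public int CorrectCount { get; set; }
    public double Accuracy { get; set; }      // CorrectCount / GuessCount, denormalized for querying
}
```
    The trade-off sketched here is the usual one: the raw event log preserves everything for later reporting, while the denormalized counters keep quiz generation fast and bound the size of any single document.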

    Read the article

  • Directx vs XNA - Which is better for me? [closed]

    - by tristo
    Recently I upgraded from Visual Studio 2010 to Visual Studio 2012, although I did not expect Visual Studio 2012 to be designed the way it was. Anyway, I am pleased with some of the VS 2012 technology and have moved all of my projects to it. Since I got VS 2012 I have been making Windows applications and doing other non-game activities, although I have recently gotten into the spirit of game development and am planning to make a 3D comical game: shader effects, not too complicated meshes, but it requires a lot of lighting effects to emphasise certain parts of the game. When I was using VS 2010 I had a great time making 2D games with XNA; it uses a great language and has a very awesome system. But I no longer have XNA with me, and the workarounds described on Stack Overflow always give me errors while using XNA. Anyway, it seems that Microsoft has stuffed themselves up with XNA, with the weirdness of Windows 8 and XNA being only available on PC and Xbox. For these reasons I have decided to work with DirectX and Direct3D to produce my new game, although the overflowing credits after each DirectX game give me the shivers, and the low-level coding of DirectX also puts me on thin ice with my games, leaving me in a confused mess about what decision I should make. I don't know anything about DirectX or Direct3D. I am an indie developer, but I am planning to take on a lot of the professional aspects of games. I don't have heaps of time (2-3 hours a day). I don't mind the complexity of how DirectX works, as long as I can learn how to make the fundamentals of a game in a week. I am also unsure whether DirectX is really right for my situation, or whether I should keep with XNA game development. If anyone can tell me the best technology for me, that would be great.

    Read the article

  • Using texture() in combination with JBox2D

    - by Valentino Ru
    I'm having some trouble using the texture() method inside a beginShape()/endShape() block. In the display() method of my class TowerElement (a bar which is DYNAMIC), I draw the object like the following:
        void display() {
            Vec2 pos = level.getLevel().getBodyPixelCoord(body);
            float a = body.getAngle(); // needed for rotation
            pushMatrix();
            translate(pos.x, pos.y);
            rotate(-a);
            fill(temp); // temp is a color defined in the constructor
            stroke(0);
            beginShape();
            vertex(-w/2, -h/2);
            vertex(w/2, -h/2);
            vertex(w/2, h-h/2);
            vertex(-w/2, h-h/2);
            endShape(CLOSE);
            popMatrix();
        }
    Now, according to the API, I can use the texture() method inside the shape definition. But when I remove the fill(temp) and put texture(img) instead (img is a PImage defined in the constructor), the stroke gets drawn, but the bar isn't filled and I get the warning:
        texture() is not available with this renderer
    What can I do in order to use textures anyway? I don't even understand the error message, since I do not know much about different renderers.

    Read the article

  • Learning curve for web development

    - by refro
    At the moment our team has a huge challenge: we're being asked to deliver a new GUI for an embedded controller. The deadline is very tight and is set for April 2013. Our team is very diverse: some people are at the level of functional programming (mostly C), while others (including myself) have mastered object-oriented programming (C++, C#). We built a prototype for Android; although it has its quirks, it is mostly just OO. For the future there is a wish to support multiple platforms (Windows, Android, iOS). In my opinion an HTML5 app with a native app shell is the way to go. When gathering more information on the frameworks to use etc., it became obvious to me that a paradigm shift is needed. None of us has a web background, so we need to learn from the ground up. The shift from functional to OO took us about 6 months to become productive (and some of the early subsystems were rewritten because they were a total mess). Can we expect the learning curve to be similar? Can this be pulled off with a web app? (My feeling says it will already be hard to pull off as a native app, which is at the edge of our comfort zone.)

    Read the article

  • Is There a Cloud Over OpenWorld?

    - by Tony Berk
    If you have been to OpenWorld in the past, you know it can be overwhelming, or at least a bit "large." If this is your first time at OpenWorld, get ready! You are in for a big (or should I say HUGE) treat. The first thing you'll notice when you get to San Francisco is that there are a lot of people, buses with "Oracle" posters, large exhibit halls filled with demos, games and tchotchkes from vendors with hot new solutions, and then there are the sessions. Yes, in fact there are over 2000 sessions. How can you possibly sort through 2000 sessions to find the best 20 or so for you? Which are the 1% for you? We will try to help with some insight over the next few weeks. I'm going to start at the highest level: up in the clouds! I know many people are looking for an update on the Oracle Cloud. We will drill down into the cloud and other topics for CRM and Customer Experience sessions in the next set of posts. Below is a list of some of the Oracle executive keynotes during OpenWorld highlighting the Oracle Cloud and applications-related topics (the full list is here). In these sessions you will get details on Oracle's strategy and how Oracle is changing the industry to help our customers be more efficient, effective and innovative.
    Sunday, September 30, 6:00pm - 7:00pm: Larry Ellison, Hardware and Software, Engineered to Work Together: Why It's a Different Approach
    Tuesday, October 2, 8:45am - 9:45am: Thomas Kurian, The Oracle Cloud: Oracle's Cloud Platform and Applications Strategy
    Tuesday, October 2, 3:30pm - 4:30pm: Larry Ellison, The Oracle Cloud: Where Social Is Built In
    Thursday, October 4, 9:45am - 10:45am: Mark Hurd, See More, Act Faster: Oracle Business Analytics
    We encourage you to also join the keynotes on the Oracle Database and Cloud Infrastructure, and the fascinating partner keynotes as well. Check the full list on the OpenWorld site. Oh, and if you haven't registered yet, what are you waiting for? OpenWorld Registration Details.

    Read the article

  • Getting into the details of game engine programming

    - by Darkslash
    I am interested in learning game programming, but I really have an interest in the lower-level engineering in games. I have OpenGL experience, and I am really interested in learning more about implementing AI, physics, etc. I have a computer science degree, so I really like getting into technical stuff. Many times when I ask about this sort of thing, I get a lot of "Use an engine", "Use Unity3d", "Why waste your time writing code that already exists", etc. My idea was to use simpler libraries such as SFML or XNA so that I could learn how to implement the more complex systems. The thing is, although I do want to write games, I want to learn things that using something like Unity simply doesn't teach you. My goal is not to make a current-generation-quality 3D game to sell; I just want to make some cool smaller games and learn all I can about the programming side of game development. Is this something that people just do not do anymore? It seems like everywhere I turn people are using Unity or UDK or GameMaker. I fully understand why you would use tools like these, but I can't see how they would suit my purposes. So where does someone like myself turn? Am I trying to learn something that people just do not bother doing anymore? Is the innovation in this area gone, and is it just all about gameplay now? I'm sorry if this question seems silly, but I am genuinely interested in knowing more about this and meeting more people who are interested in this sort of thing.

    Read the article

  • Speaking at Sinergija12

    - by DigiMortal
    Next week I will be a speaker at Sinergija12, the biggest Microsoft conference held in Serbia. The first time I visited Sinergija it was clear to me that this is an event I should come back to. Why? Because the technical level of the sessions was very well in place, and the sessions I visited were actually pretty hardcore. Now, two years later, I will be back there, but this time as a speaker.
    My sessions at Sinergija12
    Here are my three almost-finished sessions for Sinergija12.
    ASP.NET MVC 4 Overview: This session focuses on the new features of ASP.NET MVC 4 and gives the audience a good overview of what is coming. Demos cover all important new features - agent-based output, new application templates, Web API and Single Page Applications. This session is for everybody who plans to move to ASP.NET MVC 4 or who plans to start building modern web sites.
    Building SharePoint Online applications using Napa Office 365: The next version of Office 365 allows you to build SharePoint applications using a browser-based IDE hosted in the cloud. This session introduces the new tools and shows through practical examples how to build online applications for SharePoint 2013.
    Cloud-enabling ASP.NET MVC applications: The cloud era is here, and over the next years more and more web applications will be hosted in cloud environments. Also, some of our current web applications will be moved to the cloud. This session shows the audience how to change the architecture of an ASP.NET web application so it runs on shared hosting and Windows Azure with the same code base. The audience will also see how to debug and deploy web applications to Windows Azure.
    All developers who are coming to Sinergija12 are welcome to my sessions. See you there! :)

    Read the article

  • Resume on 30 Days of SharePoint

    Dear readers, as you might have noticed... it was an organisational disaster on my end! Even though I continued my studies and research on Microsoft SharePoint 2013 during the last 30 days, I wasn't able to write an article a day to keep you posted on my progress. Nonetheless, I gathered a good number of additional blogs, mainly SharePoint MVP sites, and online forums which will be helpful in the next couple of weeks while I actually develop a C#-based client that will enable an existing 'legacy' application to use SharePoint as a document management system (DMS) besides other already existing solutions.
    Finding excuses
    Well, no. Not really. I simply didn't block any, or enough, time every day to write down my progress during my own challenge. My log book on learning about SharePoint stands at 41 hours and 15 minutes during this month, which means that I spent an average of more than 1 hour per day getting into SharePoint. I know that might sound a little bit low, but also keep in mind that I went for the challenge on top of my daily job and private responsibilities. During the same period there were two priority 0 incidents from clients - external root cause - which took precedence over this leisure project.
    More to come
    Anyway, it was a first trial, and despite the low level of reporting on my blog, I'm confident about what I learned during the last 30 days, and I'm ready to implement the client's requirements. At least, I would say that I have a better understanding of the road map, or the path to walk, during the next month. As time and secrecy allow, I'm going to note down some bits and pieces... During the process of development, I'm going to 'cheat' on the challenge summary article and add links to those new entries, just for the sake of completeness.
    Next challenge?
    Hmm, there have been ideas during the last meetup of the Mauritius Software Craftsmanship Community (MSCC) regarding certifications in IT, and eventually we might organise some kind of study group for specific exams, most probably Microsoft exams towards MCSD Web Developer or Windows Developer.

    Read the article

  • Should I make my project free software?

    - by SkyDan
    The story
    Over the last couple of months I have been working on a pretty big project. It's enterprise-level software that I designed to be used at a local gym, but I believe it can be used in other places where things like keeping track of clients, attendances, purchases and payments are required.
    The problem
    Well, recently I started to think about how to mature this project beyond being home-made. Not just because I want my project to grow, but also because I would like to gain something from it.
    The solutions?
    And here I saw 2 paths:
    License the software under some restricted license and try to sell the software to other businesses around. This way I can get some money for college (I am a high school junior right now).
    License the software under some free license, publish it on GitHub or something, and try to engage other developers to participate in the project. This way I get experience of working in a team and a better chance that the project will keep growing. The latter would be a good + for my resume when I'm trying to find a job.
    So far both ways seem pretty exciting and beneficial to me. The first one offers a good college career, while the second one offers some additional experience and the project's growth.
    The questions
    Can anyone point to some other +/- of these 2 options? What would be the better option in my situation, and why? Or are there other options?

    Read the article

  • How can I install from a 9.04 live USB/DVD?

    - by bstpierre
    I have a 9.04 (Jaunty) ISO burned to a USB stick; it appears to be a "live DVD". When I boot from it, I get a GRUB menu listing:
        Ubuntu, with Linux 2.6.35-generic (This matches the system currently installed on the HDD?)
        Ubuntu, with Linux 2.6.35-generic (recovery mode)
        Memory test
        Ubuntu 9.04, kernel 2.6.28-11-generic (on /dev/sda1)
        Ubuntu 9.04, kernel 2.6.28-11-generic (recovery mode) (on /dev/sda1)
        Ubuntu 9.04, memtest86+ (on /dev/sda1)
    When I select Ubuntu 9.04, kernel 2.6.28-11-generic (on /dev/sda1), I arrive at the desktop of a 9.04 system. I want to wipe the HDD clean and install 9.04. (Upgrading to something newer is not an option; this version is required by a legacy application.) How can I install from this live USB image? I vaguely remember some incantation that I should be able to use in the booted system, but my google-fu is broken at the moment. I'm comfortable with low-level commands, so if you want to recommend a more hard-core strategy, I'm willing to roll with it without requiring a ton of detail...

    Read the article

  • Should I encrypt data in database?

    - by Tio
    I have a client for whom I'm going to build a Web application about patient care: managing patients, consults, history, calendars, basically everything about that. The problem is that this is sensitive data: patient history and such. The client insists on encrypting the data at the database level, but I think this is going to deteriorate the performance of the web app (but maybe I shouldn't be worried about this). I've read the laws about data protection on health issues (Portugal), but they aren't very specific about this (I just questioned them about it; I'm waiting for their response). I've read the following link, but my question is different: should I encrypt the data in the database or not? One problem that I foresee in encrypting data is that I'm going to need a key. This could be the user password, but we all know what user passwords are like (12345, etc.), and if I generate a key I would have to store it somewhere, which means that the programmer, the DBA, whoever, could have access to it. Any thoughts on this? Even adding a random salt to the user password isn't going to solve the problem, since I can always access it and therefore decrypt the data.
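    To illustrate the key-management concern being described (a minimal sketch under assumptions, not a recommendation): deriving an encryption key from a user password with PBKDF2 and using it for AES encryption. Whoever can reach the password or the stored key material can also decrypt the data, which is exactly the worry above.
```csharp
// Minimal sketch only: password-derived key (PBKDF2) feeding AES encryption.
// Class and method names are hypothetical; this is not a security recommendation.
using System;
using System.IO;
using System.Security.Cryptography;

public static class FieldEncryptionSketch
{
    public static byte[] Encrypt(string password, byte[] salt, byte[] plaintext)
    {
        // Derive a 256-bit key from the password and salt (10,000 iterations).
        using (var kdf = new Rfc2898DeriveBytes(password, salt, 10000))
        using (var aes = Aes.Create())
        {
            aes.Key = kdf.GetBytes(32);
            aes.GenerateIV();

            using (var ms = new MemoryStream())
            {
                // Prepend the IV so the ciphertext can be decrypted later.
                ms.Write(aes.IV, 0, aes.IV.Length);
                using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                {
                    cs.Write(plaintext, 0, plaintext.Length);
                    cs.FlushFinalBlock();
                }
                return ms.ToArray();
            }
        }
    }
}
```
    The sketch makes the trade-off visible: the salt and iteration count slow down guessing of weak passwords, but anyone who can obtain the password (or a server-held key) can still run the same derivation and decrypt.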

    Read the article

  • The Increasing Focus on Architecture

    - by Bob Rhubart
    If you follow my updates on Twitter or on the OTN ArchBeat page on Facebook you have probably noticed that I'm a regular reader of Joe McKendrick's SOA blog on ZDNet. Usually I'm content to simply share a link on my social networks when I find one of McKendrick's posts interesting. But with a recent post, In the cloud era, let's start calling IT what it is: 'Innovation Team', McKendrick hit on a point that warrants more than a quick link: "IT is no longer just a department full of people who code, build and maintain systems. IT is the business partner that plans and strategizes what types of technology solutions the business needs to move forward." Of course, what McKendrick is describing is an increased focus on architecture. Assuming that McKendrick's assessment is correct — and I do — that expanding focus, from coding, building, and maintaining systems to planning and strategizing technology solutions that serve the business, isn't limited to the organizational level. The individual roles within the IT organization will also have to shift to a more broadly architectural mindset. McKendrick's post references Dr. Irving Wladawsky-Berger's assessment of cloud computing as a critical "third model" of computing to emerge in the 50-year history of Information Technology. As computing itself evolves, the underlying roles that make computing possible must evolve accordingly. That evolution will be defined by an increased focus on architecture.

    Read the article

  • How to debug a fatal system crash - [graphical loop DOTA2]?

    - by Huw
    Whilst playing DOTA2, I occasionally and apparently randomly seem to experience a fatal crash where the display freezes and the audio loops over approximately the last 0.5 seconds. Now, I'm interested in resolving this - but my trouble is I don't know where to start. The error appears non-reproducible (I've tried returning to games and deploying the same combination of events in hopes of pinning it to a certain shader etc.), and I don't know which part of 'the stack' it might be coming from. Variables that occur to me:
    I custom-built this system; did I do something wrong - is my PSU not providing enough power to the graphics card?
    I am running Steam and DOTA under Linux; could this new software have a bug?
    Might it be something to do with my ATI Catalyst graphics drivers?
    Is some other background process interfering?
    I'm usually mid-game when this occurs, so I quickly kill the power and reboot (when I'm lucky I can get back in with only 1-2 minutes lost!). So my question here relates to logs. Where should I start to look, or how might I set up logs to help me pin down a fatal crash of this kind by recording the moments up to / before a crash? And is this likely to be something I should push Steam to do, or is there something at a system level? Then perhaps I can return with a more specific question and perhaps even a bug report :) Many thanks in advance.

    Read the article

  • Blog Rebranding

    I have been spending more and more time learning as much as I can about Agile Development, and I have also been fairly immersed in rolling out TFS 2010 in our environment. I feel like it is time to talk about some of my experiences. With that, I am rebranding my blog to focus on these topics. I am going to start with a series of blog posts on the process I have gone through getting TFS 2010 configured for our development teams. Last week, Brian Harry was in our office and gave a great talk on the improved tools in TFS 2010 and how Microsoft uses the tools internally. I followed that up with a high-level overview of the improved out-of-the-box process templates and the process to customize them. I am definitely very excited about the new features in 2010 and hopefully will keep up my motivation to blog about it. I am writing my first post right now about the process I went through to build a task progress report based on the user story progress report in the MSF for Agile Development template. Stay tuned! Did you know that DotNetSlackers also publishes .NET articles written by top known .NET authors? We already have over 80 articles in several categories, including Silverlight. Take a look: here.

    Read the article

  • Bring on the Cheer, Oracle’s Q3 is Here

    - by Kristin Rose
    November is long gone and December is near… this must mean OPN’s Q2 Winter Wrap-Up is here! Listed below are just a few of the highlights from Oracle’s past three months… Yet another successful Oracle OpenWorld 2012 and the launch of our first ever Oracle PartnerNetwork Exchange program! Get the recap. Our exciting Java Embedded @ JavaOne event. Get the low-down here! The debut of our new Oracle Cloud programs for partners, which have already created some awesome buzz in the Channel. Check out the CRN article, and don’t forget to watch the Cloud Programs Overview video and visit our OPN Cloud Knowledge Zone! On the product front, Oracle’s Sun ZFS Storage Appliance was awarded the 2012 Tech Innovator and Enterprise App Award by CRN. Read the full article. Oracle partner, Hitachi Consulting, reached OPN’s premier Diamond Level status. Read more. Was Oracle part of your September, October or November highlights? If so, leave us a comment below, we’d love to feature your story! Also, don’t forget to share the love by re-tweeting this post on Twitter or “liking” this post on Facebook! Stay Warm, The OPN Communications Team 

    Read the article

  • 25 years old and considering a career change...possible? practical?

    - by mq330
    Hi all, I'm new to this site and new to programming as well. I've spent some time going through an intro CS book that uses Python as the language of choice. I find the exercises interesting and engaging, and I have generally had a favorable experience programming so far. I've gone through some of the basics with Python, like writing simple programs, basics of GUIs, manipulating strings, lists, defining functions, etc. And I've always loved technology. Although I've never done any real hardcore programming yet, I was inclined toward building websites from a very young age, but I never really developed my skills. Now, the thing is I'm 25, and I have my bachelor's in environmental studies and two master's degrees, in urban planning and landscape architecture respectively. I know, it would be quite a departure to pursue a career in programming at this point. Currently, I'm working as a geographic information systems intern. I've taken some GIS classes and have a lot of experience with making maps, doing spatial analysis, etc. So what I'm thinking is maybe I can learn some solid programming skills and apply these skills in the field of GIS. From what I've seen, .NET languages are the norm in this arena. Could you perhaps provide some guidance to me in terms of what languages I should focus on or courses I should take at this point? What about for building web mapping applications? Also, I was thinking about getting a certificate in programming from a university extension program. Do you think it would be worth it? And furthermore, do you think potential employers would be interested in hiring someone like me (once I get a couple of languages down pretty well) as an intern or in an entry-level position? I'll be living in the Bay Area, so I feel that there should be decent opportunities even though I don't have a B.S. in CS.

    Read the article

  • Where is the best place to teach myself a language, and which one?

    - by Lorinda
    Hello, I do not know any programming languages at all. I will teach myself and need to know the best place to do so, where I can learn from the most basic level. Where is a great place to begin learning a language? What language is best to learn first? Is it silly to learn Ruby first? Here, I came across someone saying that learning some of the higher languages can make you 'lazy' if you learn them first, like Ruby amongst others. For my first language, my husband is advising me to learn Ruby (for his own personal interests). However, I need some independent advice on how to get started and what language I should learn first. I will eventually learn Ruby and then Rails. Four months ago, my husband ordered a text on Objective-C because he thought he would take it on. I flipped through it and it was clearly starting at a place more advanced than where I am coming from. I have dabbled with a Ruby tutorial and I don't get it. I get that what I put in is what I get out, but I don't understand what is leading up to that. I need to know ALL the rules first. I then looked up computer languages and started researching binary code, which helped a lot, but that's not where I want to start. I don't have a lot of time right now in my life (with four kids) to go back that far. If I were going to school, that would be different. Any advice you could give is most welcome.

    Read the article

  • New whitepaper: Evolution from the Traditional Data Center to Exalogic: An Operational Perspective

    - by Javier Puerta
    IT organizations are struggling with the need to balance the day-to-day concerns of data center management against the business level requirements to deliver long-term value. This balancing act has proven difficult and inefficient: systems and application management tools are resource intensive and traditional infrastructure management architectures have developed over time on a project by project basis. These traditional management systems consist of multiple tools that require administrators to waste time performing too many steps to handle routine administrative tasks. Operational efficiency and agility in your enterprise are directly linked to the capabilities provided by the management layer across the entire stack, from the application, middleware, operating system, compute, network and storage. Only when this end to end capability is provided will we experience the full benefit of a scalable, efficient, responsive and secure datacenter. Managing Exalogic is substantially less complex and error prone than managing traditional systems built from individually sourced, multi-vendor components because Exalogic is designed to be administered and maintained as a single, integrated system (Figure 1). It is at the forefront of the industry-wide shift away from costly and inferior one-off platforms toward private clouds and Engineered Systems. Read the full whitepaper "Evolution from the Traditional Data Center to Exalogic: An Operational Perspective". Full document is available for download at the Exadata Partner Community Collaborative Workspace (for community members only - if you get an error message, please register for the Community first).

    Read the article

  • what's a good approach to working with multiple databases?

    - by Riz
    I'm working on a project that has its own database, call it InternalDb, but it also queries two other databases, call them ExternalDb1 and ExternalDb2. Both ExternalDb1 and ExternalDb2 are actually required by a few other projects. I'm wondering what the best approach for dealing with this is. Currently, I've just created a project for each of these external databases and then generated the Edmx and entities using the Entity Framework approach. My thought was that I could then include these projects in any of my solutions that require access to these databases. Also, I don't have any separate business layers. I just have a solution like below:
    Project.Domain
    ExternalDb1Project.Domain
    ExternalDb2Project.Domain
    Project.Web
    So my Domain projects contain the data access as well as the POCOs generated by Entity Framework and any business logic. But I'm not sure if this is a good approach. For example, if I want to do validation in my Project.Domain on the entities in the InternalDb, it's fine. But if I want to do validation for entities from either of the ExternalDbs, I wonder where it should go. To be more specific, I retrieve Employees from ExternalDb1Project.Domain. However, I want to make sure they are Active. Where should this validation go? How should a project like this be architected at a high level? Also, I want to make sure that I use IoC for my data contexts so I can create fakes when writing tests. Where should the interfaces for these various data contexts reside?
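    One sketch that speaks to the IoC part of the question (hypothetical names throughout, not a prescription): hide each external database behind a small interface that the consuming project depends on, implement it with the EF-generated context, and substitute a fake in tests.
```csharp
// Sketch only: all type names are assumptions introduced for illustration.
using System;
using System.Collections.Generic;
using System.Linq;

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

// Could live alongside the ExternalDb1 data-access project; the EF-backed
// implementation would expose the generated entities through this member.
public interface IExternalDb1Context
{
    IQueryable<Employee> Employees { get; }
}

// Validation/lookup logic lives with the consumer and depends only on the
// interface, so it behaves the same against the real context or a fake.
public class ActiveEmployeeService
{
    private readonly IExternalDb1Context _db;

    public ActiveEmployeeService(IExternalDb1Context db)
    {
        if (db == null) throw new ArgumentNullException("db");
        _db = db;
    }

    public IList<Employee> GetActiveEmployees()
    {
        return _db.Employees.Where(e => e.IsActive).ToList();
    }
}

// In a unit test, a fake context backed by an in-memory list stands in for EF.
public class FakeExternalDb1Context : IExternalDb1Context
{
    private readonly List<Employee> _employees = new List<Employee>();
    public List<Employee> Backing { get { return _employees; } }
    public IQueryable<Employee> Employees { get { return _employees.AsQueryable(); } }
}
```
    With this shape, the interface naturally sits with (or next to) the external-database project, while the validation rule ("only Active employees") stays with whichever project owns that business requirement.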

    Read the article

  • How are certain analytics metrics (time on site, etc.) usually distributed?

    - by a barking spider
    I'm not sure if I've come to the right place to ask this question, but I'm gathering some information for a research project. We're trying to design an experiment that'll heavily involve web analytics, and I'm trying to figure out some sensible values of mean +/- standard deviation for the following visitor-level metrics (i.e., visitor 1 spends 2 minutes on site, visitor 2 spends 1 minute -- mean 1.5 +/- 0.71...): time spent on site, and page views. If time allowed, we would put up the sites and gather the information ourselves, but we have a grant deadline coming up. I realize that even though the distributions of these quantities are probably going to be heavily skewed towards zero, we'll need some reasonable figures or estimates of these figures in order to do sample size calculations, etc. Anyway, I'm not sure where else I'd turn, and I certainly have had a difficult time finding these values in the prior literature. If someone could direct me to a paper with the right information, or if you have these figures on hand (perhaps taken directly from your logs!) -- that would be amazing, and I'd love to hear from you. Thanks in advance, and even though I'm not allowed to reveal too much, rest assured that this info'll be applied towards a good cause :)
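    For context on why the standard deviation matters here, a common two-group sample size approximation (an assumption about the intended analysis, not something stated in the question) computes the per-group sample size from the metric's standard deviation and the difference one hopes to detect. A hedged sketch:
```csharp
// Illustrative sketch: per-group sample size for detecting a difference "delta"
// between two means, given standard deviation "sigma", using the common normal
// approximation n = 2 * ((zAlpha + zBeta) * sigma / delta)^2.
// The analysis design and the example numbers are assumptions, not from the question.
using System;

public static class SampleSizeSketch
{
    public static int PerGroup(double sigma, double delta,
                               double zAlpha = 1.96,  // two-sided alpha = 0.05
                               double zBeta = 0.84)   // power = 0.80
    {
        double n = 2 * Math.Pow((zAlpha + zBeta) * sigma / delta, 2);
        return (int)Math.Ceiling(n);
    }
}

// Example: time on site with sigma = 1.5 minutes, detecting a 0.25-minute difference:
// SampleSizeSketch.PerGroup(1.5, 0.25) == 565 visitors per group.
```
    This is exactly where the skew mentioned above bites: for heavily right-skewed metrics the raw standard deviation can be large, so analyses are often planned on a log scale instead.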

    Read the article

  • Will Unity skills be interchangeable?

    - by Starkers
    I'm currently learning Unity and working my way through a video game maths primer textbook. My goal is to create a racing game for WebGL (using Three.js and maybe Physic.js). I'm well aware that the Unity program shields you from a lot of what's going on and a lot of the grunt work attached to developing even a simple video game, but if I power through a bunch of Unity tutorials, will a lot of the skills I learn translate over to other frameworks/engines? I'm pretty proficient at level design with WebGL, and I'm a good 3D modeller. My weaknesses are definitely AI and physics. While I am rapidly shoring up my math, and while physics is undeniably interesting, there are only so many hours in the day and there is a wealth of engines out there to take care of this sort of thing. AI does appeal to me a lot more, and is a lot more necessary. AI changes drastically from game to game and is tweaked heavily during development, whereas the physics is a lot more constant. Will learning AI concepts in Unity allow me to transfer this knowledge pretty much anywhere? Or will I just be paddling up Unity creek with these skills?

    Read the article

  • PARTNER News: Tips and Guidelines from Avago (formerly LSI)

    - by Zeynep Koch
    In this blog write-up we would like to focus our attention on one of our IHV partners, Avago (formerly LSI). Avago and Oracle have been collaborating at many levels for many years. At the lowest level, Avago and Oracle engineer solutions to inbox advanced features in our I/O device drivers. We collaborate to test, verify and optimize these drivers in Oracle Linux with the Unbreakable Enterprise Kernel. Both LSI Nytro and Sun F-Series PCIe flash devices are supported inbox in Oracle Linux with the Unbreakable Enterprise Kernel. By collaborating early in the engineering design cycle we can find and resolve issues sooner and deliver to the end customer a fully optimized platform for I/O efficiency and data protection. Hear more about the partnership and its benefits in this podcast: LSI and Oracle Partnership. Avago has also been working on a technical whitepaper and a video whiteboard to explain some of the optimizations you can achieve by using smart flash cache with Oracle Linux. Technical Paper: Improve Database Performance Using Sun Flash Accelerator Card, Database Smart Flash Cache and Oracle Linux. Video: Improving DB Performance with Database Smart Flash Cache. If you want more information about the partnership and product benefits, you can visit the LSI Oracle alliance page.

    Read the article
