Search Results

Search found 11536 results on 462 pages for 'whole foods market'.


  • How do I organize a GUI application for passing around events and for setting up reads from a shared resource

    - by Savanni D'Gerinel
    My tools involved here are GTK and Haskell. My questions are probably pretty trivial for anyone who has done significant GUI work, but I've been off in the equivalent of CGI applications for my whole career. I'm building an application that displays tabular data, displays the same data in graph form, and has an edit field for both entering new data and editing existing data. After asking about sharing resources, I decided that all of the data involved will be stored in an MVar so that every component can just read the current state from the MVar. All of that works, but now it is time for me to rearrange the application so that it can be interactive. With that in mind, I have three widgets: a TextView (for editing), a TreeView (for displaying the data), and a DrawingArea (for displaying the data as a graph). I THINK I need to do two things, and the core of my question is: are these the right things, or is there a better way?

    Thing the first: all event handlers, those functions that will be called any time a redisplay is needed, need to be written at a high level and then passed into the function that actually constructs the widget to begin with. For instance:

        drawStatData :: DrawingArea -> MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO ()
        createStatView :: (DrawingArea -> IO ()) -> IO VBox

        createUI :: MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO HBox
        createUI storeMVar field = do
            graphs <- createStatView (\area -> drawStatData area storeMVar field)
            hbox <- hBoxNew False 10
            boxPackStart hbox graphs PackNatural 0
            return hbox

    In this case, createStatView builds up a VBox that contains a DrawingArea to graph the data and potentially other widgets. It attaches drawStatData to the realize and exposeEvent events for the DrawingArea. I would do something similar for the TreeView, but I am not completely sure what, since I have not yet done it, and what I am thinking of would involve replacing the TreeModel every time the TreeView needs to be updated. My alternative to the above would be...

        drawStatData :: DrawingArea -> MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO ()
        createStatView :: IO (VBox, DrawingArea)

    ...but in this case, I would arrange createUI like so:

        createUI :: MVar Core.ST -> (Core.ST -> SetRepWorkout.WorkoutStore) -> IO HBox
        createUI storeMVar field = do
            (graphbox, graph) <- createStatView
            hbox <- hBoxNew False 10
            boxPackStart hbox graphbox PackNatural 0
            on graph realize (drawStatData graph storeMVar field)
            on graph exposeEvent (do
                liftIO $ drawStatData graph storeMVar field
                return ())
            return hbox

    I'm not sure which is better, but that does lead me to...

    Thing the second: it will be necessary for me to rig up an event system so that various events can send signals all the way to my widgets. I'm going to need a mediator of some kind to pass events around and to translate application-semantic events to the actual events that my widgets respond to. Is it better for me to pass my addressable widgets up the call stack to the level where the mediator lives, or to pass the mediator down the call stack and have the widgets register directly with it?

    So, in summary, my two questions:

    1) Pass widgets up the call stack to a global mediator, or pass the global mediator down and have the widgets register themselves with it?
    2) Pass my redraw functions to the builders and have the builders attach the redraw functions to the constructed widgets, or pass the constructed widgets back and have a higher level attach the redraw functions (and potentially link some widgets together)?

    Okay, and... 3) Books or wikis about GUI application architecture, preferably coherent architectures where people aren't arguing about minute details?

    The application in its current form (displays data but does not write data or allow for much interaction) is available at https://bitbucket.org/savannidgerinel/fitness . You can run the application by going to the root directory and typing

        runhaskell -isrc src/Main.hs data/

    or...

        cabal build
        dist/build/fitness/fitness data/

    You may need to install libraries, but cabal should tell you which ones.


  • Clone a VirtualBox Machine

    I just installed VirtualBox, which I want to try out based on recommendations from peers for running a server from within my Windows 7 x64 OS. I've never used VirtualBox, so I'm certainly no expert at it, but I did want to share my experience with it thus far. Specifically, my intention is to create a couple of virtual machines. One I intend to use as a build server, for which a virtual machine makes sense because I can easily move it around as needed if there are hardware issues (it's worth noting my need for setting up a build server at the moment is a result of a disk failure on the old build server). The other VM I want to set up will act as a proxy server for the issue tracking system we're using at Code Project, Axosoft OnTime. They have a Remote Server application for this purpose, and since the OnTime install is 300 miles away from my location, the Remote Server should speed up my use of the OnTime client by limiting the chattiness with the database (at least, that's the hope).

    So, I need two VMs, and I'm lazy. I don't want to have to install the OS and such twice. No problem, it should be simple to clone a VirtualBox machine, or clone a VirtualBox hard drive, right? Well, unfortunately, if you look at the UI for VirtualBox, there's no such command. You're left wondering "How do I clone a VirtualBox machine?" or the slightly related "How do I clone a VirtualBox hard drive?" If you've used VirtualPC, then you know that it's actually pretty easy to copy and move around those VMs. Not quite so easy with VirtualBox. Finding the files is easy; they're located in your user folder within the .VirtualBox folder (possibly within a HardDisks folder). The disks have a .vdi extension and will be pretty large if you've installed anything. The one shown here has just Windows Server 2008 R2 installed on it - nothing else.

    If you copy the .vdi file and rename it, you can use the Virtual Media Manager to view it, and you can create a new machine and choose the new drive to attach to. Unfortunately, if you simply make a copy of the drive, this won't work, and you'll get an error that says something to the effect of:

        Cannot register the hard disk PATH with UUID {id goes here} because a hard disk PATH2 with UUID {same id goes here} already exists in the media registry (PATH to XML file).

    There are command line tools you can use to do this in a way that avoids this error. Specifically, the C:\Program Files\Sun\VirtualBox\VBoxManage.exe program is used for all command line access to VirtualBox, and to copy a virtual disk (.vdi file) you would call something like this:

        VBoxManage clonehd Disk1.vdi Disk1_Copy.vdi

    However, in my case this didn't work. I got basically the same error I showed above, along with some debug information for line 628 of VBoxManageDisk.cpp. As my main task was not to debug the C++ code used to write VirtualBox, I continued looking for a simple way to clone a virtual drive. I found it in this blog post.

    The Secret setvdiuuid Command

    VBoxManage has a whole bunch of commands you can use with it - just pass it /? to see the list. However, it also has a special command called internalcommands that opens up access to even more commands. The one that's interesting for us here is the setvdiuuid command. By calling this command and passing in the file path to your .vdi file, it will reset the UUID to a new (random, apparently) UUID. This then allows the Virtual Media Manager to cope with the file, and lets you set up new machines that reference the newly UUID'd virtual drive. The full command line would be:

        VBoxManage internalcommands setvdiuuid MyCopy.vdi

    The following screenshot shows the error when trying clonehd as well as the successful use of setvdiuuid.

    Summary

    Now that I can clone machines easily, it's a simple matter to set up base builds of any OS I might need, and then fork from there as needed. Hopefully the GUI for VirtualBox will be improved to include better support for copying machines/disks, as this is, I'm sure, a very common scenario.
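    If you find yourself doing this often, the two steps can be scripted from any language that can shell out. Below is a minimal C# sketch, not from the original post; the VBoxManage path and file names are placeholders for your own setup:

        using System;
        using System.Diagnostics;
        using System.IO;

        class VdiCloner
        {
            // Copy a .vdi and give the copy a fresh UUID so the Virtual Media
            // Manager will accept it (a plain file copy keeps the old UUID).
            static void CloneVdi(string source, string destination)
            {
                string vboxManage = @"C:\Program Files\Sun\VirtualBox\VBoxManage.exe";

                File.Copy(source, destination);

                ProcessStartInfo psi = new ProcessStartInfo(vboxManage,
                    "internalcommands setvdiuuid \"" + destination + "\"");
                psi.UseShellExecute = false;

                using (Process p = Process.Start(psi))
                {
                    p.WaitForExit();
                    if (p.ExitCode != 0)
                        throw new InvalidOperationException("setvdiuuid failed");
                }
            }
        }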


  • What do the participants say about the Open Day in South Africa?

    - by Maria Sandu
    On the 26th of September, a group of students who were specifically selected to attend an Open Day at Oracle South Africa joined us at our offices in Woodmead, Johannesburg. The conference room was filled with inquisitive minds. What we had in store for them was a detailed presentation about Oracle, delivered by Zuko - Cluster Leader: Tech GB South Africa. The students' many questions were all answered, especially when we started addressing the opportunities we have and detailed information on our Graduate Programme. Our employees then came to talk about their experience, which allowed all the students to have an integrated learning experience. Inviting the students to walk around our Oracle offices allowed them to see, talk, experience a bit of the culture and ask more questions. Here is some of the feedback from the attendees:

    Maxwell Moloi: “The open day truly served its purpose and exceeded expectations in the sense that I got to find out more about Oracle and all the different opportunities it has to offer. The fact that Oracle supplies a full solution to a customer and not just part of it and how the company manages to setup professional development for their employees is what entices me to want to join the rapidly growing team of Oracle.”

    Nqobile Mabaso: “I found the open day to be quite informative and enlightening because coming from a marketing background I could apply the knowledge I got from varsity to the Company. I was able to point out what they do as part of their corporate social responsibility (Oracle recently partnered with the department of education to build a school), how Oracle emphasizes on relationship building because they know they sell to people and not companies and how they offer the full stack of solutions which gives them a competitive advantage over their competitors.”

    Nondumiso Mvelase: “The Open Day was a wonderful experience for me especially because I have never been part of an Open Day before, so it was absolutely amazing for me. It gave me a good idea of how it is to be part of Oracle. We were served with lovely breakfast and lunch which I enjoyed. I wish the Open Day went on for a whole week. Seeing and hearing from 2013 Graduates, telling us about their experience within Oracle was very inspiring to me. They were encouraging us to work hard if we ever got the opportunity they had. After hearing this from them I will definitely not take it for granted.”

    Itumeleng Moraka: “Before I walked into the Oracle offices all that was in my mind was databases and cloud storage. I was then surrounded by passionate, enthusiastic and welcoming employees. I came across a positive energy within the multinational company. I realized that Oracle is not a company that operates in survival mode. This may sound idealistic, but they operate in a non-traditional way investing more into innovation, they stay focused on what matters most about where technology is going and at the same time they are not losing sight of how their products make a difference in the world.”

    For more information on how to be part of the Oracle Graduate Programme please follow us on Facebook!
    https://www.facebook.com/CampusAtOracle


  • Sass interface in HTML6 for upload files.

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/sass-interface-in-html6-for-upload-files.aspx

    [This post is about experiment & imagination.]

    Since Windows XP (the oldest OS I have tried), Windows has had a feature for sending a file to a pen drive or making a shortcut on the Desktop. In XP and Win7 (Win8 has this too; it was not removed), you just select the file, right click > Send to, and you can send the file to many places. In my menu it shows me Skype because I have installed it. Skype's entry confirms that we can add our own app here to make it easier for the user to send a file into our app.

    Nowadays many people use the cloud or an online site to store files. With HTML5 drag and drop, you need to have the site open, on the page that handles file upload. You select the files, drag and drop them, the file is uploaded to the server, and the site shows it in a list (if no error happens). But this style of upload is not really convenient, since I have to open the site every time I do the operation.

    In this post I want to describe a feature that could make this better. The API is simply called the SASS FILE UPLOAD API. With this API, when you surf a site and land on its file upload page, the page tells you that it also supports the SASS FILE API - enable it for a better experience.

    How this works

    The API is activated on two conditions:

    1. The feature is disabled by default on a site (you can change that if it's not what you want).
    2. The API allows a specific site to upload files, and uploads may have rules - for example, a minimum or maximum size for uploaded files, or which formats the site allows you to upload. A resume site might only allow .doc (according to the code of the site).

    How does the browser recognize that a site supports the SASS service? In the HTML source of the site there is a meta tag similar to this:

        <meta name="sass-upload-api" path="/upload.json"/>

    Remember that upload.json is a file that defines the values of many settings:

        {
          "cookie_name": "ck_file",
          "maximum_allowed_perday": 24,
          "allowed_file_extensions": "*.png,*.jpg,*.jpeg,*.gif",
          "method": [
            { "get":    "file/get",    "routing": "/file/get/{fileName}" },
            { "post":   "file/post",   "routing": "/file/post/{fileName}" },
            { "delete": "file/delete", "routing": "/file/delete/{fileName}" },
            { "put":    "file/put",    "routing": "/file/put/{fileName}" },
            { "all":    "file/all",    "routing": "/file/all/{fileName}" }
          ]
        }

    cookie_name is simply a cookie which should be stored in the browser and defined in the JSON. We define cookie_name so the cookie can be shared with the service in the Windows system. Only this cookie will be accessible to the service, so it is safe security-wise; other cookies will not be shared. The cookie will be posted, put and fetched from these locations. The "all" location simply returns the whole list of files, as a treeview-like JSON showing the directories on the server.

    For example, if you have activated the API with example.com, you will see a Send to option for it in explorer.exe. When you send, a window opens asking which folder you want to use for the file. The window also describes the limits and how much you can upload. None of this requires the site to be open. When you upload the file, it is uploaded through the FTP protocol, which is better for performance.

    How this API makes things faster: suppose you want to ask a question and post an image. You upload the image ahead of time, and when you open stackoverflow.com, the site only needs to ask which uploaded file you want to attach to the question you are asking. A second use is for people using cloud apps: there is no need for drag and drop anymore, and we don't even need to open the site.

    This thing is still at the experiment level. I will update this post when I make some progress on this API.
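    To make the manifest concrete, here is a small C# sketch (not part of the original post) of how a client might read upload.json. It assumes the Json.NET library, and the class simply mirrors the hypothetical JSON properties above:

        using System;
        using Newtonsoft.Json;

        // Mirrors the hypothetical upload.json manifest described above.
        public class SassUploadManifest
        {
            [JsonProperty("cookie_name")]
            public string CookieName { get; set; }

            [JsonProperty("maximum_allowed_perday")]
            public int MaximumAllowedPerDay { get; set; }

            [JsonProperty("allowed_file_extensions")]
            public string AllowedFileExtensions { get; set; }
        }

        class ManifestDemo
        {
            static void Main()
            {
                string json = @"{ ""cookie_name"": ""ck_file"",
                                  ""maximum_allowed_perday"": 24,
                                  ""allowed_file_extensions"": ""*.png,*.jpg"" }";

                var manifest = JsonConvert.DeserializeObject<SassUploadManifest>(json);
                Console.WriteLine("Cookie: {0}, uploads per day: {1}",
                    manifest.CookieName, manifest.MaximumAllowedPerDay);
            }
        }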


  • WMemoryProfiler is Released

    - by Alois Kraus
    What is it? WMemoryProfiler is a managed profiling API to aid integration testing. This free library can get managed heap statistics and memory usage for your own process (remember: testing) and other processes as well. The best thing is that it works from .NET 2.0 up to .NET 4.5, in x86 and x64. To make it more interesting, it can attach to any running .NET process. The reason I mention this is that commercial profilers support this functionality only in their professional editions, and normally only for .NET 4.0 and later, because only from that version does the profiling API support attaching to a running process. This thing differs in many aspects from "normal" profilers, because while profiling yourself you can get all objects from all managed heaps back as an object array. If you ever wanted to change the state of an object which exists only as a method local in another thread, you can get your hands on it now...

    Enough theory. Show me some code:

        /// <summary>
        /// Show feature to not only get statistics out of a process but also the newly allocated
        /// instances since the last call to MarkCurrentObjects.
        /// GetNewObjects does return the newly allocated objects as object array.
        /// </summary>
        static void InstanceTracking()
        {
            using (var dumper = new MemoryDumper()) // if you have problems, use new MemoryDumper(true, true) to see the debugger windows
            {
                dumper.MarkCurrentObjects();
                Allocate();
                ILookup<Type, object> newObjects = dumper.GetNewObjects()
                                                         .ToLookup(x => x.GetType());
                Console.WriteLine("New Strings:");
                foreach (var newStr in newObjects[typeof(string)])
                {
                    Console.WriteLine("Str: {0}", newStr);
                }
            }
        }

        ...
        New Strings:
        Str: qqd
        Str: String data:
        Str: String data: 0
        Str: String data: 1
        ...

    This is really hot stuff. Not only can you get heap statistics, but you can directly examine the new objects and make queries upon them. When I find more time, I can reconstruct the object root graph from it from my own process. Is this cool or what? You can also peek into the finalization queue to check if you accidentally forgot to dispose a whole bunch of objects...

        /// <summary>
        /// .NET 4.0 or above only. Get all finalizable objects which are ready for finalization and have no other object roots anymore.
        /// </summary>
        static void NotYetFinalizedObjects()
        {
            using (var dumper = new MemoryDumper())
            {
                object[] finalizable = dumper.GetObjectsReadyForFinalization();
                Console.WriteLine("Currently {0} objects of types {1} are ready for finalization. Consider disposing them before.",
                    finalizable.Length,
                    String.Join(",", finalizable.ToLookup(x => x.GetType())
                                                .Select(x => x.Key.Name)));
            }
        }

    How does it work? The W of WMemoryProfiler is a good hint: it employs Windbg and SOS.dll to do the heavy lifting, and concentrates on an easy-to-use API which hides Windbg completely. If you do not want to see Windbg, you will never see it. In my experience, the most complex thing is actually downloading Windbg from the Windows 8 Standalone SDK. This is described in the Readme, and in much greater detail in the exception you are greeted with if it is missing, so I will not go into it here.

    What next? Depending on the feedback I get, I can imagine some features which might be useful as well:

    - Calculate first-order GC roots from the actual object graph
    - Identify global statics in types in the object graph
    - Support reading the finalization queue of .NET 2.0 as well
    - Support memory dump analysis (again, a feature supported only by commercial profilers in their professional editions, if it is supported at all)
    - Deserialize objects from a memory dump back into a live process (this would need some more investigation, but it is doable)

    The last item needs some explanation. Why on earth would you want to do that? The basic idea is to store in your live process some logging/tracing data which can become quite big, but since it is never written to, it is very fast to generate. When your process crashes with a memory dump, you could transfer this data structure back into a live viewer which can then nicely display your program state at the point it crashed. This is an advanced troubleshooting technique I have not seen anywhere yet, but it could be quite useful. You can have a look here at the current feature list of WMemoryProfiler with some examples.

    How to get started? First, I would download the released source package (it is tiny) and compile the complete project. Then you can compile the Example project (it has this name) and uncomment, in the main method, the scenario you want to check out. If you are greeted with an exception, it is time to install the Windows 8 Standalone SDK, which is described in great detail in the exception text. That's it for the first round. I have seen something more limited in the Java world some years ago (now I cannot find the link anymore), but anyway - now we have something much better.
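    As a quick illustration of how this could back an integration test, here is a sketch built only from the MemoryDumper calls shown above; the helper name and the exception-based failure handling are illustrative, not part of the library:

        using System;
        using System.Linq;

        class LeakCheck
        {
            // Hypothetical test helper: run a scenario and fail if it leaked
            // new instances of T. Uses only the MemoryDumper members shown above.
            static void AssertNoNewInstances<T>(Action scenario)
            {
                using (var dumper = new MemoryDumper())
                {
                    dumper.MarkCurrentObjects();
                    scenario();

                    int leaked = dumper.GetNewObjects().Count(o => o is T);
                    if (leaked > 0)
                        throw new Exception(string.Format(
                            "{0} new {1} instances survived the scenario",
                            leaked, typeof(T).Name));
                }
            }
        }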


  • World Backup Day

    - by red(at)work
    Here at Red Gate Towers, the SQL Backup development team have been hunkered down in their shed for the last few months, with the toolbox, blowtorch and chamois leather out, upgrading SQL Backup. When we started, autumn leaves were falling. Now we're about to finish, spring flowers are budding. If not quite a gleaming new machine, at the very least a familiar, reliable engine with some shiny new bits on it will trundle magnificently out of the workshop. One of the interesting things I've noticed about working on software development teams is that the team is together for so long 'implementing' stuff - designing, coding, testing, fixing bugs and so on - that you occasionally forget why you're doing what you're doing. Doubt creeps in. It feels like a long time since we launched this project in a fanfare of optimism and enthusiasm, and all that clarity of purpose and mission "yee-haw" has dissipated with the daily pressures of development. Every now and again, we look up from our bunker and notice all those thousands of users out there, with their different configurations and working practices and each with their own set of problems and requirements, and we ask ourselves "does anyone care about what we're doing?" Has the world moved on while we've been busy? Could we have been doing something more useful with the time and talent of all these excellent people we've assembled? In truth, you can research and test and validate all you like, but you never really know if you've done the right thing (or at least, something valuable for some users) until you release. All projects suffer this insecurity. If they don't, maybe you're not worrying enough about what you're building. The two enemies of software development are certainty and complacency. Oh, and of course, rival teams with Nerf guns. The goal of SQL Backup 7 is to make it so easy to schedule regular restores of your backups that you have no excuse not to. Why schedule a restore? Because your data is not as good as your last backup. It's only as good as your last successful restore. If you're not checking your backups by restoring them and running an integrity check on the database, you're only doing half the job. It seems that most DBAs know that this is best practice, but it can be tricky and time-consuming to set up, so it's one of those tasks that can get forgotten in the midst of all the other demands on their time. Sometimes, they're just too busy firefighting. But what if it was simple to do? That was our inspiration for SQL Backup 7. So it was heartening to read Brent Ozar's blog post the other day about World Backup Day. To be honest, I'd never heard of World Backup Day (Talk Like a Pirate Day, yes, but not this one); however, its emphasis on not just backing up your data but checking the validity of those backups was exactly the same message we had in mind when building SQL Backup 7. It's printed on a piece of A3 above our planning board - "Make backup verification so easy to do that no DBA has an excuse for not doing it" It's the missing piece that completes the puzzle. Simple idea, great concept, useful feature, but, as it turned out, far from straightforward to implement. The problem is the future. As Marty McFly discovered over the course of three movies, the future is uncertain and hard to predict - so when you are scheduling a restore to take place an hour, day, week or month after the backup, there are all kinds of questions that you wouldn't normally have to consider. Where will this backup live? Will it even exist at the time?
Will it be split into multiple files? What will the file names be? Will it be encrypted? What files should it be restored to? SQL Backup needs to know what to expect at the time the restore job is actually run. Of course, a DBA will know the answer to all these questions, but to deliver the whole point of version 7, we wanted to make it easy for them to input that information into SQL Backup. We think we've done that. When you create your scheduled backup job, there is now an option to create a "reminder" to follow it up with a scheduled restore to verify the resulting backups. Actually, it's much more than a reminder, as it stores all the relevant data so you can click it and pre-populate the wizard with all the right settings to set up your verification restores. Simple. But, what do you think? We'd love you to try it. Post by Brian Harris
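    For readers who want to picture the underlying chore being automated here, a bare-bones verification pass could look like the following C# sketch. This is not Red Gate code: the connection string, paths and scratch database name are placeholders, and it ignores file relocation (WITH MOVE), encryption and multi-file backups, all the future-proofing questions the post describes:

        using System.Data.SqlClient;

        class BackupVerifier
        {
            // Verify a backup file, restore it to a scratch database,
            // then run an integrity check on the result.
            static void VerifyBackup(string connectionString, string backupFile)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    Execute(conn, "RESTORE VERIFYONLY FROM DISK = '" + backupFile + "'");
                    Execute(conn, "RESTORE DATABASE VerifyScratch FROM DISK = '" + backupFile + "' WITH REPLACE");
                    Execute(conn, "DBCC CHECKDB (VerifyScratch) WITH NO_INFOMSGS");
                }
            }

            static void Execute(SqlConnection conn, string sql)
            {
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.CommandTimeout = 0; // restores can take a while
                    cmd.ExecuteNonQuery();
                }
            }
        }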


  • Hybrid IT or Cloud Initiative – a Perfect Enterprise Architecture Maturation Opportunity

    - by Ted McLaughlan
    All too often in the growth and maturation of Enterprise Architecture initiatives, the effort stalls or is delayed due to lack of “applied traction”. By this, I mean the EA activities - whether targeted towards compliance, risk mitigation or value opportunity propositions – may not be attached to measurable, active, visible projects that could advance and prove the value of EA. EA doesn’t work by itself, in a vacuum, without collaborative engagement and a means of proving usefulness. A critical vehicle to this proof is successful orchestration and use of assets and investment resources to meet a high-profile business objective – i.e. a successful project. More and more organizations are now exploring and considering some degree of IT outsourcing, buying and using external services and solutions to deliver their IT and business requirements – vs. building and operating in-house, in their own data centers. The rapid growth and success of “Cloud” services makes some decisions easier and some IT projects more successful, while dramatically lowering IT risks and enabling rapid growth. This is particularly true for “Software as a Service” (SaaS) applications, which essentially are complete web applications hosted and delivered over the Internet. Whether SaaS solutions – or any kind of cloud solution - are actually, ultimately the most cost-effective approach truly depends on the organization’s business and IT investment strategy. This leads us to Enterprise Architecture, the connectivity between business strategy and investment objectives, and the capabilities purchased or created to meet them. If an EA framework already exists, the approach to selecting a cloud-based solution and integrating it with internal IT systems (i.e. a “Hybrid IT” solution) is well-served by leveraging EA methods. If an EA framework doesn’t exist, or is simply not mature enough to address complex, integrated IT objectives – a hybrid IT/cloud initiative is the perfect project to advance and prove the value of EA. Why is this? For starters, the success of any complex IT integration project - spanning multiple systems, contracts and organizations, public and private – depends on active collaboration and coordination among the project stakeholders. For a hybrid IT initiative, inclusive of one or more cloud services providers, the IT services, business workflow and data governance challenges alone can be extremely complex, requiring many diverse layers of organizational expertise and authority. Establishing subject matter expertise, authorities and strategic guidance across all the disciplines involved in a hybrid-IT or hybrid-cloud system requires top-level, comprehensive experience and collaborative leadership.
Tools and practices reflecting industry expertise and EA alignment can also be very helpful – such as Oracle’s “Cloud Candidate Selection Tool”. Using tools like this, and facilitating this critical collaboration by leading, organizing and coordinating the input and expertise into a shared, referenceable, reusable set of authority models and practices – this is where EA shines, and where Enterprise Architects can be most valuable. The “enterprise”, in this case, becomes something greater than the core organization – it includes internal systems, public cloud services, 3rd-party IT platforms and datacenters, distributed users and devices; a whole greater than the sum of its parts. Through facilitated project collaboration, leading to identification or creation of solid governance models and processes, a durable and useful Enterprise Architecture framework will usually emerge by itself, if not actually identified and managed as such. The transition from planning collaboration to actual coordination, where the program plan, schedule and resources become synchronized and aligned to other investments in the organization portfolio, is where EA methods and artifacts appear and become most useful. The actual scope and use of these artifacts, in the context of this project, can then set the stage for the most desirable, helpful and pragmatic form of the now-maturing EA framework and community of practice. Considering or starting a hybrid-IT or hybrid-cloud initiative? Running into some complex relationship challenges? This is the perfect time to take advantage of your new, growing or possibly latent Enterprise Architecture practice.


  • Optimizing drawing of cubes

    - by Christian Frantz
    After googling for hours I've come to a few conclusions: I need to either rewrite my whole cube class, or figure out how to use hardware instancing. I can draw up to 2500 cubes with little lag, but after that my fps drops. Is there a way I can use my class for hardware instancing? Or would I be better off rewriting my class for optimization?

        public class Cube
        {
            public GraphicsDevice device;
            public VertexBuffer cubeVertexBuffer;

            public Cube(GraphicsDevice graphicsDevice)
            {
                device = graphicsDevice;
                var vertices = new List<VertexPositionTexture>();
                BuildFace(vertices, new Vector3(0, 0, 0), new Vector3(0, 1, 1));
                BuildFace(vertices, new Vector3(0, 0, 1), new Vector3(1, 1, 1));
                BuildFace(vertices, new Vector3(1, 0, 1), new Vector3(1, 1, 0));
                BuildFace(vertices, new Vector3(1, 0, 0), new Vector3(0, 1, 0));
                BuildFaceHorizontal(vertices, new Vector3(0, 1, 0), new Vector3(1, 1, 1));
                BuildFaceHorizontal(vertices, new Vector3(0, 0, 1), new Vector3(1, 0, 0));

                cubeVertexBuffer = new VertexBuffer(device, VertexPositionTexture.VertexDeclaration, vertices.Count, BufferUsage.WriteOnly);
                cubeVertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
            }

            private void BuildFace(List<VertexPositionTexture> vertices, Vector3 p1, Vector3 p2)
            {
                vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 1, 0));
                vertices.Add(BuildVertex(p1.X, p2.Y, p1.Z, 1, 1));
                vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 0, 1));
                vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 0, 1));
                vertices.Add(BuildVertex(p2.X, p1.Y, p2.Z, 0, 0));
                vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 1, 0));
            }

            private void BuildFaceHorizontal(List<VertexPositionTexture> vertices, Vector3 p1, Vector3 p2)
            {
                vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 0, 1));
                vertices.Add(BuildVertex(p2.X, p1.Y, p1.Z, 1, 1));
                vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 1, 0));
                vertices.Add(BuildVertex(p1.X, p1.Y, p1.Z, 0, 1));
                vertices.Add(BuildVertex(p2.X, p2.Y, p2.Z, 1, 0));
                vertices.Add(BuildVertex(p1.X, p1.Y, p2.Z, 0, 0));
            }

            private VertexPositionTexture BuildVertex(float x, float y, float z, float u, float v)
            {
                return new VertexPositionTexture(new Vector3(x, y, z), new Vector2(u, v));
            }

            public void Draw(BasicEffect effect)
            {
                foreach (EffectPass pass in effect.CurrentTechnique.Passes)
                {
                    pass.Apply();
                    device.SetVertexBuffer(cubeVertexBuffer);
                    device.DrawPrimitives(PrimitiveType.TriangleList, 0, cubeVertexBuffer.VertexCount / 3);
                }
            }
        }

    The following class is a list that draws the cubes.

        public class DrawableList<T> : DrawableGameComponent
        {
            private BasicEffect effect;
            private ThirdPersonCam camera;

            private class Entity
            {
                public Vector3 Position { get; set; }
                public Matrix Orientation { get; set; }
                public Texture2D Texture { get; set; }
            }

            private Cube cube;
            private List<Entity> entities = new List<Entity>();

            public DrawableList(Game game, ThirdPersonCam camera, BasicEffect effect)
                : base(game)
            {
                this.effect = effect;
                cube = new Cube(game.GraphicsDevice);
                this.camera = camera;
            }

            public void Add(Vector3 position, Matrix orientation, Texture2D texture)
            {
                entities.Add(new Entity() { Position = position, Orientation = orientation, Texture = texture });
            }

            public override void Draw(GameTime gameTime)
            {
                foreach (var item in entities)
                {
                    effect.VertexColorEnabled = false;
                    effect.TextureEnabled = true;
                    effect.Texture = item.Texture;

                    Matrix center = Matrix.CreateTranslation(new Vector3(-0.5f, -0.5f, -0.5f));
                    Matrix scale = Matrix.CreateScale(1f);
                    Matrix translate = Matrix.CreateTranslation(item.Position);
                    effect.World = center * scale * translate;
                    effect.View = camera.view;
                    effect.Projection = camera.proj;

                    effect.FogEnabled = true;
                    effect.FogColor = Color.CornflowerBlue.ToVector3();
                    effect.FogStart = 1.0f;
                    effect.FogEnd = 50.0f;

                    cube.Draw(effect);
                }
                base.Draw(gameTime);
            }
        }

    There are probably many reasons that my fps is so slow, but I can't seem to figure out how to fix it. I've looked at TechCraft as well, but what I have is too specific to what I want the outcome to be to just rewrite everything from scratch.
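    For the hardware instancing route the question asks about, the XNA 4 pieces look roughly like the sketch below. This is a sketch only, not a drop-in answer: it assumes the HiDef profile, an index buffer for the cube, and a custom effect (instancingEffect here) that applies the per-instance world matrix, since BasicEffect cannot; instanceBuffer and cubeIndexBuffer are placeholders for buffers you would create yourself.

        // Per-instance data: one world matrix, passed as four float4 texcoords.
        struct InstanceData : IVertexType
        {
            public Matrix World;

            public static readonly VertexDeclaration Declaration = new VertexDeclaration(
                new VertexElement(0,  VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 1),
                new VertexElement(16, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 2),
                new VertexElement(32, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 3),
                new VertexElement(48, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 4));

            VertexDeclaration IVertexType.VertexDeclaration { get { return Declaration; } }
        }

        // Draw every cube with one call instead of one call per cube.
        void DrawCubesInstanced(GraphicsDevice device, Effect instancingEffect,
                                VertexBuffer cubeVertexBuffer, IndexBuffer cubeIndexBuffer,
                                DynamicVertexBuffer instanceBuffer, int instanceCount)
        {
            // Stream 0: shared cube geometry. Stream 1: per-instance transforms,
            // advanced once per instance (the trailing 1).
            device.SetVertexBuffers(
                new VertexBufferBinding(cubeVertexBuffer, 0, 0),
                new VertexBufferBinding(instanceBuffer, 0, 1));
            device.Indices = cubeIndexBuffer; // DrawInstancedPrimitives requires indices

            foreach (EffectPass pass in instancingEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                // 36 vertices / 12 triangles per cube, repeated instanceCount times.
                device.DrawInstancedPrimitives(PrimitiveType.TriangleList,
                    0, 0, 36, 0, 12, instanceCount);
            }
        }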


  • Navigate Quickly with JustCode and Ctrl+Click

    Ctrl + Click is a widely used shortcut for Go To Definition in many development environments, but not in Visual Studio. We, the JustCode team, find it really useful, so we added it to Visual Studio. But we didn't stop there - we improved it even further. Read on to find the details.

    With JustCode you get an enhanced Go To Definition. By default you can execute it in the Visual Studio editor using one of the following shortcuts: Middle Click, Ctrl+Left Click, F12, Ctrl+Enter, Ctrl+B. The first usage of this feature is not much different from the default Visual Studio Go To Definition command: use it where a member, type, method, property, etc. is used, to navigate to the definition of that item. For example, if you have this method:

        public void Start()
        {
            lion = new Lion();
            lion.Roar();
        }

    If you hold Ctrl and click on the usage of lion, you will go to the lion member definition. If you hold Ctrl and click on Lion, you will go to the Lion class definition. What we added is the ability to easily find all the usages of the item you just navigated to. For example:

        public class Lion
        {
            public void Roar()
            {
                Console.WriteLine("Rhaaaar");
            }
        }

    If you hold Ctrl and click on the Lion definition, you will see all the usages of the Lion type; if you click on the Roar method definition, you will see all the usages of the Roar method. And if there is only one usage, you will be taken automatically to that usage. In the examples I use C#, but it also works in VB.NET, JavaScript, ASP.NET and XAML.

    Why do we like this feature? Let me first start with how Ctrl+Click (or the Go To Definition command) is used. We noticed that developers use it especially in what we call "code browsing sessions". In simple words, this is when you browse around the code looking for a bug, just reading the code or searching for something. Sounds familiar? In our experience, when you go to the definition of some item you often want to know more about it, and the first thing you need is to find its usages. With JustCode this is just one click away.

    Why Ctrl+Click/Middle Click over F12/Ctrl+Enter/Ctrl+B? Actually, you can use all of them. But during these "code browsing sessions" we noticed that most developers use the mouse. So the mouse is already in use, and pressing Ctrl+Click (or the Middle Click) is natural. During heavy coding sessions, or if you are a keyboard-type developer, F12 (or any of the other keyboard shortcuts) is the key.

    We use this feature heavily, not only in our team but in the whole company. It saves us a bit of time many times a day. And it adds up. We hope you will like it too. Your feedback is more than welcome.

    P.S. If you don't want JustCode to capture the Ctrl+Click and the Middle Click in the editor, you can change that in JustCode->Options->General in the Navigation group. Keyboard shortcuts can be reassigned using the Visual Studio keyboard shortcuts editor.


  • My own personal use of Oracle Linux

    - by wcoekaer
    It always is easier to explain something with examples... Many people still don't seem to understand some of the convenient things about using Oracle Linux, and since I personally (surprise!) use it at home, let me give you an idea.

    I have quite a few servers at home and I also have 2 hosted servers with a hosting provider. The servers at home I use mostly to play with random Linux-related things, or with Oracle VM, or just to try out various new Oracle products to learn more. I like the technology; it's like a hobby, really. To be able to have a good installation experience and use an officially certified Linux distribution, and not waste time trying to find the right libraries, I, of course, use Oracle Linux. Now, at least I can get a copy of Oracle Linux for free (even if I was not working for Oracle), and I can/could use that on as many servers at home (or at my company, if I worked elsewhere) for testing, development and production. I just go to http://edelivery.oracle.com/linux and download the version(s) I want and off I go.

    Now, I also have the right (and not because I am an employee) to take those images, put them on my own server and give them to someone else. I, in fact, just recently set up my own mirror on my own hosted server. I don't have to remove oracle-logos, I don't have to rebuild the ISO images, I don't have to recompile anything. I can just put the whole binary distribution on my own server without a contract. Perfectly free to do so. Of course the source code of all of this is there; I have a copy of the UEK code at home, just cloned from https://oss.oracle.com/git/?p=linux-2.6-unbreakable.git. And as you can see, the entire changelog, checkins, merges from Linus's tree - a complete overview of everything that got changed from kernel to kernel, from patch to patch, errata to errata. No obfuscating, no tar balls, no spending time with diff, no reading bug reports to find out what changed (seems silly to me).

    Some of my servers are on the external network and I need to be current with security errata, but guess what, no problem: my servers are hooked up to http://public-yum.oracle.com, which is open, free, and completely up to date, in a consistent, reliable way, with any errata, security or bug fix. So I have nothing to worry about. Also, not because I am an employee. Anyone can. And, with this, I have also set up my own mirror site that hosts these RPMs, both binary and source RPMs, because I am free to get them and distribute them. I am quite capable of supporting my servers on my own, so I don't need to rely on the support organization, and so I don't need to have a support subscription :-). So I don't need to pay. Neither would you, at least not with Oracle Linux.

    Another cool thing. The hosted servers came (unfortunately) with CentOS installed. While CentOS works just fine as is, I tend to prefer to be current with my security errata (reliably), and I prefer to maintain just one yum repository instead of 2, so I converted them over to Oracle Linux as well (in place), and they happily receive and use the exact same RPMs. Since Oracle Linux is exactly the same as RHEL from a user/application point of view, including files like /etc/redhat-release, and with no changes from .el. to .centos., I know I have nothing to worry about when installing one of the RHEL applications.

    So, OL everywhere makes my life a lot easier. And why not... Next! Since I run Oracle VM, I have -tons- of VMs on my machines; in some cases, on my big WOPR box, I have 15-20 VMs running. Well, no problem: OL is free, and I don't have to worry about counting the number of VMs, whether it's 1, or 4, or more than 10... like some other alternatives started doing...

    And finally :) I like to try out new stuff, not 3-year-old stuff. So with UEK2 as part of OL6 (and 6.3 in particular), I can play with a 3.0.x-based kernel, and it just installs and runs perfectly cleanly with OL6 - quite current stuff, in an environment that I know works, with no need to toy around with an unsupported pre-alpha upstream distribution with libraries and versions that are not compatible with production software. (I have nothing against Ubuntu or Fedora or openSUSE... it's just not what I can rely on or use for what I need, and I don't need a desktop.)

    Pretty compelling, I say... And again, it doesn't matter that I work for Oracle; if I was working elsewhere, or not at all, all of the above would still apply. Student, teacher, developer, whatever. Contrast this with $349 per year for 2 sockets, one guest and self-support, just to get the software bits.


  • SOA, Governance, and Drugs

    Why is IT governance important in service oriented architecture (SOA)? IT governance provides a framework for making appropriate decisions based on company guidelines and accepted standards. This framework also outlines each stakeholder's responsibilities and authority when making important architectural or design decisions. Furthermore, this framework of governance defines parameters and constraints that are used to give context and perspective when making decisions. The use of governance as it applies to SOA ensures that specific design principles and patterns are used when developing and maintaining services. When governance is consistently applied to systems, the following benefits are achieved, according to Anne Thomas Manes (2010):

    - Governance makes sure that services conform to standard interface patterns and common data modeling practices, and promotes the incorporation of existing system functionality by building on top of other available services across a system.
    - Governance defines development standards based on proven design principles and patterns that promote reuse and composition.
    - Governance provides developers a set of proven design principles, standards and practices that promote the reduction of system-based component dependencies. By following these guidelines, individual components will be easier to maintain.

    For me personally, I am a fan of IT governance, and I feel that it is a valuable part of any corporate IT department. However, how it is implemented can really affect the value of using IT governance. Companies need to find a way to ensure that governance does not become extreme in its policies and procedures. I know that I, personally, would really dislike working under a completely totalitarian or laissez-faire version of governance. Developers need to be able to be creative in their designs, and too much governance can really impede the design process and prevent the most optimal design from being developed. On the other hand, with no governance enforced, no standards will be followed and accepted design patterns will be ignored. I have personally had to spend a lot of time working in this particular scenario, and I have found that the concept of code reuse and composition is almost nonexistent. Because of this, too much time and money is wasted on redeveloping existing aspects of an application that already exist within the system as a whole.

    I think moving forward we will see a staggered form of IT governance, regardless of whether it is for SOA or IT in general. Depending on the size of a company and the size of its IT department, I can see IT governance as a layered approach, in which the top layer is defined by enterprise architects who focus on abstract concepts pertaining to high-level design, general guidelines, acceptable best practices, and recommended design patterns. The next layer is defined by solution architects or department managers who further expand on the abstracted guidelines defined by the enterprise architects. This layer contains further definitions as to when various design patterns, coding standards, and best practices are to be applied, based on the context of the solutions being developed by the department. The final layer is defined by the system designer or a solutions architect assigned to a project, in that they define which design patterns will be used in a solution and its naming conventions, as well as outline how the system will function, based on the best practices defined by the previous layers.

    This layered approach allows IT departments to be flexible, in that system designers have creative leeway in designing solutions to meet the needs of the business, but they must operate within the confines of the abstracted IT governance guidelines. A real-world example of this can be seen in the United States as it pertains to governance of the people, in that the US government defines rules and regulations in the abstract, and the state governments take these guidelines and apply them based on the will of the people in each individual state. Furthermore, the county or city governments are the ones that actually enforce these rules, based on how they are interpreted by the local community. To further my example, the United States government defines that marijuana is illegal. Each individual state has the option to apply this regulation as it wishes, in that the state of Florida determines that all uses of the drug are illegal, but the state of California legally allows the use of marijuana for medicinal purposes only. Based on these accepted practices, each local government enforces these rules, in that a police officer will arrest anyone in the state of Florida for having this drug on them if they walk down the street, but in California, if a person has a medical prescription for the drug, they will not get arrested.

    References: Thomas Manes, Anne. (2010). Understanding SOA Governance. http://www.soamag.com/I40/0610-2.php


  • Styling ASP.NET MVC Error Messages

    - by MightyZot
    Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/11/styling-asp.net-mvc-error-messages.aspx

    Off the cuff, it may look like you're stuck with the presentation of your error messages (model errors) in ASP.NET MVC. That's not the case, though. You actually have quite a number of options with regard to styling those boogers. Like many of the helpers in MVC, the Html.ValidationMessageFor helper has multiple prototypes. One of those prototypes lets you pass a dictionary, or anonymous object, representing attribute values for the resulting markup.

        @Html.ValidationMessageFor(m => Model.Whatever, null, new { @class = "my-error" })

    By passing the htmlAttributes parameter, which is the last parameter in the call to the prototype of Html.ValidationMessageFor shown above, I can style the resulting markup by associating styles with the my-error css class. When you run your MVC project and view the source, you'll notice that MVC adds the class field-validation-valid or field-validation-error to a span created by the helper. You could actually just style those classes instead of adding your own... it's really up to you.

    Now, what if you wanted to move that error message around? Maybe you want to put that error message in a box or a callout. How do you do that? When I first started using MVC, it didn't occur to me that the Html.ValidationMessageFor helper just spits out a little bit of markup. I wanted to put the error messages in boxes with white backgrounds (our site originally had a black background) and show a little nib on the side to make them look like callouts or conversation bubbles. Not realizing how much freedom there is in the styling and markup, and after reading someone else's post, I created my own version of the ValidationMessageFor helper that took out the span and replaced it with divs. I styled the divs to produce the effect of a popup box and had a lot of trouble with sizing and such. That's a really silly and unnecessary way to solve this problem.

    If you want to move your error messages around, all you have to do is move the helper. MVC doesn't appear to care where you put it, which makes total sense when you think about it. Html.ValidationMessageFor is just spitting out a little markup, using a little bit of reflection on the name you're passing it. All you've got to do to style it the way you want is to put it in whatever markup you desire. Take a look at this, for example...

        <div class="my-anchor">@Html.ValidationMessageFor(m => Model.Whatever)</div>
        @Html.TextBoxFor(m => Model.Whatever)

    Now, given that bit of HTML, consider the following CSS...

        <style>
            .my-anchor { position: relative; }
            .field-validation-error {
                background-color: white;
                border-radius: 4px;
                border: solid 1px #333;
                display: block;
                position: absolute;
                top: 0; right: 0; left: 0;
                text-align: right;
            }
        </style>

    The my-anchor class establishes an anchor for the absolutely positioned error message. Now you can move the error message wherever you want it relative to the anchor. Using css3, there are some other tricks. For example, you can use the :not(:empty) selector to select the span and apply styles based upon whether or not the span has text in it. Keep it simple, though. Moving your elements around using absolute positioning may cause you issues on devices with screens smaller than your standard laptop or PC.

    While looking for something else recently, I saw someone asking how to style the output of Html.ValidationSummary. Html.ValidationSummary is the helper that will spit out a list of property errors, general model errors, or both. Html.ValidationSummary spits out fairly simple markup as well, so you can use the techniques described above with it also. The resulting markup is a <ul><li></li></ul> unordered list of error messages that carries the class validation-summary-errors.

    In the forum question, the user was asking how to hide the error summary when there are no errors. Their errors were in a red box, and they didn't want to show an empty red box when there aren't any errors. Obviously, you can use the css3 selectors to apply different styles to the list when it's empty and when it's not empty; however, that's not supported in all browsers. Well, it just so happens that the unordered list carries the style validation-summary-valid when the list is empty. While the Html.ValidationSummary helper renders a visible div containing one invisible list item, you can always just style the whole div with "display:none" when the validation-summary-valid class is applied and make it visible when the validation-summary-errors class is applied. Or, if you don't like that solution, which I like quite well, you can also check the model state for errors with something like this...

        int errors = ViewData.ModelState.Sum(ms => ms.Value.Errors.Count);

    That'll give you a count of the errors that have been added to ModelState. You can check that and conditionally include markup in your page if you want to. The choice is yours. Obviously, doing most everything you can with styles increases the flexibility of the presentation of your solution, so I recommend going that route when you can.

    That picture of the fat guy jumping has nothing to do with the article. That's just a picture of me on the roof and I thought it was funny. Doesn't every post need a picture?
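    To make that last option concrete, here is a short Razor sketch (not from the original post; the error-box class is illustrative) that conditionally renders the summary only when ModelState actually contains errors:

        @{
            // Count model errors the way the post suggests; skip the summary
            // markup entirely when there is nothing to show.
            int errorCount = ViewData.ModelState.Sum(ms => ms.Value.Errors.Count);
        }
        @if (errorCount > 0)
        {
            <div class="error-box">
                @Html.ValidationSummary()
            </div>
        }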


  • How do I pass vertex and color positions to OpenGL shaders?

    - by smoth190
    I've been trying to get this to work for the past two days, telling myself I wouldn't ask for help. I think you can see where that got me... I thought I'd try my hand at a little OpenGL, because DirectX is complex and depressing. I picked OpenGL 3.x, because even though I have an OpenGL 4 graphics card, all my friends don't, and I like to let them use my programs. There aren't really any great tutorials for OpenGL 3; most are just "type this and this will happen--the end". I'm trying to just draw a simple triangle, and so far, all I have is a blank screen with my clear color (when I set the draw type to GL_POINTS I just get a black dot). I have no idea what the problem is, so I'll just slap down the code. Here is the function that creates the triangle:

        void CEntityRenderable::CreateBuffers()
        {
            m_vertices = new Vertex3D[3];
            m_vertexCount = 3;

            m_vertices[0].x = -1.0f;
            m_vertices[0].y = -1.0f;
            m_vertices[0].z = -5.0f;
            m_vertices[0].r = 1.0f;
            m_vertices[0].g = 0.0f;
            m_vertices[0].b = 0.0f;
            m_vertices[0].a = 1.0f;

            m_vertices[1].x = 1.0f;
            m_vertices[1].y = -1.0f;
            m_vertices[1].z = -5.0f;
            m_vertices[1].r = 1.0f;
            m_vertices[1].g = 0.0f;
            m_vertices[1].b = 0.0f;
            m_vertices[1].a = 1.0f;

            m_vertices[2].x = 0.0f;
            m_vertices[2].y = 1.0f;
            m_vertices[2].z = -5.0f;
            m_vertices[2].r = 1.0f;
            m_vertices[2].g = 0.0f;
            m_vertices[2].b = 0.0f;
            m_vertices[2].a = 1.0f;

            //Create the VAO
            glGenVertexArrays(1, &m_vaoID);
            //Bind the VAO
            glBindVertexArray(m_vaoID);

            //Create a vertex buffer
            glGenBuffers(1, &m_vboID);
            //Bind the buffer
            glBindBuffer(GL_ARRAY_BUFFER, m_vboID);
            //Set the buffers data
            glBufferData(GL_ARRAY_BUFFER, sizeof(m_vertices), m_vertices, GL_STATIC_DRAW);

            //Set its usage
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex3D), 0);
            glVertexAttribPointer(1, 4, GL_FLOAT, GL_TRUE, sizeof(Vertex3D), (void*)(3*sizeof(float)));

            //Enable
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);

            //Check for errors
            if(glGetError() != GL_NO_ERROR)
            {
                Error("Failed to create VBO: %s", gluErrorString(glGetError()));
            }

            //Unbind...
            glBindVertexArray(0);
        }

    The Vertex3D struct is as such...

        struct Vertex3D
        {
            Vertex3D() : x(0), y(0), z(0), r(0), g(0), b(0), a(1) {}

            float x, y, z;
            float r, g, b, a;
        };

    And finally the render function:

        void CEntityRenderable::RenderEntity()
        {
            //Render...
            glBindVertexArray(m_vaoID); //Use our attribs
            glDrawArrays(GL_POINTS, 0, m_vertexCount);
            glBindVertexArray(0); //unbind

            OnRender();
        }

    (And yes, I am binding and unbinding the shader. That is just in a different place.) I think my problem is that I haven't fully wrapped my mind around this whole VertexAttribArray thing (the only thing I liked better in DirectX was input layouts D:). This is my vertex shader:

        #version 330

        //Matrices
        uniform mat4 projectionMatrix;
        uniform mat4 viewMatrix;
        uniform mat4 modelMatrix;

        //In values
        layout(location = 0) in vec3 position;
        layout(location = 1) in vec3 color;

        //Out values
        out vec3 frag_color;

        //Main shader
        void main(void)
        {
            //Position in world
            gl_Position = vec4(position, 1.0);
            //gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(in_Position, 1.0);

            //No color changes
            frag_color = color;
        }

    As you can see, I've disabled the matrices, because that just makes debugging this thing so much harder. I tried to debug using glslDevil, but my program just crashes right before the shaders are created... so I gave up on that. This is my first shot at OpenGL since the good old days of LWJGL, but that was when I didn't even know what a shader was. Thanks for your help :)

    Read the article

  • Navigation in a #WP7 application with MVVM Light

    - by Laurent Bugnion
    In MVVM applications, it can be a bit of a challenge to send instructions to the view (for example a page) from a viewmodel. Thankfully, we have good tools at our disposal to help with that. In his excellent series “MVVM Light Toolkit soup to nuts”, Jesse Liberty proposes one approach using the MVVM Light messaging infrastructure. While this works fine, I would like to show here another approach using what I call a “view service”, i.e. an abstracted service that is invoked from the viewmodel and implemented on the view. Multiple kinds of view services In fact, I use view services quite often, and even started standardizing them for the Windows Phone 7 applications I work on. If there is interest, I will be happy to show other such view services, for example: an Animation service, responsible for starting and stopping animations on the view; a Dialog service, in charge of displaying messages to the user and gathering feedback; and a Navigation service, in charge of navigating to a given page directly from the viewmodel. In this article, I will concentrate on the navigation service. The INavigationService interface In most WP7 apps, the navigation service is used in quite a straightforward way. We want to: Navigate to a given URI. Go back. Be notified when a navigation is taking place, and be able to cancel it. The INavigationService interface is quite simple indeed: public interface INavigationService { event NavigatingCancelEventHandler Navigating; void NavigateTo(Uri pageUri); void GoBack(); } Obviously, this interface can be extended if necessary, but in most of the apps I worked on, I found that this covers my needs. The NavigationService class It is possible to nicely pack the navigation service into its own class. To do this, we need to remember that all the PhoneApplicationPage instances use the same instance of the navigation service, exposed through their NavigationService property. In fact, in a WP7 application, it is the main frame (RootFrame, of type PhoneApplicationFrame) that is responsible for this task. So, our implementation of the NavigationService class can leverage this. First, the class grabs the PhoneApplicationFrame and stores a reference to it. It also registers a handler for the Navigating event, and forwards the event to the listening viewmodels (if any). Then, the NavigateTo and the GoBack methods are implemented. They are quite simple, because they are in fact just a gateway to the PhoneApplicationFrame. The whole class is as follows: public class NavigationService : INavigationService { private PhoneApplicationFrame _mainFrame; public event NavigatingCancelEventHandler Navigating; public void NavigateTo(Uri pageUri) { if (EnsureMainFrame()) { _mainFrame.Navigate(pageUri); } } public void GoBack() { if (EnsureMainFrame() && _mainFrame.CanGoBack) { _mainFrame.GoBack(); } } private bool EnsureMainFrame() { if (_mainFrame != null) { return true; } _mainFrame = Application.Current.RootVisual as PhoneApplicationFrame; if (_mainFrame != null) { // Could be null if the app runs inside a design tool _mainFrame.Navigating += (s, e) => { if (Navigating != null) { Navigating(s, e); } }; return true; } return false; } } Exposing URIs I find that it is a good practice to expose each page’s URI as a constant. In MVVM Light applications, a good place to do that is the ViewModelLocator, which already acts as a central point of setup for the views and their viewmodels.
Note that in some cases, it is necessary to expose the URL as a string, for instance when a query string needs to be passed to the view. So, for example, we could have: public static readonly Uri MainPageUri = new Uri("/MainPage.xaml", UriKind.Relative); public const string AnotherPageUrl = "/AnotherPage.xaml?param1={0}&param2={1}"; Creating and using the NavigationService Normally, we only need one instance of the NavigationService class. In cases where you use an IOC container, it is easy to simply register a singleton instance. For example, I am using a modified version of a super simple IOC container, and so I can register the navigation service as follows: SimpleIoc.Register<INavigationService, NavigationService>(); Then, it can be resolved where needed with: SimpleIoc.Resolve<INavigationService>(); Or (more frequently), I simply declare a parameter of type INavigationService on the viewmodel constructor and let the IOC container do its magic and inject the instance of the NavigationService when the viewmodel is created. On supported platforms (for example Silverlight 4), it is also possible to use MEF. Or, of course, we can simply instantiate the NavigationService in the ViewModelLocator and pass this instance as a parameter of the viewmodels’ constructors, inject it as a property, etc. Once the instance has been passed to the viewmodel, it can be used, for example with: NavigationService.NavigateTo(ViewModelLocator.ComparisonPageUri); Testing Thanks to the INavigationService interface, navigation can be mocked and tested when the viewmodel is put under unit test. Simply implement and inject a mock class, and assert that the methods are called as expected by the viewmodel (see the sketch below). Conclusion As usual, there are multiple ways to code a solution answering your needs. I find that view services are a really neat way to delegate view-specific responsibilities such as animation, dialogs and of course navigation to other classes through an abstracted interface. In some cases, such as the NavigationService class exposed here, it is even possible to standardize the implementation and pack it in a class library for reuse. I hope that this sample is useful! Happy coding. Laurent
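A minimal sketch of such a mock, assuming the INavigationService interface shown above (the type and member names are illustrative, not part of the toolkit):

using System;
using System.Windows.Navigation; // NavigatingCancelEventHandler on WP7/Silverlight

public class MockNavigationService : INavigationService
{
    // Declared only to satisfy the interface.
    public event NavigatingCancelEventHandler Navigating;

    // Records what the viewmodel asked for, so a unit test can assert on it.
    public Uri LastNavigatedUri { get; private set; }
    public int GoBackCount { get; private set; }

    public void NavigateTo(Uri pageUri)
    {
        LastNavigatedUri = pageUri;
    }

    public void GoBack()
    {
        GoBackCount++;
    }
}

A test would then construct the viewmodel with this mock, execute the command under test, and assert that LastNavigatedUri equals the expected page URI.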

    Read the article

  • Install Oracle Enterprise Linux 5 on VMWare

    - by TUFEKCIOGLU,FATIH
    In this article, we will install Oracle Enterprise Linux 5 on VMWare. To make the installation easier to follow, I will show screenshots of all the steps. [Screenshots 1-62 of the installation steps are omitted from this excerpt.] The installation is completed. Thanks, Fatih Tufekcioglu

    Read the article

  • How to fix Ubuntu 12.04.3 booting to a black screen full of errors in white text, after upgrading, on a Dell Inspiron 1501

    - by Ibuntu
    I am running a Dell Inspiron 1501. I use Linux only. No Microsoft or Apple operating systems (or really anything closed-source). I've only been using Linux for a little over a year, but I'm starting to gain a comfortable level of familiarity with the system and terminology. I've been having some issues with Quantal Quetzal and Raring Ringtail, especially with older hardware, so I opted to install Ubuntu 12.04.3 Precise Pangolin on the Inspiron 1501. I checked my MD5 sum after downloading my ISO and all was good. I have in fact used this iso/dvd to install Precise Pangolin successfully on a few other systems (some of which are even older than this laptop). The install goes fine. The wireless card doesn't work out of the box, but this is a known issue which is fairly easy to fix. So, the first thing I did was open up a terminal and run sudo apt-get update && sudo apt-get upgrade which, part way through, crashed (I assume lightdm and possibly X) and took me to a black screen filled with white lines of text that were either errors or just the outputs of commands. The reason I say that is because I was unable to glean any useful information from the output on the screen. I did take a picture, however, and will post a link. After that, every time I boot the system it goes right to that black screen posting all the error messages or output in white text. I never get a purple Ubuntu splash, so from what I can tell after reading this wiki article: https://wiki.ubuntu.com/X/Troubleshooting/BlankScreen that means that after the kernel is selected, it is unable to correctly apply the settings it needs. If the purple splash never shows, the frame buffer was never set correctly, right? This leads me to believe that it could be a kernel issue. The wiki suggested trying to pinpoint the issue by rolling back kernels until I find one that works. Is this my best option? I think I'm going to give it a try anyway and will let everyone know if I am able to solve the issue this way. I have since done a few reinstalls and some troubleshooting, including a couple of hours scouring the net for anyone with any kind of similar issue. Most of the issues I could find involved getting a black screen after login, and none of them said anything about any information output on this black screen. My reinstalls have taught me that there is no issue updating, but as soon as I run sudo apt-get upgrade my system goes to the black screen, and every time I boot it up it does the same thing. The only way to fix it is by reinstalling. I never get any ability to log in. After a hard power-off of the laptop (because I cannot use Ctrl+Alt+Del to reboot), when it boots again it goes to the GRUB boot menu and I can select between regular boot, recovery mode and the two memtest options. I never tried the memtest options, but the other two both lead to the same black screen. Some people having a black/blank screen issue claim to have fixed it by using 12.10 or 13.04, but I believe they were having a different issue where they got a black/blank screen after logging in. I think I will still give these images a try, but mostly figured I would just wait another day or two for 13.10. Other things I figured I would try, taken from the following three articles: "After logging in, there's a black screen and my cursor, nothing else! in Ubuntu 12.10", "Black Screen on Login After Upgrading to 12.04", and "I can't get to the login screen". These include opening a terminal using Ctrl+Alt+F1 and trying a variety of resets of Unity, X settings, and LightDM (or switching to GDM); but I doubt this will work or that I will even be able to access a terminal. I'm pretty sure the whole system is stuck after it loads the last line on the black screen. I will try these things and post more information when I have it. Hopefully someone has an idea in the meantime, and I will keep checking back trying to find a solution. Thank you. Here are 3 different pictures of the error message, which I had to take with my phone: http://ubuntuone.com/album/0TBBkxmVajJIQQtoN9mVdN

    Read the article

  • Access Control Service V2 and Facebook Integration

    - by Your DisplayName here!
    I haven’t been blogging about ACS2 in the past because it was not released and I was kinda busy with other stuff. Needless to say I spent quite some time with ACS2 already (both in customer situations as well as in the classroom and at conferences). ACS2 rocks! It’s IMHO the most interesting and useful (and most unique) part of the whole Azure offering! For my talk at VSLive yesterday, I played a little with the Facebook integration. See Steve’s post on the general setup. One claim that you get back from Facebook is an access token. This token can be used to directly talk to Facebook and query additional properties about the user. Which properties you have access to depends on which authorization your Facebook app requests. You can specify this in the identity provider registration page for Facebook in ACS2. In my example I added access to the home town property of the user. Once you have the access token from ACS you can use e.g. the Facebook SDK from Codeplex (also available via NuGet) to talk to the Facebook API. In my sample I used the WIF ClaimsAuthenticationManager to add the additional home town claim. This is not necessarily how you would do it in a “real” app. Depends ;) The code looks like this (sample code!): public class ClaimsTransformer : ClaimsAuthenticationManager {     public override IClaimsPrincipal Authenticate( string resourceName, IClaimsPrincipal incomingPrincipal)     {         if (!incomingPrincipal.Identity.IsAuthenticated)         {             return base.Authenticate(resourceName, incomingPrincipal);         }         string accessToken;         if (incomingPrincipal.TryGetClaimValue( "http://www.facebook.com/claims/AccessToken", out accessToken))         {             try             {                 var home = GetFacebookHometown(accessToken);                 if (!string.IsNullOrWhiteSpace(home))                 {                     incomingPrincipal.Identities[0].Claims.Add( new Claim("http://www.facebook.com/claims/HomeTown", home));                 }             }             catch { }         }         return incomingPrincipal;     }      private string GetFacebookHometown(string token)     {         var client = new FacebookClient(token);         dynamic parameters = new ExpandoObject();         parameters.fields = "hometown";         dynamic result = client.Get("me", parameters);         return result.hometown.name;     } }  
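To round this off, here is a rough sketch of the consuming side, assuming the claim type string used above and the same WIF object model as in the sample (the helper class and its name are illustrative only):

using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class HomeTownReader
{
    public static string GetHomeTown()
    {
        // Assumes the ClaimsTransformer above has already run for this request.
        var principal = Thread.CurrentPrincipal as IClaimsPrincipal;
        if (principal == null)
        {
            return null;
        }

        var claim = principal.Identities[0].Claims.FirstOrDefault(
            c => c.ClaimType == "http://www.facebook.com/claims/HomeTown");

        return claim != null ? claim.Value : null;
    }
}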

    Read the article

  • The sign of a true manager is delegation (C# style)

    - by MarkPearl
    Today I thought I would write a bit about delegates in C#. Up till recently I have managed to sidestep any real understanding of what delegates do and why they are useful – I mean, I know roughly what they do and have used them a lot, but I have never really got down and dirty with them and mucked about. Recently, however, with my renewed interest in Silverlight, delegates came up again as a possible solution to a particular problem, and suddenly I found myself opening a bland little console application just to see exactly how far I could take delegates with my limited knowledge. So, let's first look at the MSDN definition of delegates… A delegate declaration defines a reference type that can be used to encapsulate a method with a specific signature. A delegate instance encapsulates a static or an instance method. Delegates are roughly similar to function pointers in C++; however, delegates are type-safe and secure. Well, don't you love MSDN for such a useful definition. I must give it credit though… later on it really explains it a bit better by saying "A delegate lets you pass a function as a parameter. The type safety of delegates requires the function you pass as a delegate to have the same signature as the delegate declaration." A little more reading up on delegates mentions that delegates are similar to interfaces in that they enable the separation of specification and implementation. A delegate declares a single method, while an interface declares a group of methods. So enough reading – let's look at some code and see a basic example of a delegate… Let's assume we have a console application with a simple delegate declared called AdjustValue like below… class Program { private delegate int AdjustValue(int val); static void Main(string[] args) { } } In a sense, all we have said is that we will be creating one or more methods that follow the same pattern as AdjustValue – i.e. they will take one input value of type int and return an integer. We could then expand our code to have various methods that match the structure of our delegate AdjustValue (remember the structure is int xxx (int xxx)) class Program { private delegate int AdjustValue(int val); private static int Dbl(int val) { return val * 2; } private static int AlwaysOne(int val) { return 1; } static void Main(string[] args) { } }  Above I have expanded my project to have two methods, one called Dbl and the other AlwaysOne. Dbl always returns double the input val and AlwaysOne always returns 1. I could now declare a variable and assign it to be one of those functions, like the following… class Program { private delegate int AdjustValue(int val); private static int Dbl(int val) { return val * 2; } private static int AlwaysOne(int val) { return 1; } static void Main(string[] args) { AdjustValue myDelegate; myDelegate = Dbl; Console.WriteLine(myDelegate(1).ToString()); Console.ReadLine(); } } In this instance I have declared an instance of the AdjustValue delegate called myDelegate; I have then told myDelegate to point to the method Dbl, and then called myDelegate(1). What would the result be? Yes, in this instance it would be exactly the same as me calling the following code… static void Main(string[] args) { Console.WriteLine(Dbl(1).ToString()); Console.ReadLine(); }   So why all the extra work for delegates when we could just do what we did above and call the method directly? Well… that separation of specification from implementation comes to mind. So, this all seems pretty simple.
Let's take a slightly more complicated variation of the console application. Assume that my project is the same as the one previously, except that my main method is adjusted as follows… static void Main(string[] args) { AdjustValue myDelegate; myDelegate = Dbl; myDelegate = AlwaysOne; Console.WriteLine(myDelegate(1).ToString()); Console.ReadLine(); } What would happen in this scenario? Quite simply, "1" would be written to the console, the reason being that myDelegate was last pointing to the AlwaysOne method before it was called. Make sense? In a way, myDelegate is a variable method that can be swapped and changed when needed. Let's make the code a little more confusing by using a delegate in the declaration of another delegate, as shown below… class Program { private delegate int AdjustValue(InputValue val); private delegate int InputValue(); private static int Dbl(InputValue val) { return val()*2; } private static int GetInputVal() { Console.WriteLine("Enter a whole number : "); return Convert.ToInt32(Console.ReadLine()); } static void Main(string[] args) { AdjustValue myDelegate; myDelegate = Dbl; Console.WriteLine(myDelegate(GetInputVal).ToString()); Console.ReadLine(); } }   Now it gets really interesting, because it looks like we have passed a method into a function in the main method by declaring… Console.WriteLine(myDelegate(GetInputVal).ToString()); So, what is the output? Well, try to guess what will happen – then copy the code and see if you got it right. Well, that brings me to the end of this short explanation of delegates. Hopefully it made sense!
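As a closing aside, the same wiring can be expressed with the framework-supplied Func<int, int> delegate type, which removes the need to declare AdjustValue by hand. A minimal sketch, reusing only the methods from the article above:

using System;

class FuncDemo
{
    private static int Dbl(int val) { return val * 2; }
    private static int AlwaysOne(int val) { return 1; }

    static void Main(string[] args)
    {
        // Func<int, int> matches any method taking an int and returning an int,
        // exactly like the hand-rolled AdjustValue delegate.
        Func<int, int> myDelegate = Dbl;
        myDelegate = AlwaysOne;

        // Prints 1: myDelegate last pointed at AlwaysOne, as in the earlier example.
        Console.WriteLine(myDelegate(1));
        Console.ReadLine();
    }
}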

    Read the article

  • How to make creating viewmodels at runtime less painful

    - by Mr Happy
    I apologize for the long question, it reads a bit like a rant, but I promise it's not! I've summarized my question(s) below. In the MVC world, things are straightforward. The Model has state, the View shows the Model, and the Controller does stuff to/with the Model (basically); a controller has no state. To do stuff the Controller has some dependencies on web services, repositories, the lot. When you instantiate a controller you care about supplying those dependencies, nothing else. When you execute an action (method on Controller), you use those dependencies to retrieve or update the Model or call some other domain service. If there's any context, say some user wants to see the details of a particular item, you pass the Id of that item as a parameter to the Action. Nowhere in the Controller is there any reference to any state. So far so good. Enter MVVM. I love WPF, I love data binding. I love frameworks that make data binding to ViewModels even easier (using Caliburn Micro a.t.m.). I feel things are less straightforward in this world though. Let's do the exercise again: the Model has state, the View shows the ViewModel, and the ViewModel does stuff to/with the Model (basically); a ViewModel does have state! (To clarify: maybe it delegates all the properties to one or more Models, but that means it must have a reference to the model one way or another, which is state in itself.) To do stuff the ViewModel has some dependencies on web services, repositories, the lot. When you instantiate a ViewModel you care about supplying those dependencies, but also the state. And this, ladies and gentlemen, annoys me to no end. Whenever you need to instantiate a ProductDetailsViewModel from the ProductSearchViewModel (from which you called the ProductSearchWebService which in turn returned IEnumerable<ProductDTO>, everybody still with me?), you can do one of these things: call new ProductDetailsViewModel(productDTO, _shoppingCartWebService /* dependency */);, this is bad, imagine 3 more dependencies, this means the ProductSearchViewModel needs to take on those dependencies as well. Also, changing the constructor is painful. call _myInjectedProductDetailsViewModelFactory.Create().Initialize(productDTO);, the factory is just a Func, they are easily generated by most IoC frameworks. I think this is bad because Init methods are a leaky abstraction. You also can't use the readonly keyword for fields that are set in the Init method. I'm sure there are a few more reasons. call _myInjectedProductDetailsViewModelAbstractFactory.Create(productDTO); So... this is the pattern (abstract factory) that is usually recommended for this type of problem (a sketch of it follows at the end of this question). I thought it was genius, since it satisfies my craving for static typing, until I actually started using it. The amount of boilerplate code is, I think, too much (you know, apart from the ridiculous variable names I get to use). For each ViewModel that needs runtime parameters you'll get two extra files (factory interface and implementation), and you need to type the non-runtime dependencies like 4 extra times. And each time the dependencies change, you get to change the factory as well. It feels like I don't even use a DI container anymore. (I think Castle Windsor has some kind of solution for this [with its own drawbacks, correct me if I'm wrong].) do something with anonymous types or a dictionary. I like my static typing. So, yeah. Mixing state and behavior in this way creates a problem which doesn't exist at all in MVC.
And I feel like there currently isn't a really adequate solution for this problem. Now I'd like to observe some things: People actually use MVVM. So they either don't care about all of the above, or they have some brilliant other solution. I haven't found an in-depth example of MVVM with WPF. For example, the NDDD-sample project immensely helped me understand some DDD concepts. I'd really like it if someone could point me in the direction of something similar for MVVM/WPF. Maybe I'm doing MVVM all wrong and I should turn my design upside down. Maybe I shouldn't have this problem at all. Well, I know other people have asked the same question, so I think I'm not the only one. To summarize: Am I correct to conclude that having the ViewModel be an integration point for both state and behavior is the reason for some difficulties with the MVVM pattern as a whole? Is using the abstract factory pattern the only/best way to instantiate a ViewModel in a statically typed way? Is there something like an in-depth reference implementation available? Is having a lot of ViewModels with both state/behavior a design smell?
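For reference, a minimal sketch of what I mean by option 3 above; the type names mirror the example, and everything else (the DTO, the service interface, the viewmodel constructor) is assumed:

public interface IProductDetailsViewModelFactory
{
    ProductDetailsViewModel Create(ProductDTO productDto);
}

public class ProductDetailsViewModelFactory : IProductDetailsViewModelFactory
{
    // Non-runtime dependencies are injected once, into the factory.
    private readonly IShoppingCartWebService _shoppingCartWebService;

    public ProductDetailsViewModelFactory(IShoppingCartWebService shoppingCartWebService)
    {
        _shoppingCartWebService = shoppingCartWebService;
    }

    // Runtime state (the DTO) is supplied at creation time, so the viewmodel
    // keeps a single constructor and can use readonly fields.
    public ProductDetailsViewModel Create(ProductDTO productDto)
    {
        return new ProductDetailsViewModel(productDto, _shoppingCartWebService);
    }
}

This also makes the boilerplate complaint concrete: two extra types per viewmodel, and every non-runtime dependency spelled out again in the factory.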

    Read the article

  • How can I implement a database TableView-like thing in C++?

    - by Industrial-antidepressant
    How can I implement a TableView-like thing in C++? I want to emulate a tiny relational-database-like thing in C++. I have data tables, and I want to transform them somehow, so I need a TableView-like class. I want filtering, sorting, freely adding and removing items, and transforming (e.g. view as UPPERCASE and so on). The whole thing is inside a GUI application, so data tables and views are attached to a GUI (or HTML or something). So how can I identify an item in the view? How can I signal it when the table is changed? Is there some design pattern for this? Here is a simple table, and a simple data item: #include <iostream> #include <string> #include <boost/multi_index_container.hpp> #include <boost/multi_index/member.hpp> #include <boost/multi_index/ordered_index.hpp> #include <boost/multi_index/random_access_index.hpp> using boost::multi_index_container; using namespace boost::multi_index; struct Data { Data() {} int id; std::string name; }; struct row{}; struct id{}; struct name{}; typedef boost::multi_index_container< Data, indexed_by< random_access<tag<row> >, ordered_unique<tag<id>, member<Data, int, &Data::id> >, ordered_unique<tag<name>, member<Data, std::string, &Data::name> > > > TDataTable; class DataTable { public: typedef Data item_type; typedef TDataTable::value_type value_type; typedef TDataTable::const_reference const_reference; typedef TDataTable::index<row>::type TRowIndex; typedef TDataTable::index<id>::type TIdIndex; typedef TDataTable::index<name>::type TNameIndex; typedef TRowIndex::iterator iterator; DataTable() : row_index(rule_table.get<row>()), id_index(rule_table.get<id>()), name_index(rule_table.get<name>()), row_index_writeable(rule_table.get<row>()) { } TDataTable::const_reference operator[](TDataTable::size_type n) const { return rule_table[n]; } std::pair<iterator,bool> push_back(const value_type& x) { return row_index_writeable.push_back(x); } iterator erase(iterator position) { return row_index_writeable.erase(position); } bool replace(iterator position,const value_type& x) { return row_index_writeable.replace(position, x); } template<typename InputIterator> void rearrange(InputIterator first) { return row_index_writeable.rearrange(first); } void print_table() const; unsigned size() const { return row_index.size(); } TDataTable rule_table; const TRowIndex& row_index; const TIdIndex& id_index; const TNameIndex& name_index; private: TRowIndex& row_index_writeable; }; class DataTableView { DataTableView(const DataTable& source_table) {} // How can I implement this? // I want filtering, sorting, signaling upper GUI layer, and sorting, and ... }; int main() { Data data1; data1.id = 1; data1.name = "name1"; Data data2; data2.id = 2; data2.name = "name2"; DataTable table; table.push_back(data1); DataTable::iterator it1 = table.row_index.iterator_to(table[0]); table.erase(it1); table.push_back(data1); Data new_data(table[0]); new_data.name = "new_name"; table.replace(table.row_index.iterator_to(table[0]), new_data); for (unsigned i = 0; i < table.size(); ++i) std::cout << table[i].name << std::endl; #if 0 // using scenarios: DataTableView table_view(table); table_view.fill_from_source(); // synchronization with source table_view.remove(data_item1); // remove item from view table_view.add(data_item2); // add item from source table table_view.filter(filterfunc); // filtering table_view.sort(sortfunc); // sorting // modifying from source_table, how to signal the table_view? // FYI: Table view is attached to a GUI item table.erase(data); table.replace(data); #endif return 0; }

    Read the article

  • How do I dig myself out of this DEEP hole? [closed]

    - by user74847
    I may be a bit biased in the way I word this, but any opinions and suggestions are welcome. I should start by saying I have an MSc in CS and a degree in new media, plus 6 years of experience, and I'm probably around a middleweight developer. I started a web development company with my friend from uni a year ago; there was a 4 month gap in the middle where I went miles away to work on a big project. I've since returned and picked up where we left off. A year on, though, I find I'm still staying up til 5am and getting up at 9, sometimes going 2-3 days without sleep. While I was away I was working 9-5 and struggling to keep up with doing stuff for my clients 8 hours ahead after work, so things stagnated. We currently have about 12 active projects, with one other part-time developer and a full-time freelancer who is dealing with one of our major projects. I am solely responsible for concurrently developing 2 big sites similar to Gumtree in functionality, at the same time as about 5-6+ small WordPress-based 5-10 page sites. A lot of the content isn't in yet, or the client is delaying, so I chop and change projects every other day, which does my head in. Is it reasonable to expect myself to remember the intricate details of each project when I come back to it a week later? And remember the details of a task which hasn't been written down? My business partner seems to think so. Or am I just forgetful? I'm particularly bad at estimating timescales, which doesn't help. Added to that, a lot of the technologies I am using are new to me (a Magento site took weeks to theme rather than days and was full of bugs, even after 1000's of Google searches and hours reading forums). I'm still trying to learn and find the best CMS for us to use, and to get my head around the likes of Bootstrap and jQuery, and Cpanel/Linux (we just got a blank VPS for me to set up, with no experience). Even installing an SSL certificate caused everyone's mail clients to go down, which was more stress for me to sort out. I find the pressure of the workload and timescales, and trying to learn this stuff so fast, is beginning to turn me against my career path. The fact that I never seem to get anything done really winds up my business partner, and I've come to associate him with the stress and pain of the whole situation, especially when I get berated or a look that says "oh you retard" when I forget something. Even today I spent hours learning how a particular ThemeForest theme worked with WordPress and how I could twist it to work for our particular needs; on the surface I had done no work, and that triggered a 30 minute tirade of anger and stress and questioning of what I had done from my business partner. Had I taken too long to work on that? Should I have done it in 2 hours instead of 6? I told him it would take 2 hours. I was wrong. I feel like I'm running myself into the ground. My sleeping pattern has got so bad that when I'm working I'm half asleep and making mistakes; my eyes are constantly purple underneath; I literally fall asleep at my desk. It's affecting my social life too. I've not slept more than lightly for the last year, and I grind through impossible code puzzles in my half sleep, which keeps me awake when I'm already exhausted. Plus, the work is rushed and buggy when it does get done, so it drags on into the next project. I also procrastinate quite badly, pacing the living room and looking out the window when I'm alone for three days straight in the flat, and I start to get cabin fever, which means I do even less work and the negative feedback loop continues.
I get told I'm the only one with the problem when I say that I can't work from home any more, and examples of other freelancers get brought up. An office wouldn't bring any extra cash into the company, but I'm convinced that moving more than 2 meters away from my bed to go to "work" would get me working; at the moment I feel guilty, like I should be working 24-7. It is important that we do all this work to raise enough cash to get our business to the next level, but every month still feels like a struggle to pay the rent (there is about £20K coming in by Jan), and I often have to borrow money from friends to buy food or get a taxi to a meeting, so it is vital the money keeps coming in. (I'm also 20 mins late for nearly all meetings, but that's a different issue.) Have you experienced anything similar? How can I deal with the issues I've raised? Is it realistic to develop 10 sites at once? How can I improve my relationship with my business partner? Do you struggle to work at home? How do you deal with that? I think if I don't get my life on track by Feb I will seriously consider giving it all up, but that seems like such a waste. Any ideas!? I need help! Thanks.

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part II)

    I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010. Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source. I'm not going to take the easy way and tie it to a SQL Server data source, though; I will tie it to an XML data file instead. Why? Well, why not? This is purely for learning; there are probably much better ways to get strongly-typed classes around XML, but it will force us to go down a path less travelled and maybe learn a few things along the way. Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code. To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I don't want the NFL to come down and sue me for using their name in this totally football-related article. The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played: Note that when making the associations between these entities, I was given the option to create the foreign key, but I only chose to select this option for the association between Player and Position. The reason for this is that I am picturing the XML that will contain this data to look somewhat like this: <MFL> <Position/> <Position/> <Position/> <Team> <Player> <Statistic/> </Player> </Team> </MFL> Statistic will be under its associated Player node, and Player will be under its associated Team node - no need to have an Id to reference it if we know it will always fall under its parent. Position, however, is more of a lookup value that will not have any hierarchical relationship to the player. In fact, the Position data itself may be in a completely different XML file (something I'd like to play around with), so in any case, a player will need to reference the position by its Id. So now that we have a simple data model laid out, I would like to generate two things based on it: a class for each entity, with properties corresponding to each entity property, and an IO class with methods to get data for each entity, either all instances, by Id, or by parent. Now, my experience with code generation in the past has consisted of writing up little apps that use the CodeDom directly to regenerate code on demand (or using tools like CodeSmith). Surely, there has got to be a more fun way to do this given that we are using the Entity Framework, which already has built-in code generation for SQL Server support. Let's start with that built-in stuff to give us a base to work off of. Right-click anywhere in the canvas of our model and select Add Code Generation Item: So just adding that code item seemed to do quite a bit towards what I was intending: it apparently generated a class for each entity, but also a whole ton more. I mean a TON more. Way too much complicated code was generated - now, that code is likely to be a black box anyway, so it shouldn't matter, but we need to understand how to make this work the way we want it to work, so let's get ready to do some stumbling through that text template (tt) file. When I open the .tt file that was generated, right off the bat I realize there is going to be trouble - there is no color coding, no intellisense, no nothing!
That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog. Thankfully, the community comes to my rescue and I won't have to cast my mind back to the glory days of coding in VI (look it up, kids). Using the Extension Manager (available under the Tools menu), I did a quick search for tt editor in the Online Gallery and quickly found the Tangible T4 Editor: Downloading and installing this was a breeze, and after doing so I got some color coding and intellisense while editing the tt files. If you will be doing any customizing of tt files, I highly recommend installing this extension. Next, we'll see if that is enough help for us to tweak that tt file to do the kind of code generation that we want.

    Read the article

  • StreamInsight 2.1 Released

    - by Roman Schindlauer
    The wait is over—we are pleased to announce the release of StreamInsight 2.1. Since the release of version 1.2, we have heard your feedback and suggestions, and based on them we have come up with a whole new set of features. Here are some of the highlights: A New Programming Model – a clearer and more consistent object model, eliminating the need for complex input and output adapters (though they are still completely supported). This new model allows you to provision, name, and manage data sources and sinks in the StreamInsight server. Tight integration with the Reactive Framework (Rx) – you can write reactive queries hosted inside StreamInsight as well as compose temporal queries on reactive objects. High Availability – checkpointing over temporal streams and multiple processes with shared computation. Here is how simple coding can be with the 2.1 programming model: class Program {     static void Main(string[] args)     {         using (Server server = Server.Create("Default"))         {             // Create an app             Application app = server.CreateApplication("app");             // Define a simple observable which generates an integer every second             var source = app.DefineObservable(() =>                 Observable.Interval(TimeSpan.FromSeconds(1)));             // Define a sink.             var sink = app.DefineObserver(() =>                 Observer.Create<long>(x => Console.WriteLine(x)));             // Define a query to filter the events             var query = from e in source                         where e % 2 == 0                         select e;             // Bind the query to the sink and create a runnable process             using (IDisposable proc = query.Bind(sink).Run("MyProcess"))             {                 Console.WriteLine("Press a key to dispose the process...");                 Console.ReadKey();             }         }     } }   That’s how easily you can define a source and a sink, compose a query, and run it. Note that we did not replace the existing APIs; they co-exist with the new surface. Stay tuned: you will see a series of articles coming out over the next few weeks about the new features and how to use them. Come and grab it from our download center page and let us know what you think! You can find the updated MSDN documentation here, and we would appreciate it if you could provide feedback on the docs as well—best via email to [email protected]. Moreover, we updated our samples to demonstrate the new programming surface. Regards, The StreamInsight Team
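To show how queries compose on the new surface, here is a hedged variation of the sample above; only the query expression and the process name change, and the standard LINQ operators are assumed to compose over the defined source exactly as in the sample:

// Drop-in replacement for the query in the sample above.
var query = from e in source
            where e % 3 == 0   // keep every third tick
            select e * 10;     // scale it before it reaches the sink

// Bind and run exactly as before.
using (IDisposable proc = query.Bind(sink).Run("ScaledProcess"))
{
    Console.WriteLine("Press a key to dispose the process...");
    Console.ReadKey();
}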

    Read the article

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer [closed]

    - by Superbest
    I don't have much formal training in programming, and I have learned most things by looking up solutions on the internet to practical problems I have. There are some areas which I think would be valuable to learn, but which ended up being both difficult to learn and easy to avoid learning for a self-taught programmer. Unit testing is one of them. Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles, so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium sized programs (development time of weeks or months, and mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources). I have found this tutorial which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer, such as: What parts of my application and what sorts of things aren't necessarily worth testing? How fine-grained should my tests be? Should they test every method and property separately, or work with a larger scope? What is a good naming convention for test methods? (Since apparently the name of the method is the only way I will be able to tell at a glance from the test results table what works in my program and what doesn't.) Is it bad to have many asserts in one test method? Since apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why a test is failing. If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly-trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder now? If my program works with a large body of data and execution takes minutes or more, should my tests use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset or how to simulate? Closely related to the previous point: if a class is such that its main operation happens in a state that is arrived at by the program after some involved operations (say, a class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably ending up testing that class and all the other code that brings it to that state along with it? In general, what kind of approach should I use for test initialization? (Hopefully that is the correct term; I mean preparing classes for testing by filling them in with appropriate data.) How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are these even good ideas? Is there anything like design patterns concerning testing specifically?
I guess the main themes in what I'd like to learn more about are (1) what are the overarching principles that should be followed (or at least considered) in every testing effort, and (2) what are popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored - not a universally correct rule, but it has been quite helpful, since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly, I've found ReSharper's suggestions on member naming style and other things to be quite helpful in keeping my codebases sane. I see many resources, both online and in print, that talk about testing in the context of large applications (years of work, tens of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real-world problems. Speaking of software development in general, advice given with the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors. Summary: So my question is: What are some resources to learn about unit testing, for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory, and then apply systematically by repeatedly asking myself "is this test following the bible of testing closely enough?" and then amending discrepancies if it doesn't.
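As one concrete starting point for the naming and single-assert questions above, here is a minimal sketch using the Microsoft.VisualStudio.TestTools the question mentions; the PriceCalculator class is invented purely for illustration:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test, defined here so the sketch is self-contained.
public class PriceCalculator
{
    public decimal CalculateTotal(int[] pricesInCents)
    {
        decimal total = 0m;
        foreach (var p in pricesInCents)
        {
            total += p / 100m;
        }
        return total;
    }
}

[TestClass]
public class PriceCalculatorTests
{
    // One widely used naming pattern: MethodUnderTest_Scenario_ExpectedResult,
    // so a failing row in the test results table is readable on its own.
    [TestMethod]
    public void CalculateTotal_EmptyBasket_ReturnsZero()
    {
        var calculator = new PriceCalculator();

        decimal total = calculator.CalculateTotal(new int[0]);

        // One logical assertion per test keeps the failure message unambiguous.
        Assert.AreEqual(0m, total);
    }
}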

    Read the article


< Previous Page | 415 416 417 418 419 420 421 422 423 424 425 426  | Next Page >