Search Results

Search found 6619 results on 265 pages for 'digital logic'.


  • Class Design -- Multiple Calls from One Method or One Call from Multiple Methods?

    - by Andrew
    I've been working on some code recently that interfaces with a CMS we use, and it's presented me with a question on class design that I think is applicable in a number of situations. Essentially, I am extracting information from the CMS and transforming it into objects that I can use programmatically for other purposes. This consists of two steps:

    1. Retrieve the data from the CMS (we have a DAL that I use, so this is essentially just specifying what data from the CMS I want; no connection logic or anything like that)
    2. Map the parsed data to my own [C#] objects

    There are basically two ways I can approach this.

    One call from multiple methods:

        public void MainMethodWhereIDoStuff()
        {
            IEnumerable<MyObject> myObjects = GetMyObjects();
            // Do other stuff with myObjects
        }

        private static IEnumerable<MyObject> GetMyObjects()
        {
            IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems();
            List<MyObject> mappedObjects = new List<MyObject>();
            // do stuff to map the CmsDataItems to MyObjects
            return mappedObjects;
        }

        private static IEnumerable<CmsDataItem> GetCmsDataItems()
        {
            List<CmsDataItem> cmsDataItems = new List<CmsDataItem>();
            // do stuff to get the CmsDataItems I want
            return cmsDataItems;
        }

    Multiple calls from one method:

        public void MainMethodWhereIDoStuff()
        {
            IEnumerable<CmsDataItem> cmsDataItems = GetCmsDataItems();
            IEnumerable<MyObject> myObjects = GetMyObjects(cmsDataItems);
            // do stuff with myObjects
        }

        private static IEnumerable<MyObject> GetMyObjects(IEnumerable<CmsDataItem> itemsToMap)
        {
            // ...
        }

        private static IEnumerable<CmsDataItem> GetCmsDataItems()
        {
            // ...
        }

    I am tempted to say that the latter is better than the former, as GetMyObjects does not depend on GetCmsDataItems, and the calling method makes explicit the steps executed to retrieve the objects (I'm concerned that the first approach is kind of an object-oriented version of spaghetti code). On the other hand, the two helper methods are never going to be used outside of the class, so I'm not sure it really matters whether one depends on the other. Furthermore, I like the fact that in the first approach the objects can be retrieved in one line; most likely anyone working with the main method doesn't care how the objects are retrieved, they just need to retrieve them, and the "daisy chained" helper methods hide the exact steps needed to do so (in practice, I actually have a few more methods but am still able to retrieve the object collection I want in one line). Is one of these methods right and the other wrong? Or is it simply a matter of preference or context dependent?
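
    One concrete consideration when weighing the two shapes: with the second version, the mapping step can be exercised in isolation, without touching the CMS or the DAL. A minimal sketch of such a test, assuming NUnit, assuming GetMyObjects is made accessible to the test, and using a hypothetical Title property as a stand-in for whatever the mapper really copies:

        // Sketch only: exercises the mapping step with in-memory CmsDataItems,
        // so no CMS or DAL is involved. The property names are illustrative.
        [Test]
        public void GetMyObjects_MapsEachCmsDataItem()
        {
            var items = new[] { new CmsDataItem { Title = "a" }, new CmsDataItem { Title = "b" } };

            var mapped = GetMyObjects(items).ToList();

            CollectionAssert.AreEqual(new[] { "a", "b" }, mapped.Select(o => o.Title).ToList());
        }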

    Read the article

  • Impressions of my ASUS eee slate EP121 - Dual core 4GB, 64GB SSD

    - by tonyrogerson
    This thing is lovely. The Bluetooth keyboard is very nice, with good feedback on each keypress. There is no mouse, but you can use the stylus or get yourself a Bluetooth mouse; me, I've opted for a Microsoft ARC mouse, which is a delight to use. The USB doors are a pain to open for the first time if, like me, you don't have any fingernails.

    It came as a surprise that the slate shows four processors (dual core with multi-threading). I didn't really look at the processor; I was more interested in the amount of memory and the SSD. You don't get the full 4GB even with the 64-bit version of Windows 7 installed (which I immediately upgraded to Ultimate through my MSDN subscription).

    The box is extremely responsive, and I mean extremely: it loads Winword in literally a second. I've got Office 2010 and OneNote 2010 on there now. One problem: after applying all (43) Windows updates since the upgrade, the machine was still sat on step 3 of 3 of the "configuring updates" start-up screen after about an hour. You can't turn this machine off without using a paper clip to reset it, and as I have just found, you do need a paper clip :). Installing Windows 7 SP1 was effortless.

    One of the first things I did was reduce the size of the font; by default it's set at 125%, but my eyesight is OK :) so I've set that back down. Amazon Kindle for the PC works really well, with plenty of text on the screen when viewed portrait. The case it comes with allows the slate to stand up in various positions (portrait, horizontal) and seems stable enough. The wireless works well and seems to have a better signal than my other two laptop machines, which is good news. The gadget passed the pose test at work :).

    I use offline files to keep a copy of all my work stuff locally. I'm not sure what the cause is (well, it's probably my server), but whenever I try and sync, it runs for a couple of minutes then fails with "network name no longer contactable". Funnily enough it's fine from my big laptop, so I can only guess this may be a driver-type issue on the EP121 itself. Very odd and very annoying.

    I do a lot of presenting and need to plug into a VGA projector, because at most sites that's all that is offered. The EP121 has a mini-HDMI output, which is great except for this scenario: HDMI is digital, VGA is analogue, and you will struggle to find a cost-effective converter. I found HDFury and also a device HP do; however, a better solution appears to be a USB graphics adapter. The one I've ordered is the ClimaxDigital USB 2.0 to DVI/VGA/HDMI adaptor, which gives everything I need (VGA and DVI output) and great resolution as well. So fingers crossed, because I'm presenting next Wednesday in Edinburgh and not taking my 300kg Lenovo W700 (I'm sure my back just sighed in relief). The adapter certainly works really well on my LED TV, and the install was simple: it just works!

    One of the several reasons for buying this piece of kit was to use it on my LED TV to remote into my main machine to check stuff whilst sat in my living room, and also to watch webcasts and lecture videos in comfort away from my office. Because of the wireless speed limitations I'm opting for a USB network adapter from Belkin, which will also let me take advantage of my home gigabit network. There are only 2 USB ports on the slate, so I'm going to knock up a hub to make connecting it all straightforward and simple, and I'm also going to purchase a second power supply so I don't have to faff about with that either.

    I now have the developer x64 edition of SQL Server 2008 R2 on it; yes, everything :). That leaves about 16GB to play with, but that will be fine. I'll put AdventureWorks on there so I can play and demo stuff, which is all I'm after from this; my development machine is significantly more powerful and meets my storage needs too.

    Travel test this weekend and next week: I'm in Dundee for my final exam for the masters degree.

    Read the article

  • Accessing Repositories from Domain

    - by Paul T Davies
    Say we have a task logging system. When a task is logged, the user specifies a category and the task defaults to a status of 'Outstanding'. Assume in this instance that Category and Status have to be implemented as entities. Normally I would do this:

    Application layer:

        public class TaskService
        {
            //...
            public void Add(Guid categoryId, string description)
            {
                var category = _categoryRepository.GetById(categoryId);
                var status = _statusRepository.GetById(Constants.Status.OutstandingId);
                var task = Task.Create(category, status, description);
                _taskRepository.Save(task);
            }
        }

    Entity:

        public class Task
        {
            //...
            public static Task Create(Category category, Status status, string description)
            {
                return new Task
                {
                    Category = category,
                    Status = status,
                    Description = description
                };
            }
        }

    I do it like this because I am consistently told that entities should not access the repositories, but it would make much more sense to me if I did this:

    Entity:

        public class Task
        {
            //...
            public static Task Create(Category category, string description)
            {
                return new Task
                {
                    Category = category,
                    Status = _statusRepository.GetById(Constants.Status.OutstandingId),
                    Description = description
                };
            }
        }

    The status repository is dependency injected anyway, so there is no real dependency, and this feels more to me like it is the domain making the decision that a task defaults to outstanding. The previous version feels like it is the application layer making that decision. And why are repository contracts often in the domain if this should not be a possibility?

    Here is a more extreme example, where the domain decides urgency:

    Entity:

        public class Task
        {
            //...
            public static Task Create(Category category, string description)
            {
                var task = new Task
                {
                    Category = category,
                    Status = _statusRepository.GetById(Constants.Status.OutstandingId),
                    Description = description
                };

                if (someCondition)
                {
                    if (someValue > anotherValue)
                    {
                        task.Urgency = _urgencyRepository.GetById(Constants.Urgency.UrgentId);
                    }
                    else
                    {
                        task.Urgency = _urgencyRepository.GetById(Constants.Urgency.SemiUrgentId);
                    }
                }
                else
                {
                    task.Urgency = _urgencyRepository.GetById(Constants.Urgency.NotId);
                }

                return task;
            }
        }

    There is no way you would want to pass in all possible versions of Urgency, and no way you would want to calculate this business logic in the application layer, so surely this would be the most appropriate way? So is this a valid reason to access repositories from the domain?
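
    For comparison, one commonly suggested middle ground is to move the defaulting rule into a domain service, so the rule lives in the domain without the entity itself holding a repository. A minimal sketch, assuming the same repository contract as above; the TaskFactory class and IStatusRepository interface names are hypothetical, not part of the original design:

        // Sketch only: a hypothetical domain service that keeps the
        // "new tasks default to Outstanding" rule in the domain layer,
        // while the repository stays behind a constructor-injected contract.
        public class TaskFactory
        {
            private readonly IStatusRepository _statusRepository;

            public TaskFactory(IStatusRepository statusRepository)
            {
                _statusRepository = statusRepository;
            }

            public Task Create(Category category, string description)
            {
                var status = _statusRepository.GetById(Constants.Status.OutstandingId);
                return Task.Create(category, status, description);
            }
        }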

    Read the article

  • Is ASP.NET MVC completely (and exclusively) based on conventions?

    - by Mike Valeriano
    TL;DR: Is there a "Hello World!" ASP.NET MVC tutorial out there that doesn't rely on conventions and "stock" projects? Is it even possible to take advantage of the technology without reusing the default file structure, and start from a single "hello_world.asp" file or something (like in PHP)? Or am I completely mistaken and should I be looking somewhere else? I'm interested in the MVC framework, not Web Forms.

    Background: I've played a bit with PHP in the past, just for fun, and now I'm back to it since web development became relevant for me once again. I'm no professional, but I try to gain as much knowledge and control over the technology I'm working with as possible. I'm using Visual Studio 2012 for C# (my "desktop" language of choice), and since I got the Professional Edition from Dreamspark, the web development tools are available, including ASP.NET MVC 4. I won't touch Web Forms, but the MVC framework got my attention because the MVC pattern is something I can really relate to, since it provides the control I want but... not quite.

    Learning PHP was easy, and right from the start I could just create a "hello_world.php" file and do something like this for immediate results:

        <!-- file: hello_world.php -->
        <?php echo "Hello World!"; ?>

    But I couldn't find a single ASP.NET (MVC) tutorial out there (I'll be sure to buy one of the upcoming MVC 4 books, only a month away or so) that would start like that. They all start with a sample project, building up knowledge from the basics and heavily using conventions as they go along. Which is fine, I suppose, but it's not the best way for me to learn things. Even the "Empty" project template for a new ASP.NET MVC 4 application in VS2012 is not empty at all: several files and folders are created for you, much like a new C# desktop application project, except that with C# I can in fact start from scratch, creating the project structure myself. It is not the case with PHP:

    - I can choose from a plethora of different MVC frameworks
    - I can just create my own framework
    - I can skip frameworks altogether, and toss random PHP along with my HTML in a single file and make it work

    I understand the framework needs to establish some rules, but what if I just want to create a single-page website with some C# logic behind it? Do I really need to create a whole bloat of files and folders for the sake of convention? Also, please understand that I haven't gotten far in any of those tutorials mainly for this reason, but if that's the only way to do it, I'll go for it using one of the books I've mentioned before. This is my first contact with ASP.NET, but from the few comparisons I've read, I believe I should stay the hell away from Web Forms. Thank you. (Please forgive the broken English; it is not my primary language.)
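
    For what it's worth, here is a rough sketch of about the least code an MVC 4 "hello world" needs once routing is in place: one controller and one route registration. It is not a single-file setup (MVC 4 does not really offer one); the class and route names below are arbitrary examples, not anything the framework mandates:

        // Sketch only: a minimal MVC 4 controller that returns plain text,
        // so no view file is required. Names are illustrative.
        public class HelloController : System.Web.Mvc.Controller
        {
            public System.Web.Mvc.ActionResult Index()
            {
                return Content("Hello World!");
            }
        }

        // In Global.asax.cs, Application_Start would register the route:
        // RouteTable.Routes.MapRoute("Default", "{controller}/{action}",
        //     new { controller = "Hello", action = "Index" });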

    Read the article

  • Oracle is #1 in Life Sciences!

    - by Michael Snow
    Guest post today by: John Klinke, Senior Principal Product Manager, Oracle WebCenter Content

    Based on the announcement last week at EMC World about Documentum for Life Sciences, it looks like EMC is starting to have regrets about pulling out of the life sciences space over the last few years. Certainly their content management customers and partners in life sciences have noticed their retreat. Many of them are now talking to us about WebCenter Content, since they've seen the writing on the wall regarding Documentum's decline, including falling revenue, shrinking investment, departure of key executives, and EMC's auditing of existing customers.

    While EMC has been neglecting the life sciences industry over the last few years, Oracle has been increasing its investment and commitment by providing best-of-breed solutions to enable pharmaceutical, medical device, biotech and CRO companies to improve productivity and drive innovation. As a result, according to IDC Health Insights, Oracle is #1 in life sciences. From research and development through clinical development and manufacturing to sales and marketing, Oracle provides the solutions that life sciences companies depend on to accelerate R&D, expedite clinical trials, and speed time to market.

    Specifically for Oracle WebCenter, our life sciences business is booming thanks to our comprehensive offerings led by Oracle WebCenter Content, our 21 CFR Part 11 compliant enterprise content management platform. Unlike Documentum, WebCenter Content is all about keeping the cost of ownership low through simplicity, flexibility, and out-of-the-box integrations. WebCenter Content is a single, comprehensive ECM platform that can handle all your content management needs, from controlled documents to digital asset management, records management, and document imaging and capture. And it is much more flexible, letting you make configuration changes instead of customizations to meet your business needs. It also saves you money by being pre-integrated with the rest of the Oracle Fusion Middleware technology stack and with leading enterprise applications like Siebel (including Siebel CTMS), Primavera, E-Business Suite, JD Edwards and PeopleSoft.

    So if you think EMC's announcement last week was too little and too late, I'm happy to report that Oracle is here to help. Back in October, we announced our Move Off Documentum offer, which provides a 100% trade-in credit for your Documentum licenses when you purchase Oracle WebCenter, and the good news is, this offer is still available for a limited time. So stop maintaining Documentum and start innovating with Oracle WebCenter. For more details see www.oracle.com/moveoff/documentum.

    Read the article

  • Creating predefined camera views - How do I move the camera to make sense while using a controller?

    - by Deukalion
    I'm trying to understand 3D, but the one thing I can't seem to understand is the camera. Right now I'm rendering four 3D cubes with textures, and I set the projection matrix like this:

        public BasicCamera3D(float fieldOfView, float aspectRatio, float clipStart, float clipEnd,
            Vector3 cameraPosition, Vector3 cameraLookAt)
        {
            projection_fieldOfView = MathHelper.ToRadians(fieldOfView);
            projection_aspectRatio = aspectRatio;
            projection_clipstart = clipStart;
            projection_clipend = clipEnd;
            matrix_projection = Matrix.CreatePerspectiveFieldOfView(projection_fieldOfView,
                aspectRatio, clipStart, clipEnd);

            view_cameraposition = cameraPosition;
            view_cameralookat = cameraLookAt;
            matrix_view = Matrix.CreateLookAt(cameraPosition, cameraLookAt, Vector3.Up);
        }

        BasicCamera3D gameCamera = new BasicCamera3D(45f, GraphicsDevice.Viewport.AspectRatio,
            1.0f, 1000f, new Vector3(0, 0, 8), new Vector3(0, 0, 0));

    This creates a sort of "top-down" camera, 8 units away (I still don't get the unit type here; it's not pixels, I guess?). But if I try to position the camera at the side to make a "side view" or "reverse side view" camera, the camera rotates too much, until it has turned around completely a couple of times. I render the boxes at:

        new Vector3(-1, 0, 0)
        new Vector3(0, 0, 0)
        new Vector3(1, 0, 0)
        new Vector3(1, 0, 1)

    With the top-down camera this shows fine, but I don't get how I can make the camera show the side, or 45 degrees from the top (like third-person action games), because the logic doesn't make sense to me. Also, since every object you render needs a new BasicEffect with a new projection/view/world, can you still use the "same" camera, so you don't have to create a new view matrix and such for each object? It seems weird.

    If someone could help me get the camera to navigate around my objects "naturally", so I would be able to set a few predetermined views to choose from, it would be really helpful. Is there some sort of algorithm to calculate the view for this, rather than a single value? Examples:

    Top-down view: I have an object at (0, 0, 0). When I turn the right stick on the Xbox 360 controller, the camera should rotate around that object; it should not flip and turn upside down, disappear and then magically appear as a tiny dot somewhere I have no clue where, which is what it feels like it does now.

    Side view: I have an object at (0, 0, 0). When I rotate to the sides or up and down, the camera should show a little more of the periphery on each side (depending on which way you look), and the same while moving up or down.
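
    One common approach to this kind of "rotate around a target" behaviour is to accumulate yaw and pitch angles from the right stick, build a position offset from them, and recreate the view matrix each frame. A rough sketch under those assumptions; this is not taken from BasicCamera3D, and all names except matrix_view are illustrative:

        // Sketch of an orbiting camera for XNA: the right stick changes yaw/pitch,
        // which place the camera on a sphere around the target. Clamping the pitch
        // is what prevents the camera from flipping over the top of the object.
        float yaw = 0f, pitch = MathHelper.ToRadians(-45f), distance = 8f;
        Vector3 target = Vector3.Zero;

        void UpdateCamera(GamePadState pad, float dt)
        {
            yaw   -= pad.ThumbSticks.Right.X * dt * 2f;
            pitch += pad.ThumbSticks.Right.Y * dt * 2f;
            pitch  = MathHelper.Clamp(pitch, MathHelper.ToRadians(-80f), MathHelper.ToRadians(-5f));

            // Rotate a "distance units behind the target" offset by pitch then yaw.
            Vector3 offset = Vector3.Transform(new Vector3(0, 0, distance),
                Matrix.CreateRotationX(pitch) * Matrix.CreateRotationY(yaw));

            matrix_view = Matrix.CreateLookAt(target + offset, target, Vector3.Up);
        }

    A few fixed (yaw, pitch) pairs then give you the predetermined views: (0, -89°) is roughly top-down, (0, -10°) is a side-ish view, and (0, -45°) is the typical third-person angle. And yes, you can share one camera: the view and projection matrices are per-frame camera state, so every BasicEffect can be handed the same two matrices while only the world matrix changes per object.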

    Read the article

  • WebCenter Innovation Award Winners

    - by Michael Snow
    Of course, here on our WebCenter blog we'd like to highlight and brag about our great WebCenter winners. The 2012 WebCenter Innovation Award Winners:

    University of Louisville
    Location: Louisville, KY, USA
    Industry: Higher Education
    Fusion Middleware Products: WebCenter Portal, WebCenter Content, JDeveloper, WebLogic, Oracle BI, Oracle IdM
    University of Louisville is a state-supported research university. It has implemented WebCenter as part of the LOUI (Louisville Informatics Institute) Initiative, a statewide informatics network which will improve public healthcare and lower cost through the use of novel technology and next-generation analytics, decision support and innovative outcomes-based payment systems.

    News Limited
    Country/Region: Australia
    Industry: News/Media
    FMW Products: WebCenter Sites
    News Corp is running half of Australia's newspaper websites on a single shared platform powered by Oracle WebCenter Sites. They have overtaken their nearest competitors and are now leading in terms of monthly page impressions; at peak they have over 250 editors on the system publishing in real time. Sites include www.newsspace.com.au, www.news.com.au, www.theaustralian.com.au and many others.

    Life Technologies Corp.
    Country/Region: Carlsbad, CA, USA
    Industry: Life Sciences
    FMW Products: WebCenter Portal, SOA Suite
    Life Technologies Corp. is a global biotechnology tools company dedicated to improving the human condition with innovative life science products. They were awarded an innovation award for their solution utilizing WebCenter Portal for remotely monitoring and repairing biotech instruments. They deployed WebCenter as a portal that accesses Life Technologies' cloud-based service monitoring system, where all customer-deployed instruments can be remotely monitored and proactively repaired. The portal provides alerts from these cloud-based monitoring services directly to the customer and to Life Technologies field engineers, and provides insight into the instruments and services customers purchased for the purpose of analyzing and anticipating future customer needs and creating targeted sales and service programs.

    China Mobile Jiangsu
    Country/Region: Jiangsu, China
    Industry: Telecommunications
    FMW Products: WebCenter Portal, WebCenter Content, JDeveloper, SOA Suite, IdM
    China Mobile Jiangsu is one of the biggest subsidiaries of China Mobile, with over 25,000 employees and 40 million mobile subscribers. They were awarded an innovation award for the JSMCC (China Mobile Jiangsu) Employee Enterprise Portal and Collaboration Platform, a new employee platform powered by WebCenter Portal and one of China Mobile's most important IT innovation projects. The platform is designed to serve JSMCC's 25,000+ employees, helping them improve working efficiency, changing their traditional working mode to social ways, and encouraging business collaboration and innovation. The solution is built on top of the Oracle WebCenter Portal Framework and WebCenter Spaces, while also leveraging WebLogic Server, UCM, OID, OAM, SES, IRM and Oracle Database 11g. By providing rich collaboration services, knowledge management services, sensitive document protection services, unified user identity management services, unified information search services and personalized information integration capabilities, the working efficiency of JSMCC employees has been greatly improved. Main functionality: information portal, office automation integration, personal space, group space, team collaboration with Web 2.0 services, unified search engine for multiple data sources, document management and protection, and SSO for multiple platforms.

    LADWP - Los Angeles Department of Water and Power
    Country/Region: USA - Los Angeles, CA
    Industry: Public Utility
    FMW Products: WebCenter Portal, WebCenter Content, JDeveloper, SOA Suite, IdM
    Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California, and also bills most of these customers for sanitation services provided by another city department.
    The new infrastructure consists of: Oracle WebCenter Portal (including mobile portal); Oracle WebCenter Content for content management and digital asset management (DAM); Oracle OAM (IDM, OVD, OAM) integrated with AD for enterprise identity management; Oracle Siebel for CRM; Oracle Database; and Oracle SOA Suite for integration of various subsystems and back-end systems.
    The new portal's features include: a complete graphical redesign based on UI design best practices for high usability; customer self-service through MyAccount (bill pay, payment history, bill history, usage analysis, service request management); financial assistance programs (CRM, WebCenter); customer rebate programs (CRM, WebCenter); turn on/off/transfer of services (commercial and residential); outage reporting; eNotification (SMS, email); multilingual support (English and Spanish, using WebCenter multi-language support); Section 508 (ADA) compliance; search using WebCenter SES (Secured Enterprise Search); distributed authorship in WebCenter Content; and mobile access from any mobile browser.

    Read the article

  • Why Ultra-Low Power Computing Will Change Everything

    - by Tori Wieldt
    The ARM TechCon keynote "Why Ultra-Low Power Computing Will Change Everything" was anything but low-powered. The speaker, Dr. Jonathan Koomey, knows his subject: he is a Consulting Professor at Stanford University, worked for more than two decades at Lawrence Berkeley National Laboratory, and has been a visiting professor at Stanford University, Yale University, and UC Berkeley's Energy and Resources Group. His current focus is creating a standard (computations per kilowatt-hour) and measuring computer energy consumption over time. The trends are impressive: the energy required for a fixed computing load has halved every 1.5 years for the last 60 years, and battery life has made roughly a 10x improvement each decade since 1960. It's these improvements that have made laptops and cell phones possible.

    What does the future hold? Dr. Koomey said that in the past the race among chip manufacturers was to create the fastest computer, but the priorities have now changed. New computers are tiny, smart, connected and cheap. "You can't underestimate the importance of a shift in industry focus from raw performance to power efficiency for mobile devices," he said. There is also a confluence of trends in computing, communications, sensors, and controls. The challenge is how to reduce the power requirements of these tiny devices. Alternate sources of power being explored include light, heat, motion, and even blood sugar. The University of Michigan has produced a miniature sensor that harnesses solar energy and could last for years without needing to be replaced, and the University of Washington has created a sensor that scavenges power from existing radio and TV signals.

    Devices designed for a specific purpose are much more efficient than general-purpose computers. With all these sensors, instead of big data, developers should focus on nano-data: personalized information that will adjust the lights in a room, a machine, a variable sign, and so on. Dr. Koomey showed some examples:

    - The Proteus Digital Health Feedback System, an ingestible sensor that transmits when a patient has taken their medicine and is powered by their stomach juices. (Gives "powered by you" a whole new meaning!)
    - Streetline Parking Systems, which provide real-time data about available parking spaces. The information can be sent to your phone or update parking signs around the city to point to areas with available spaces. Less driving around looking for parking spaces!
    - The BigBelly trash system, which uses solar power, compacts trash, and sends a text message when it is full. This dramatically reduces the number of times a truck has to come to pick up trash, freeing up resources and slashing fuel costs. A classic example of the efficiency of moving "bits, not atoms."

    But researchers are approaching the physical limits of sensors, Dr. Koomey explained. At the current rate of technology improvement, they'll reach the three-atom transistor by 2041. Once they hit that wall, it will force a revolution in the way we do computing. But wait: researchers at Purdue University and the University of New South Wales are both working on reliable one-atom transistors! Other researchers are working on "approximate computing" that will reduce computing requirements drastically. So it's unclear where the wall actually is. In the meantime, as Dr. Koomey promised, ultra-low power computing will change everything.

    Read the article

  • Do MORE with WebCenter - Webcast Overview & TIES Tour

    - by Michael Snow
    Today's post is from Michelle Huff, Senior Director, Product Management, Oracle WebCenter.

    In case you missed it, I presented on a webcast yesterday focused on how you can "Do More with Oracle WebCenter - Expand Beyond Content Management." As you may remember, we rebranded Oracle's Enterprise Content Management (ECM) Suite, which some people knew by the wonderfully techie three-letter acronyms UCM, URM and IPM, to Oracle WebCenter Content last year. Since it's a unified ECM platform, I've seen many customers over the years continue to expand the number of content-centric solutions and application integrations powered by WebCenter throughout their organizations. But did you know WebCenter also provides portal, collaboration and web experience management capabilities as well? This enables you to leverage your existing investment in the WebCenter platform, as well as the information you're managing, to create engaging sites, collaborative spaces, or self-service portals and composite applications.

    In the webcast I walked through six different ways that you can do more with WebCenter:

    1. Collaborative content contribution and sharing environment
    2. Share content across intranets and extranets
    3. Combine content in composite applications
    4. Create targeted online experiences
    5. Manage interactive social experiences
    6. Optimize multi-channel customer experiences

    Joining me on the call was Greg Utecht with TIES. TIES is a joint powers cooperative owned by 46 Minnesota school districts, representing 514 schools, and provides software applications, hardware and software, internet service and professional development designed by educators for education. I had a lot of fun over the past few days talking with Greg about the TIES implementation and future plans with WebCenter. He joined me on the call for a little Q&A to explain how he's using WebCenter today for their iContent implementation for document management, records management and archiving, and how they have expanded the implementation to create a collaborative space called their HRPay System with WebCenter, to facilitate collaboration and better engage their users within the school districts.

    During our conversation a few questions came from the audience about their implementation. They were curious to see how the system looked, so let's take a peek:

    - The first screenshot shows the screen that a human resources or payroll worker in one of our member districts would see upon logging in, based on their credentials and role in their district.
    - The next shows the result of clicking on the SUBSCRIBE link on the main page. It allows the user to subscribe to parts of the portal, which will e-mail him/her when those are updated in any way.
    - The next shows the screen that a human resources or payroll worker would see upon clicking on the Resources link.
    - The next shows the screen that appears upon clicking on the Finance Advisory link, with the discussion threads and document sharing areas.
    - The next shows the screen that appears when the forum topic on the preceding screen is clicked.
    - The next shows the screen portlet up close with shared documents.
    - The last shows the screen that appears when a shared document is clicked on. Note that there is also a download button and an update button, meaning people can work on these collaboratively.

    If you missed the webcast, check it out! You can watch the replay OnDemand HERE.

    If you attended the webcast, thanks for joining. I hope you learned a little from the session. I learned that kids are getting digital report cards today! Wow, have times changed with technology. Uh oh, is this when I start saying "You know, back in my day..."?

    Read the article

  • Designing for an algorithm that reports progress

    - by Stefano Borini
    I have an iterative algorithm and I want to print the progress. However, I may also want it not to print any information, or to print it in a different way, or run other logic. In an object-oriented language, I would consider the following solutions:

    Solution 1: virtual method. Have the algorithm class MyAlgoClass, which implements the algorithm. The class also implements a virtual reportIteration(iterInfo) method, which is empty and can be reimplemented. Subclass MyAlgoClass and override reportIteration so that it does what it needs to do. This solution allows you to carry additional information (for example, the file unit) in the reimplemented class. I don't like this method because it clumps together two functionalities that may be unrelated, but in GUI apps it may be OK.

    Solution 2: observer pattern. The algorithm class has a register(Observer) method, keeps a list of the registered observers, and takes care of calling notify() on each of them. Observer::notify() needs a way to get the information from the Subject, so it either has two parameters (one with the Subject and the other with the data the Subject may pass), or just the Subject, in which case the Observer is in charge of querying it to fetch the relevant information.

    Solution 3: callbacks. I tend to see the callback method as a lightweight observer. Instead of passing an object, you pass a callback, which may be a plain function, but also an instance method in those languages that allow it (for example, in Python you can, because a passed instance method remains bound to its instance). C++ however does not allow it, because if you pass a pointer to an instance method, "this" will not be defined. Please correct me on this; my C++ is quite old. The problem with callbacks is that generally you have to pass them together with the data you want the callback to be invoked with. Callbacks don't store state, so you have to pass both the callback and the state to the Subject in order to find them at callback execution, together with any additional data the Subject may provide about the event it is reporting.

    Question: my question relates to the fact that I need to implement the opening problem in a language that is not object-oriented, namely Fortran 95, and I am fighting with my usual reasoning, which is based on Python assumptions and style. I think that in Fortran the concept is similar to C, with the additional trouble that in C you can store a function pointer, while in Fortran 95 you can only pass it around. Do you have any comments, suggestions, tips, and quirks in this regard (in C, C++, Fortran and Python, but also in any other language, so as to have a comparison of language features that can be exploited here) on how to design for an algorithm that must report progress to some external entity, using state from both the algorithm and the external entity?
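
    To make the "callback plus explicit state" idiom from Solution 3 concrete, here is a rough sketch written in C#. In C the same shape would be a function pointer plus a void* context argument; in Fortran 95 the state would typically travel in a module variable or an extra dummy argument. All names below are illustrative, not from any particular library:

        // Sketch only: the Subject (RunAlgorithm) receives both the callback and the
        // caller's state object, and hands the state back on every notification,
        // together with the event data it produces (iteration index, residual).
        delegate void ProgressCallback(int iteration, double residual, object state);

        static void RunAlgorithm(int iterations, ProgressCallback report, object state)
        {
            double residual = 1.0;
            for (int i = 0; i < iterations; i++)
            {
                residual *= 0.5;                    // stand-in for the real iteration work
                report?.Invoke(i, residual, state); // null callback means "print nothing"
            }
        }

        static void Main()
        {
            var log = new System.IO.StringWriter(); // the external entity's own state
            RunAlgorithm(5,
                (i, r, s) => ((System.IO.TextWriter)s).WriteLine($"iter {i}: {r}"),
                log);
            System.Console.Write(log);
        }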

    Read the article

  • Heading out to Dallas GiveCamp 2011

    - by dotgeek
    The day has finally arrived for twelve local charities here in the Dallas area, when they'll get some help from local developers with their website initiatives at this year's Dallas GiveCamp. I'm really looking forward to helping out at this year's event, and what I hope will be the start of many more GiveCamps to follow.

    Similar to Habitat for Humanity, where people gather to help build and improve homes for people in need, GiveCamp brings together programmers and equips them with the virtual tools they need to build and improve the charities' existing websites. Tonight is when things kick off for this weekend's events and teams start working on their various projects. The building continues on through the night and all the way through until Sunday afternoon. The end goal for the teams and charities is to have a completed, working website for each charity to begin using, and to turn over all the production code and digital assets to them. None of this would be possible without the great sponsors we have returning once again and their donations of various products to help these charities out with their projects, like Telerik's CMS product Sitefinity 4.0, paired with a year of hosting from Verio, to mention just a few of them.

    Just as skilled builders might train volunteers in the use of a nail gun when building a house, training is also available here on site for the developers and these local charities, giving them all the skills needed to manage and use these products, from site development through to actual production; that is a key to the success of this weekend's event. Tonight's training sessions will kick off with a real treat from Giovanni Gallucci, as he speaks about social media for NPOs, and later Gabe Sumner from Telerik will begin a training session on Sitefinity for developers. These training sessions will continue throughout the weekend, with DotNetNuke and Mojo Portal sessions also planned.

    If you're a developer and would like to help out in the future, then check in your area and with your local user groups to find out if you already have a GiveCamp near you. If you don't have one available, then consider starting up a local GiveCamp, and then you too can help Code it Forward.

    About GiveCamp: GiveCamp is a weekend-long event where software developers, designers, and database administrators donate their time to create custom software for non-profit organizations. This custom software could be a new website for the nonprofit organization, a small data-collection application to keep track of members, or an application for the Red Cross that automatically emails a blood donor three months after they've donated blood to remind them that they are now eligible to donate again. The only limitation is that the project should be scoped so it can be completed in a weekend. During GiveCamp, developers are welcome to go home in the evenings or camp out all weekend long. There is usually food and drink provided at the event, and there are sometimes even game systems set up for when you and your team need a little break! Overall, it's a great opportunity for people to work together, develop new friendships, and do something important for their community.

    At GiveCamp, there is an expectation of "What happens at GiveCamp, stays at GiveCamp". Therefore, all source code must be turned over to the charities at the end of the weekend (developers cannot ask for payment), and the charities are responsible for maintaining the code moving forward (charities cannot expect the developers to maintain the codebase).

    Read the article

  • After 10 Years, MySQL Still the Right Choice for ScienceLogic's "Best Network Monitoring System on the Planet"

    - by Rebecca Hansen
    ScienceLogic has a pretty fantastic network monitoring appliance. So good, in fact, that InfoWorld gave it their "2013 Best Network Monitoring System on the Planet" award. Inside their "ultraflexible, ultrascalable, carrier-grade" enterprise appliance, ScienceLogic relies on MySQL, and has since their start in 2003. Check out some of the things they've been able to do with MySQL, and their reasons for continuing to use it, in these highlights from our new MySQL ScienceLogic case study.

    ScienceLogic's larger customers use their appliance to monitor and manage 20,000+ devices, each of which generates a steady stream of data and a workload that is 85% write. On a large system, the MySQL database:

    - Averages 8,000 queries every second, or about 1 billion queries a day
    - Can reach 175,000 tables and up to 20 million rows in a single table
    - Is 2 terabytes on average and up to 6 terabytes

    "We told our customers they could add more and more devices. With MySQL, we haven't had any problems. When our customers have problems, we get calls. Not getting calls is a huge benefit." - Matt Luebke, ScienceLogic Chief Software Architect

    ScienceLogic was approached by a number of Big Data / NoSQL vendors, but decided against using a NoSQL-only solution. Said Matt, "There are times when you really need SQL. NoSQL can't show me the top 10 users of CPU, or show me the bottom ten consumers of hard disk. That's why we weren't interested in changing and why we are very interested in MySQL 5.6. It's great that it can do relational and key-value using memcached."

    The ScienceLogic team is very cautious about putting only very stable technology into their product, and according to Matt, MySQL has been very stable: "We've been using MySQL for 10 years and we have never had any reliability problems. Ever."

    ScienceLogic now uses SSDs for their write-intensive appliance, and that change alone has helped them achieve a 5x performance increase.

    Learn more:
    - ScienceLogic MySQL Case Study
    - MySQL 5.6 InnoDB Compression options for better SSD performance
    - Tuning MySQL 5.6 for Great Product Performance - on-demand webinar
    - Developer and DBA Guide to MySQL 5.6 white paper
    - Guide to MySQL and NoSQL: The Best of Both Worlds white paper

    Read the article

  • Is a university education really worth it for a good programmer?

    - by Jon Purdy
    The title says it all, but here's the personal side of it: I've been doing design and programming for about as long as I can remember. If there's a programming problem, I can figure it out. (Though admittedly StackOverflow has allowed me to skip the figuring out and get straight to the doing in many instances.) I've made games, esoteric programming languages, and widgets and gizmos galore. I'm currently working on a general-purpose programming language. There's nothing I do better than programming.

    However, I'm just as passionate about design. Thus, when I felt on leaving high school that my design skills were lacking, I decided to attend university for New Media Design and Imaging, a digital design-related major. For a year, I diligently studied art and programmed in my free time. As the next year progressed, however, I was obligated to take fewer art and design classes and more technical classes. The trouble was, of course, that these classes were geared toward non-technical students and were far beneath my skill level at the time. No amount of petitioning could overcome the institution's reluctance to allow me to test out of such classes, and the major offered no promise of any greater challenge in the future, so I took the extreme route: I switched into the technical equivalent of the major, New Media Interactive Development.

    A lot of my credits moved over into the new major, but many didn't. It would have been infeasible to switch to a more rigorous technical major such as Computer Science, and having tutored Computer Science students at every level here, I doubt I would be exposed to anything that I haven't already or won't eventually find out on my own, since I'm so involved in the field. I'm now on track to graduate perhaps a year later than I had planned, which puts a significant financial strain on my family and my future self. My schedule continues to be bogged down with classes that are wholly unnecessary for me to take. I'm being re-introduced to subjects that I've covered a thousand times over, simply because I've always been interested in it all. And though I succeed in avoiding the cynical and immature tactic of failing to complete work out of some undeserved sense of superiority, I'm becoming increasingly disillusioned by the lack of intellectual stimulation.

    Further, my school requires students to complete a number of quarters of co-op work experience proportional to their major. My original major required two quarters, but my current one requires three, delaying my graduation even more. To top it all off, college is putting a severe strain on my relationship with my very close partner of a few years, so I've searched diligently for co-op jobs in my area, alas to no avail. I'm now in my third year, and approaching the point past which I can no longer handle this. Either I keep my head down, get a degree no matter what it takes, and try to get a job with a company that will pay me enough to do what I love so that I can eventually pay off my loans; or I cut my losses now, move wherever there is work, and in six months start paying off what debt I've accumulated thus far.

    So the real question is: is a university education really more than just a formality? It's a big decision, and one I can't make lightly. I think this is the appropriate venue for this kind of question, and I hope it sticks around for the sake of others who might someday find themselves in similar situations. My heartfelt thanks for reading, and in advance for your help.

    Read the article

  • When to use SOAP over REST

    So, how do REST-based services differ from SOAP-based services, and when should you use SOAP?

    Representational State Transfer (REST) implements standard HTTP/HTTPS as an interface, allowing clients to obtain access to resources based on requested URIs. An example of a URI may look like this: http://mydomain.com/service/method?parameter=var1&parameter=var2. It is important to note that REST-based services are stateless because HTTP/HTTPS is natively stateless. One of the many benefits of implementing HTTP/HTTPS as an interface can be found in caching. Caching can be done on a web service much like caching is done on requested web pages; it allows for reduced web server processing and increased response times because content is already processed and stored for immediate access. Typical actions performed by REST-based services include generic CRUD (Create, Read, Update, and Delete) operations and operations that do not require state.

    Simple Object Access Protocol (SOAP), on the other hand, uses a generic interface in order to transport messages. Unlike REST, SOAP can use HTTP/HTTPS, SMTP, JMS, or any other standard transport protocol. Furthermore, SOAP utilizes XML in the following ways:

    - Defines a message
    - Defines how a message is to be processed
    - Defines the encoding of a message
    - Lays out procedure calls and responses

    Where REST aligns more with a resource view, SOAP aligns more with a method view, in that business logic is exposed as methods, typically through SOAP web services, because they can retain state. In addition, SOAP requests are not cached, so every request will be processed by the server. As stated before, SOAP does retain state, and this gives it a special advantage over REST for services that need to perform transactions where multiple calls to a service are needed in order to complete a task. Additionally, SOAP is better suited to enterprise-level services that implement standard exchange formats in the form of contracts, due to the fact that REST does not currently support this. A real-world example of where SOAP is preferred over REST can be seen in the banking industry, where money is transferred from one account to another. SOAP would allow a bank to perform a transaction on an account and, if the transaction failed, automatically retry it, ensuring that the request was completed. Unfortunately, with REST, failed service calls must be handled manually by the requesting application.

    References:
    Francia, S. (2010). SOAP vs. REST. Retrieved 11 20, 2011, from spf13: http://spf13.com/post/soap-vs-rest
    Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved 11 20, 2011, from Infoq.com: http://www.infoq.com/articles/rest-soap-when-to-use-each
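
    To make the interface difference concrete, here is a rough C# sketch of what each call shape looks like on the wire. Both endpoint URLs and the GetAccount operation are made-up placeholders, not a real service:

        // Illustrative only: the REST call is a plain HTTP GET on a resource URI;
        // the SOAP call posts an XML envelope that names the method and its
        // parameters, independent of whichever transport carries it.
        using System;
        using System.Net.Http;
        using System.Text;

        class RestVsSoapDemo
        {
            static async System.Threading.Tasks.Task Main()
            {
                var http = new HttpClient();

                // REST: the URI identifies the resource; the HTTP verb carries the intent.
                string restResponse = await http.GetStringAsync(
                    "http://mydomain.com/service/accounts/42");

                // SOAP: the envelope carries the method call; here it rides over HTTP POST.
                string envelope =
                    "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                      "<soap:Body><GetAccount xmlns=\"http://mydomain.com/service\">" +
                        "<accountId>42</accountId>" +
                      "</GetAccount></soap:Body>" +
                    "</soap:Envelope>";
                var content = new StringContent(envelope, Encoding.UTF8, "text/xml");
                HttpResponseMessage soapResponse =
                    await http.PostAsync("http://mydomain.com/service/soap", content);

                Console.WriteLine(restResponse);
                Console.WriteLine(soapResponse.StatusCode);
            }
        }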

    Read the article

  • Oracle EMEA News Digest - May 2014

    - by Steve Walker
    Systems

    Oracle introduced a technology preview of an OpenStack® distribution that allows Oracle Linux and Oracle VM users to work with the open source cloud software. This provides customers with additional choices and interoperability while taking advantage of the efficiency, performance, scalability, and security of Oracle Linux and Oracle VM. The distribution is delivered as part of the Oracle Linux and Oracle VM Premier Support offerings, at no additional cost. Oracle plans to work further with the OpenStack community to develop and enhance its enterprise-class capabilities to meet customer demands.

    Also in the open source arena, Oracle announced the general availability of MySQL Fabric. MySQL Fabric provides an integrated system that makes it simpler to manage groups of MySQL databases. It delivers both high availability (via failure detection and failover) and scalability through automated data sharding.

    Oracle Database, Middleware and Technology

    The company made two announcements for Oracle Tuxedo, the #1 application server for C, C++, COBOL and Java deployments in private cloud or traditional data center environments. With enhanced management and monitoring features and tighter integration with Oracle technologies, the latest release of Oracle Tuxedo 12c enables organizations to dramatically increase application throughput, while reducing total cost of ownership and time to market for new application development and deployment. Oracle also introduced the latest release of its mainframe application rehosting platform, Oracle Tuxedo ART 12c, to help organizations speed up migration projects and accelerate the adoption of the new environment by current IT staff. It enables organizations to accelerate the rehosting of IBM mainframe applications and greatly enhance management and supportability of the rehosted applications, while reducing costs and risk.

    Applications

    According to new Oracle studies, B2B and B2C commerce professionals find integrated, omni-channel customer experiences increasingly valuable to their organizations, and are continuing to invest in technologies and digital content strategies to facilitate them. The studies, one for B2B and one for B2C, surveyed e-commerce professionals in business and technology departments from around the world. Although the priorities, success metrics, and technology investments differed between the two groups, customer acquisition and retention emerged as common themes across B2B and B2C. Growing market share and enhancing customer experience are cited as top investment areas for all e-commerce professionals.

    In product news, Oracle announced the latest release of Oracle Business Intelligence (BI) Applications (version 11.1.1.8.1, in case anyone asks). It includes prebuilt connectors between Oracle Procurement and Spend Analytics and Oracle's JD Edwards. Additionally, a new Oracle Human Resources Analytics module for developing and maintaining a skilled workforce has been introduced. In use at more than 4,000 companies worldwide, Oracle BI Applications support leading enterprise applications, including Oracle E-Business Suite, Oracle's PeopleSoft, Oracle's Siebel CRM, and Oracle's JD Edwards EnterpriseOne, offering high-performing analytics at a lower cost.

    Industries

    For the communications industry, Oracle has launched a new release of the Oracle Communications Core Session Manager. This gives CSPs a new way to design, deploy and manage complex networking services and embrace next-generation technology. It provides them with an immediate entry point for network function virtualization (NFV) efforts, allowing them to realize immediate benefits associated with network virtualization, including increased service agility and improved network resource sharing.

    And for the utilities industry, Oracle is releasing solutions with new business features and an enhanced technical architecture that help position utilities for success now and into the future. Oracle has provided new releases of its customer information system, meter data management system, customer self-service solution and mobile workforce management solution.

    Read the article

  • WPF Control Toolkits Comparison for LOB Apps

    In preparation for a new WPF project, I've been researching options for WPF control toolkits. While we want a lot of the benefits of WPF, the application is a fairly typical line-of-business (LOB) application, so we're not focused on things like media and animations, but instead on a simple, solid, intuitive, and modern user interface that allows for a well-architected separation of business logic and presentation layers.

    While WPF is mature, it hasn't yet lived the long life that WinForms has, so there is still a lot of room for third-party and community control toolkits to fill the gaps between the controls that ship with the Framework. There are two such gaps I was concerned about. As this is an LOB app, we need to present lots of data, and not surprisingly much of it is in grid format, with the need for high performance, grouping, inline editing, aggregation, printing and exporting, and the other things we've been doing with LOB apps for a long time. In addition, we want a dashboard style for the UI, in which the user can rearrange and shrink and grow tiles that house the content and functionality. From a cost perspective, building these types of well-performing controls from scratch doesn't make sense.

    So I evaluated what you get from the .NET Framework along with a few different options for control toolkits. I tried to be fairly thorough, but know that this isn't a detailed benchmarking comparison or intense evaluation. It's just meant to be a feature-set comparison to be used when thinking about building an LOB app in WPF. I tried to list important feature differences and notes based on my experience with the trial versions and what I found in documentation, reference materials and samples. I've also listed the importance of the controls based on how I think they are needed in LOB apps. There are several toolkits available, but given that I don't have unlimited time, I picked just a few. Maybe I'll add more later. The toolkits I compared are:

    - Telerik's RadControls for WPF, since I had heard some good things about Telerik
    - Infragistics NetAdvantage WPF, since both I and the customer have some experience with the vendor's tools
    - The WPF Toolkit on CodePlex, since many of my colleagues have used it
    - The Blacklight CodePlex project, which had WPF support for the Tile View control (with release 4.3, WPF is no longer going to be supported in favor of focusing only on Silverlight controls, so I dropped that from the comparison)

    Click Here to Download the WPF Control Toolkits Comparison

    Hopefully this helps someone out there. Feel free to post a comment on your experiences, or if you think something I listed is incorrect or missing.

    Read the article

  • DISA Cross Domain Enterprise Solutions on the NetBeans Platform

    - by Geertjan
    Bray 2.0 is a tool based on the NetBeans Platform that assists in creating valid Data Flow Configuration (DFC) files. The DFC Specification was developed to provide a standardized way of defining, validating, and approving data flows for use on cross-domain guarding solutions. A DFC document specifies key entities such as security domains, guards that facilitate data between security domains, data flows that describe how data travels between security domains, filters that transform and validate the data, and more.

    Related info: http://www.disa.mil/Services/Information-Assurance/Cross-Domain-Solutions

    The Bray product is in development at Fulcrum IT (http://www.fulcrumco.com). The DFC Specification and Bray were developed in support of the US Department of Defense. Bray 2.0 marks the first release of Bray on the NetBeans Platform, and it utilizes a number of features that are core to the NetBeans Platform:

    - Modular pluggability. Bray consumers can integrate their own tools, file types, and more into the product with relative ease.
    - Robust UI. The NetBeans Platform's intuitive UI makes it easy to access and manipulate multiple aspects of a DFC.
    - Explorer. The Explorer is a key component that makes the DFC XML easy to traverse, edit, and search for errors.
    - Context-sensitive help. JavaHelp can be readily integrated for the product as well as all the UI within it.
    - Editors. Any external file can be added to a DFC. Users can register their own editors or use the provided NetBeans editors to edit files.
    - Printing. The NetBeans Platform Print API makes it easy to determine what should be printed and how.

    A screenshot:

    Bray 2.0 provides a lot of key features for developing valid, robust DFC files:

    - XML validation. A DFC can be validated against the DFC schema specification.
    - DFC Check List. An interactive, minimal guide for creating a complete DFC.
    - Summary Window. The Summary Window functions like the Navigator in NetBeans IDE. The current "item of interest" is checked against various business rules, providing the ability to quickly find and fix errors.
    - Change Log. Bray audits every change to a DFC and places it in a change log for users to peruse.
    - Comments. Users can optionally add comments for other users to see.
    - Digital signatures. DFC files can be digitally signed. A signature history and signature validation are provided in Bray.
    - Pluggable security schemes. Bray ships with plain text and IC-ISM security schemes. If needed, users can integrate additional ones.

    ...and more to come! New features for Bray are constantly in development, including use of the NetBeans Visual Library, language support, and more. More screenshots:

    Read the article

  • Stop Saying "Multi-Channel!"

    - by David Dorf
    I keep hearing the term "multi-channel" in our industry, but it's time to move on. It kinda reminds me of the term "ECR," or electronic cash register. Long ago ECR was a leading-edge term, but nowadays it's rarely used because it's table stakes; after all, what cash register today isn't electronic? The same logic applies to multi-channel, at least when we're talking about tier-1 and tier-2 retailers. If you're still talking about multi-channel retailing, you're in big trouble. Some have switched over to the term "cross-channel," and that's a step in the right direction but it still falls short. It's kinda like saying, "I upgraded my ECR to accept debit cards!" Yawn. Who hasn't? Today's retailers need to focus on omni-channel, which I first heard from my friends over at RSR but which was originally coined at IDC. First retailers added e-commerce to their store and catalog channels, yielding multi-channel retailing: consumers could use the channel that worked best for them. Then some consumers wanted to combine channels with features like buy-on-the-Web, pickup-in-the-store, and thus began the cross-channel initiatives to break down the silos and enable the channels to communicate with each other. But the multi-channel architecture is full of duplication that thwarts efforts to provide a consistent experience: each channel has its own cart, its own pricing, and often its own CRM. This was an outcrop of trying to bring the independent channels to market quickly; rather than reusing and rebuilding existing components to meet the new demands, silos were created that continue to exist today. Today's consumers want omni-channel retailing. They want to interact with brands in a consistent manner that is channel-transparent, yet optimized for that particular interaction. The diagram below, from the soon-to-be-released NRF Mobile Blueprint v2, shows this progression. For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.), and often these mechanisms can be combined in various ways. I'm looking forward to the day when I can use my phone to scan QR codes in a catalog to create a shopping cart of items, then do some further research on the retailer's Web site and be told about related items that might interest me, easily solicit opinions and reviews from social sites, and finally enter the store to pick up my items, knowing that any applicable coupons have been applied. In this scenario I, the consumer, am dealing with a single brand that is aware of me and my needs throughout the entire transaction. Nirvana.

    Read the article

  • x axis detection issues platformer starter kit

    - by dbomb101
    I've come across a problem with the collision detection code in the platformer starter kit for XNA. It will raise the impassable flag on the X axis despite the player being nowhere near a wall in either direction on the X axis. Could someone tell me why this happens? Here is the collision method:

    /// <summary>
    /// Detects and resolves all collisions between the player and his neighboring
    /// tiles. When a collision is detected, the player is pushed away along one
    /// axis to prevent overlapping. There is some special logic for the Y axis to
    /// handle platforms which behave differently depending on direction of movement.
    /// </summary>
    private void HandleCollisions()
    {
        // Get the player's bounding rectangle and find neighboring tiles.
        Rectangle bounds = BoundingRectangle;
        int leftTile = (int)Math.Floor((float)bounds.Left / Tile.Width);
        int rightTile = (int)Math.Ceiling(((float)bounds.Right / Tile.Width)) - 1;
        int topTile = (int)Math.Floor((float)bounds.Top / Tile.Height);
        int bottomTile = (int)Math.Ceiling(((float)bounds.Bottom / Tile.Height)) - 1;

        // Reset flag to search for ground collision.
        isOnGround = false;

        // For each potentially colliding tile,
        for (int y = topTile; y <= bottomTile; ++y)
        {
            for (int x = leftTile; x <= rightTile; ++x)
            {
                // If this tile is collidable,
                TileCollision collision = Level.GetCollision(x, y);
                if (collision != TileCollision.Passable)
                {
                    // Determine collision depth (with direction) and magnitude.
                    Rectangle tileBounds = Level.GetBounds(x, y);
                    Vector2 depth = RectangleExtensions.GetIntersectionDepth(bounds, tileBounds);
                    if (depth != Vector2.Zero)
                    {
                        float absDepthX = Math.Abs(depth.X);
                        float absDepthY = Math.Abs(depth.Y);

                        // Resolve the collision along the shallow axis.
                        if (absDepthY < absDepthX || collision == TileCollision.Platform)
                        {
                            // If we crossed the top of a tile, we are on the ground.
                            if (previousBottom <= tileBounds.Top)
                                isOnGround = true;

                            // Ignore platforms, unless we are on the ground.
                            if (collision == TileCollision.Impassable || IsOnGround)
                            {
                                // Resolve the collision along the Y axis.
                                Position = new Vector2(Position.X, Position.Y + depth.Y);

                                // Perform further collisions with the new bounds.
                                bounds = BoundingRectangle;
                            }
                        }
                        // This is the section which deals with collision on the X axis.
                        else if (collision == TileCollision.Impassable) // Ignore platforms.
                        {
                            // Resolve the collision along the X axis.
                            Position = new Vector2(Position.X + depth.X, Position.Y);

                            // Perform further collisions with the new bounds.
                            bounds = BoundingRectangle;
                        }
                    }
                }
            }
        }

        // Save the new bounds bottom.
        previousBottom = bounds.Bottom;
    }
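    One way to narrow this down is to trace exactly which tile trips the X-axis branch. The snippet below is a hypothetical diagnostic (not part of the starter kit) that reuses the loop's x, y, depth, and bounds variables, so it belongs inside the else-if block above:

    // Hypothetical trace: shows which tile is resolved along X and how
    // deep the overlap is at that moment.
    System.Diagnostics.Debug.WriteLine(string.Format(
        "X-resolve at tile ({0},{1}): depth={2}, playerBounds={3}",
        x, y, depth, bounds));

    A common cause of this symptom in the starter kit: when the player straddles a tile seam, the barely-overlapped neighbor tile can have a smaller X depth than Y depth, so the shallow-axis test routes that tile into the X-axis branch even though no wall is beside the player.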

    Read the article

  • Engine Rendering pipeline : Making shaders generic

    - by fakhir
    I am trying to make a 2D game engine using OpenGL ES 2.0 (iOS for now). I've written the application layer in Objective-C and a separate, self-contained RendererGLES20 in C++. No GL-specific call is made outside the renderer, and it is working perfectly. But I have some design issues when using shaders. Each shader has its own unique attributes and uniforms that need to be set just before the main draw call (glDrawArrays in this case). For instance, in order to draw some geometry I would do:

    void RendererGLES20::render(Model * model)
    {
        // Set a bunch of uniforms
        glUniformMatrix4fv(.......);

        // Enable specific attributes, can be many
        glEnableVertexAttribArray(......);

        // Set a bunch of vertex attribute pointers:
        glVertexAttribPointer(positionSlot, 2, GL_FLOAT, GL_FALSE, stride, m->pCoords);

        // Now actually draw the geometry
        glDrawArrays(GL_TRIANGLES, 0, m->vertexCount);

        // After drawing, disable any vertex attributes:
        glDisableVertexAttribArray(.......);
    }

    As you can see, this code is extremely rigid. If I were to use another shader, say a ripple effect, I would need to pass extra uniforms, vertex attributes, and so on. In other words, I would have to change the RendererGLES20 source code just to incorporate the new shader. Is there any way to make the shader object totally generic? What if I just want to change the shader object and not worry about recompiling the game source? Is there any way to make the renderer agnostic of uniforms, attributes, etc.? Even though we need to pass data to uniforms, what is the best place to do that? The model class? Is the model class aware of shader-specific uniforms and attributes? The following shows the Actor class:

    class Actor : public ISceneNode
    {
        ModelController * model;
        AIController * AI;
    };

    The model controller class:

    class ModelController
    {
        class IShader * shader;
        int textureId;
        vec4 tint;
        float alpha;
        struct Vertex * vertexArray;
    };

    The Shader class just contains the shader object, compiling and linking sub-routines, etc. In the GameLogic class I actually render the object:

    void GameLogic::update(float dt)
    {
        IRenderer * renderer = g_application->GetRenderer();
        Actor * a = GetActor(id);
        renderer->render(a->model);
    }

    Please note that even though Actor extends ISceneNode, I haven't started implementing SceneGraph yet. I will do that as soon as I resolve this issue. Any ideas how to improve this? Related design patterns, etc.? Thank you for reading the question.
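    One pattern worth sketching for this: let the material (or model controller) own its shader parameters as opaque callbacks that the renderer replays just before glDrawArrays, so RendererGLES20 never changes when a new shader arrives. Below is a minimal, hypothetical C++ sketch; the class and uniform names are illustrative, and printf stands in for the real glUniform* calls:

    #include <cstdio>
    #include <functional>
    #include <map>
    #include <string>

    // Hypothetical material: it owns a name -> setter map, so the renderer
    // can apply every parameter without knowing which uniforms exist.
    class Material
    {
    public:
        void setParameter(const std::string & name, std::function<void()> apply)
        {
            parameters[name] = apply;
        }

        // Called by the renderer just before the draw call; the renderer
        // stays agnostic of the shader's specific uniforms and attributes.
        void applyParameters() const
        {
            for (const auto & entry : parameters)
                entry.second();
        }

    private:
        std::map<std::string, std::function<void()> > parameters;
    };

    int main()
    {
        Material ripple;
        ripple.setParameter("u_time",      [] { std::printf("set u_time\n"); });
        ripple.setParameter("u_amplitude", [] { std::printf("set u_amplitude\n"); });

        // The renderer's draw path is identical for every shader:
        ripple.applyParameters();
        return 0;
    }

    With this shape, adding the ripple shader means registering its extra uniforms on the material at load time; the render method just calls applyParameters() and then glDrawArrays, untouched.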

    Read the article

  • Incomplete upgrade 12.04 to 12.10

    - by David
    Everything was running smoothly. Everything had been downloaded from the Internet, packages had been installed, and a prompt asked for some obsolete programs/files to be removed or kept. After that the computer crashed and I had to manually force a shutdown. I turned it on again and, surprise, I was on 12.10! Still, the upgrade was not finished! How can I properly finish that upgrade? Here's the output I got in the command line after following posted instructions:
    i astrill - Astrill VPN client software
    i dayjournal - Simple, minimal, digital journal.
    i gambas2-gb-form - A gambas native form component
    i gambas2-gb-gtk - The Gambas gtk component
    i gambas2-gb-gtk-ext - The Gambas extended gtk GUI component
    i gambas2-gb-gui - The graphical toolkit selector component
    i gambas2-gb-qt - The Gambas Qt GUI component
    i gambas2-gb-settings - Gambas utilities class
    i A gambas2-runtime - The Gambas runtime
    i google-chrome-stable - The web browser from Google
    i google-talkplugin - Google Talk Plugin
    i indicator-keylock - Indicator for Lock Keys
    i indicator-ubuntuone - Indicator for Ubuntu One synchronization s
    i A language-pack-kde-zh-hans - KDE translation updates for language Simpl
    i language-pack-kde-zh-hans-base - KDE translations for language Simplified C
    i libapt-inst1.4 - deb package format runtime library
    idA libattica0.3 - a Qt library that implements the Open Coll
    idA libbabl-0.0-0 - Dynamic, any to any, pixel format conversi
    idA libboost-filesystem1.46.1 - filesystem operations (portable paths, ite
    idA libboost-program-options1.46.1 - program options library for C++
    idA libboost-python1.46.1 - Boost.Python Library
    idA libboost-regex1.46.1 - regular expression library for C++
    i libboost-serialization1.46.1 - serialization library for C++
    idA libboost-signals1.46.1 - managed signals and slots library for C++
    idA libboost-system1.46.1 - Operating system (e.g. diagnostics support
    idA libboost-thread1.46.1 - portable C++ multi-threading
    i libcamel-1.2-29 - Evolution MIME message handling library
    i libcmis-0.2-0 - CMIS protocol client library
    i libcupsdriver1 - Common UNIX Printing System(tm) - Driver l
    i libdconf0 - simple configuration storage system - runt
    i libdvdcss2 - Simple foundation for reading DVDs - runti
    i libebackend-1.2-1 - Utility library for evolution data servers
    i libecal-1.2-10 - Client library for evolution calendars
    i libedata-cal-1.2-13 - Backend library for evolution calendars
    i libedataserver-1.2-15 - Utility library for evolution data servers
    i libexiv2-11 - EXIF/IPTC metadata manipulation library
    i libgdu-gtk0 - GTK+ standard dialog library for libgdu
    i libgdu0 - GObject based Disk Utility Library
    idA libgegl-0.0-0 - Generic Graphics Library
    idA libglew1.5 - The OpenGL Extension Wrangler - runtime en
    i libglew1.6 - OpenGL Extension Wrangler - runtime enviro
    i libglewmx1.6 - OpenGL Extension Wrangler - runtime enviro
    i libgnome-bluetooth8 - GNOME Bluetooth tools - support library
    i libgnomekbd7 - GNOME library to manage keyboard configura
    idA libgsoap1 - Runtime libraries for gSOAP
    i libgweather-3-0 - GWeather shared library
    i libimobiledevice2 - Library for communicating with the iPhone
    i libkdcraw20 - RAW picture decoding library
    i libkexiv2-10 - Qt like interface for the libexiv2 library
    i libkipi8 - library for apps that want to use kipi-plu
    i libkpathsea5 - TeX Live: path search library for TeX (run
    i libmagickcore4 - low-level image manipulation library
    i libmagickwand4 - image manipulation library
    i libmarblewidget13 - Marble globe widget library
    idA libmusicbrainz4-3 - Library to access the MusicBrainz.org data
    i libnepomukdatamanagement4 - Basic Nepomuk data manipulation interface
    i libnux-2.0-0 - Visual rendering toolkit for real-time app
    i libnux-2.0-common - Visual rendering toolkit for real-time app
    i libpoppler19 - PDF rendering library
    i libqt3-mt - Qt GUI Library (Threaded runtime version),
    i librhythmbox-core5 - support library for the rhythmbox music pl
    i libusbmuxd1 - USB multiplexor daemon for iPhone and iPod
    i libutouch-evemu1 - KernelInput Event Device Emulation Library
    i libutouch-frame1 - Touch Frame Library
    i libutouch-geis1 - Gesture engine interface support
    i libutouch-grail1 - Gesture Recognition And Instantiation Libr
    idA libx264-120 - x264 video coding library
    i libyajl1 - Yet Another JSON Library
    i linux-headers-3.2.0-29 - Header files related to Linux kernel versi
    i linux-headers-3.2.0-29-generic - Linux kernel headers for version 3.2.0 on
    i linux-image-3.2.0-29-generic - Linux kernel image for version 3.2.0 on 64
    i mplayerthumbs - video thumbnail generator using mplayer
    i myunity - Unity configurator
    i A openoffice.org-calc - office productivity suite -- spreadsheet
    i A openoffice.org-writer - office productivity suite -- word processo
    i python-brlapi - Python bindings for BrlAPI
    i python-louis - Python bindings for liblouis
    i rts-bpp-dkms - rts-bpp driver in DKMS format.
    i system76-driver - Universal driver for System76 computers.
    i systemconfigurator - Unified Configuration API for Linux Instal
    i systemimager-client - Utilities for creating an image and upgrad
    i systemimager-common - Utilities and libraries common to both the
    i systemimager-initrd-template-am - SystemImager initrd template for amd64 cli
    i touchpad-indicator - An indicator for the touchpad
    i ubuntu-tweak - Ubuntu Tweak
    i A unity-lens-utilities - Unity Utilities lens
    i A unity-scope-calculator - Calculator engine
    i unity-scope-cities - Cities engine
    i unity-scope-rottentomatoes - Unity Scope Rottentomatoes

    Read the article

  • SQL SERVER – DELETE, TRUNCATE and RESEED Identity

    - by pinaldave
    Yesterday I had a headache answering questions from one of the DBAs on the subject of resetting identity values for all tables. After talking to the DBA I realized that he had no clue about how the identity column behaves when DELETE, TRUNCATE or RESEED Identity is used. Let us run a small T-SQL script. Create a temp table with an identity column beginning with value 11. The seed value is 11.

    USE [TempDB]
    GO
    -- Create Table
    CREATE TABLE [dbo].[TestTable](
    [ID] [int] IDENTITY(11,1) NOT NULL,
    [var] [nchar](10) NULL
    ) ON [PRIMARY]
    GO
    -- Build sample data
    INSERT INTO [TestTable] VALUES ('val')
    GO

    When the seed value is 11, the next row inserted gets 11 as its identity column value.

    -- Select Data
    SELECT * FROM [TestTable]
    GO

    Effect of the DELETE statement:

    -- Delete Data
    DELETE FROM [TestTable]
    GO

    When the DELETE statement is executed without a WHERE clause it deletes all the rows. However, when a new record is inserted the identity value increases from 11 to 12. It does not reset but keeps on increasing.

    -- Build sample data
    INSERT INTO [TestTable] VALUES ('val')
    GO
    -- Select Data
    SELECT * FROM [TestTable]

    Effect of the TRUNCATE statement:

    -- Truncate table
    TRUNCATE TABLE [TestTable]
    GO

    When the TRUNCATE statement is executed it removes all the rows, but when a new record is inserted the identity value starts again from 11 (the original value). TRUNCATE resets the identity value to the original seed value of the table.

    -- Build sample data
    INSERT INTO [TestTable] VALUES ('val')
    GO
    -- Select Data
    SELECT * FROM [TestTable]
    GO

    Effect of the RESEED statement: notice that I am using a reseed value of 1. The original seed value when I created the table was 11; however, I am now reseeding it with the value 1.

    -- Reseed
    DBCC CHECKIDENT ('TestTable', RESEED, 1)
    GO

    When we insert one more row and check, the new value generated is 2. The logic for this new value is Reseed Value + Interval Value; in this case that is 1 + 1 = 2.

    -- Build sample data
    INSERT INTO [TestTable] VALUES ('val')
    GO
    -- Select Data
    SELECT * FROM [TestTable]
    GO

    Here is the clean-up act.

    -- Clean up
    DROP TABLE [TestTable]
    GO

    Question for you: If I reseed with some random number and then run the TRUNCATE command on the table, what will the seed value of the table be? (For example, if the original seed value is 11 and I reseed the value to 1, and I follow up with a truncate of the table, what will the seed value be now?) Here is the complete script together. You can modify it and find the answer to the above question. Please leave a comment with your answer. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Ask the Readers: Would You Be Willing to Give Windows Up and Use a Different O.S.?

    - by Asian Angel
    When it comes to computers, Windows definitely rules the desktop in comparison to other operating systems. What we would like to know this week is if you would actually be willing to give up using Windows altogether and move to a different operating system on your computers. Note: This week's Ask the Readers post is posing a hypothetical situation, so please refrain from starting arguments or a flame war in the comments. Good reasoned discussion is always welcome. There is no doubt that Windows is the dominant operating system in use today. Everywhere you go or look it is easy to find computers with Windows installed, such as at work, home, the library, government offices, and more. For many people it is the operating system that they know and are comfortable with, which makes changing to a different operating system less appealing. Adding to the preference for Windows (or dependency, based on your view) is the custom software that many businesses use on a daily basis. Throw in the high volume of people who depend on and use Microsoft Office as a standard for their business documents, and it is little wonder that Windows is so dominant. So what would you use if you did decide to take a break from or permanently move away from Windows? If your choice is Linux then you have a large and wonderful variety of distributions to choose from based on what you want out of your system. Want a distribution that is easy to work with? You could choose Ubuntu, Linux Mint, or others that are engineered to be ready to go "out of the box". Like a challenge? Perhaps Arch Linux is more your style. One of the most attractive features of all about Linux is the price: it is very hard to beat free! Maybe Mac OS X sounds like the perfect choice. It has a certain mystique and elegance associated with it, and many OS X fans refuse to use anything else if given a choice. Then there is the soon-to-be-released Chrome OS with its emphasis on cloud computing. This is a system that is definitely focused on being as low-maintenance and hassle-free as possible. Quick on, quick off, minimalist, and made to be portable. All of the system's updates will occur automatically, leaving you free to work and play in the cloud. But it does have its limitations: no installing all of those custom apps that you love using on Windows or other systems. It is literally all about the browsing window and web apps. So there you have it. If the opportunity presented itself would you, could you give Windows up and use a different operating system? Would it be easy or hard for you to do? Perhaps it would not really matter so long as you could do what you needed or wanted to do on a computer. And maybe this is the perfect time to try something new and find out: that new favorite operating system could be just an install disc away. Let us know your thoughts in the comments!

    Read the article

  • Isis Finally Rolls Out

    - by David Dorf
    Google has rolled their wallet out for several chains; I see the NFC readers in Walgreens when I'm sent there for milk. But Isis has been relatively quiet until now. As of last week they have finally launched in their two test cities, Austin and Salt Lake City. Below are the supported carriers and phones as of now, but more phones will be added later.
    AT&T supports: HTC One™ X, LG Escape™, Samsung Galaxy Exhilarate™, Samsung Galaxy S® III, Samsung Galaxy Rugby Pro™
    T-Mobile supports: Samsung Galaxy S® II, Samsung Galaxy S® III, Samsung Galaxy S® Relay 4G
    Verizon supports: Droid Incredible 4G LTE
    Of course iPhone owners have no wallet, since Apple didn't include an NFC chip. To start using Isis, you have to take your NFC-capable phone to your carrier's store to get the SIM replaced with a more sophisticated one that has a secure element configured for Isis. The "secure element" is the cryptographic logic that secures mobile payments. Carriers like the secure element in the SIM, while non-carriers (like Google) prefer the secure element in the phone's electronics. (I'm not entirely sure if you could support both Isis and Google Wallet on the same phone. Anybody know?) Then you can download the Isis app from Google Play and load your cards. Most credit cards are supported, and there's a process to verify that the credit cards are valid. Then you can select from the list of participating retailers to "follow." Selecting a retailer allows that retailer to give you offers via the app. The app is well done and easy to use. You can select a default payment type and also switch between them easily. When the phone is tapped on the reader, there are two exchanges of information: the payment information is transferred, and then the Isis "SmartTap" information, which includes an optional loyalty number and digital coupons. Of course the value of mobile wallets comes from the ease of handling all three data types (i.e. payment, loyalty, offers). There are several advertisements for Isis running now, and my favorite is below.

    Read the article

  • Creating a Strong Bridge to the Post PC World

    - by Webgui
    Moving from location to location requires strong roads. When crossing a barrier, though, like a body of water or a valley, we are required to build a strong bridge to get us from point A to point B in a way that is fast, safe, and easy. Yet we are not talking here about driving a car or riding a bus. As the computing world moves into the post-PC era, modernizing and migrating legacy applications to harness the power of HTML5 web, cloud and mobile is one of the most difficult challenges enterprises have faced. Constant technological changes have weakened the business value of legacy systems, which have been developed over the years through huge investments. There are several risks, of course, in this move. Do you choose to simply rewrite the code of legacy apps and transform them to HTML5 one by one? This is quite expensive (according to research firm Gartner, the cost is $6 - $26 per line of code). And the pace of the rewriting process is very slow, around 170 lines per day for each developer, which slows down business productivity in a world in which no organization can afford to fall behind. Other questions include whether the new cloud-based apps will have the same functionality as the trusted applications that have worked for you for years. How will the user experience be affected? And of course, what about data security? So we are faced with the challenge of building a sturdy bridge to stabilize our move, in order to allow us to confidently and easily move our legacy applications into the post-PC era. We at Gizmox are excited to release the first downloadable Community Technology Preview (CTP) of our Instant CloudMove Transposition Studio. Developers: To download the tool, and try it out for yourself, please visit http://www.visualwebgui.com/download.aspx. The CTP is the first and only tool-based solution allowing any Microsoft Visual Studio developer to extend VB6 and .NET enterprise client/server applications into HTML5 web, cloud and mobile applications, including the ability to upgrade their code and UI while doing so. It is the only solution to fully replicate enterprise desktop application behavior in the post-PC era. With Instant CloudMove, the transposed application is available on any mobile or tablet device and browser, across any client operating system. Moreover, the extended application logic and data remain on the server behind the firewall, and therefore the application's front end is secured-by-design. We would love for you to try out the tool for yourselves and let us know what you think. How are you finding the move?

    Read the article

< Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >