Search Results

Search found 5543 results on 222 pages for 'legacy terms'.

Page 123/222

  • MATLAB: Best fitness vs mean fitness, initial range

    - by Sa Ta
    Based on the example of Rastrigin's function. In the plot function, if I choose 'Best fitness', 'Mean fitness' will also be plotted on the same graph. I understand 'best fitness' well: it plots the best function value in each generation versus the iteration number, and it reaches zero after some time. I don't understand the 'mean fitness' values in the plotted graph. What do those 'mean fitness' values mean, and how does the 'mean fitness' graph help in understanding Rastrigin's function? What do the terms initial population, initial score, and initial range mean? I wish to have a better understanding of these terms. The default value for initial range is [0,1]. Does it mean that 0 is the lower bound (lb) and 1 is the upper bound (ub)? Do these values interfere with the lb and ub values I set in the constraints? I am trying to better understand lb and ub: if my lb is 0 and ub is 5, does it mean that my final point values will be within 0 and 5? If I know the lb and ub for my problem are 0 and 5, should I just set the initial range to [0,5] at all times, and may I assume that this is the best option for the initial range, so that I need not try any other values?
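
    The difference between the two curves can be illustrated outside of MATLAB's ga toolbox. Below is a minimal Python sketch (an illustration only, not the toolbox's implementation): in each generation, 'best fitness' is the minimum score in the population, while 'mean fitness' is the average score, which shows how far the population as a whole has converged.

    ```python
    import math
    import random

    def rastrigin(x):
        # Rastrigin's function: global minimum of 0 at x = (0, ..., 0)
        return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

    # A toy population drawn from an initial range of [0, 1] in each dimension
    population = [[random.uniform(0, 1) for _ in range(2)] for _ in range(20)]
    scores = [rastrigin(ind) for ind in population]

    best_fitness = min(scores)                 # the 'Best fitness' point for this generation
    mean_fitness = sum(scores) / len(scores)   # the 'Mean fitness' point for this generation
    print(best_fitness, mean_fitness)
    ```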

    Read the article

  • Lightning fast forum based around metadata / tags? [closed]

    - by Dan W
    Possible Duplicate: What Forum Software should I use? I wonder if anything like this exists. I'd like to add a forum to my site, but instead of the usual forum/subforum/sub-subforum structure, I'd like to use a metadata/tag approach where everything exists in a single directory, and where there's a search field at the top which instantly (<0.5 sec) filters the threads to a particular keyword or keywords. Also, as the admin, I would be able to add highly visible buttons at the top which can be clicked for the main categories I choose for the forum (users can nevertheless add tags to their own threads beyond these default main tags if they wish). This approach, if done properly, is more powerful, efficient, maintenance-free, scalable, and friendly than a standard forum, so I was hoping someone had the same idea and made something of it. It couldn't be that hard. I'd want the speed to be up to (or near) the standard of this: http://forum.dlang.org/ Other forums (e.g. phpBB) are orders of magnitude worse than that in terms of latency (posting or browsing), and I think that is wrong, even in principle ;)

    Read the article

  • What's a good entity hierarchy for a 2D game?

    - by futlib
    I'm in the process of building a new 2D game out of some code I wrote a while ago. The object hierarchy for entities is like this: Scene (e.g. MainMenu): contains multiple entities and delegates update()/draw() to each. Entity: base class for all things in a scene (e.g. MenuItem or Alien). Sprite: base class for all entities that just draw a texture, i.e. don't have their own drawing logic. Does it make sense to split entities and sprites up like that? I think that in a 2D game the terms entity and sprite are somewhat synonymous, right? But I do believe I need some base class for entities that just draw a texture, as opposed to drawing themselves, to avoid duplication; most entities are like that. One weird case is my Text class: it derives from Sprite, which accepts either the path of an image or an already loaded texture in its constructor. Text loads a texture in its constructor and passes that to Sprite. Can you outline a design that makes more sense? Or point me to a good object-oriented reference code base for a 2D game? I could only find 3D engine code bases of decent code quality, e.g. Doom 3 and HPL1Engine.
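
    For reference, the hierarchy described in the question might look like the following minimal Python sketch (the class and method names are the question's own; the drawing call is a placeholder):

    ```python
    class Entity:
        """Base class for all things in a scene (e.g. MenuItem or Alien)."""
        def update(self, dt): pass
        def draw(self, surface): pass

    class Sprite(Entity):
        """An entity whose draw() just blits a texture; subclasses supply
        a texture instead of their own drawing logic."""
        def __init__(self, texture):
            self.texture = texture
            self.x, self.y = 0, 0
        def draw(self, surface):
            surface.blit(self.texture, (self.x, self.y))  # placeholder draw call

    class Scene:
        """Contains multiple entities and delegates update()/draw() to each."""
        def __init__(self):
            self.entities = []
        def update(self, dt):
            for e in self.entities:
                e.update(dt)
        def draw(self, surface):
            for e in self.entities:
                e.draw(surface)
    ```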

    Read the article

  • Shared to Dedicated, or Amazon CloudFront, to improve performance and stay secure?

    - by user978548
    I have a WordPress site which currently takes about 1.8s to 2.5s for the home page to completely load in my country. The page weight is about 700 KB (static content included). In order to increase performance, I'm considering two solutions: switching to a dedicated host, or using Amazon S3/CloudFront to serve static content. My current shared hosting has servers in a neighboring country but not exactly in mine, and both Amazon and the dedicated hosting have some, so that's already an advantage. Considering all that, I still have three questions remaining: Currently having low traffic (100 unique visitors/day, but growing), will it make a huge difference between my shared hosting and a dedicated server? Knowing that I already use a cookie-less domain to deliver static content (but using a redirection to the same server), would using Amazon S3 make a real difference? Regarding the cons of dedicated vs Amazon S3: if I choose for the dedicated server something like Ubuntu Server, do daily package updates, and have only port 80 open, would it be sufficient in terms of security (in comparison with my current shared hosting, which manages everything for me)?

    Read the article

  • Difference between bug, defect and flaw

    - by Hossein
    I was reading "Software Security: Building Security In", and in the first chapter I came across three terms: bug, defect, and flaw. The author gave a definition for each of them, but I couldn't completely understand these. Can someone give me some examples for each term? What is a defect, and what is a flaw? I think I know what a bug is: a malfunction of a part of the system which produces an undesirable result, be it crashing on a wrong input or miscalculating a series of computations. Can someone elaborate more and correct me if I am wrong in this? UPDATE: To be more precise, in the book I mentioned above the words are presented in a way to make a distinction, which is why I am asking to know more. That book gives some examples denoting which sample belongs to which category. For example: buffer overflow is said to be a bug, and issues in method overriding (subclassing issues) are put in the flaw category. Likewise, race-condition handling issues are considered bugs, and error-handling problems (failing open) are said to be flaws! I would like more elaboration on this.
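
    To make the book's distinction concrete, here is a small illustrative Python sketch (my own example, not taken from the book): the first function contains a bug, an implementation-level error fixable at the line level, while the second embodies a flaw, a fail-open design decision that no local one-line fix can repair.

    ```python
    def average(values):
        # Bug: an implementation-level error. Dividing by a hard-coded
        # constant instead of len(values) miscalculates the result, and a
        # one-line fix repairs it.
        return sum(values) / 10

    def check_access(user, acl_service):
        # Flaw: a design-level error. If the ACL lookup fails, the code
        # "fails open" and grants access. Each line is locally fine; the
        # decision to default to allow is what is wrong.
        try:
            return acl_service.is_allowed(user)
        except ConnectionError:
            return True  # fail open - a security flaw by design
    ```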

    Read the article

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, normally the computer or operating system will represent it in terms of powers of 1024 - a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or gigabytes, it has to be converted into decimal first, and in places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". Using the binary definition of "gigabyte" would seem to artificially inflate the byte count of a device, making drive-makers either pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or simply pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
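
    The "1 TB shows up as 931 GB" figure mentioned above falls straight out of the arithmetic; a quick sketch:

    ```python
    decimal_tb = 10**12   # what the vendor sells: one trillion bytes
    binary_gb = 2**30     # what the OS calls a "GB" (a gibibyte): 1,073,741,824 bytes

    print(decimal_tb / binary_gb)  # ~931.32, so the drive shows up as "931 GB"
    print(2**40 / 10**9)           # ~1099.51, so a binary terabyte is ~1.1 decimal TB
    ```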

    Read the article

  • Profit: August 2012

    - by user462779
    The August 2012 issue of Profit is now available online. Way back in 2003, I wrote my first feature for Profit. It was titled “Everything You Always Wanted to Know About Application Servers (But Were Afraid To Ask),” and it discussed “cutting-edge” technologies like portals and XML and the brand-new Java Platform, Enterprise Edition (Java EE; we’re now on Java EE 7). But despite the dated terms I used in my Profit debut, I noticed something in rereading that old story that has stayed constant: mid-tier technology is where innovative enterprise IT projects happen. It may have been XML in 2003, but it’s SOA in 2012. While preparing the August issue of Profit was more than just a stroll down memory lane for me, it has provided a nice bit of perspective about what changes and what doesn’t in this dynamic IT industry. Technologies continuously evolve—some become standard practice, some are revived or reinvented, and some are left by the wayside. But the drive to innovate and the desire to succeed are business principles that never go out of fashion. Also, be sure to check out the Profit JD Edwards Special Issue 2012 (PDF), featuring partner profiles, customer successes, and Oracle executive interviews. In this issue: The Middleware Advantage - three ways a flexible, integrated software layer can deliver a competitive edge. Playing to Win - Electronic Arts’ superefficient hub processes millions of online gaming transactions every day. Adjustable Loans - with Oracle Exadata, Reliance Commercial Finance keeps pace with India’s commercial loan market. Future Proof - to keep pace with mobile, social, and location-based services, smart technologists are using middleware to innovate. Spring Training - knowledge and communication help Jackson Hewitt’s Tim Bechtold get seasonal workers in top shape. Keeping Online Customers Happy - customers worldwide are comfortable with online service, but are companies meeting customers’ needs?

    Read the article

  • Handling extremely large numbers in a language which can't handle them?

    - by Mallow
    I'm trying to think about how I would go about doing calculations on extremely large numbers (ad infinitum - integers, not floats) if the language construct is incapable of handling numbers larger than a certain value. I am sure I am neither the first nor the last to ask this question, but the search terms I am using aren't giving me an algorithm to handle those situations. Rather, most suggestions offer a language change or variable change, or talk about things that seem irrelevant to my search. So I need a little guidance. I would sketch out an algorithm like this: Determine the max length of the integer variable for the language. If a number is more than half the max length of the variable, split it into an array (to give a little play room). Array order: [0] = the digits furthest to the right ... [n-max] = the digits furthest to the left. E.g. for the number 29392023: array[0]: 23, array[1]: 20, array[2]: 39, array[3]: 29. Since I established half the length of the variable as the cut-off point, I can then track the ones, tens, hundreds, etc. place via the halfway mark, so that if a variable's max length was 10 digits (0 to 9999999999), then halving that to five digits gives me some play room. So if I add or multiply, I can have a checker function that sees that the sixth digit (from the right) of array[0] is in the same place as the first digit (from the right) of array[1]. Dividing and subtracting have their own issues which I haven't thought about yet. I would like to know about the best implementations for supporting larger numbers than the language can.
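
    The chunked-array idea sketched in the question is essentially how arbitrary-precision libraries work. Here is a minimal Python sketch of addition using base-100 "limbs" (two decimal digits per array slot, least significant first, matching the question's example). Python's own ints are already arbitrary precision, so this is purely illustrative:

    ```python
    def to_limbs(n, base=100):
        # Split a number into base-100 chunks, least significant first:
        # 29392023 -> [23, 20, 39, 29]
        limbs = []
        while n:
            limbs.append(n % base)
            n //= base
        return limbs or [0]

    def add_limbs(a, b, base=100):
        # Schoolbook addition with carry propagation between limbs.
        result, carry = [], 0
        for i in range(max(len(a), len(b))):
            s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
            result.append(s % base)
            carry = s // base
        if carry:
            result.append(carry)
        return result

    print(to_limbs(29392023))                    # [23, 20, 39, 29]
    print(add_limbs(to_limbs(99), to_limbs(1)))  # [0, 1], i.e. 100
    ```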

    Read the article

  • Python in Finance by Yuxing Yan, Packt Publishing Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2014/06/04/python-in-finance-by-yuxing-yan-packt-publishing-book-review.aspx I picked Python in Finance from Packt Publishing to review expecting to bore myself with complex algorithms and senseless formulas while seeing little actual Python in action; indeed, at 400-plus pages it may seem so. But it turned out to be quite the opposite. I learned a lot about practical implementations of various Python modules such as SciPy, NumPy, and several more; I think they empower a developer a lot. No wonder Python is on track to become the de facto language of choice for scientists! But I am not going to compromise the truth: the book does discuss numerous financial terms, many of them, and this is where the enormous power of this book comes from: it is like standing on the shoulders of a giant. Python is that giant - flexible and powerful, yet very approachable. The TOC is very detailed, thanks to Packt, so anyone can see what financial algorithms are covered. I am only going to name a few which I had the most fun with (though all of them are covered in enough detail): Fama*, Fat Tail, ARCH, Monte Carlo, and of course the volatility smile! I am under the impression this book is best suited for students in finance, especially those who are about to join the workforce, but I suspect the material is also very well suited for seasoned finance professionals, for an investor who has some programming skills and wants to benefit from them, or even for a programmer or a mathematician who already knows Python or any other language but wants to have fun in quantitative finance and earn a few bucks! Pure fun, real results, tons of practical insight, from reading data from a file to downloading trade data from Yahoo! Lastly, I need to compliment Yuxing - he is a talented teacher; this book could not be what it is otherwise. It is a 5 out of 5 product. Disclaimer: I received a free copy of this book for review purposes from the publisher.

    Read the article

  • Advice: How to convince my newly anointed team lead against rewriting the code base from scratch

    - by shan23
    I work in a pretty renowned MNC, and the module that I work on has been assigned to a new "lead". The code base is pretty huge (~130K or more, with interdependencies on other modules), but stable - some parts have grown ugly over the years, but it's provably in a working state. (Our products have run on it for years, even new ones.) The problem is, our lead wants to rewrite the code from scratch, to encompass "finer granularity and a proactive design". I know in my gut that's not a very good idea, but how do I convince him/the rest of the team (who are pretty much more senior than me in terms of years of experience) without sounding too pedantic myself (thou shalt not rewrite, as Joel et al. have written clear articles prohibiting it)? I have a good working relationship with the person concerned and don't want to ruin it, but neither do I want to be party to a decision which would surely plague us for years to come! Any suggestions for a milder, yet effective, approach? Even accounts of how you have tackled such a situation to your liking would help me a lot! EDIT: The code base I'm talking about is not a product/GUI, but at kernel level with all the critical functionality for our product. I hope now you know why I sound so apprehensive!

    Read the article

  • Dealing with institutionalized programmers.

    - by Singleton
    Sometimes programmers who have worked on a project for a long time tend to get institutionalized. It is difficult to convince them with reasoning, and even if we manage to convince them, they are reluctant to take suggestions on board. How do we handle the situation without creating friction in the team? Institutionalized in terms of practices, that is. I recently joined a project where the build & release process was made complicated by unnecessary roadblocks. My suggestion was that we could get rid of some of the development overheads (like filling in a few spreadsheets) just by integrating the defect management and version control tools (both are IBM Rational tools, so integration can be a very easy, one-off effort). Also, by using tools like Maven & Ant (the project involves Java and some COTS products), build & release could be simplified, reducing manual errors and intervention. I managed to convince the team and am ready to put in the effort to develop a proof of concept. But the 'senior' developer is not willing to take it on board. One reason could be that the current process makes him valuable to the team.

    Read the article

  • Letting search engines know that different links to identical pages stress different parts of the page

    - by balpha
    When you follow a permalink to a chat message in the Stack Exchange chat, you get a view of the transcript page for the day that contains the particular message. This message is highlighted in yellow, and the page is scrolled to its position. Sometimes – admittedly rarely, but it happens – a web search will result in such a transcript link. Here's a (constructed, obviously) example: a Google search for strange behavior of the \bibliography command site:chat.stackexchange.com gives me a link to this chat message. This message is obviously unrelated to my query, but the transcript page does indeed contain my search terms – just in a totally different spot. Both the above links lead to the same content, and Google knows this, since both pages have <link rel="canonical" href="/transcript/41/2012/4/9/0-24" /> in their <head>. The only difference between the two links is which message has the highlight CSS class. Is there a way to let Google know that while these links have the same content, they put an emphasis on a different part of the content? Note that the permalinks on the transcript page already have a #12345 hash to "point" to the relevant chat message, but Google appears to drop it.

    Read the article

  • What is the best type of C# timer to use with a Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3D game in Unity that will have anywhere from 1 to 200 timers running simultaneously. For this game, timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers. An object will be selected, a menu choice will be selected, and the timer will start. Several events will occur at different intervals during the duration of the timer. The events will be confined to changing the material of the selected object and playing a 1-second sound effect like a chime or a bell. If the user wants to save or end the game before all the timers are done, the start times of the still-running timers are to be saved to an XML file, such that when the game is started again, a calculation is done for each still-running timer to see whether it has since finished, in which case the game will change the materials appropriately. I am still trying to figure out what type of timer to use, and would also welcome suggestions for saving and calculating times that span several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?
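
    Whatever timer class is chosen, the save-and-resume logic described above boils down to persisting an absolute start time plus a duration, and recomputing elapsed time on load. A minimal Python sketch of that idea (the question targets C#/Unity, so this shows only the concept, with hypothetical file and field names):

    ```python
    import json
    import time

    def save_timers(timers, path="timers.json"):
        # Persist absolute start times and durations (in seconds);
        # nothing needs to tick while the game is closed.
        with open(path, "w") as f:
            json.dump(timers, f)

    def load_timers(path="timers.json"):
        with open(path) as f:
            timers = json.load(f)
        now = time.time()
        for t in timers:
            elapsed = now - t["start"]
            t["done"] = elapsed >= t["duration"]              # finished while away?
            t["remaining"] = max(0.0, t["duration"] - elapsed)
        return timers

    # A 4-day timer started now
    save_timers([{"start": time.time(), "duration": 4 * 24 * 3600}])
    print(load_timers())
    ```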

    Read the article

  • School vs Self-Taught [duplicate]

    - by Joan Venge
    This question already has an answer here: Do I need a degree in Computer Science to get a junior Programming job? [closed] 8 answers Do you think university is a good learning environment or is it better to be autodidact? [closed] 3 answers Do you think formal education is necessary to gain strong programming skills? There are a lot of jobs that aren't programming but involve programming, such as tech artists in games or FX TDs in film, for example. I see similar patterns in the people I work with, where the best ones I have seen were self-taught, being artists primarily. But I also see that while their software and programming knowledge is varied and deep, their hardware knowledge is very basic (mine included), again due to the lack of formal education. Yet I also work with a lot of programmers who possess both kinds of knowledge (software and hardware). Do you think it's necessary to have a formal education to have great programming skills? Would you think less of someone if he didn't have a degree in computer science or software engineering, etc., in terms of job opportunities? Would you trust him to do a software engineering job, i.e. writing a complex tool? Basically, I feel the self-taught programmer doesn't know a lot of things, e.g. a particular pattern or a particular language, but I find the ability to think outside the box much more powerful. As "pure" programmers, what's your take on it?

    Read the article

  • How to represent an agile project to people focused on waterfall [closed]

    - by ahsteele
    Our team has been asked to represent our development efforts in a project plan. No one is unhappy with our work or questioning our ability to deliver; we are just participating in an IT cattle call for project plans. Trouble is, we are an agile team and haven't thought about our work in terms of a formal project plan. While we have a general idea of what we are working on next, we aren't 100% sure until we plan an iteration. Until now our team has largely operated in a vacuum and has not been required to present our methodology or metrics to outside parties. We follow most of the practices espoused in Extreme Programming. We hold quarterly planning meetings to get a general idea of the stories we are going to work on for a quarter. That said, our stories are documented on 3x5 cards and are only estimated at the beginning of the iteration in which they are going to be worked. After estimation we document the story in Team Foundation Server. During an iteration, we attach code to stories and mark stories as completed once finished. From this data we are able to generate burn-down and velocity charts. Most importantly, we know our average velocity for an iteration, which keeps us from biting off more than we can chew. I am not looking to modify the way we do development, but I want to present our development activities in a report that someone only familiar with waterfall will understand. In What Does an Agile Project Plan Look Like, Kent McDonald does a good job laying out the differences between agile and waterfall project plans. He specifies the differences in consumable bullets: an agile project plan is feature-based; it is organized into iterations; it has different levels of detail depending on the time frame; and it is owned by the team. Being able to explain the differences is great, but how best to present the data?

    Read the article

  • Update

    - by Jeff Certain
    This blog has been pretty quiet for a year now. There are a few reasons for that. Probably the biggest reason is that I view this as a space where I talk about .NET things, or software development. While I've been doing the latter for the past year, I haven't been doing the former. Yes, I took a trip to the dark side. I started with Ning 11 months ago, in Palo Alto, CA. I had the chance to work with an incredibly talented group of software engineers... in PHP and Java. That was definitely an eye-opening experience, in terms of technology, process, and culture. It was also a pretty good example of how acquisitions can get interesting. I'll talk more about this, I'm sure. Last week, I started with a company called Dynamic Signal. I'm a "Back End Engineer" now. Also a very talented team of people, and I'm delighted to be working with them. We're a Microsoft shop. After a year away, I'm very happy to be back. Coming back to .NET is an easy transition, and one that has me being fairly productive straight out of the gate. (Some of you may have noticed, my last post was more than a year ago. Yes, it's safe to infer that I didn't get renewed as an MVP. Fair deal; I didn't do nearly as much this year as I have in the past. I'll be starting to speak again shortly, and hope to be re-awarded soon.) At any rate, now that I'm back in the .NET space, you can expect to hear more from me soon!

    Read the article

  • Welcome to the newly merged JCP EC!

    - by Heather VanCura
    As part of the JCP.Next effort, JSR 355, Executive Committee (EC) Merge - the second JSR in the JCP program reforms - will take effect on Tuesday as JCP 2.9. The first in the effort was JSR 348, which took effect as JCP 2.8 in October 2011. EC members guide the evolution of Java technologies by approving and voting on all technology proposals (Java Specification Requests, or JSRs). They are also responsible for defining the JCP's rules of governance and the legal agreement between members and the organization. They provide guidance to the Program Management Office (PMO), and they represent the interests of the JCP to the broader community. Starting on Tuesday, 13 November, JCP 2.9 is in effect, and the two ECs - one representing Java SE/EE and one representing Java ME - are merged into a single EC. IBM and Oracle each gave up one of their two seats (one per EC), and the terms expired for four members who did not run for re-election: AT&T, Deutsche Telekom, Siemens, and Vodafone. All four remain JCP members. In addition, the seat occupied by RIM was forfeited due to lack of participation in October 2012. The JCP values these organizations and representatives for their contribution to the JCP EC, and looks forward to their continued participation in the JCP program. The complete listing of the EC, 24 members in total at the moment, is now available. We asked the two newcomers to the EC, Cinterion and CloudBees, and the re-elected London Java Community, to comment on their plans for their term on the EC. Read about their plans in the article published on JCP.org, "JCP 2.9 with a Merged EC Takes Effect 13 November". Also, plan to attend the public EC meeting (open to all community members) planned for 20 November at 15:00 PST. Details will be posted here and on the JCP.org home page next week.

    Read the article

  • Today's Links (6/28/2011)

    - by Bob Rhubart
    Connecting People, Processes, and Content: An Online Event | Brian Dirking - Dirking shares information on an Oracle Online Forum coming up on July 19.
    Social Relationships don't count until they count | Steve Jones - "It's actually the interactions that matter to back up the social experience rather than the existence of a social link," says Jones.
    ORACLENERD: KScope 11: Cary Millsap - Commenting on Cary Millsap's KScope presentation on Agile, Oracle ACE Chet Justice says, "I fight with methodology on a daily basis, mostly resulting in me hitting my head against the closest wall."
    The Sage Kings of Antiquity | Richard Veryard - "Given that the empirical evidence for enterprise architecture is fairly weak, anecdotal and inconclusive, we are still more dependent than we might like on the authority of experts," says Veryard, "whether this be semi-anonymous committees (such as TOGAF) or famous consultants (such as Zachman)."
    Oracle Business Intelligence Blog: New BI Mobile Demos - "These are short videos that showcase some of the capabilities in our mobile app," says Abhinav Agarwal. "One focuses on the Oracle BI platform, while the other showcases what is possible with the mobile app accessing Oracle Business Intelligence Applications, like Financial Analytics."
    MySQL HA Events in the UK, Germany & France | Oracle's MySQL Blog - Oracle is running MySQL High Availability breakfast seminars in London (June 29), Düsseldorf (July 13) and Paris (September 7). "During these free seminars, we will review the various options and technologies at your disposal to implement highly available and highly scalable MySQL infrastructures, as well as best practices in terms of architectures," says Bertrand Matthelié.
    VENNSTER BLOG: User Experience in Fusion apps - "When I heard about the Fusion Applications User Experience efforts, I was skeptical," says Oracle ACE Director Lonneke Dikmans of Vennster. "My view of Oracle and User Experience has changed drastically today."
    Power Your Cloud with Oracle Fusion Middleware - Running in over 50 cities across the globe, this event is aimed at Architects, IT Managers, and technical leaders like you who are using Fusion Middleware or trying to learn more about middleware in the context of Cloud computing.

    Read the article

  • How do you compare job offers from companies in different countries?

    - by Danny Tuppeny
    This isn't really a programmer-specific question, but I'm not sure of a more appropriate place, and I think the users of this site are best able to answer the question in the context of programmers. Relocating to the US seems fairly common in the programming industry. I live in the UK, and maybe one day, I might do it too. So, if that day comes - how would you go about comparing job offers? Benefits are fairly easy to compare, but given the differences in cost of living, how would you go about comparing salaries and the quality of living you'll have? In a country where the cost of living is lower, you might be able to accept a lower salary (based on exchange rate) and still have the same quality of living. But what can you do to ensure this? In some cases, you may even take a "pay rise" in terms of exchange rate, but end up far worse off. How can you compare job offers across different countries to get an idea of the salary you would need in order to not feel you've gone "backwards"?
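
    One rough way to anchor such a comparison is to normalize offers by a cost-of-living index rather than by exchange rate alone. A small illustrative Python sketch (all of the figures and index values are made up):

    ```python
    def equivalent_salary(home_salary, fx_rate, col_home, col_abroad):
        # Convert to the foreign currency, then scale by relative cost of
        # living, giving the foreign salary that buys the same lifestyle.
        return home_salary * fx_rate * (col_abroad / col_home)

    # Hypothetical figures: GBP 50,000 at home, 1.25 USD/GBP exchange rate,
    # cost-of-living indices of 100 (home) vs 140 (destination city).
    print(equivalent_salary(50_000, 1.25, 100, 140))  # -> 87,500 USD to break even
    ```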

    Read the article

  • Oracle has some very helpful and free code...I think

    - by Casey
    I found that some of the code that Oracle uses is very useful, so I don't have to reinvent the wheel. Given that this is at the top of the file where the code in question is:

    ```
    /*
     * Copyright (c) 1997, 2006, Oracle and/or its affiliates. All rights reserved.
     * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
     *
     * This code is free software; you can redistribute it and/or modify it
     * under the terms of the GNU General Public License version 2 only, as
     * published by the Free Software Foundation. Oracle designates this
     * particular file as subject to the "Classpath" exception as provided
     * by Oracle in the LICENSE file that accompanied this code.
     *
     * This code is distributed in the hope that it will be useful, but WITHOUT
     * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
     * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
     * version 2 for more details (a copy is included in the LICENSE file that
     * accompanied this code).
     *
     * You should have received a copy of the GNU General Public License version
     * 2 along with this work; if not, write to the Free Software Foundation,
     * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
     *
     * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
     * or visit www.oracle.com if you need additional information or have any
     * questions.
     */
    ```

    If I leave the text intact, put it in my C++ header, credit Oracle for each method, and package the source into a static library... is it still a no-no?

    Read the article

  • JSON Support in Azure

    - by kaleidoscope
    Here is how JavaScript Object Notation (JSON) can be used in cloud applications. As we all know, client script is useful in web applications in terms of performance, and we can use jQuery the same way in ASP.NET with cloud computing: the page asynchronously pulls any messages out of the table (cloud storage) and displays them in the browser by invoking a method on a controller that returns JSON in a well-known shape. For example, suppose we want a jQuery-backed action which returns notifications as the end user interacts with our application; we would use something like the following:

    ```csharp
    public JsonResult GetMessages()
    {
        if (User.Identity.IsAuthenticated)
        {
            UserTextNotification[] userToasts =
                toastRepository.GetNotifications(User.Identity.Name);

            object[] data =
                (from UserTextNotification toast in userToasts
                 select new { title = toast.Title ?? "Notification",
                              text = toast.MessageText }).ToArray();

            return Json(data, JsonRequestBehavior.AllowGet);
        }
        else
            return Json(null);
    }
    ```

    The action above checks authentication and returns the user's notifications from the table as JSON (or null for unauthenticated users). Platform: ASP.NET 3.5, MVC 1, under Visual Studio 2008. Please see the following link for more detail: http://msdn.microsoft.com/en-us/magazine/ee335721.aspx Chandraprakash, S

    Read the article

  • Software Design Idea for multi-tier architecture

    - by Preyash
    I am currently investigating multi-tier architecture design for a web-based application in MVC3. I already have an architecture, but I am not sure if it's the best I can do in terms of extensibility and performance. The current architecture has the following components: DataTier (contains EF POCO objects); DomainModel (contains domain-related objects); Global (among other common things, contains Repository objects for CRUD to the DB); Business Layer (business logic and interaction between data and client, plus CRUD using the repository); Web (Client) (which talks to DomainModel and Business, but also has its own ViewModels for Create and Edit views, for example). Note: I am using ValueInjector for converting one type of entity to another (which is proving an overhead in this design; I really don't like overdoing this). My question is: do I have too many tiers in the above architecture? Do I really need a domain model? (I think I do when I expose my business logic via WCF to external clients.) What is happening is that a simple database insert (1) creates a ViewModel, (2) converts the ViewModel to a DomainModel for the Business layer to understand, and (3) the Business layer converts it to a DataModel for the Repository; then data comes back in the same order. A few things to consider: I am not looking for a perfect architecture solution, as one does not exist. I am looking for something that is scalable. It should be reusable (e.g. using design patterns, interfaces, inheritance, etc.). Each layer should be easily testable. Any suggestions or comments are much appreciated. Thanks,
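
    The conversion chain described in the question (ViewModel -> DomainModel -> DataModel and back again) can be made explicit in a few lines. A language-agnostic Python sketch with hypothetical types, just to show where the per-insert mapping overhead accrues:

    ```python
    from dataclasses import dataclass

    @dataclass
    class CustomerViewModel:   # shaped for the Create/Edit view
        name: str

    @dataclass
    class Customer:            # domain object with business meaning
        name: str

    @dataclass
    class CustomerRecord:      # persistence (EF POCO) shape
        name: str

    def create_customer(vm: CustomerViewModel) -> CustomerRecord:
        domain = Customer(name=vm.name)            # ViewModel  -> DomainModel
        record = CustomerRecord(name=domain.name)  # DomainModel -> DataModel
        # repository.insert(record) ... and the reverse mappings on the way out
        return record

    print(create_customer(CustomerViewModel(name="Ada")))
    ```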

    Read the article

  • Remote Working & Relocation

    - by James Burgess
    Sorry if this question is a duplicate; I did some extensive searching and found nothing on quite the same topic (though a couple on partially overlapping topics). Recently, whilst on holiday in Munich, Germany, I was taken aback by the sheer number of programming-related posts available in the city that I easily qualify for (both in terms of knowledge and experience). The advertised working environments seemed good, and the pay seemed to be at least as good as what I'd expect here in the UK. Probably 80% of the advertisements I saw on the underground were for IT-related jobs, and a good 60% of those I was easily qualified for. At the moment, I work as a freelancer, mostly on web and small software projects, but seeing the vast availability of jobs in Munich versus my local area has me thinking about remote working. I'm unable to relocate for a job for the next 3 years (my wife has a contract to continue being a doctor at her current hospital for that time), but I would almost certainly be open to it after that (after all, my wife and I both love Munich). In the meanwhile, I would be very interested in remote working. So, my question is this: do companies ever take on remote workers (even with semi-frequent trips to the office) from abroad, with a view to later relocation? And, if so, how do you go about broaching the topic with a recruiter when getting in contact about a job posting? Language isn't a barrier for me here, as 90% of the jobs I've looked up in Munich don't require German speakers (it seems they have a big recruiting market abroad). I'm also under no illusions about the disadvantages of remote working, but I'm more interested in the viability of the scenario than the intricacies (at least at this point). I'd really appreciate any contributions, especially from those who have experience working in such a scenario!

    Read the article

  • Deferred Rendering with Diffuse, Specular, and Normal Maps

    - by John
    I have been reading up on deferred rendering, and I am trying to implement a renderer using the Sponza atrium model (which can be found here) as my sandbox. Note that I am also using OpenGL 3.3 and GLSL. I am loading the model from a Wavefront OBJ file using Assimp. I extract all geometry information, including tangents and bitangents. For all the aiMaterials, I extract the following information, which essentially comes from the sponza.mtl file: ambient/diffuse/specular/emissive reflectivity coefficients (Ka, Kd, Ks, Ke); shininess; the diffuse map; the specular map; the normal map. I understand that I must render vertex attributes such as position, normals, and texture coordinates to textures, as well as depth, for the second render pass. A lot of resources mention putting colour information into a g-buffer in the initial render pass, but don't you require the diffuse, specular, and normal maps, and therefore lights, to determine the fragment colour? I know that doesn't make sense, since lighting should be done in the second render pass. In terms of normal mapping, do you essentially just pass the tangents, bitangents, and normals into g-buffers, then construct the tangent matrix and apply it to the sampled normal from the normal map? Ultimately, I would like to know how to incorporate this material information into my deferred renderer.
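
    The normal-mapping step asked about above amounts to building the TBN basis from the interpolated tangent, bitangent, and normal, then transforming the sampled tangent-space normal with it. A small numpy sketch of just the math (in the renderer itself this would run per fragment in GLSL):

    ```python
    import numpy as np

    def decode_normal(sample_rgb, T, B, N):
        # Map the [0,1] texture sample back to a [-1,1] tangent-space vector.
        n_tangent = np.asarray(sample_rgb) * 2.0 - 1.0
        # Columns of the TBN matrix are the tangent, bitangent, and normal.
        TBN = np.column_stack((T, B, N))
        n_world = TBN @ n_tangent
        return n_world / np.linalg.norm(n_world)

    # A flat normal-map texel (0.5, 0.5, 1.0) should reproduce the surface normal.
    T, B, N = np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([0., 0., 1.])
    print(decode_normal([0.5, 0.5, 1.0], T, B, N))  # -> [0. 0. 1.]
    ```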

    Read the article

  • Requesting quality assurance test cases ahead of an implementation/change

    - by arin
    Recently I was assigned to work on a major requirement that falls somewhere between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps to approach this problem: Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the widespread use and effects of this requirement, I asked whether, if it came to a point where the requirement could not be finished prior to release, it would be a viable option to trash the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no". Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written prior to the implementation (by QA) and given to me, to aid my comprehension of the task. This was a big no-no for the management folks, as they failed to understand the approach. Knowing that I had to stand by my request given my responsibility for this requirement, I insisted, and have fallen out of favor with some of those folks, leaving me baffled. Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement, and trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted to the prior state, due to the complexity of the problem and lack of time. This only happened after a 2-hour-long meeting with other seniors in order to convince the aforementioned folks.

    Read the article
