Search Results

Search found 23808 results on 953 pages for 'call centre'.

Page 323 of 953

  • REGISTER NOW: FY13 LIVE Oracle PartnerNetwork Kickoff is June 26th/27th

    - by mseika
      Join us for a live online event hosted by the Oracle PartnerNetwork team as we kick off FY13. Hear messages from Judson Althoff, Oracle's SVP of Worldwide Alliances & Channels, as well as other Oracle executives, thought leaders, and partners. During Partner Kickoff you will see:
      - Judson Althoff on the FY12 recap and FY13 call to action
      - Executive addresses from Mark Hurd, Thomas Kurian, John Fowler, and regional sales executives
      - Embed, Sell and Implement the Full Portfolio
      - Business opportunities for ISVs/OEMs, system integrators, and channel partners
      - Q&A with regional Alliances & Channels executives
      Please register for your region's Partner Kickoff at the appropriate link below:
      - NAS: Tuesday, June 26 @ 8:30 am PT
      - EMEA: Tuesday, June 26 @ 2:00 pm BST
      - LAD: Tuesday, June 26 @ 2:00 pm EDT (Miami) / 3:00 pm BRT (Sao Paulo)
      - JAPAN: Wednesday, June 27 @ 10:00 am JST
      - APAC: Wednesday, June 27 @ 8:30 am IST (Bangalore) / 11:00 am SGT (Singapore) / 1:00 pm AEST (Sydney)
      Be sure to follow us around the web to get the latest on OPN! We look forward to seeing you online, The Oracle PartnerNetwork Team

    Read the article

  • Breaking through the class sealing

    - by Jason Crease
    Do you understand 'sealing' in C#?  Somewhat?  Anyway, here's the lowdown. I've done this article from a C# perspective, but I've occasionally referenced .NET when appropriate.
    What is sealing a class? By sealing a class in C#, you ensure that no class can be derived from it.  You do this by simply adding the word 'sealed' to a class definition: public sealed class Dog {} Now if you write something like " public sealed class Hamster : Dog {} " you'll get a compile error like this: 'Hamster': cannot derive from sealed type 'Dog'. If you look in an IL disassembler, you'll see a definition like this: .class public auto ansi sealed beforefieldinit Dog extends [mscorlib]System.Object Note the addition of the word 'sealed'.
    What about sealing methods? You can also seal overriding methods.  By adding the word 'sealed', you ensure that the method cannot be overridden in a derived class.  Consider the following code: public class Dog : Mammal { public sealed override void Go() { } } public class Mammal { public virtual void Go() { } } In this code, the method 'Go' in Dog is sealed.  It cannot be overridden in a subclass.  Writing this would cause a compile error: public class Dachshund : Dog { public override void Go() { } } However, we can 'new' a method with the same name.  This is essentially a new method, distinct from the 'Go' in the base class: public class Terrier : Dog { public new void Go() { } }
    Sealing properties? You can also seal properties.  You add 'sealed' to the property definition, like so: public sealed override string Name { get { return m_Name; } set { m_Name = value; } } In C#, you can only seal a property, not the underlying setters/getters.  This is because C# offers no override syntax for setters or getters.  However, in the underlying IL you seal the setter and getter methods individually - a property is just metadata.
    Why bother sealing? There are a few traditional reasons to seal: Invariance.  Other people may want to derive from your class, even though your implementation may make successful derivation near-impossible.  There may be twisted, hacky logic that could never be second-guessed by another developer.  By sealing your class, you're protecting them from wasting their time.  The CLR team has sealed most of the framework classes, and I assume they did this for this reason. Security.  By deriving from your type, an attacker may gain access to functionality that enables him to hack your system.  I consider this a very weak security precaution. Speed.  If a class is sealed, then .NET doesn't need to consult the virtual-function-call table to find the actual type, since it knows that no derived type can exist.  Therefore, it could emit a 'call' instead of a 'callvirt', or at least optimise the machine code, thus producing a performance benefit.  But I've done trials and have been unable to demonstrate this. If you have an example, please share! All in all, I'm not convinced that sealing is interesting or important.  Anyway, moving on...
    What is automatically sealed? Value types and structs.  If they were not always sealed, all sorts of things would go wrong.  For instance, structs are laid out inline within a class.  But what if you assigned a substruct to a struct field of that class?  There may be too many fields to fit. Static classes.  Static classes exist in C# but not in .NET.  The C# compiler compiles a static class into an 'abstract sealed' class.  So static classes are already sealed in C#. Enumerations.  The CLR does not track the types of enumerations - it treats them as simple value types.  Hence, polymorphism would not work.
    What cannot be sealed? Interfaces.  Interfaces exist to be implemented, so sealing to prevent implementation is dumb.  But what if you could prevent interfaces from being extended (i.e. ban declarations like "public interface IMyInterface : ISealedInterface")?  There is no good reason to seal an interface like this.  Sealing finalizes behaviour, but interfaces have no intrinsic behaviour to finalize. Abstract classes.  In IL you can create an abstract sealed class.  But C# syntax for this already exists - declaring a class as 'static' - so it forces you to declare it as such. Non-override methods.  If a method isn't declared as override it cannot be overridden, so sealing would make no difference.  Note this is stated from a C# perspective - the words are opposite in IL.  In IL, you have four choices in total: no declaration (which actually seals the method), 'virtual' (called 'override' in C#), 'sealed virtual' ('sealed override' in C#) and 'newslot virtual' ('new virtual' or 'virtual' in C#, depending on whether the method already exists in a base class). Methods that implement interface methods.  Methods that implement an interface method must be virtual, so cannot be sealed. Fields.  A field cannot be overridden, only hidden (using the 'new' keyword in C#), so sealing would make no sense.
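    For illustration, here is a minimal sketch pulling the rules above together (the type names are invented, and the commented-out lines show code the compiler would reject):

    // Sealed class: nothing can derive from it.
    public sealed class Dog { }
    // public class Hamster : Dog { }   // error: cannot derive from sealed type 'Dog'

    public class Mammal
    {
        public virtual void Go() { }
    }

    public class Hound : Mammal
    {
        // Sealed override: subclasses of Hound can no longer override Go().
        public sealed override void Go() { }
    }

    public class Dachshund : Hound
    {
        // public override void Go() { }  // error: cannot override the sealed member
        public new void Go() { }          // allowed: a brand-new method that hides Hound.Go()
    }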

    Read the article

  • How do you visually represent programming skills?

    - by TomSchober
    I had a discussion with a recruiter recently that made me wish I could visually represent programming skills. In trying to explain how skills relate, what are the important properties of those skills? Would a tagging model work (i.e. "Design Pattern," "Programming Language," "IDE," or "VCS")? Are they really hierarchical? Clarification: The real problem I see is communicating the level of granularity among skill sets. For instance saying someone "knows Java" is a uselessly broad term in describing what someone can DO. However saying they know how to write web services with the Java Programming language is a bit better. To go even further, saying they know Spring as a tool under all that is probably specific enough. What should we call those levels of granularity? What are the relationships between the terms we use? i.e. Framework to Language, Tool to Language, Framework to Solution(like web services), etc.

    Read the article

  • Building Paths Fluently

    - by PSteele
    If you ever need to “build” a path (i.e. “C:\folder1\subfolder”), you really should be using Path.Combine.  It makes sure the trailing directory separator is in between the two paths (and it puts the appropriate character in – DirectorySeparatorChar).  I wanted an easier way to build a path than having to constantly call Path.Combine so I created a handy little extension method that lets me build paths “fluently”: public static string PathCombine(this string path1, string path2) { return Path.Combine(path1, path2); } Now I can write code like this: var dir = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments) .PathCombine("Folder1") .PathCombine("Folder2"); Technorati Tags: .NET,Extension Methods,Fluent
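    A natural extension of this idea (my own sketch, not part of the original post) is a params overload, so any number of segments can be combined in one call:

    using System.IO;

    public static class PathExtensions
    {
        // Hypothetical variadic variant of the PathCombine extension above.
        public static string PathCombine(this string path1, params string[] morePaths)
        {
            string result = path1;
            foreach (string segment in morePaths)
            {
                result = Path.Combine(result, segment);
            }
            return result;
        }
    }

    // Usage:
    // var dir = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments)
    //               .PathCombine("Folder1", "Folder2");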

    Read the article

  • WebCast: WebCenter Portal, April 03rd, 2012

    - by rituchhibber
    Our next WebCenter Portal webcast will be on April 3rd, 2012. This webcast will help you prepare for the WebCenter Portal Certified Implementation Specialist exam.
    Webcast details:
    - Date: April 3rd
    - Topic: WebCenter Portal Refresh Course
    - Speaker: Yannick Ongena, InfoMentum (WebCenter Portal Specialized Partner)
    - Web call details: Join Webcast; dial-in numbers: CC/SP: 1579222/9221
    - Intercall details: Time: 12:00-15:00 CET (break around 13:30); Conference ID/Key: 9729232/0304
    For more details, please click here.

    Read the article

  • Any valid reason to Nest Master Pages in ASP.Net rather than Inherit?

    - by James P. Wright
    I'm currently in a debate at work and cannot fathom why someone would intentionally avoid inheritance with master pages. For reference, here is the project setup:
    - BaseProject
      - MainMasterPage
        - SomeEvent
    - SiteProject
      - SiteMasterPage (nests MainMasterPage)
    - OtherSiteProject
      - MainMasterPage (from BaseProject)
    The debate came up because some code in BaseProject needs to know about "SomeEvent". With the setup above, the code in BaseProject needs to call this.Master.Master. That same code in BaseProject also applies to OtherSiteProject, where it is accessed as just this.Master. SiteMasterPage has no code differences, only HTML differences. If SiteMasterPage inherits MainMasterPage rather than nesting it, then all of that code is valid as this.Master. Can anyone think of a reason to use a nested master page here instead of an inherited one?
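    For illustration, here is a minimal sketch of the two access patterns being compared; the names come from the setup above, but the code itself is invented:

    using System;
    using System.Web.UI;

    // In BaseProject: the master page that owns the event.
    public class MainMasterPage : MasterPage
    {
        public event EventHandler SomeEvent;

        public void RaiseSomeEvent()
        {
            if (SomeEvent != null) SomeEvent(this, EventArgs.Empty);
        }
    }

    // Nested setup: SiteMasterPage is a separate master whose .master file points its
    // MasterPageFile at MainMasterPage, so a content page's Master is SiteMasterPage and
    // the shared BaseProject code has to reach one level further up:
    //     var main = (MainMasterPage)this.Master.Master;
    //     main.SomeEvent += OnSomeEvent;

    // Inherited setup: SiteMasterPage derives from MainMasterPage in code and only supplies
    // different markup, so the same shared code works everywhere:
    //     var main = (MainMasterPage)this.Master;
    //     main.SomeEvent += OnSomeEvent;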

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspxIn a comment on my article on what I call Informed TDD (ITDD) reader gustav asked how this approach would apply to the kata “To Roman Numerals”. And whether ITDD wasn´t a violation of TDD´s principle of leaving out “advanced topics like mocks”. I like to respond with this article to his questions. There´s more to say than fits into a commentary. Mocks and TDD I don´t see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about pocess, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access you can´t avoid using mocks. Because you don´t want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That´s not what you usually see in TDD demonstrations. However, there´s a reason for that as I tried to explain. I don´t use mocks as proxies for “expensive” resource. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced” or if you don´t want to use a tool like JustMock, then you don´t need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata. ITDD for “To Roman Numerals” gustav asked for the kata “To Roman Numerals”. I won´t explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is, how I would do this kata differently. 1. Analyse A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals” like written in this “requirement document” is not enough to start software development. Coding should only begin, if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that´s you fellow developer. Designing the interface is a task of it´s own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that´s a violation of the Single Responsibility Principle (SRP) which not only should hold for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article. 
They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I´ll spare you that. The domain is trivial and is explain in almost all kata descriptions. How roman numerals are built is not difficult to understand. What´s more difficult, though, might be to find an efficient solution to convert into them automatically. 2. Solve The usual TDD demonstration skips a solution finding phase. Like the interface exploration it´s mixed in with the implementation. But I don´t think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They´re simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you better have a plan what to code. This does not mean you have to do “Big Design Up-Front”. It just means: Have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that´s limited your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution. Here´s my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to be single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here´s the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M} Since I take IV, IX etc. as “digits” translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1) then translation is a two phase process: Find all the factors Translate the factors found Compile the roman representation Translation is just a look-up. 
Finding, though, needs some calculation: Find the highest remaining factor fitting in the value Remember and subtract it from the value Repeat with remaining value and remaining factors Please note: This is just an algorithm. It´s not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are two aspects to test: Test the translation Test the compilation Test finding the factors Testing the translation primarily means to check if the map of factors and digits is comprehensive. That´s simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check, if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check, if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check, if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process. 3. Implement First I write a test for the acceptance test cases. It´s red because there´s no implementation even of the API. That´s in conformance with “TDD lore”, I´d say: Next I implement the API: The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution: My hypothesis is that those three functions in conjunction produce correct results on the API-level. I just have to implement them correctly. That´s what I´m trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition: As you can see I dare to test a private method. Yes. That´s a white box test. But as you´ll see it won´t make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here´s the implementation to satisfy the test: It´s as simple as possible. Right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It´a pattern: if you need to do something repeatedly separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science: Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low hanging fruit: compilation. It´s even simpler than translation. A single test is enough, I guess. 
And normally I would not even have bothered to write that one, because the implementation is so simple. I don´t need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions: Again, I´m faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That´s left for a drill down with a test of the fake function: There are two main equivalence partitions, I guess: either the first factor is appropriate or some next. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there´s always a matching factor. Which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied: Great, I can move on. Now for more than a single factor: Interestingly not just one test becomes green now, but all of them. Great! You might say, then I must have done not the simplest thing possible. And I would reply: I don´t care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion of which I had thought briefly during the problem solving phase. And by the way: Also the acceptance tests went green: Mission accomplished. At least functionality wise. Now I´ve to tidy up things a bit. TDD calls for refactoring. Not uch refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren´t perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That´s why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant. Which leads to a small conversion in Find_factors(): And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class: And this is the final production code: Test coverage as reported by NCrunch is 100%: Reflexion Is this the smallest possible code base for this kata? Sure not. You´ll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as it is often the case. That´s why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution. That´s the leaves of the functional decomposition tree. 
Where things became fuzzy, since the design did not cover any more details as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them. I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find what, what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatic. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus almost is inevitable – and not left to a refactoring step at the end which is skipped often for different reasons.   PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won´t skip them because of that. And there are no shortcuts during implementation because of that.
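    For reference, here is a compact sketch of the factor-based algorithm described above; the ToRoman signature and the ArabicRomanConversions class name come from the article, while the body is a condensed illustration rather than the article's final listing:

    public static class ArabicRomanConversions
    {
        // Roman "digits" taken at face value, largest first, as described above.
        private static readonly (int Value, string Digit)[] Factors =
        {
            (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
            (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
            (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
        };

        public static string ToRoman(int arabic)
        {
            var roman = new System.Text.StringBuilder();
            foreach (var (value, digit) in Factors)
            {
                // Find the highest remaining factor fitting in the value,
                // translate it, subtract, and repeat.
                while (arabic >= value)
                {
                    roman.Append(digit);
                    arabic -= value;
                }
            }
            return roman.ToString();
        }
    }

    // ToRoman(1954) == "MCMLIV", ToRoman(2014) == "MMXIV"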

    Read the article

  • Grant a user permissions on www-data owned /var/www

    - by George Pearce
    I have a simple web server set up for some websites, with a layout something like:
    - site1: /var/www/site1/public_html/
    - site2: /var/www/site2/public_html/
    I have previously used the root user to manage files, and then given ownership back to www-data when I was done (these are WordPress sites, and www-data ownership is needed for WP uploads to work). This probably isn't the best way. I'm trying to find a way to create another user (let's call it user1) that has permission to edit files in site1, but not site2, without stopping the files from being owned by www-data. Is there any way for me to do this?

    Read the article

  • Google Webmaster tools and geotargeting at TLD and Folder levels

    - by user3390918
    Hoping you can help us with this question: we are one of Canada's leading websites, with a PR8. For privacy's sake, let's call our main URL www.OurMainWebsite.com. As we expand globally, we are planning to build a website to serve the US market but want to keep the domain above as-is from a branding perspective. Questions: Can we keep www.OurMainWebsite.com as the main Canada site and create www.OurMainWebsite.com/US as the US site? Can those two URLs be geotargeted as described, or would the fact that /US is a subfolder of the main TLD throw things off? In other words, can I target a TLD to one country and a subfolder under that TLD to another? Thanks in advance for your help.

    Read the article

  • Sending a android.content.Context parameter to a function with JNI

    - by Ef Es
    I am trying to create a method that checks for an internet connection and needs a Context parameter. The JNIHelper allows me to call static functions with parameters, but I don't know how to "retrieve" the Cocos2d-x Activity class to use it as a parameter.

    public static boolean isNetworkAvailable(Context context) {
        boolean haveConnectedWifi = false;
        boolean haveConnectedMobile = false;
        ConnectivityManager cm = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo[] netInfo = cm.getAllNetworkInfo();
        for (NetworkInfo ni : netInfo) {
            if (ni.getTypeName().equalsIgnoreCase("WIFI"))
                if (ni.isConnected())
                    haveConnectedWifi = true;
            if (ni.getTypeName().equalsIgnoreCase("MOBILE"))
                if (ni.isConnected())
                    haveConnectedMobile = true;
        }
        return haveConnectedWifi || haveConnectedMobile;
    }

    and the C++ code is:

    JniMethodInfo methodInfo;
    if (!JniHelper::getStaticMethodInfo(methodInfo, "my/app/TestApp", "isNetworkAvailable", "(android/content/Context;)V")) {
        //error
        return;
    }
    CCLog("Method found and loaded!");
    methodInfo.env->CallStaticVoidMethod(methodInfo.classID, methodInfo.methodID);
    methodInfo.env->DeleteLocalRef(methodInfo.classID);

    Read the article

  • How does an Engine like Source process entities?

    - by Júlio Souza
    [background information] In the Source engine (and its predecessors, GoldSrc and Quake's engine), the game objects are divided into two types: the world and entities. The world is the map geometry, and the entities are players, particles, sounds, scores, etc. (in the Source engine). Every entity has a think function, which does all the logic for that entity. So, if everything that needs to be processed derives from a base class with the think function, the game engine could store everything in a list and, on every frame, loop through it and call that function. At first glance this idea is reasonable, but it can take too many resources if the game has a lot of entities. [end of background information] So, how does an engine like Source take care of (process, update, draw, etc.) the game objects?
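    For illustration, here is a bare-bones sketch of the pattern described in the background section (an invented Entity base class with a think function and a per-frame update loop; this is not Source's actual code):

    using System.Collections.Generic;

    public abstract class Entity
    {
        // Each entity runs its own logic once per frame.
        public abstract void Think(float deltaTime);
    }

    public sealed class Player : Entity
    {
        public override void Think(float deltaTime) { /* input, movement, ... */ }
    }

    public sealed class World
    {
        private readonly List<Entity> entities = new List<Entity>();

        public void Add(Entity entity) => entities.Add(entity);

        // Called once per frame by the game loop.
        public void Update(float deltaTime)
        {
            foreach (var entity in entities)
            {
                entity.Think(deltaTime);
            }
        }
    }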

    Read the article

  • How do you dive into large code bases?

    - by miku
    What tools and techniques do you use for exploring and learning an unknown code base? I am thinking of tools like grep, ctags, unit tests, functional tests, class-diagram generators, call graphs, code metrics like sloccount, and so on. I'd be interested in your experiences, the helpers you used or wrote yourself, and the size of the codebase you worked with. I realize that this is also a process (happening over time) and that "learning" can mean anything from "can give a ten-minute intro" to "can refactor and shrink this to 30% of its size". Let's leave that open for now.

    Read the article

  • 500 error on Joomla website

    - by Rachel Sparks
    PHP Fatal error: Call to a member function setQuery() on a non-object in /home/josh/public_html/administrator/components/com_jfusion/plugins/phpbb3/forum.php on line 226

    Just moved over to a new server. Anyone have ideas as to what is wrong? Is this a database issue? Line 226:

    //get permissions for all forums in case more than one module/plugin is present with different settings
    $db = & JFusionFactory::getDatabase($this->getJname());
    $query = "SELECT forum_id FROM #__forums WHERE forum_type = 1 ORDER BY left_id";
    $db->setQuery($query); //226
    $forumids = $db->loadResultArray();

    Read the article

  • Should I post my PDF library for SEO? [closed]

    - by Iunknown
    Possible Duplicate: Do search engines crawl PDFs and if so are there any rules to follow when making them

    When a sales call comes in, the caller often says something like: 'I searched for 3 days before finding your product and it's exactly what I need!' That's telling me that I need some SEO work. We redid our website and streamlined it, which removed many of our 'How-To' documents. Since those PDF documents contain words that people might search for, I was wondering if I could add a 'Complete library' link to the bottom of a page that will load up the entire PDF library. Would that help my ranking?

    Read the article

  • Is it possible to make a MS-SQL Scalar function do this?

    - by Hokken
    I have a 3rd party application that can call a MS-SQL scalar function to return some summary information from a table. I was able to return the summary values with a table-valued function, but the application won't utilize table-valued functions, so I'm kind of stuck. Here's a sample from the table:

    trackingNumber, projTaskAward, expenditureType, amount
    1122, 12345-67-89, Supplies, 100
    1122, 12345-67-89, Supplies, 150
    1122, 12345-67-89, Supplies, 250
    1122, 12345-67-89, Misc, 50
    1122, 12345-67-89, Misc, 100
    1122, 98765-43-21, General, 200
    1122, 98765-43-21, Conference, 500
    1122, 98765-43-21, Misc, 300
    1122, 98765-43-21, Misc, 100
    1122, 98765-43-21, Misc, 100

    I want to summarize the amounts by projTaskAward and expenditureType, based on the trackingNumber. Here is the output I'm looking for:

    Proj/Task/Award: 12345-67-89  Expenditure Type: Supplies    Total: 500
    Proj/Task/Award: 12345-67-89  Expenditure Type: Misc        Total: 150
    Proj/Task/Award: 98765-43-21  Expenditure Type: General     Total: 200
    Proj/Task/Award: 98765-43-21  Expenditure Type: Conference  Total: 500
    Proj/Task/Award: 98765-43-21  Expenditure Type: Misc        Total: 500

    I'd appreciate any help anyone can give in steering me in the right direction.

    Read the article

  • General visual effects to meshes/entities

    - by Pacha
    I am trying to add some visual effects to some entities, meshes, or whatever you want to call them, as they are looking pretty dull in my game right now. What I want to achieve is this: http://youtu.be/zox8935PLw0?t=36s (the "texture" gets disintegrated and then goes back to normal, covering the whole mesh). I would also like to know the best way to add effects like the one in the video to my game (for example, thunder effects, shattering, etc.). I know that I can do some things with shaders, but I haven't learned them too well and I am still at a beginner level. I am using Ogre3D, and GLSL for shaders. Thanks! (Note: this is a screenshot of my game; I want to apply the effect in the video to my main character.)

    Read the article

  • /etc/init.d Character Encoding Issue

    - by Ryan Rosario
    I have a script in /etc/init.d on an EC2 image that, on machine startup, pulls in source code via SVN, builds it, and then runs it using Ant. The source code is Java. Within this code is a call to the Weka library which writes a file to disk. On most Ubuntu AMIs, and my home machines' versions of Ubuntu, there is no issue. The problem is that with certain versions/AMIs of Ubuntu, Unicode characters in the file are replaced with question marks ('?'). If I run the job manually on the trouble instance, Unicode is output to file correctly, but not when run from /etc/init.d. What might be causing this problem and how can I fix it so that Unicode characters appear correctly in files written from /etc/init.d processes?

    Read the article

  • What naming anti-patterns exist?

    - by Billy ONeal
    There are some names where, if you find yourself reaching for them, you know you've already messed something up. For example: XxxManager. This is bad because a class name should describe what the class does. If the most specific word you can come up with for what the class does is "manage," then the class is too big. What other naming anti-patterns exist? EDIT: To clarify, I'm not asking "what names are bad" -- that question is entirely subjective and there's no way to answer it. I'm asking "what names indicate overall design problems with the system." That is, if you find yourself wanting to call a component Xyz, that probably indicates the component is ill-conceived. Also note that there are exceptions to every rule -- I'm just looking for warning flags for when I really need to stop and rethink a design.

    Read the article

  • Facing problem with "gtk.RESPONSE_OK" in the simple-player quickly tutorial

    - by sumit_gt
    I am fairly new to both Quickly and Python. I am facing several problems while learning to use Quickly from the following tutorial on the Ubuntu developers site: http://developer.ubuntu.com/resources/app-developer-cookbook/multimedia/creating-a-simple-media-player/

    I'm unable to understand the following error:

    Traceback (most recent call last):
      File "/home/sumit/Sumit/simple-player/simple_player/SimplePlayerWindow.py", line 36, in on_openbutton_clicked
        if response==gtk.RESPONSE_OK:
    NameError: global name 'gtk' is not defined

    I realize that I am supposed to import something, so I tried to add import gtk, but it didn't work and gave the following error:

    from gtk import _gtk
    /usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:40: Warning: g_type_get_qdata: assertion `node != NULL' failed
      from gtk import _gtk

    I have followed every step of the tutorial so far, but there is no mention of any imports other than "prompts" and "os". Please help.

    Contribution of Agmenor, facing the same problem: I also tried to replace the text if response == gtk.RESPONSE_OK: with if response == Gtk.RESPONSE_OK: (notice the capital G). This gives another error:

    AttributeError: 'gi.repository.Gtk' object has no attribute 'RESPONSE_OK'

    Read the article

  • How can I make a multiseat with Xephyr in Ubuntu 11.10?

    - by Jnts
    I'm trying to set up a multiseat configuration with Ubuntu, but I can't make it work. I've read a lot of how-tos, and most of them are about doing multiseat on a distro with GDM2 or KDM. But I'm using Ubuntu's LightDM. So now I'm trying to set up multiseat with Xephyr, which I already used to build a multiseat setup on Debian 4. But I don't know how to call Xephyr in lightdm.conf. Sorry for my totally crap English.

    Read the article

  • "Cannot import name genshi" error when installing the Swab library

    - by ATMathew
    I'm trying to install the Swab library for Python 2.6 on Ubuntu 10.10. However, I get the following error message when I try to import it. In the terminal I ran:

    sudo easy_install swab
    sudo easy_install Genshi

    In the Python interpreter I ran:

    >>> import swab
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/local/lib/python2.6/dist-packages/swab-0.1.2-py2.6.egg/swab/__init__.py", line 23, in <module>
        from pestotools.genshi import genshi, render_docstring
    ImportError: cannot import name genshi

    I don't know what's going on. Can anyone help?

    Read the article

  • Functional Adaptation

    - by Charles Courchaine
    In real life and OO programming we’re often faced with using adapters, DVI to VGA, 1/4” to 1/8” audio connections, 110V to 220V, wrapping an incompatible interface with a new one, and so on.  Where the adapter pattern is generally considered for interfaces and classes a similar technique can be applied to method signatures.  To be fair, this adaptation is generally used to reduce the number of parameters but I’m sure there are other clever possibilities to be had.  As Jan questioned in the last post, how can we use a common method to execute an action if the action has a differing number of parameters, going back to the greeting example it was suggested having an AddName method that takes a first and last name as parameters.  This is exactly what we’ll address in this post. Let’s set the stage with some review and some code changes.  First, our method that handles the setup/tear-down infrastructure for our WCF service: 1: private static TResult ExecuteGreetingFunc<TResult>(Func<IGreeting, TResult> theGreetingFunc) 2: { 3: IGreeting aGreetingService = null; 4: try 5: { 6: aGreetingService = GetGreetingChannel(); 7: return theGreetingFunc(aGreetingService); 8: } 9: finally 10: { 11: CloseWCFChannel((IChannel)aGreetingService); 12: } 13: } Our original AddName method: 1: private static string AddName(string theName) 2: { 3: return ExecuteGreetingFunc<string>(theGreetingService => theGreetingService.AddName(theName)); 4: } Our new AddName method: 1: private static int AddName(string firstName, string lastName) 2: { 3: return ExecuteGreetingFunc<int>(theGreetingService => theGreetingService.AddName(firstName, lastName)); 4: } Let’s change the AddName method, just a little bit more for this example and have it take the greeting service as a parameter. 1: private static int AddName(IGreeting greetingService, string firstName, string lastName) 2: { 3: return greetingService.AddName(firstName, lastName); 4: } The new signature of AddName using the Func delegate is now Func<IGreeting, string, string, int>, which can’t be used with ExecuteGreetingFunc as is because it expects Func<IGreeting, TResult>.  Somehow we have to eliminate the two string parameters before we can use this with our existing method.  This is where we need to adapt AddName to match what ExecuteGreetingFunc expects, and we’ll do so in the following progression. 1: Func<IGreeting, string, string, int> -> Func<IGreeting, string, int> 2: Func<IGreeting, string, int> -> Func<IGreeting, int>   For the first step, we’ll create a method using the lambda syntax that will “eliminate” the last name parameter: 1: string lastNameToAdd = "Smith"; 2: //Func<IGreeting, string, string, int> -> Func<IGreeting, string, int> 3: Func<IGreeting, string, int> addName = (greetingService, firstName) => AddName(greetingService, firstName, lastNameToAdd); The new addName method gets us one step close to the signature we need.  
Let’s say we’re going to call this in a loop to add several names, we’ll take the final step from Func<IGreeting, string, int> -> Func<IGreeting, int> in line as a lambda passed to ExecuteGreetingFunc like so: 1: List<string> firstNames = new List<string>() { "Bob", "John" }; 2: int aID; 3: foreach (string firstName in firstNames) 4: { 5: //Func<IGreeting, string, int> -> Func<IGreeting, int> 6: aID = ExecuteGreetingFunc<int>(greetingService => addName(greetingService, firstName)); 7: Console.WriteLine(GetGreeting(aID)); 8: } If for some reason you needed to break out the lambda on line 6 you could replace it with 1: aID = ExecuteGreetingFunc<int>(ApplyAddName(addName, firstName)); and use this method: 1: private static Func<IGreeting, int> ApplyAddName(Func<IGreeting, string, int> addName, string lastName) 2: { 3: return greetingService => addName(greetingService, lastName); 4: } Splitting out a lambda into its own method is useful both in this style of coding as well as LINQ queries to improve the debugging experience.  It is not strictly necessary to break apart the steps & functions as was shown above; the lambda in line 6 (of the foreach example) could include both the last name and first name instead of being composed of two functions.  The process demonstrated above is one of partially applying functions, this could have also been done with Currying (also see Dustin Campbell’s excellent post on Currying for the canonical curried add example).  Matthew Podwysocki also has some good posts explaining both Currying and partial application and a follow up post that further clarifies the difference between Currying and partial application.  In either technique the ultimate goal is to reduce the number of parameters passed to a function.  Currying makes it a single parameter passed at each step, where partial application allows one to use multiple parameters at a time as we’ve done here.  This technique isn’t for everyone or every problem, but can be extremely handy when you need to adapt a call to something you don’t control.
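    To make the idea stand alone, here is a small sketch (an invented Add function, no WCF plumbing) contrasting partial application with the currying mentioned above:

    using System;

    static class PartialApplicationDemo
    {
        static int Add(int a, int b, int c) => a + b + c;

        // Partial application: fix one argument, keep the rest.
        static Func<int, int, int> ApplyFirst(Func<int, int, int, int> f, int a)
            => (b, c) => f(a, b, c);

        // Currying: turn a 3-argument function into a chain of 1-argument functions.
        static Func<int, Func<int, Func<int, int>>> Curry(Func<int, int, int, int> f)
            => a => b => c => f(a, b, c);

        static void Main()
        {
            var addTen = ApplyFirst(Add, 10);     // Func<int, int, int>
            Console.WriteLine(addTen(3, 4));      // 17

            var curried = Curry(Add);
            Console.WriteLine(curried(10)(3)(4)); // 17
        }
    }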

    Read the article

  • Unmet Dependencies with kdelibs5-data

    - by Jitesh
    I was trying to install Amarok 1.4 on Ubuntu 12.04 (Gnome-classic), by following this instructions. Problem started after giving these two commands dpkg -i kdelibs5-data_4.6.2-0ubuntu4_all.deb dpkg -i kdelibs-data_3.5.10.dfsg.1-5ubuntu2_all.deb Now, immediately after these commands, Ubuntu Updater popped up and gave me an error that the package catalog is broken and needs to be repaired. Nothing can be installed or removed till then. It also offered a suggestion to run apt-get install -f. I tried that, but again got the same error.Also tried apt-get clean followed by apt-get install -f. Again got the following output: jitesh@jitesh-desktop:~$ sudo apt-get clean jitesh@jitesh-desktop:~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: kdelibs5-data The following packages will be upgraded: kdelibs5-data 1 upgraded, 0 newly installed, 0 to remove and 18 not upgraded. 2 not fully installed or removed. Need to get 2,832 kB of archives. After this operation, 2,998 kB disk space will be freed. Do you want to continue [Y/n]? y Get:1 http: //in.archive.ubuntu.com/ubuntu/precise-updates/main kdelibs5-data all 4:4.8.4a-0ubuntu0.2 [2,832 kB] Fetched 2,832 kB in 32s (86.6 kB/s) dpkg: dependency problems prevent configuration of kdelibs5-data: libplasma3 (4:4.8.4a-0ubuntu0.2) breaks kdelibs5-data (<< 4:4.6.80~) and is installed. Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4. kate-data (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed. Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4. katepart (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed. Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4. dpkg: error processing kdelibs5-data (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of kdelibs-data: kdelibs-data depends on kdelibs5-data; however: Package kdelibs5-data is not configured yet. No apport report written because MaxReports is reached already dpkg: error processing kdelibs-data (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: kdelibs5-data kdelibs-data W: Duplicate sources.list entry http://archive.canonical.com/ubuntu/ precise/partner i386 Packages (/var/lib/apt/lists/archive.canonical.com_ubuntu_dists_precise_partner_binary-i386_Packages) W: You may want to run apt-get update to correct these problems E: Sub-process /usr/bin/dpkg returned an error code (1) As I thought the error was related to configuring kdelibs, I tried to configure using dpkg. But got the following errors: jitesh@jitesh-desktop:~$ sudo dpkg --configure -a dpkg: dependency problems prevent configuration of kdelibs5-data: libplasma3 (4:4.8.4a-0ubuntu0.2) breaks kdelibs5-data (<< 4:4.6.80~) and is installed. Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4. kate-data (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed. Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4. katepart (4:4.8.4-0ubuntu0.1) breaks kdelibs5-data (<< 4:4.6.90) and is installed. Version of kdelibs5-data to be configured is 4:4.6.2-0ubuntu4. 
dpkg: error processing kdelibs5-data (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of kdelibs-data: kdelibs-data depends on kdelibs5-data; however: Package kdelibs5-data is not configured yet. dpkg: error processing kdelibs-data (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: kdelibs5-data kdelibs-data jitesh@jitesh-desktop:~$ Now I dont have any idea how to proceed. I am unable to install anything from Software Centre or using Terminal now. Some basic info: Core2Duo, dual booting Ubuntu 12.04 with Win7. Fresh install of Ubuntu 12.04 (not upgrade). Incidentally, I had first upgraded from 10.04 and had succesfully installed Amarok 1.4 following this same method. But due to other issues, i had to format and do a clean install of 12.04. Now when I tried to install Amarok 1.4, I'm getting these errors. I also have digiKam and k3b installed, if that can be of any help. I use digiKam a lot, so removing KDE is not feasible for me. Any help on this issue will be highly appreciated. Thanks in advance.

    Read the article

  • Simple Little Registry Editor - New Release

    - by Bruce Eitman
    I have posted a new release of the Simple Little Registry Editor found in Windows CE: Simple Little Registry Editor.  This release fixes a problem with writing DWORD values when the most significant bit is set.  The application uses RegistryKey.SetValue.  There seems to be a problem with how the .NET CompactFramework (and the full framework) handle the second argument during the call which causes an exception. So the following does not work: RegistryKey.SetValue( "TestValue", 0xFFFFFFFF, RegistryValueKind.DWord ); But, this does: RegistryKey.SetValue( "TestValue",unchecked((int) 0xFFFFFFFF), RegistryValueKind.DWord ); Copyright © 2012 – Bruce Eitman All Rights Reserved
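    For illustration, the workaround above can be wrapped in a small helper; the helper name is mine and is not part of the released tool:

    using Microsoft.Win32;

    static class RegistryDwordHelper
    {
        // Writes a DWORD whose most significant bit may be set by reinterpreting
        // the unsigned value as a signed int, as described above.
        public static void SetDwordValue(RegistryKey key, string name, uint value)
        {
            key.SetValue(name, unchecked((int)value), RegistryValueKind.DWord);
        }
    }

    // Usage: RegistryDwordHelper.SetDwordValue(someKey, "TestValue", 0xFFFFFFFF);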

    Read the article
