Search Results

Search found 11015 results on 441 pages for 'explain plan'.


  • Should I manage authentication on my own if the alternative is very low in usability and I am already managing roles?

    - by rumtscho
    As a small in-house dev department, we only have experience with developing applications for our intranet. We use the existing Active Directory for user account management. It contains the accounts of all company employees and many (but not all) of the business partners we have a cooperation with.

    Now, the top management wants a technology exchange application, and I am the lead dev on the new project. Basically, it is a database containing our know-how, with a web frontend. Our employees, our cooperating business partners, and people who wish to become our cooperating business partners should have access to it and see what technologies we have, so they can trade for them with the department which owns them. The technologies are not patented, but very valuable to competitors, so the department bosses are paranoid about somebody unauthorized gaining access to their technology description. This constraint necessitates a nightmarishly complicated multi-dimensional RBAC-hybrid model. As the Active Directory doesn't even contain all the information needed to infer the roles I use, I will have to manage roles plus per-technology, per-user granted access exceptions within my system.

    The current plan is to use Active Directory for authentication. This will result in a multi-hour registration process for our business partners, where the database owner has to manually create logins in our Active Directory and send them credentials. If I manage the logins in my own system, we could improve the usability a lot, for example by letting people have an active (but unprivileged) account as soon as they register. It seems to me that, since I will have a users table in the DB anyway (and will be managing ugly details like storing historical user IDs so that recycled user IDs within the Active Directory don't unexpectedly get rights to view someone's technologies), the additional complexity from implementing authentication functionality will be minimal. Therefore, I am starting to lean towards doing my own user login management and forgetting the AD altogether.

    On the other hand, I see some reasons to stay with Active Directory. First, the conventional wisdom I have heard from experienced programmers is to not do your own user management if you can avoid it. Second, we have code I can reuse for connecting to the Active Directory, while I would have to write the authentication code myself if it were done in-system (and my boss has clearly stated that getting the project delivered on time has a much higher priority than delivering a system with high usability). Third, I am not a very experienced developer (this is my first lead position) and have never done user management before, so I am afraid that I am overlooking some important reasons to use the AD, or that I am underestimating the amount of work needed to do my own authentication.

    I would like to know if there are more reasons to go with the AD authentication mechanism. Specifically, if I want to do my own authentication, what would I have to implement besides a secure connection for the login screen (which I would need anyway, even if I am only transporting the password to the AD), lookup of a password hash, and a mechanism for password recovery (which will probably include manual identity verification, so no need for complex mTAN-like solutions)? And, if you have experience with such security-critical systems, which one would you use and why?
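    Regarding the "lookup of a password hash" part: if you do roll your own login management, the usual baseline is a salted, deliberately slow key-derivation function rather than a plain hash. Below is a minimal C# sketch using PBKDF2 (Rfc2898DeriveBytes); the class name, storage format and iteration count are illustrative assumptions, not anything prescribed by the project described above.

    ```csharp
    using System;
    using System.Linq;
    using System.Security.Cryptography;

    // Hypothetical helper, not part of the project described above.
    static class PasswordHasher
    {
        private const int SaltSize = 16, HashSize = 32, Iterations = 10000;

        // Stores "iterations.salt.hash" so the parameters travel with the value.
        public static string Hash(string password)
        {
            var salt = new byte[SaltSize];
            using (var rng = RandomNumberGenerator.Create())
                rng.GetBytes(salt);

            using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
            {
                byte[] hash = kdf.GetBytes(HashSize);
                return string.Format("{0}.{1}.{2}", Iterations,
                    Convert.ToBase64String(salt), Convert.ToBase64String(hash));
            }
        }

        public static bool Verify(string password, string stored)
        {
            string[] parts = stored.Split('.');
            int iterations = int.Parse(parts[0]);
            byte[] salt = Convert.FromBase64String(parts[1]);
            byte[] expected = Convert.FromBase64String(parts[2]);

            using (var kdf = new Rfc2898DeriveBytes(password, salt, iterations))
            {
                byte[] actual = kdf.GetBytes(expected.Length);
                // Not a constant-time comparison; acceptable for a sketch only.
                return actual.SequenceEqual(expected);
            }
        }
    }
    ```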

    Read the article

  • Identifying which pattern fits better.

    - by Daniel Grillo
    I'm developing a software application to program a device. I have some commands like Reset, Read_Version, Read_memory, Write_memory, Erase_memory. Reset and Read_Version are fixed. They don't need parameters. Read_memory and Erase_memory need the same parameters, which are Length and Address. Write_memory needs Length, Address and Data. For each command, I have the same steps in sequence, something like this: sendCommand, waitForResponse, treatResponse. I'm having difficulty identifying which pattern I should use: Factory, Template Method, Strategy, or another pattern.

    Edit: I'll try to explain better, taking into account the given comments and answers. I've already written this software and now I'm trying to refactor it. I'm trying to use patterns, even if it is not strictly necessary, because I'm taking advantage of this little program to learn about some patterns. Still, I think that one (or more) pattern fits here and could improve my code.

    When I want to read the version of the software on my device, I don't have to assemble the command with parameters. It is fixed. So I have to send it, then wait for the response. If there is a response, treat (or parse) it and return. To read a portion of the memory (maximum of 256 bytes), I have to assemble the command using the parameters Len and Address. So I have to send it, then wait for the response. If there is a response, treat (or parse) it and return. To write a portion of the memory (maximum of 256 bytes), I have to assemble the command using the parameters Len, Address and Data. So I have to send it, then wait for the response. If there is a response, treat (or parse) it and return.

    I think that I could use Template Method because I have almost the same algorithm for all of them. But the problem is that some commands are fixed, while others have 2 or 3 parameters. I think that the parameters should be passed in the constructor of the class, but then each class will have its own constructor instead of the abstract class constructor. Is this a problem for the Template Method? Should I use another pattern?
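    A rough Template Method sketch of the command structure described above might look like the following C#. The type and member names are invented for illustration; note that giving each subclass its own constructor does not break the pattern, because Execute() only relies on the overridden steps.

    ```csharp
    // A rough Template Method sketch; type and member names are invented
    // for illustration and the frame layouts are placeholders.
    abstract class DeviceCommand
    {
        // The invariant algorithm: assemble, send, wait, parse.
        public object Execute()
        {
            byte[] frame = AssembleCommand();      // varies per command
            Send(frame);
            byte[] response = WaitForResponse();
            return TreatResponse(response);        // varies per command
        }

        protected abstract byte[] AssembleCommand();
        protected virtual object TreatResponse(byte[] response) { return response; }

        private void Send(byte[] frame) { /* write the frame to the device */ }
        private byte[] WaitForResponse() { /* block until a reply or timeout */ return new byte[0]; }
    }

    // Fixed command: no parameters, so no constructor arguments.
    class ReadVersionCommand : DeviceCommand
    {
        protected override byte[] AssembleCommand()
        {
            return new byte[] { 0x01 };            // hypothetical opcode
        }
    }

    // Parameterised command: the extra data arrives through its own constructor.
    class WriteMemoryCommand : DeviceCommand
    {
        private readonly int address, length;
        private readonly byte[] data;

        public WriteMemoryCommand(int address, int length, byte[] data)
        {
            this.address = address;
            this.length = length;
            this.data = data;
        }

        protected override byte[] AssembleCommand()
        {
            // Frame layout is device-specific; this is only a placeholder.
            var frame = new byte[3 + data.Length];
            frame[0] = 0x03;                       // hypothetical opcode
            frame[1] = (byte)(address & 0xFF);
            frame[2] = (byte)length;
            data.CopyTo(frame, 3);
            return frame;
        }
    }
    ```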

    Read the article

  • Internet of Things Becoming Reality

    - by kristin.jellison
    The Internet of Things is not just on the radar—it’s becoming a reality. A globally connected continuum of devices and objects will unleash untold possibilities for businesses and the people they touch. But the “things” are only a small part of a much larger, integrated architecture. A great example of this comes from the healthcare industry. Imagine an expectant mother who needs to watch her blood pressure. She lives in a mountain village 100 miles away from medical attention. Luckily, she can use a small “wearable” device to monitor her status and wirelessly transmit the information to a healthcare hub in her village. Now, say the healthcare hub identifies that the expectant mother’s blood pressure is dangerously high. It sends a real-time alert to the patient’s wearable device, advising her to contact her doctor. It also pushes an alert with the patient’s historical data to the doctor’s tablet PC. He inserts a smart security card into the tablet to verify his identity. This ensures that only the right people have access to the patient’s data. Then, comparing the new data with the patient’s medical history, the doctor decides she needs urgent medical attention. GPS tracking devices on ambulances in the field identify and dispatch the closest one available. An alert also goes to the closest hospital with the necessary facilities. It sends real-time information on her condition directly from the ambulance. So when she arrives, they already have a treatment plan in place to ensure she gets the right care. The Internet of Things makes a huge difference for the patient. She receives personalized and responsive healthcare. But this technology also helps the businesses involved. The healthcare provider achieves a competitive advantage in its services. The hospital benefits from cost savings through more accurate treatment and better application of services. All of this, in turn, translates into savings on insurance claims. This is an ideal scenario for the Internet of Things—when all the devices integrate easily and when the relevant organizations have all the right systems in place. But in reality, that can be difficult to achieve. Core design principles are required to make the whole system work. Open standards allow these systems to talk to each other. Integrated security protects personal, financial, commercial and regulatory information. A reliable and highly available systems infrastructure is necessary to keep these systems running 24/7. If this system were just made up of separate components, it would be prohibitively complex and expensive for almost any organization. The solution is integration, and Oracle is leading the way. We’re developing converged solutions, not just from device to datacenter, but across devices, utilizing the Java platform, and through data acquisition and management, integration, analytics, security and decision-making. The Internet of Things (IoT) requires the predictable action and interaction of a potentially endless number of components. It’s in that convergence that the true value of the Internet of Things emerges. Partners who take the comprehensive view and choose to engage with the Internet of Things as a fully integrated platform stand to gain the most from the Internet of Things’ many opportunities. To discover what else Oracle is doing to connect the world, read about Oracle’s Internet of Things Platform. Learn how you can get involved as a partner by checking out the Oracle Java Knowledge Zone. Best regards, David Hicks

    Read the article

  • How to make a queue switch from FIFO mode to priority mode?

    - by enzom83
    I would like to implement a queue capable of operating both in the FIFO mode and in the priority mode. This is a message queue, and the priority is first of all based on the message type: for example, if the messages of A type have higher priority than the messages of the B type, as a consequence all messages of A type are dequeued first, and finally the messages of B type are dequeued. Priority mode: my idea consists of using multiple queues, one for each type of message; in this way, I can manage a priority based on the message type: just take first the messages from the queue at a higher priority and progressively from lower priority queues. FIFO mode: how to handle FIFO mode using multiple queues? In other words, the user does not see multiple queues, but it uses the queue as if it were a single queue, so that the messages leave the queue in the order they arrive when the priority mode is disabled. In order to achieve this second goal I have thought to use a further queue to manage the order of arrival of the types of messages: let me explain better with the following code snippet. int NUMBER_OF_MESSAGE_TYPES = 4; int CAPACITY = 50; Queue[] internalQueues = new Queue[NUMBER_OF_MESSAGE_TYPES]; Queue<int> queueIndexes = new Queue<int>(CAPACITY); void Enqueue(object message) { int index = ... // the destination queue (ie its index) is chosen according to the type of message. internalQueues[index].Enqueue(message); queueIndexes.Enqueue(index); } object Dequeue() { if (fifo_mode_enabled) { // What is the next type that has been enqueued? int index = queueIndexes.Dequeue(); return internalQueues[index].Dequeue(); } if (priority_mode_enabled) { for(int i=0; i < NUMBER_OF_MESSAGE_TYPES; i++) { int currentQueueIndex = i; if (!internalQueues[currentQueueIndex].IsEmpty()) { object result = internalQueues[currentQueueIndex].Dequeue(); // The following statement is fundamental to a subsequent switching // from priority mode to FIFO mode: the messages that have not been // dequeued (since they had lower priority) remain in the order in // which they were queued. queueIndexes.RemoveFirstOccurrence(currentQueueIndex); return result; } } } } What do you think about this idea? Are there better or more simple implementations?
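    For reference, here is a compilable variant of the snippet above as a self-contained C# class. The non-standard RemoveFirstOccurrence call is emulated with a LinkedList<int>, and the message-type index is passed to Enqueue directly; both are simplifying assumptions on top of the original idea.

    ```csharp
    using System.Collections.Generic;

    // A compilable sketch of the idea above. The non-standard RemoveFirstOccurrence
    // call is emulated with a LinkedList<int>, and the message-type index is passed
    // to Enqueue directly - both are simplifying assumptions, not part of the post.
    class DualModeQueue
    {
        private readonly Queue<object>[] internalQueues;
        private readonly LinkedList<int> arrivalOrder = new LinkedList<int>();

        public bool FifoMode { get; set; }

        public DualModeQueue(int messageTypes)
        {
            internalQueues = new Queue<object>[messageTypes];
            for (int i = 0; i < messageTypes; i++)
                internalQueues[i] = new Queue<object>();
        }

        public void Enqueue(object message, int typeIndex)
        {
            internalQueues[typeIndex].Enqueue(message);
            arrivalOrder.AddLast(typeIndex);           // remember the arrival order of types
        }

        public object Dequeue()
        {
            if (arrivalOrder.Count == 0) return null;  // nothing queued at all

            if (FifoMode)
            {
                int index = arrivalOrder.First.Value;  // type of the oldest message
                arrivalOrder.RemoveFirst();
                return internalQueues[index].Dequeue();
            }

            // Priority mode: lower index = higher priority.
            for (int i = 0; i < internalQueues.Length; i++)
            {
                if (internalQueues[i].Count > 0)
                {
                    object result = internalQueues[i].Dequeue();
                    arrivalOrder.Remove(i);            // drop the first occurrence of this type
                    return result;
                }
            }
            return null;
        }
    }
    ```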

    Read the article

  • Planning development when academic research is involved

    - by Another Anonymous User
    Dear fellow programmers, how do you do "software planning" when academic research is involved? And, on a side note, how do you convince your boss that writing software is not like building a house and is more like writing a novel? The gory details are below.

    I am in charge of a small dev team working in a research lab. We started developing a piece of software with the purpose of going public one day (i.e. selling it and making money off it). Such software depends on, amongst other things, at least two independent research lines: that is, there are at least two Ph.D. candidates who will, hopefully, one day come out with a working implementation of what we need. The main software also depends on other, more concrete resources that we as developers can take care of: graphics rendering, soft body deformation, etc. My boss asked me to write the specifications, requirements AND a bloody Gantt chart of the entire project. Faced with the fact that I don't have a clue about the research part, and that such research is fundamental for the software, he said "make assumptions." For the clarity of the argument, he is a professor whose Ph.D. students should come up with the research we need. And he comes from a strictly engineering background: plan everything first, write down specifications, and only then write the code, which "is the last part".

    What I am doing now:
    - I broke down the product into features; each 'feature' is, de facto, a separate product.
    - Each feature is built on top of the previous one.
    - Once a feature (A) has a working prototype, the team can start working on the next feature (B), while QA is being done for A (if money allows, more people can be brought in, etc.).
    - Features that depend on research will come last: by then, hopefully, the research part will be completed (when that will be is still a big question).

    Also, I set the team to use SCRUM for the development of 'version 1.0', due in a few months. This deadline could be set based on reasonable assumptions: we listed all required features, we counted our availability, and we gave a reasonable estimate.

    So my questions, again, are:
    - How do I make my boss happy while at the same time getting something out the door?
    - How do I write specifications for something that we - the developers - have no clue whether it is even possible to do? (We still haven't decided which libraries to use for some tasks; we'll do so when we need to.)
    - How do I get the requirements for that, given that there are as yet no clients nor investors, just lots of interest and promises?
    - How do I get peace in the world?

    I am sure at least one of my questions will be answered :)

    ps: I am writing this anonymously since this might backfire with a potential investor if it is discovered. Hope you'll understand. However, I must say I do not like this mentality of 'hiding the truth': this program will likely benefit many, and not being able to talk openly about this (with my name and my reputation attached) feels like censorship. But alas, I care more about your suggestions now.

    Read the article

  • Moving abroad - Relocation advice

    - by Tim Koekkoek
    Oracle offers graduates from different European countries the opportunity to start their career abroad. Some already have experience with living abroad, as they have done an exchange semester or internship in another country; for others it is the first time they will move abroad. Rui started in October 2011 as a Business Development Consultant in Dublin and moved from Portugal to Dublin, Ireland to start his career. For those planning to leave their home country and who desire to work abroad, he will share some tips and tricks in this article. When you're faced with an opportunity like this, there are lots of things that will come to your mind. Sometimes it can be very exciting, or even stressful.

    1. First of all, try to relax. If you are certain you are moving abroad, all you need to do is some research about the country where you're going to live: get to know its culture (gastronomy, important dates and events, its economy, and effective ways to keep in touch with your family and friends - such as mobile companies and Internet services), and start to work out which are the best locations (with good access) where you could/should live. Don't forget that initially you can be limited by transport, and therefore it is important to explore the ideal places for you. During this time, Oracle provides everything you'll need (papers, documents, etc.) to cross borders.

    2. When you arrive, you understand that you are in a new country, in a new place, where all things (or most) are unknown to you. Before you panic, try to see it as a new challenge from which new opportunities will come. Sometimes it's not easy, I know, but the very best a new place has to give you is the opportunity to understand a new culture, get to know other people and other ways of working, and grow both as a person and professionally. So you have nothing to lose in this kind of experience.

    3. When you arrive at Oracle, there's a fantastic team that will help you with settling in: HR, Payroll, Relocation, IT. In my case, Oracle helped me with the relocation; they supported me in arranging everything, such as helping out with all the paperwork and finding a new apartment. As you can see, they will do their best to help you be successful!

    4. Engage with your new co-workers. Going to a place where you don't know anyone can be tough sometimes, but see it as an opportunity to meet people from all over the world and share experiences. Embrace it.

    5. Plan ahead: try to get the most information possible and use it. Oracle is a multinational enterprise that will allow you to get to know a new labour market and give you the flexibility you need to understand your view of employment and occupation, giving you the very best opportunities to join different teams and working areas, so that you can work where you fit best.

    Good luck! If you're thinking about starting a career abroad, read the following article: http://www.overseasdigest.com/movingtips.htm - it can be very useful to you. Interested in starting your career at Oracle like Rui has? Please have a look at https://campus.oracle.com for all of our latest vacancies.

    Read the article

  • Problem Solving vs. Solution Finding

    - by ryanabr
    By and large, most developers fall into one of these two camps. I will try to explain what I mean by way of example. A manager gives the developer a task that is communicated like this: "Figure out why control A is not loading on this form". Now, right there it could be argued that the manager should probably have given better direction and said something more like: "Control A is not loading on the Form, fix it". They might sound like the same thing to most people, but the first statement will have the developer problem solving the reason why it is failing. The second statement should have the developer looking for the solution to make it work, not focusing on why it is broken. In the end, they might be the same thing, but I usually see the first approach take way longer than the second approach.

    The Problem Solver: The problem solver's approach to fixing something that is broken is likely to take the error or behavior that is being observed and start to research it using a tool like Google, or any other search engine. 7/10 times this will yield results for the most common of issues. The challenge is in the other 30% of issues that will take the problem solver down the rabbit hole and cause them not to surface for days on end while every avenue is explored for the cause of the problem. In the end, they will probably find the cause of the issue and resolve it, but the cost can be days, or weeks, of work.

    The Solution Finder: The solution finder's approach to a problem will begin the same way the problem solver's approach will. The difference comes in the more difficult cases. Rather than stick to the pure "This has to work so I am going to work with it until it does" approach, the solution finder will look for other ways to get the requirements satisfied that may or may not use the original approach. For example, there are two areas of an application with externally equivalent features, meaning that from a user's perspective the behavior is the same. So, say that for whatever reason, area A is now not working, but area B is working. The problem solver will dig in to see why area A is broken, whereas the solution finder will investigate what the difference is between the two areas and solve the problem by potentially working around it. The other notable difference between the two types of developers described is what point they reach before they re-emerge from their task. The problem solver will likely emerge with a triumphant "I have found the problem", whereas the solution finder will emerge with the more useful "I have the solution".

    Conclusion: At the end of the day, users are what drive features in software development. Without users there is no need for software. In today's world of software development, with so many tools to use and generally tight schedules, I believe that a workaround to a problem that takes 8 hours is a more fruitful approach than the purer solution to the problem that takes 40 hours.

    Read the article

  • HERMES Medical Solutions Helps Save Lives with MySQL

    - by Bertrand Matthelié
    HERMES Medical Solutions was established in 1976 in Stockholm, Sweden, and is a leading innovator in medical imaging hardware/software products for health care facilities worldwide. HERMES delivers a plethora of different medical imaging solutions to optimize hospital workflow. HERMES' advanced algorithms make it possible to detect the smallest changes under therapy, which is important and necessary to optimize different therapeutic methods and doses.

    Challenges: Fighting illness and disease requires state-of-the-art imaging modalities and software in order to diagnose accurately, stage disease appropriately and select the best treatment available. Selecting and implementing a new database platform that would deliver the performance, reliability, security and flexibility required by the high-end medical solutions offered by HERMES.

    Solution: Decision to migrate from an in-house database to an embedded SQL database powering the HERMES products, delivered either as software, integrated hardware and software solutions, or via the cloud in a software-as-a-service configuration. Evaluation of several databases and selection of MySQL based on its high performance, ease of use and integration, and low Total Cost of Ownership. On average, between 4 and 12 Terabytes of data are stored in the MySQL databases underpinning the HERMES solutions. The data generated by each medical study is indeed stored for 10 years or more after the treatment was performed. MySQL-based HERMES systems also allow doctors worldwide to conduct new drug research projects leveraging the large amount of medical data collected. Hospitals and other HERMES customers worldwide highly value the "zero administration" capabilities and reliability of MySQL, enabling them to perform medical analysis without any downtime. Relying on MySQL as their embedded database, the HERMES team has been able to increase their focus on further developing their clinical applications. HERMES Medical Solutions could also leverage the Oracle Financing payment plan to spread its investment over time and make the MySQL choice even more valuable.

    "MySQL has proven to be an excellent database choice for us. We offer high-end medical solutions, and MySQL delivers the reliability, security and performance such solutions require." Jan Bertling, CEO.

    Read the article

  • Representing Mauritius in the 2013 Bench Games

    Only by chance did I come across an interesting option for professionals and enthusiasts in IT, and quite honestly I can't even remember where Brainbench and their 2013 Bench Games event first caught my attention. But having access to 600+ free exams in a friendly international intellectual competition isn't something that happens every day. So, it was actually a no-brainer to sign up and browse through the various categories. Most interestingly, Brainbench is not only IT-related. They offer a vast variety of fields in their Test Center, like Languages and Communication, Office Skills, Management, Aptitude, etc., although it can be a little bit messy how things are organised. Anyway, while browsing through their test offers I added a couple of exams to 'My Plan' which I would give a shot afterwards.

    Self-assessments

    Actually, I took the tests based on two major aspects: 'Fun Factor' and 'How good would I be in general'... Usually you have to pay for any kind of exam, so being given this unique chance by Brainbench simply to practise this kind of test was already worth the time. Frankly speaking, the tests are very close to the ones you would be asked to do at Prometric or Pearson Vue, i.e. Microsoft exams, etc.: go through a set of multiple choice questions in a given time frame. Most of the tests I did during the Bench Games were based on 40 questions, each with a maximum of 3 minutes to answer. Ergo, one test in at most 2 hours - that sounds feasible, doesn't it?

    The Measure of Achievement

    While the 2013 Bench Games are considered a worldwide friendly competition of knowledge, I was really eager to attract other Mauritians. Using various social media networks and community activities, it all looked quite good at the beginning. Mauritius was listed at rank #19 of Most Certified Citizens and rank #10 of Most Master Level Certified Nation - not bad, not bad... Until... the next update of the Bench Games Leaderboard. The downward trend seemed to be unstoppable and I couldn't understand why my results didn't show up on the Individual Leader Board. First of all, I passed exams that were not even listed and second, I had better results on some exams that were listed. After some further information from the organiser it turned out that my test transcript wasn't available to the public; only then are results considered and counted in the competition. During that time, I actually managed to hold 3 test results on the Individual Leader Board... Other participants were merciless, eh, more successful than me, and produced better test results than I did. But still I managed to stay on the final score board: an 'exotic' combination of exam, test result, country and person, representing Mauritius and the Visual FoxPro community in that fun event. And although I have mainly developed in Visual FoxPro 9.0 SP2 and C# using .NET Framework 2.0 to 4.5 for a couple of years now, I still managed to pass at Master Level. Hm, actually my Microsoft Certified Professional (MCP) exams date back to June 2004 - more than 9 years ago...

    Look who got lucky...

    As described above, I did a couple of exams as time allowed and without any preparation, but still I received the following mail notification: "Thank you for recently participating in our Bench Games event. We wanted to inform you that you obtained a top score on our test(s) during this event, and as a result, will receive a free annual Brainbench subscription. Your annual subscription will give you access to all our tests just like Bench Games, but for an entire year plus additional benefits!" -- Leader Board Notification from Brainbench

    Even fun activities get rewarded sometimes. Thanks to @Brainbench_com for the free annual subscription based on my passed 2013 Bench Games Master Level exam. It would be interesting to know the total figures, especially to see how many citizens of Mauritius took part in this year's Bench Games. Anyway, I'm looking forward to being able to participate in other challenges like this in the future.

    Read the article

  • Documenting sp_ssiscatalog

    - by jamiet
    What is the best way to document an API? Moreover, what is the best way to document a T-SQL API? Before I try to answer those questions I should explain what I mean by "a T-SQL API". I think of an API as being a collection of well-defined, known code modules that provide some notion of a service to whoever uses it; in T-SQL terms I tend to think of a collection of stored procedures and functions as a form of API. It's a loose definition, I admit, and in SQL Server circles we don't tend to think of stored procedures collectively as an API, but if you think about it that's exactly what they are.

    The question of how to document a T-SQL API came to my mind as I worked on sp_ssiscatalog. How could I make it easy for people to learn about the capabilities of sp_ssiscatalog without forcing them to dig through the code and find out for themselves? My opening gambit was to write documentation pages on the wiki at http://ssisreportingpack.codeplex.com. That's kinda useful but it does suffer the disadvantage that someone using sp_ssiscatalog needs to go visit a webpage to read it – I want the documentation to be available wherever the user is using sp_ssiscatalog. Moreover, maintaining the wiki is a real PITA. Intellisense works up to a point, I guess: but that only shows whatever SQL Server knows about the various parameters, which isn't all that much! I wanted a better way for my API users to learn about its capabilities, and so I hit upon the idea of simply using PRINT statements within the code itself to inform the user what options are available; hence I added such PRINT statements in the latest check-in. Now when you execute (for example): EXEC sp_ssiscatalog @operation_type='execs' you can hit F6 a few times to view the messages pane and you shall see something like this: Notice that I'm returning information about all the parameters that can be used to affect the results that just got returned.

    I really do think this will be very useful to anyone using sp_ssiscatalog; I myself am always forgetting what the parameters are, and I wrote the damn thing, so I can't really expect anyone else to remember them. I have not yet made available a release that has these changes in it, but when I do I'll blog about it right here. At the time of writing the latest available release of sp_ssiscatalog is DB v1.0.1.0, but if you want the latest and greatest simply download it straight from source. Feedback is welcome as always. @Jamiet

    Read the article

  • Inherit one instance variable from the global scope

    - by Julian
    I'm using Curses to create a command line GUI with Ruby. Everything's going well, but I have hit a slight snag. I don't think Curses knowledge (esoteric, to be fair) is required to answer this question, just Ruby concepts such as objects and inheritance. I'm going to explain my problem now, but if I'm banging on, just look at the example below.

    Basically, every Window instance needs to have .close called on it in order to close it. Some Window instances have other Windows associated with them. When closing a Window instance, I want to be able to close all of the other Window instances associated with it at the same time. Because associated Windows are generated in a logical fashion (I append the name with a number: instance_variable_set(self + integer, Window.new(10,10,10,10)) ), it's easy to target generated windows, because methods can anticipate what associated windows will be called (I can recreate the instance variable name from scratch, and almost query it: instance_variable_get(self + integer) ). I have a delete method that handles this.

    If the delete method is just a normal, global method (called like this: delete_window(@win543) ) then everything works perfectly. However, if the delete method is an instance method, which it needs to be in order to use the self keyword, it doesn't work for a very clear reason: it can 'query' the correct instance variable perfectly well (instance_variable_get(self + integer)), however, because it's an instance method, the global instances aren't scoped to it! Now, one way around this would obviously be to simply make a global method like this: delete_window(@win543). But I have attributes associated with my window instances, and it all works very elegantly. This is very simplified, but it literally translates the problem exactly:

    class Dog
      def speak
        woof
      end
    end

    def woof
      if @dog_generic == nil
        puts "@dog_generic isn't scoped when .woof is called from a class method!\n"
      else
        puts "@dog_generic is scoped when .woof is called from the global scope. See:\n" + @dog_generic
      end
    end

    @dog_generic = "Woof!"
    lassie = Dog.new
    lassie.speak #=> @dog_generic isn't scoped when .woof is called from an instance method!\n
    woof         #=> @dog_generic is scoped when .woof is called from the global scope. See:\nWoof!

    TL/DR: I need lassie.speak to return this string: "@dog_generic is scoped when .woof is called from the global scope. See:\nWoof!" @dog_generic must remain as an instance variable. The use of Globals or Constants is not acceptable. Could woof inherit from the global scope? Maybe some sort of keyword:

    def woof < global
      # This 'code' is just to conceptualise what I want to do, don't take offence!
    end

    Is there some way the .woof method could 'pull in' @dog_generic from the global scope? Will @dog_generic have to be passed in as a parameter?

    Read the article

  • Draw images with warped triangles on a web server [migrated]

    - by epologee
    The scenario The Flash front end of my current project produces images that a web server needs to combine into a video. Both frame-rate and frame-resolution are sizeable enough that sending an image sequence to the back end is not feasible (in both time and client bandwidth). Instead, we're trying to recreate the image drawing on the back end as well. Correct and slow, or incorrect and fast The problem is that this involves quite a bit of drawing textured triangles, and two solutions we found in Python (here and there) are so inefficient, that the drawing takes about 60 seconds per frame, resulting in a whopping 7,5 hours of processing time for a 30 second clip. Unacceptable. When using a PHP-module to send commands to ImageMagick for image manipulation, the whole process is super fast (tenths of a second per frame), but ImageMagick seems to be unable to draw triangles the way we do it in the front end, so the final results do not match. Unacceptable. What I'm asking here, is if there's someone who would know a way to solve this issue, by any means necessary that would run on a web server. Warping an image Let me explain the process of the front end: Perform a Delaunay calculation on points in an image to get an evenly distributed mesh of triangles. Offset the points/vertices in the mesh, distorting or warping the image. Draw the warped triangles on a new bitmap. We can send the results (coordinates) of steps 1 and 2 to the back end, to then draw the warped triangles and save it to an image on disk (or append as a frame to the video). But that last step is what I need help with. The Question Is there an alternative to ImageMagick that can draw triangles in a bitmap? Is there some other library, like a C library, that would allow us to do this? Or could we achieve this effect more easily by switching back end technologies, like Ruby? (.Net and Java are, unfortunately, not really options right now) Many thanks. EP. P.S. I'd appreciate re-tagging efforts, I don't quite know what labels to put on this question. Thanks!

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn't take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and oftentimes it is really important that a DBA understands not just the basics of how to perform a task, but why we do a task and how that task works.

    As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if the light fails to shine forth? Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car and take them to the local home store and instruct them to buy a replacement. Confronted with a 40 foot display of light bulbs, how will they decide which of the hundreds of types of bulbs, of different types, fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight so they don't have to ask for help the next time.

    Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own "40 foot display of light bulbs", in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA to help us find issues. Unfortunately, many resort to guesswork, trying different "bulbs" over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way.

    If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that "works", but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the "key lookup" operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of the SQL language and the intricacies of every step the SQL Server engine takes to process our query.

    My personal theory is that, as a professional, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or obsessive, or both. As an example: tasked with storing an integer value between 0 and 99999999, it's essential that I know that choosing an Integer over Decimal(8,0) will likely offer performance benefits. It is then useful that I also understand the value of adding a CHECK constraint, to make sure the values are valid for the desired range; and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!

    Read the article

  • How to specialize a C++ variadic template?

    - by Serge
    I'm trying to understand c++ variadic templates. I'm not much aware of the correct language to use to explain what I'd like to achieve, so it might be simpler if I provide a bit of code which illustrate what I'd like to achieve. #include <iostream> #include <string> using namespace std; template<int ...A> class TestTemplate1 { public: string getString() { return "Normal"; } }; template<int T, int ...A> string TestTemplate1<2, A...>::getString() { return "Specialized"; } template<typename ...A> class TestTemplate2 { }; int main() { TestTemplate1<1, 2, 3, 4> t1_1; TestTemplate1<1, 2> t1_2; TestTemplate1<> t1_3; TestTemplate1<2> t1_4; TestTemplate2<> t2_1; TestTemplate2<int, double> t2_2; cout << t1_1.getString() << endl; cout << t1_2.getString() << endl; cout << t1_3.getString() << endl; cout << t1_4.getString() << endl; } This throws several errors. error C2333: 'TestTemplate1<::getString' : error in function declaration; skipping function body error C2333: 'TestTemplate1<1,2,3,4::getString' : error in function declaration; skipping function body error C2333: 'TestTemplate1<1,2::getString' : error in function declaration; skipping function body error C2333: 'TestTemplate1<2::getString' : error in function declaration; skipping function body error C2977: 'TestTemplate1' : too many template arguments error C2995: 'std::string TestTemplate1::getString(void)' : function template has already been defined error C3860: template argument list following class template name must list parameters in the order used in template parameter list How can I have a specialized behavior for every TestTemplate1<2, ...> instances like t1_4?

    Read the article

  • JTF Translation Festival 2011

    - by user13133135
    It's been a while since the last post... I have been working on machine translation (MT) and post editing (PE) for Japanese. Last year was my first step in the MT+PE area, and I would take this year as an advanced step. I plan to talk about Post Editing 2011 (Advanced Step) on November 29 at the JTF Translation Festival.

    [5 days before the application is due] 21st JTF Translation Festival
    - Date: Nov 29, 2011, Tuesday, 9:30-20:30 (gates open at 9:00)
    - Place: Arcadia Ichigaya, Tokyo
    - http://www.jtf.jp/jp/festival/festival_top.html

    In this session, I would like to expand the thought on "how to best utilize MT and PE", both from the view of the Client and of the Translator. I will show some examples of post editing as a guideline to what the best and most effective way to post-edit Japanese is. Also, I will discuss what the best practice is for MT users (Clients). The session lasts 90 minutes... sounds a little long to me, but I want to spend more time on discussion than last year. It would be great to exchange thoughts or experiences about MT and PE. What are your concerns or problems in your daily work with MT? If you have some, please bring them to my session at the JTF Translation Festival. Here are my session details (in Japanese): http://www.jtf.jp/jp/festival/festival_program.html#koen_04

    Here is the outline of my session: What is the advantage of MT? Does it solve all the problems of cost, resources and quality? Well, it is not magic, so you cannot expect everything at once. When you have a problem, there are 3 options... 1. Be patient and wait until everything is ready; 2. Run a workaround using anything available now; 3. Find something completely new and spend time and money. This time, I will focus on Option 2 - do something with what we already have. That is, I will discuss how we can best utilize MT in our daily business, from two points of view: the Client's and the Translator's. Looking forward to meeting many people and exchanging thoughts and information!

    Read the article

  • So Much Happening at Devoxx

    - by Tori Wieldt
    Devoxx, the premier Java conference in Europe, has been sold out for a while. The organizers (thanks Stephan and crew!) cap the attendance to make sure all attendees have a great experience, and that speaks volumes about their priorities. The speakers, hackathons, labs, and networking are all first class. The Oracle Technology Network will be there, and if you were smart/lucky enough to get a ticket, come find us and join the fun: IoT Hack Fest Build fun and creative Internet of Things (IoT) applications with Java Embedded, Raspberry Pi and Leap Motion on the University Days (Monday and Tuesday). Learn from top experts Yara & Vinicius Senger and Geert Bevin at two Raspberry Pi & Leap Motion hands-on labs and hacking sessions. Bring your computer. Training and equipment will be provided. Devoxx will also host an Internet of Things shop in the exhibition floor where attendees can purchase Arduino, Raspberry PI and Robot starter kits. Bring your IoT wish list! Video Interviews Yolande Poirier and I will be interviewing members of the Java Community in the back of the Expo hall on Wednesday and Thursday. Videos are posted on Parleys and YouTube/Java. We have a few slots left, so contact me (you can DM @Java) if you want to share your insights or cool new tip or trick with the rest of the developer community. (No commercials, no fluff. Keep it techie and keep it real.)  Oracle Keynote Wednesday morning Mark Reinhold, Chief Java Platform Architect, and Brian Goetz, Java Language Architect will provide an update on Java 8 and beyond. Oracle Booth Drop by the Oracle booth to see old and new friends.  We'll have Java in Action demos and the experts to explain them and answer your questions. We are raffling off Raspberry Pi's each day, so be sure to get your badged scanned. We'll have beer in the booth each evening. Look for @Java in her lab coat.  See you at Devoxx! 

    Read the article

  • Cannot get ATI Drivers installed

    - by bittoast67
    I am trying to install the Catalyst driver. The best I can get is a strange resolution problem, and Firefox acts all wonky. The worst I have gotten is low graphics mode, in which case I just reinstall Ubuntu. I have an HP Pavilion Dv7 laptop with a Radeon HD 3200. I plan to try again with a fresh install of Ubuntu 12.04.3, as I have heard it's the most compatible. This is what I have done:

    1. I have tried just the easy way of going to Synaptic and installing the drivers that way - the fglrx package (not the fglrx update). And if memory serves, I think that boots me into low graphics mode. So, fresh install of Ubuntu and tried again.

    2. I have done everything a couple of times from this site (http://wiki.cchtml.com/index.php/Ubuntu_Precise_Installation_Guide), following every instruction to a T. That gets me something, such as a lowered fan speed and a much cooler computer, but I also lose most of my resolution, and Displays says it's the best resolution I can get. I also have a very screwy Firefox. Using this method I can see AMD Catalyst Control Center in my dash (two of them really, one administrator and one not), but when I try to open it, it says no AMD driver detected. So again, Ubuntu reinstall.

    3. I have tried the GUI method with the Legacy driver I got from AMD's site. It runs through smoothly, and at the very end, after I exit the installer, it gives me an error.

    4. I have also tried various other methods using the terminal, as well as various different drivers (the one from AMD's site and the one suggested in the above link for my graphics card), both to no avail.

    When I try the method in the link in number 2, I get the super low res and screwy Firefox. I type in fglrxinfo and get a BadRequest error. I have yet to type in fglrxinfo and get anything like what I am supposed to.

    UPDATE: I am now currently reinstalling Ubuntu 12.04. I tried the above-mentioned link - thank you very much! - just to check the previously failed driver attempt by following the purge commands. And to no avail: when typing fglrxinfo I still get the BadRequest thing. I will update again after a try with a true fresh install. Thanks again!!

    UPDATE: Alright everyone. Still no go. I have done everything word for word in the provided tutorial. I have rebooted my computer again to a fucked up resolution, and this is what I get when typing fglrxinfo:

    $ fglrxinfo
    X Error of failed request: BadRequest (invalid request code or no such operation)
    Major opcode of failed request: 153 (GLX)
    Minor opcode of failed request: 19 (X_GLXQueryServerString)
    Serial number of failed request: 12
    Current serial number in output stream: 12

    I would like to add that when installing this file: fglrx_8.970-0ubuntu1_amd64.deb I got this:

    Building initial module for 3.8.0-29-generic
    Error! Bad return status for module build on kernel: 3.8.0-29-generic (x86_64)
    Consult /var/lib/dkms/fglrx/8.970/build/make.log for more information.
    update-initramfs: deferring update (trigger activated)
    Processing triggers for ureadahead ...
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    Processing triggers for initramfs-tools ...
    update-initramfs: Generating /boot/initrd.img-3.8.0-29-generic
    Processing triggers for libc-bin ...
    ldconfig deferred processing now taking place

    Any ideas? Anyone? I can't for the life of me figure out what I am doing wrong.

    Read the article

  • Migrating VB6 to HTML5 is not fiction - Customer success story

    - by Webgui
    All of you VB developers, present or past, would probably find it hard to believe that old VB code can be migrated and modernized into the latest .NET-based HTML5 without having to rewrite the application. But we have been working on such tools for the past couple of years and already have several real-world applications that were fully 'transposed' from VB6. The solution is called Instant CloudMove and its main tool is called the TranspositionStudio.

    It is a unique solution that relies on the concept of transposition. Transposition comes from mathematics and music and refers to exchanging elements while everything else remains the same, or moving an element as-is from one environment to another. This means that we are taking the source code and putting it in a modern technological environment with relatively few adjustments. The concept is based on a set of Mapping Expressions, which are basically links between an element in the source environment and one in the target environment that has the same functionality. About 95% of the code is usually mapped out-of-the-box, and the rest is handled with easy-to-use mapping tools designed for Visual Studio developers, providing them with a familiar environment and concepts for completing the mapping and allowing them to extend and customize existing mapping expressions. The solution is also based on a circular workflow that enables developers to make any changes as required until the result is satisfactory. As opposed to existing migration solutions that offer automation but are usually a "black box" to the user, the transposition concept enables full visibility, flexibility and control over the code and process at all times, also allowing functionality to be added or changed, or the UI upgraded, within the process and tools.

    This is exactly the case with our customer's aging VB6 PMS (Property Management System), which needed a technological update as well as a design refresh. The decision was to move the VB6 application, which had about 1 million lines of code, to the latest web technology. Since the application was initially written 13 years ago and has had many upgrades since, the code was bound to be patchy and include unused sections. As a result, the company, Mihshuv Group, considered rewriting the entire application in Java since it already had the knowledge. A rewrite would allow starting with a clean slate and designing the functionality, database architecture and UI without any constraints. On the other hand, a rewrite entails long and detailed specification work as well as thorough QA, and this translates into a long project with high risk and costs.

    So the company looked for a migration solution as an alternative; the research led to Gizmox, and after examining the technology it was decided to perform a hybrid project: an automatic transposition of the core of the VB6 application (200,000 lines of code), while the UI redesign, the new functionality, the deletion of unused code and the rewrite of about 140 Crystal Reports reports would be done manually using Visual WebGui development tools. The migration part of the project was completed in 65 days by 3 developers from Mihshuv Group guided by Gizmox migration experts, while the rewrite and UI upgrade tasks took about the same. So in a period of only a few months, Mihshuv Group produced an up-to-date product, written in the latest web technology with a modern, friendly UI and improved functionality.

    Guest selection screen of the original VB6 PMS | Guest selection screen on the new web-based PMS

    Compared to the initial plan to rewrite the entire application in Java, the hybrid migration/rewrite approach taken by Mihshuv Group using Gizmox technology proved a great decision. In terms of time and cost there were substantial savings; a project that was priced at at least a year (without taking into account the huge risk and uncertainty) became a project of only a few months. More about this and other customer stories can be found here.

    Read the article

  • How can I cleanly and elegantly handle data and dependencies between classes

    - by Neophyte
    I'm working on 2d topdown game in SFML 2, and need to find an elegant way in which everything will work and fit together. Allow me to explain. I have a number of classes that inherit from an abstract base that provides a draw method and an update method to all the classes. In the game loop, I call update and then draw on each class, I imagine this is a pretty common approach. I have classes for tiles, collisions, the player and a resource manager that contains all the tiles/images/textures. Due to the way input works in SFML I decided to have each class handle input (if required) in its update call. Up until now I have been passing in dependencies as needed, for example, in the player class when a movement key is pressed, I call a method on the collision class to check if the position the player wants to move to will be a collision, and only move the player if there is no collision. This works fine for the most part, but I believe it can be done better, I'm just not sure how. I now have more complex things I need to implement, eg: a player is able to walk up to an object on the ground, press a key to pick it up/loot it and it will then show up in inventory. This means that a few things need to happen: Check if the player is in range of a lootable item on keypress, else do not proceed. Find the item. Update the sprite texture on the item from its default texture to a "looted" texture. Update the collision for the item: it might have changed shape or been removed completely. Inventory needs to be updated with the added item. How do I make everything communicate? With my current system I will end up with my classes going out of scope, and method calls to each other all over the place. I could tie up all the classes in one big manager and give each one a reference to the parent manager class, but this seems only slightly better. Any help/advice would be greatly appreciated! If anything is unclear, I'm happy to expand on things.
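    One common way to avoid classes holding references to each other is a small publish/subscribe event bus: the player publishes an "item looted" event, and the inventory, tile and collision systems each subscribe to it independently. A minimal sketch is below, written in C# purely for illustration (the project above uses SFML 2); the type and event names are invented, not taken from the question.

    ```csharp
    using System;
    using System.Collections.Generic;

    // Minimal publish/subscribe sketch; the type and event names are invented.
    static class EventBus
    {
        private static readonly Dictionary<Type, List<Delegate>> handlers =
            new Dictionary<Type, List<Delegate>>();

        public static void Subscribe<T>(Action<T> handler)
        {
            List<Delegate> list;
            if (!handlers.TryGetValue(typeof(T), out list))
                handlers[typeof(T)] = list = new List<Delegate>();
            list.Add(handler);
        }

        public static void Publish<T>(T message)
        {
            List<Delegate> list;
            if (!handlers.TryGetValue(typeof(T), out list)) return;
            foreach (Action<T> handler in list) handler(message);
        }
    }

    // Raised by the player when a loot key press finds an item in range.
    class ItemLooted { public int ItemId; }

    // Each system reacts independently, with no direct references between them:
    //   EventBus.Subscribe<ItemLooted>(e => inventory.Add(e.ItemId));       // inventory system
    //   EventBus.Subscribe<ItemLooted>(e => collisionMap.Remove(e.ItemId)); // collision system
    //   EventBus.Publish(new ItemLooted { ItemId = id });                   // player, on keypress
    ```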

    Read the article

  • Access to Salesforce.com Data Through Tableau Desktop

    - by dataintegration
    This article will explain how to connect to any of the RSSBus OData Connectors with Tableau's business intelligence tool. While the example uses the Salesforce Connector, the same process can be followed for any of the OData Connectors. Step 1: Download and install both the Salesforce Connector from RSSBus and Tableau Desktop from Tableau. Step 2: Next you will want to configure the Salesforce Connector to connect with your Salesforce.com account. If you browse to the Help tab in the Salesforce Connector application, there is a link to the Getting Started Guide which will walk you through setting up the Salesforce Connector. Step 3: Once you have successfully configured the Salesforce Connector application, you will want to open Tableau and select the Connect to data option at the top left of the window. Step 4: Here you will click on the option labeled OData under the section labeled On a server. Step 5: A new pop up will appear. The box under Step 1 of the pop-up must contain the OData URL of the Salesforce Connector table. You can find this by clicking on the Settings tab of the Salesforce Connector. Once you have found the OData entry URL, you will need to append the table name that you want Tableau to connect with to the OData entry URL. In this example, we will connect to the Account table. Thus, the URL we enter will be: http://localhost:8181/sfconnector/data/conn/odata.rsc/Account. You will also need to add authentication options in this step. To do this, select the Use a Username and Password option in Step 2 of the pop-up and enter the Username and Password of the user who has access to the Salesforce Connector. When you are done, click the Connect button in Step 3 of the pop-up. Step 6: When the connection to the Salesforce Connector is successful, give the connection a name and click the OK button. Step 7: The table columns will be listed on the left side under the Dimensions section of the workspace. Step 8: To view your Salesforce.com data, you can right click under the table name in the Data section at the top left of the dashboard and select the View Data option. Your Saleforce.com data will appear in Tableau.

    Read the article

  • How do I create weapon attachments?

    - by Tron86
    My question is: I am developing a game for XNA and I am trying to create a weapon attachment for my player model. My player model loads the .md3 format and reads tags for attachment points. I am able to get the tag of my model's hand, and I am also able to get the tag of my weapon's handle. For each tag I can get the rotation and position, and this is how I am calculating the player's world matrix:

        Model.worldMatrix = Matrix.CreateScale(Model.scale) *
                            Matrix.CreateRotationX(-MathHelper.PiOver2) *
                            Matrix.CreateRotationY(MathHelper.PiOver2);

    Pretty simple: the player model has a scale and its orientation (it loads on its side, so I just use a 90 degree X axis rotation, and a Y axis rotation to face away from the camera). I then calculate the torso tag on the lower body, which gives me a local coordinate at the waist. Then I take that matrix and calculate the tag_weapon in the upper body. This gives me the hand position in local space. I also get the rotation matrix from that tag, which I store for later use. All this seems to work fine.

    Now I move on to my weapon:

        Matrix weaponWorld = Matrix.CreateScale(CurrentWeapon.scale) *
                             Matrix.CreateRotationX(-MathHelper.PiOver2) *
                             TagRotationMatrix *
                             Matrix.CreateTranslation(HandTag.Position) *
                             Matrix.CreateRotationY(PlayerRotation) *
                             Matrix.CreateTranslation(CollisionBody.Position) *

    You may notice the weapon matrix gets rotated by 90 degrees on the X axis as well; this is because weapons also load in on their sides. Once again this seems pretty simple and follows the SRT order I keep reading about. My TagRotationMatrix is the hand's rotation, HandTag.Position is its position in local space, CreateRotationY(PlayerRotation) is the player's rotation in world space, and CollisionBody.Position is the player's world location.

    Everything seems to be in order and almost works in game. However, when the gun spawns and follows the player's hand, it seems to be flipped on an axis every couple of frames, almost as if the X or Y axis is being inverted and then put right back. It's hard to explain and I am totally stumped. Even removing all my X axis fixes does nothing to solve the problem. Hopefully I explained everything well enough, as I am a bit new to this! Thanks!

    Read the article

  • Organization & Architecture UNISA Studies – Chap 13

    - by MarkPearl
    Learning Outcomes
    - Explain the advantages of using a large number of registers
    - Discuss the way in which compilers optimize register usage
    - Discuss the evolution of CISC machines
    - Describe the characteristics of RISC architecture
    - Discuss the RISC vs. CISC controversy
    - Describe the way in which RISC and CISC design principles can be combined

    Instruction Execution Characteristics
    To understand the line of reasoning of RISC advocates, we need a brief overview of instruction execution characteristics. These include:
    - Operations
    - Operands
    - Procedure calls
    These three sections can be studied in depth in the textbook at pages 503 - 505.

    A number of groups have come to the conclusion that the attempt to make the instruction set architecture closer to HLLs (High Level Languages) is not the most effective design strategy. Rather, HLLs can best be supported by optimizing the performance of the most time-consuming features of typical HLL programs. Generally, three main characteristics emerged for improving performance:
    - Use a large number of registers, or use a compiler to optimize register usage
    - Pay careful attention to the design of instruction pipelines
    - Use a simplified (reduced) instruction set

    The use of a large register file
    One of the most important design principles of RISC machines is the use of a large number of registers. The concept of register windows and the use of a large register file versus the use of cache memory are discussed. On the face of it, the use of a large set of registers should decrease the need to access memory. The design task is to organize the registers in such a fashion that this goal is realized. Read pages 507 - 510 for a detailed explanation.

    Compiler-based register optimization

    Reduced Instruction Set Architecture
    There are two advantages to smaller programs:
    - Because the program takes up less memory, there is a saving in that resource (this was more compelling when memory was more expensive).
    - Smaller programs should improve performance, and this happens in two ways: fewer instructions means fewer instruction bytes to be fetched, and in a paging environment smaller programs occupy fewer pages, reducing page faults.

    Certain characteristics are common to RISC processors:
    - One instruction per cycle
    - Register-to-register operations
    - Simple addressing modes
    - Simple instruction formats

    RISC vs. CISC
    After the initial enthusiasm for RISC machines, there has been a growing realization that:
    - RISC designs may benefit from the inclusion of some CISC features
    - CISC designs may benefit from the inclusion of some RISC features

    Read the article

  • Hierarchies....from the Top Down

    - by Joe G
    I've been struggling with how to write on the topic of the importance of hierarchy design. It's not so much that hierarchies haven't always been important; it's more that with Fusion, the hierarchies should be designed earlier and given a higher priority. I will attempt to explain.

    When I was implementing applications, back in the day, we had the list of detailed account values to enter, along with the obvious parent accounts. Then, after the setup was complete and things were functioning, the reporting phase started. Users explained the elements that they wanted on the reports, what totals should be included, and how things should be compared. Frequently, there was at least one calculation that became a nightmare, either because it was based on very specific things that didn't relate to anything else or because it was "hardcoded" so that when something changed, someone needed to "fix" the report.

    With Fusion, the process changes slightly. You still want to enter all of the detailed accounts, but before you start adding parent values, you should investigate the reporting requirements from the top down. It's better to build hierarchies based on the reporting requirements than it is to build reports based on random hierarchies. Build reports based on hierarchies that resemble the reports themselves, and you can maintain the hierarchies without reworking the reports.

    For example, if you look at an income statement, you may have line items for Material Costs, Employee Costs, Travel & Entertainment, and Total Operating Expenses. In your hierarchy, you have detail values that roll up to Material Costs, Employee Costs, and Travel & Entertainment, which in turn roll up to Total Operating Expenses. Balances are stored automatically in the cube for each of these. When you define the report, you pick each of these members - no calculations required. If a new detail value is added, you simply add it to the hierarchy, and there is no need to modify the report.

    I realize that there are always exceptions that require special handling, but I am confident that you will end up with far fewer exceptions if you make reporting a priority and design your hierarchies from the top down.

    Read the article

  • LibGDX - Textures rendering at wrong position

    - by ACluelessGuy
    Update 2: Let me further explain my problem, since I don't think I made it clear enough. The y coordinate at the bottom of my screen should be 0; instead it is the height of my screen. That means the "higher" I touch/click on the screen, the smaller my y coordinate gets. On top of that, the origin (at least the y = 0 coordinate) is not inside my screen.

    Original post: I'm currently developing a tower defence game for fun using LibGDX. There are places on my map where the player is or is not allowed to put towers, so I created different ArrayLists holding rectangles, each representing a tile on my map (towerPositions):

        for(int i = 0; i < map.getLayers().getCount(); i++) {
            curLay = (TiledMapTileLayer) map.getLayers().get(i);
            // For all cells of the current layer
            for(int k = 0; k < curLay.getWidth(); k++) {
                for(int j = 0; j < curLay.getHeight(); j++) {
                    curCell = curLay.getCell(k, j);
                    // If there is an actual cell
                    if(curCell != null) {
                        tileWidth = curLay.getTileWidth();
                        tileHeight = curLay.getTileHeight();
                        xTileKoord = tileWidth*k;
                        yTileKoord = tileHeight*j;
                        switch(curLay.getName()) {
                            // If the layer named "TowersAllowed" is picked
                            case "TowersAllowed":
                                towerPositions.add(new Rectangle(xTileKoord, yTileKoord, tileWidth, tileHeight));
                                // ... AND SO ON

    If the player later clicks on an "allowed" field, he has the opportunity to build a tower of his choice via a menu. Now here is the problem: the towers render, but they render at the wrong position. (They appear at seemingly random places on the map, with no pattern I can see.)

        for(Rectangle curRect : towerPositions) {
            if(curRect.contains(xCoord, yCoord)) {
                // Using a certain tower in this example (left the menu out)
                if(gameControl.createTower("towerXY")) {
                    // RenderObject is just a class holding the Texture and x/y coordinates
                    renderList.add(new RenderObject(new Texture(Gdx.files.internal("TowerXY.png")), curRect.x, curRect.y));
                }
            }
        }

    Later on I render it:

        game.batch.begin();
        for(int i = 0; i < renderList.size(); i++) {
            game.batch.draw(renderList.get(i).myTexture, renderList.get(i).x, renderList.get(i).y);
        }
        game.batch.end();

    regards
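
    The symptom described in Update 2 (y growing downwards, origin off-screen) usually means raw screen/touch coordinates are being compared against world coordinates: LibGDX input coordinates have their origin at the top-left with y pointing down, while the camera/world uses a bottom-left origin. A common remedy is to unproject the touch point through the camera before hit-testing the rectangles. The snippet below is only a sketch of that idea; the camera, class name and call site are assumptions, not taken from the original code.

        // Hypothetical touch handling, assuming an OrthographicCamera `camera`
        // that is also used for game.batch (batch.setProjectionMatrix(camera.combined)).
        import com.badlogic.gdx.InputAdapter;
        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.math.Vector3;

        public class TowerPlacementInput extends InputAdapter {
            private final OrthographicCamera camera;

            public TowerPlacementInput(OrthographicCamera camera) {
                this.camera = camera;
            }

            @Override
            public boolean touchDown(int screenX, int screenY, int pointer, int button) {
                // Convert window coordinates (origin top-left, y down) into
                // world coordinates (origin bottom-left, y up).
                Vector3 world = camera.unproject(new Vector3(screenX, screenY, 0));

                float xCoord = world.x;   // use these instead of raw screenX/screenY
                float yCoord = world.y;   // when testing curRect.contains(xCoord, yCoord)

                System.out.println("world coords: " + xCoord + ", " + yCoord);
                return true;
            }
        }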

    Read the article

  • Insert Or update (aka Replace or Upsert)

    - by Davide Mauri
    The topic is really not new, but since it's the second time in a few days that I've had to explain it to different customers, I think it's worth making a post out of it. Many times developers would like to insert a new row in a table or, if the row already exists, update it with new data. MySQL has a specific statement for this action, called REPLACE: http://dev.mysql.com/doc/refman/5.0/en/replace.html or the INSERT ... ON DUPLICATE KEY UPDATE option: http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html

    With SQL Server you can do the very same thing in a more standard way, using the MERGE statement with the support of row constructors. Let's say that you have this table:

        CREATE TABLE dbo.MyTargetTable
        (
            id INT NOT NULL PRIMARY KEY IDENTITY,
            alternate_key VARCHAR(50) UNIQUE,
            col_1 INT,
            col_2 INT,
            col_3 INT,
            col_4 INT,
            col_5 INT
        )
        GO

        INSERT [dbo].[MyTargetTable] VALUES
        ('GUQNH', 10, 100, 1000, 10000, 100000),
        ('UJAHL', 20, 200, 2000, 20000, 200000),
        ('YKXVW', 30, 300, 3000, 30000, 300000),
        ('SXMOJ', 40, 400, 4000, 40000, 400000),
        ('JTPGM', 50, 500, 5000, 50000, 500000),
        ('ZITKS', 60, 600, 6000, 60000, 600000),
        ('GGEYD', 70, 700, 7000, 70000, 700000),
        ('UFXMS', 80, 800, 8000, 80000, 800000),
        ('BNGGP', 90, 900, 9000, 90000, 900000),
        ('AMUKO', 100, 1000, 10000, 100000, 1000000)
        GO

    If you want to insert or update a row, you can just do this:

        MERGE INTO
            dbo.MyTargetTable T
        USING
            (SELECT * FROM (VALUES ('ZITKS', 61, 601, 6001, 60001, 600001))
             Dummy(alternate_key, col_1, col_2, col_3, col_4, col_5)) S
        ON
            T.alternate_key = S.alternate_key
        WHEN
            NOT MATCHED THEN
            INSERT VALUES (alternate_key, col_1, col_2, col_3, col_4, col_5)
        WHEN
            MATCHED AND T.col_1 != S.col_1 THEN
            UPDATE SET
                T.col_1 = S.col_1,
                T.col_2 = S.col_2,
                T.col_3 = S.col_3,
                T.col_4 = S.col_4,
                T.col_5 = S.col_5
        ;

    If you want to insert/update more than one row at once, you can supercharge the idea using table-valued parameters, which you can send straight from your .NET application. Easy, powerful and effective. A sketch of how an application might drive this upsert is shown below.
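
    As a rough illustration of the application side, here is a hedged sketch (assuming the table above, a plain JDBC connection, and the Microsoft SQL Server JDBC driver on the classpath; the connection string and class name are placeholders) that runs the same kind of MERGE with bound parameters for a single row. For true multi-row upserts, the table-valued-parameter route mentioned above, backed by a user-defined table type, is the better fit.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class UpsertExample {
            // The MERGE mirrors the T-SQL above, with the source row supplied as parameters.
            private static final String UPSERT_SQL =
                "MERGE INTO dbo.MyTargetTable AS T " +
                "USING (VALUES (?, ?, ?, ?, ?, ?)) AS S (alternate_key, col_1, col_2, col_3, col_4, col_5) " +
                "ON T.alternate_key = S.alternate_key " +
                "WHEN NOT MATCHED THEN INSERT VALUES (alternate_key, col_1, col_2, col_3, col_4, col_5) " +
                "WHEN MATCHED THEN UPDATE SET T.col_1 = S.col_1, T.col_2 = S.col_2, " +
                "    T.col_3 = S.col_3, T.col_4 = S.col_4, T.col_5 = S.col_5;";

            public static void main(String[] args) throws Exception {
                // Placeholder connection string; adjust server, database and credentials.
                String url = "jdbc:sqlserver://localhost;databaseName=TestDb;user=sa;password=...";

                try (Connection conn = DriverManager.getConnection(url);
                     PreparedStatement ps = conn.prepareStatement(UPSERT_SQL)) {
                    ps.setString(1, "ZITKS");
                    ps.setInt(2, 61);
                    ps.setInt(3, 601);
                    ps.setInt(4, 6001);
                    ps.setInt(5, 60001);
                    ps.setInt(6, 600001);

                    int affected = ps.executeUpdate();  // one row inserted or updated
                    System.out.println("Rows affected: " + affected);
                }
            }
        }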

    Read the article
