Search Results

Search found 30360 results on 1215 pages for 'end of life'.


  • How do I resolve "No JSON object could be decoded" on mythbuntu live CD?

    - by Neil
    I have been running a MythTV frontend on my laptop for some time against a MythTV backend installed in Linux Mint 12 on another computer, and everything works fine. Now I'm trying out the Mythbuntu Live CD (12.04.1 32-bit) on the laptop, to turn it into a dedicated frontend. It connects to the network just fine, and I can see my server. When I click on the frontend icon on the desktop, it asks me for the security code, which I've verified against mythtv-setup on the server. However, when I test that code, it shows the error message "No JSON object could be decoded". I've looked in the control center to see if there's something else I should be setting up. The message implies to me that it can't find the server, but there is no place in the control center to tell it where my MythTV backend lives, which I find a little odd. Does the live CD not work against a backend server on another machine?

    Read the article

  • How to write good code with new stuff?

    - by Reza M.
    I always try to write easily readable code that is well structured. I face a particular problem when I am messing around with something new: I keep changing the code, the structure, and so many other things. In the end, I look at the code and am annoyed at how complicated it became when I was trying to do something so simple. Once I've completed something, I refactor it heavily so that it's cleaner. Most of the time this occurs after completion, and it is annoying because the bigger the codebase, the more annoying it is to rewrite. I am curious to know how people deal with such agony, especially on big projects shared between many people?

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”, and whether ITDD wasn't a violation of TDD's principle of leaving out “advanced topics like mocks”. I'd like to respond to his questions with this article. There's more to say than fits into a comment.

    Mocks and TDD: I don't see how TDD avoids or opposes mocks. TDD and mocks are orthogonal: TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for “expensive” resources. Rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced”, or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for “To Roman Numerals”: gustav asked for the kata “To Roman Numerals”. I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse. A demonstration of TDD should never skip the analysis phase; it should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin once the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations: TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article. They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that: the domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve. The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code, you had better have a plan for what to code. This does not mean you have to do “Big Design Up-Front”. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution. Here's my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic: each “digit” can be taken at face value; no multiplication with a base is required. But what about IV, which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult, because here some “digits” get subtracted. Here's the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}. Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-step process: find all the factors, translate the factors found, and compile the roman representation. Translation is just a look-up. Finding, though, needs some calculation: find the highest remaining factor fitting in the value, remember and subtract it from the value, and repeat with the remaining value and the remaining factors. Please note: this is just an algorithm, not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: the translation, the compilation, and the finding of the factors. Testing the translation primarily means to check if the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement. First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with “TDD lore”, I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white-box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible – just how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and for doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency: some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor – which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor: interestingly, not just one test becomes green now, but all of them. Great! You might say, then I must have done not the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple – even simpler than a recursion, of which I had thought briefly during the problem-solving phase. And by the way: the acceptance tests also went green. Mission accomplished – at least functionality-wise. Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in a top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required – but maybe in a different way than you would expect. That's why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me; they only have temporary value; they are brittle. Only acceptance tests need to remain. However, I carry over the single-“digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%.

    Reflexion. Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution – the leaves of the functional decomposition tree. Where things became fuzzy because the design did not cover any more details – as with Find_factors() – I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: encapsulation of the implementation details was not compromised – private methods could naturally stay private, and I did not need to make them internal or public just to be able to test them; and I was able to write focused tests for small aspects of the solution, with no need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution – manifest the solution in code. Writing tests first is a good practice, but it should not be taken dogmatically, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
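    Since the article's code appears only as screenshots in the original post and doesn't survive this text-only format, here is a compact sketch of the conceptual solution described above, transcribed to Java (the original is C#; the names are mine, and the greedy find-subtract-repeat loop folds the find/translate/compile phases into a single method):

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Sketch of the conceptual solution: roman "digits" (including IV, IX, ...)
        // ordered from highest to lowest value, plus the greedy factor-finding loop.
        public class ArabicRomanConversions {
            private static final Map<Integer, String> FACTORS = new LinkedHashMap<>();
            static {
                int[]    values = {1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1};
                String[] digits = {"M", "CM", "D", "CD", "C", "XC", "L", "XL",
                                   "X", "IX", "V", "IV", "I"};
                for (int i = 0; i < values.length; i++) FACTORS.put(values[i], digits[i]);
            }

            public static String toRoman(int arabic) {
                StringBuilder roman = new StringBuilder();
                for (Map.Entry<Integer, String> factor : FACTORS.entrySet()) {
                    // Find the highest remaining factor fitting in the value,
                    // subtract it, repeat; the same factor may fit twice (2000 = MM).
                    while (arabic >= factor.getKey()) {
                        roman.append(factor.getValue()); // translation is just a look-up
                        arabic -= factor.getKey();
                    }
                }
                return roman.toString();
            }

            public static void main(String[] args) {
                System.out.println(toRoman(1954)); // MCMLIV, the example from the text
                System.out.println(toRoman(1400)); // MCD, non-consecutive factors
                System.out.println(toRoman(2000)); // MM, same factor twice
            }
        }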

    Read the article

  • What's Your Supply Chain+Manufacturing Strategy for Success

    - by [email protected]
    Forward-thinking enterprises look to eliminate their dependence on legacy applications that manage information in batch, replacing them with real-time, integrated, modern information management. With rapid manufacturing, global supply chains that are much more complex today, and the pace of change ever increasing, leading organizations need better ways to orchestrate supply chain synchronization with their partner and customer base. EM magazine's Mar/Apr '10 edition covers this topic in the article "Strategising for Success" (pgs. 26-27) and discusses the options available to organizations as they drive improvements in the levels of collaboration with their partners, suppliers, shippers, distributors and, ultimately, their end users: the customer! I'll post the link to the article here as soon as I validate/confirm it.

    Read the article

  • Simple iOS glDrawElements - BAD_ACCESS

    - by user699215
    You can copy-paste this into the default OpenGL template created in Xcode. Why am I not seeing anything? :-) It is strange, as glDrawArrays(GL_TRIANGLES, 0, 3); works fine, but glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices); gives BAD_ACCESS. Copy-paste this into the Xcode default OpenGL template: ViewController #import "ViewController.h" #define BUFFER_OFFSET(i) ((char *)NULL + (i)) // Uniform index. enum { UNIFORM_MODELVIEWPROJECTION_MATRIX, UNIFORM_NORMAL_MATRIX, NUM_UNIFORMS }; GLint uniforms[NUM_UNIFORMS]; // Attribute index. enum { ATTRIB_VERTEX, ATTRIB_NORMAL, NUM_ATTRIBUTES }; @interface ViewController () { GLKMatrix4 _modelViewProjectionMatrix; GLKMatrix3 _normalMatrix; float _rotation; GLuint _vertexArray; GLuint _vertexBuffer; NSArray* arrayOfVertex; } @property (strong, nonatomic) EAGLContext *context; @property (strong, nonatomic) GLKBaseEffect *effect; - (void)setupGL; - (void)tearDownGL; @end @implementation ViewController - (void)viewDidLoad { [super viewDidLoad]; self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]; GLKView *view = (GLKView *)self.view; view.context = self.context; view.drawableDepthFormat = GLKViewDrawableDepthFormat24; [self setupGL]; } - (void)dealloc { [self tearDownGL]; if ([EAGLContext currentContext] == self.context) { [EAGLContext setCurrentContext:nil]; } } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; if ([self isViewLoaded] && ([[self view] window] == nil)) { self.view = nil; [self tearDownGL]; if ([EAGLContext currentContext] == self.context) { [EAGLContext setCurrentContext:nil]; } self.context = nil; } // Dispose of any resources that can be recreated. } GLuint vertexBufferID; GLuint indexBufferID; static const GLfloat vertices[9] = { -0.5, -0.5, 0.5, 0.5, -0.5, 0.5, -0.5, 0.5, 0.5 }; static const GLubyte indices[3] = { 0, 1, 2 }; - (void)setupGL { [EAGLContext setCurrentContext:self.context]; // [self loadShaders]; self.effect = [[GLKBaseEffect alloc] init]; self.effect.light0.enabled = GL_TRUE; self.effect.light0.diffuseColor = GLKVector4Make(1.0f, 0.4f, 0.4f, 1.0f); glEnable(GL_DEPTH_TEST); // glGenVertexArraysOES(1, &_vertexArray); // glBindVertexArrayOES(_vertexArray); glGenBuffers(1, &vertexBufferID); glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID); glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); glGenBuffers(1, &indexBufferID); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); glEnableVertexAttribArray(GLKVertexAttribPosition); glVertexAttribPointer(GLKVertexAttribPosition, // Specifies the index of the generic vertex attribute to be modified. 3, // Specifies the number of components per generic vertex attribute. Must be 1, 2, 3, 4. 
GL_FLOAT, // GL_FALSE, // 0, // BUFFER_OFFSET(0)); // // glBindVertexArrayOES(0); } - (void)tearDownGL { [EAGLContext setCurrentContext:self.context]; glDeleteBuffers(1, &_vertexBuffer); glDeleteVertexArraysOES(1, &_vertexArray); self.effect = nil; } #pragma mark - GLKView and GLKViewController delegate methods - (void)update { float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height); GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f); self.effect.transform.projectionMatrix = projectionMatrix; GLKMatrix4 baseModelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f); baseModelViewMatrix = GLKMatrix4Rotate(baseModelViewMatrix, _rotation, 0.0f, 1.0f, 0.0f); // Compute the model view matrix for the object rendered with GLKit GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); self.effect.transform.modelviewMatrix = modelViewMatrix; // Compute the model view matrix for the object rendered with ES2 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 1.5f); modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, _rotation, 1.0f, 1.0f, 1.0f); modelViewMatrix = GLKMatrix4Multiply(baseModelViewMatrix, modelViewMatrix); _normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL); _modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix); _rotation += self.timeSinceLastUpdate * 0.5f; } int i; - (void)glkView:(GLKView *)view drawInRect:(CGRect)rect { glClearColor(0.65f, 0.65f, 0.65f, 1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // glBindVertexArrayOES(_vertexArray); // Render the object with GLKit [self.effect prepareToDraw]; //glDrawArrays(GL_TRIANGLES, 0, 3); // Render the object again with ES2 // glDrawArrays(GL_TRIANGLES, 0, 3); glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte), GL_UNSIGNED_BYTE, indices); } @end
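    For readers hitting the same crash: once a GL_ELEMENT_ARRAY_BUFFER is bound (as setupGL does here), the last argument of glDrawElements is interpreted as a byte offset into that buffer, not as a client-side pointer, so passing the indices array makes the driver read from a bogus address. A minimal sketch of the two calling conventions, written against the Java/LWJGL 3 bindings since the rule is the same in every binding (names and setup are illustrative, and an active GL context is assumed):

        import java.nio.ByteBuffer;

        import static org.lwjgl.opengl.GL11.GL_TRIANGLE_STRIP;
        import static org.lwjgl.opengl.GL11.GL_UNSIGNED_BYTE;
        import static org.lwjgl.opengl.GL11.glDrawElements;
        import static org.lwjgl.opengl.GL15.GL_ELEMENT_ARRAY_BUFFER;
        import static org.lwjgl.opengl.GL15.glBindBuffer;

        // Illustrative fragment (assumes LWJGL 3 bindings and a GL context
        // created elsewhere); only the two calling conventions matter here.
        public final class DrawElementsConventions {

            // Convention 1: NO index buffer bound -- the last argument really is
            // index data in application memory.
            static void drawWithClientIndices(ByteBuffer indices) {
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
                glDrawElements(GL_TRIANGLE_STRIP, GL_UNSIGNED_BYTE, indices);
            }

            // Convention 2: index buffer bound -- the last argument is a BYTE
            // OFFSET into that buffer. Passing a client pointer here (as the
            // question's code does) dereferences a bogus address: BAD_ACCESS.
            static void drawWithBoundIndexBuffer(int indexBufferID, int indexCount) {
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
                glDrawElements(GL_TRIANGLE_STRIP, indexCount, GL_UNSIGNED_BYTE, 0L);
            }
        }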

    Read the article

  • Why do business analysts and project managers get higher salaries than programmers? [closed]

    - by jpartogi
    We have to admit that programming is much more difficult than creating documentation or even creating Gantt charts and asking programmers for progress. So, for those of us who are naive, knowing that programming is generally more difficult, why do business analysts and project managers get higher salaries than programmers? What is it that makes their jobs high-paying when, most of the time, programmers are the ones who go home late? UPDATE Excuse my ignorance; from some of the responses it seems that the reason BAs and PMs get higher salaries is that they are the ones usually held responsible for the mess programmers make. But at the end of the day, it is the programmers who get their hands dirty to fix the mess and work harder. So it still does not make sense.

    Read the article

  • How should programmers handle email-username identity theft?

    - by Craige
    Background I recently signed up for an iTunes account and found that somebody had fraudulently used MY email to register their iTunes account. Why Apple did not validate the email address, I will never know. Now I am told that I cannot use my email address to register a new iTunes account, as this email address is linked to an existing account. This got me thinking... Question How should we as developers handle email/identity theft? Obviously, we should verify that an email address belongs to the person it is said to belong to. Why Apple did not do this in my case, I have no idea. But let's pretend we use email addresses for login/account identification, and something slipped through the cracks (be it on our end or the user's). How should we handle reports of fraudulent accounts?
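    The standard defense against exactly this scenario is a verification round-trip: the account stays dormant until the address owner clicks a single-use, expiring token mailed to them. A minimal sketch of the idea (all names are hypothetical, tokens live in memory, and the mail sender is a stub; a real system would persist tokens and use an actual SMTP/API client):

        import java.security.SecureRandom;
        import java.time.Duration;
        import java.time.Instant;
        import java.util.Base64;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Sketch of an email-verification round-trip; names are hypothetical.
        public class EmailVerifier {
            private static final SecureRandom RNG = new SecureRandom();
            private final Map<String, Pending> pending = new ConcurrentHashMap<>();

            private record Pending(String email, Instant expires) {}

            public void beginVerification(String email) {
                byte[] raw = new byte[32];
                RNG.nextBytes(raw); // unguessable single-use token
                String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
                pending.put(token, new Pending(email, Instant.now().plus(Duration.ofHours(24))));
                sendMail(email, "Confirm your account: https://example.com/verify?token=" + token);
            }

            // Called when the link is clicked; only then does the account go live.
            public boolean confirm(String token) {
                Pending p = pending.remove(token); // single use: removed on first attempt
                return p != null && Instant.now().isBefore(p.expires());
            }

            private void sendMail(String to, String body) {
                System.out.println("To " + to + ": " + body); // stub for illustration
            }
        }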

    Read the article

  • "sudo cd ..." one-liner?

    - by j-g-faustus
    Occasionally I want to cd into a directory where my user does not have permission, so I resort to sudo. The obvious command sudo cd somedir doesn't work: $ sudo mkdir test $ sudo chmod go-rxw test $ ls -l drwx------ 2 root root [...snip...] test $ cd test -bash: cd: test: Permission denied $ sudo cd test sudo: cd: command not found Using sudo su works: $ sudo su # cd test Is it possible to make this into a one-liner? (Not a big deal, just idle curiosity :) The variations I tried didn't work: $ sudo "cd test" sudo: cd: command not found $ sudo -i cd test -bash: line 0: cd: test: No such file or directory $ sudo -s cd test The last one doesn't give an error, but it cd's within a new shell that exits by the end of the line, so it doesn't actually take me anywhere. Can someone enlighten me as to why this happens? Why is sudo cd not found, when for example sudo ls ... works fine?

    Read the article

  • What version of Java should I target for applets?

    - by Christopher Horenstein
    I recently deployed an applet that seems to require Java 6 Update 24. I assume the reason for this requirement is the matching JDK version I used to create the applet (I am new to Java). The fact that my applet requires a Java download/update for users who already have some version of Java installed is a big concern for me; the applets I'm creating slip into a web comic, so it's very disruptive. Having used the most recent version of Java, it seems safe to assume that most of my readers will have to update Java to continue reading/playing. Is there a best practice concerning which version of Java to use to make the process of using an applet easy for end-users? Any reading material on this would be very helpful. Should I be using an older version of Java if I don't require new features? I am using Slick for 2D games.
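    As general practice (not from the original thread): compile with javac's -source/-target flags set to the oldest JRE you intend to support, and detect the runtime version so the page can degrade gracefully instead of forcing an update. A small sketch of such a check, assuming the classic "1.x.y_zz" version-string format that applet-era JREs report:

        // Sketch: refuse features the host JRE can't support instead of forcing
        // an update. Assumes the pre-Java 9 "1.x.y_zz" format (e.g. "1.6.0_24").
        public class JavaVersionCheck {

            /** Returns the "minor" digit of a classic version string: "1.6.0_24" -> 6. */
            static int majorOf(String version) {
                String[] parts = version.split("\\.");
                return Integer.parseInt(parts[1]);
            }

            public static void main(String[] args) {
                String version = System.getProperty("java.version");
                if (majorOf(version) < 6) {
                    System.out.println("This content needs Java 6+; found " + version);
                    // fall back to a static image / message instead of the applet
                } else {
                    System.out.println("Java " + version + " is fine; start the applet.");
                }
            }
        }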

    Read the article

  • Programmatically clicking an HTML button in VB.NET [closed]

    - by Chauhdry King
    I have to click an HTML button programmatically; it is on the 3rd page of the website. The button has no id – it has just a name, type, and value. The HTML code of the button is given below:

        <FORM NAME='form1' METHOD='post' action='/dflogin.php'> <INPUT TYPE='hidden' NAME='txtId' value='E712050-15'><INPUT TYPE='hidden' NAME='txtassId' value='1'><INPUT TYPE='hidden' NAME='txtPsw' value='HH29'><INPUT TYPE='hidden' NAME='txtLog' value='0'><h6 align='right'><INPUT TYPE='SUBMIT' NAME='btnSub' value='Next' style='background-color:#009900; color:#fff;'></h6></FORM>

    I am using the following code to click it:

        Dim i As Integer
        Dim allButtons As HtmlElementCollection
        allButtons = WebBrowser1.Document.GetElementsByTagName("input")
        i = 0
        For Each webpageelement As HtmlElement In allButtons
            i += 1
            If i = 5 Then
                webpageelement.InvokeMember("click")
            End If
        Next

    But I am not able to click it. I am using the VB.NET 2008 platform. Can anyone tell me the solution?

    Read the article

  • Error opening .wmv file

    - by Dcm1405
    I tried to open a .wmv file in Thunderbird and was asked to download files in order to play it. At the end of the installation, an "An error occurred - Location not found" error message was displayed, with just the OK button available. Now whenever I press the Play button, only that error message displays, so I am not sure which packages need to be installed or uninstalled. I have installed Ubuntu 12.04 (32-bit) with openjdk-7. The error was generated while installing the GStreamer ffmpeg video plugin; the process seemed to complete all of the download/install steps before the error message displayed.

    Read the article

  • Why Ultra-Low Power Computing Will Change Everything

    - by Tori Wieldt
    The ARM TechCon keynote "Why Ultra-Low Power Computing Will Change Everything" was anything but low-powered. The speaker, Dr. Jonathan Koomey, knows his subject: he is a Consulting Professor at Stanford University, worked for more than two decades at Lawrence Berkeley National Laboratory, and has been a visiting professor at Stanford University, Yale University, and UC Berkeley's Energy and Resources Group. His current focus is creating a standard (computations per kilowatt-hour) and measuring computer energy consumption over time. The trends are impressive: energy consumption has halved every 1.5 years for the last 60 years. Battery life has made roughly a 10x improvement each decade since 1960. It's these improvements that have made laptops and cell phones possible. What does the future hold? Dr. Koomey said that in the past, the race by chip manufacturers was to create the fastest computer, but the priorities have now changed. New computers are tiny, smart, connected, and cheap. "You can't underestimate the importance of a shift in industry focus from raw performance to power efficiency for mobile devices," he said. There is also a confluence of trends in computing, communications, sensors, and controls. The challenge is how to reduce the power requirements for these tiny devices. Alternate sources of power being explored are light, heat, motion, and even blood sugar. The University of Michigan has produced a miniature sensor that harnesses solar energy and could last for years without needing to be replaced. Also, the University of Washington has created a sensor that scavenges power from existing radio and TV signals. Specific devices designed for a purpose are much more efficient than general-purpose computers. With all these sensors, instead of big data, developers should focus on nano-data: personalized information that will adjust the lights in a room, a machine, a variable sign, etc. Dr. Koomey showed some examples: The Proteus Digital Health Feedback System, an ingestible sensor that transmits when a patient has taken their medicine and is powered by their stomach juices. (Gives "powered by you" a whole new meaning!) Streetline Parking Systems, which provide real-time data about available parking spaces; the information can be sent to your phone or update parking signs around the city to point to areas with available spaces. Less driving around looking for parking spaces! The BigBelly trash system, which uses solar power, compacts trash, and sends a text message when it is full; this dramatically reduces the number of times a truck has to come to pick up trash, freeing up resources and slashing fuel costs. This is a classic example of the efficiency of moving "bits not atoms." But researchers are approaching the physical limits of sensors, Dr. Koomey explained. At the current rate of technology improvement, they'll reach the three-atom transistor by 2041. Once they hit that wall, it will force a revolution in the way we do computing. But wait – researchers at Purdue University and the University of New South Wales are both working on reliable one-atom transistors! Other researchers are working on "approximate computing" that will reduce computing requirements drastically. So it's unclear where the wall actually is. In the meantime, as Dr. Koomey promised, ultra-low power computing will change everything.

    Read the article

  • Screencast several application windows at once in Microsoft Windows

    - by Birt
    I have several (20+) applications running on a Microsoft Windows PC. What I would like is a solution that lets me broadcast the window of each application to a webpage, in read-only mode (there's no need for the users to interact with it). This should work even when an application is in the background, seeing that there's no way to fit all of them on the screen. I performed very extensive searching, from simple screencasting apps such as Camtasia, CamStudio, or VHScrCap, to things like VNC (I haven't found any server able to broadcast multiple windows at once, much less background windows), and even application virtualization, but in the end I haven't found anything that fits my needs. Most solutions that allow capturing a window instead of the whole desktop will only capture a single window rather than several, and on top of that they don't even work when the window is in the background.

    Read the article

  • Slick2D: Animation not being parsed from spritesheet correctly

    - by user2066880
    I have a 960x960 spritesheet with each tile being 192x192. I initialized my spritesheet and animation like so: spritesheet = new SpriteSheet("resources/spritesheets/player.png", 192, 192); walkingLeft = new Animation(spritesheet, 3, 0, 0, 1, true, 20, true); When I attempt to render the animation, I get a java.lang.ArrayIndexOutOfBoundsException: -1 error. This error doesn't occur when I create an animation from images in the same row, so I'm assuming the error is caused by the way Slick handles horizontal scanning (moving to the next row after reaching the end of a row).
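    One way to sidestep the scanning behavior entirely is to add frames one by one, which is unambiguous about row changes. A sketch assuming the stock Slick2D SpriteSheet/Animation API (frame coordinates and durations here are illustrative):

        import org.newdawn.slick.Animation;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.SpriteSheet;

        // Sketch: build the animation frame by frame instead of relying on
        // Animation's (x1,y1)-(x2,y2) scanning, which wraps rows in ways
        // that are easy to get wrong.
        public class WalkAnimationFactory {
            public static Animation walkingLeft() throws SlickException {
                SpriteSheet sheet =
                        new SpriteSheet("resources/spritesheets/player.png", 192, 192);
                Animation anim = new Animation();
                // getSprite(x, y) takes CELL coordinates: (3,0) is the 4th tile
                // of the 1st row of the 5x5 grid.
                anim.addFrame(sheet.getSprite(3, 0), 150); // duration in ms per frame
                anim.addFrame(sheet.getSprite(4, 0), 150);
                anim.addFrame(sheet.getSprite(0, 1), 150); // next row, explicitly
                return anim;
            }
        }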

    Read the article

  • The JavaOne 2012 Sunday Technical Keynote

    - by Janice J. Heiss
    At the JavaOne 2012 Sunday Technical Keynote, held at the Masonic Auditorium, Mark Reinhold, Chief Architect, Java Platform Group, stated that they were going to do things a bit differently--"rather than 20 minutes of SE, and 20 minutes of FX, and 20 minutes of EE, we're going to mix it up a little," he said. "For much of it, we're going to be showing a single application, to show off some of the great work that's been done in the last year, and how Java can scale well--from the cloud all the way down to some very small embedded devices, and how JavaFX scales right along with it." Richard Bair and Jasper Potts from the JavaFX team demonstrated a JavaOne schedule builder application with impressive navigation, animation, pop-overs, and transitions. They noted that the application runs seamlessly on either Windows or Macs, running Java 7. They then ran the same application on an Ubuntu Linux machine--"it just works," said Bair. The JavaFX duo next put the recently released JavaFX Scene Builder through its paces -- dragging and dropping various image assets to build the application's UI, then fine-tuning a CSS file for the finished look and feel. Among many other new features, in the past six months, JavaFX has released support for H.264 and HTTP live streaming, "so you can get all the real media playing inside your JavaFX application," said Bair. And in their developer preview builds of JavaFX 8, they've now split the rendering thread from the UI thread, to better take advantage of multi-core architectures. Next, Brian Goetz, Java Language Architect, explored language and library features planned for Java SE 8, including lambda expressions and better parallel libraries. These feature changes both simplify code and free up libraries to more effectively use parallelism. "It's currently still a lot of work to convert an application from serial to parallel," noted Goetz. Reinhold had previously boasted of Java scaling down to "small embedded devices," so Bair and Potts next ran their schedule builder application on a small embedded PandaBoard system with an OMAP4 chip set. Connected to a touch screen, the embedded board ran the same JavaFX application previously seen on the desktop systems, but now running on Java SE Embedded. (The systems can be seen and tried at four of the nearby JavaOne hotels.) Bob Vandette, Java Embedded Architect, then displayed a $25 Raspberry Pi ARM-based system running Java SE Embedded, noting the even greater need for the platform independence of Java in such highly varied embedded processor spaces. Reinhold and Vandette discussed Project Jigsaw, the planned modularization of the Java SE platform, and its deferral from the Java 8 release to Java 9. Reinhold demonstrated the promise of Jigsaw by running a modularized demo version of the earlier schedule builder application on the resource-constrained Raspberry Pi system--although the demo gods were not smiling down, and the application ultimately crashed. Reinhold urged developers to become involved in the Java 8 development process--getting the weekly builds, trying out their current code, and trying out the new features: http://openjdk.java.net/projects/jdk8 http://openjdk.java.net/projects/jdk8/spec http://jdk8.java.net From there, Arun Gupta explored Java EE. The primary themes of Java EE 7, Gupta stated, will be greater productivity and HTML5 functionality (WebSocket, JSON, and HTML5 forms).
    Part of the planned productivity increase of the release will come from a reduction in boilerplate code--through the widespread use of dependency injection in the platform, along with default data sources and default connection factories. Gupta noted the inclusion of JAX-RS in the web profile, the changes and improvements found in JMS 2.0, as well as enhancements to Java EE 7 in terms of JPA 2.1 and EJB 3.2. GlassFish 4 is the reference implementation of Java EE 7, and currently includes WebSocket, JSON, JAX-RS 2.0, JMS 2.0, and more. The final release is targeted for Q2 2013. Looking forward to Java EE 8, Gupta explored how the platform will provide multi-tenancy for applications, modularity based on Jigsaw, and cloud architecture. Meanwhile, Project Avatar is the group's incubator project for designing an end-to-end framework for building HTML5 applications. Santiago Pericas-Geertsen joined Gupta to demonstrate their "Angry Bids" auction/live-bid/chat application using many of the enhancements of Java EE 7, along with an Avatar HTML5 infrastructure, and running on the GlassFish reference implementation. Finally, Gupta covered Project Easel, an advanced tooling capability in NetBeans for HTML5. John Ceccarelli, NetBeans Engineering Director, joined Gupta to demonstrate creating an HTML5 project from within NetBeans--formatting the project for both desktop and smartphone implementations. Ceccarelli noted that the NetBeans 7.3 beta will be released later this week and will include support for creating such HTML5 project types. Gupta directed conference attendees to http://glassfish.org/javaone2012 for everything about Java EE and GlassFish at JavaOne 2012.
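    Goetz's serial-to-parallel point is easy to see in code. Here is a small illustrative sketch (not from the keynote) using the Java SE 8 features discussed--a lambda expression and the streams library, where going parallel is a one-word change:

        import java.util.Arrays;
        import java.util.List;

        // Sketch of the Java SE 8 features discussed: a lambda passed to the
        // new streams library, where going parallel is a one-word change.
        public class LambdaDemo {
            public static void main(String[] args) {
                List<String> speakers = Arrays.asList(
                        "Reinhold", "Bair", "Potts", "Goetz", "Vandette", "Gupta");

                // Serial: filter and count with a lambda, no anonymous class.
                long count = speakers.stream()
                                     .filter(name -> name.length() > 5)
                                     .count();
                System.out.println(count + " long names");

                // Parallel: the same pipeline, spread across cores by the library.
                long parallelCount = speakers.parallelStream()
                                             .filter(name -> name.length() > 5)
                                             .count();
                System.out.println(parallelCount + " long names (parallel)");
            }
        }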

    Read the article

  • Doing a P2V in OVM 3.0.3

    - by Steen Schmidt
    The other day I was talking to a customer about how you can do a P2V in OVM. I had already written about this topic earlier in my blog, and there is also some good documentation on how to do this. But what about seeing the whole process from start to end? So I have included a link to a demo on the topic. Here is a demo that has been divided into three steps: Step 1. Target System, Step 2. Import into OVM, and Step 3. Use the new Template.

    Read the article

  • OpenWorld 2011 Call for Papers: Deadline March 27

    - by antonio romero
    OpenWorld 2011 is now open for the public to submit session proposals. We would like to encourage our customers and partners to participate in this “call for papers” (CFP) process. CFP for the general public (non-Oracle-employee submitters) closes on March 27, 2011. Please share the information provided below with your contacts.

    General Information
    Conference Location: Moscone Convention Center, San Francisco, CA
    Conference Date: Sunday - Thursday, October 2 - 6, 2011
    Conference Website: http://www.oracle.com/us/openworld
    CFP Website: https://oracleus.wingateweb.com/portal/cfp/

    Paper submission key dates:
    Call for Papers Begins: Wednesday, March 9
    Call for Papers Ends: Sunday, March 27 – 11:59 pm PDT
    Notifications for Accepted and Declined Submissions Sent: End of May

    Questions regarding the Call for Papers: send an email to [email protected]

    Read the article

  • Where Facebook Stands Heading Into 2013

    - by Mike Stiles
    In our last blog, we looked at how Twitter is positioned heading into 2013. Now it’s time to take a similar look at Facebook. 2012, for a time at least, seemed to be the era of Facebook-bashing. Between a far-from-smooth IPO, subsequent stock price declines, and anxiety over privacy, the top social network became a target for comedians, politicians, business journalists, and of course those who were prone to Facebook-bash even in the best of times. But amidst the “this is the end of Facebook” headlines, the company kept experimenting, kept testing, kept innovating, and pressing forward, committed as always to the user experience, while concurrently addressing monetization with greater urgency. Facebook enters 2013 with over 1 billion users around the world. Usage grew 41% in Brazil, Russia, Japan, South Korea and India in 2012. In the Middle East and North Africa, an average 21 new signups happen per minute. Engagement and time spent on the site would impress the harshest of critics. Facebook, while not bulletproof, has become such an integrated daily force in users’ lives, it’s getting hard to imagine any future mass rejection. You want to see a company recognizing weaknesses and shoring them up. Mobile was a weakness in 2012 as Facebook was one of many caught by surprise at the speed of user migration to mobile. But new mobile interfaces, better mobile ads, speed upgrades, standalone Messenger and Pages mobile apps, and the big dollar acquisition of Instagram, were a few indicators Facebook won’t play catch-up any more than it has to. As a user, the cool thing about Facebook is, it knows you. The uncool thing about Facebook is, it knows you. The company’s walking a delicate line between the public’s competing desires for customized experiences and privacy. While the company’s working to make privacy options clearer and easier, Facebook’s Paul Adams says data aggregation can move from acting on what a user is engaging with at the moment to a more holistic view of what they’re likely to want at any given time. To help learn about you, there’s Open Graph. Embedded through diverse partnerships, the idea is to surface what you’re doing and what you care about, and help you discover things via your friends’ activities. Facebook’s Director of Engineering, Mike Vernal, says building mobile social apps connected to Facebook in such ways is the next wave of big innovation. Expect to see that fostered in 2013. The Facebook site experience is always evolving. Some users like that about Facebook, others can’t wait to complain about it…on Facebook. The Facebook focal point, the News Feed, is not sacred and is seeing plenty of experimentation with the insertion of modules. From upcoming concerts, events, suggested Pages you might like, to aggregated “most shared” content from social reader apps, plenty could start popping up between those pictures of what your friends had for lunch.  As for which friends’ lunches you see, that’s a function of the mythic EdgeRank…which is also tinkered with. When Facebook changed it in September, Page admins saw reach go down and the high anxiety set in quickly. Engagement, however, held steady. The adjustment was about relevancy over reach. (And oh yeah, reach was something that could be charged for). Facebook wants users to see what they’re most likely to like, based on past usage and interactions. Adding to the “cream must rise to the top” philosophy, they’re now even trying out ordering post comments based on the engagement the comments get. 
    Boy, it’s getting competitive out there for a social engager. Facebook has to make $$$. To do that, they must offer attractive vehicles to marketers. There are a myriad of ad units. But a key Facebook marketing concept is the Sponsored Story. It’s key because it encourages content that’s good, relevant, and performs well organically. If it is, marketing dollars can amplify it and extend its reach. Brands can expect the rollout of a search product and an ad network. That’s a big deal. It takes, as Open Graph does, the power of Facebook’s user data and carries it beyond the Facebook environment into the digital world at large. No one could target like Facebook can, and some analysts think it could double their roughly $5 billion revenue stream. As every potential revenue nook and cranny is explored, there are the users themselves. In addition to Gifts, Facebook thinks users might pay a few bucks to promote their own posts so more of their friends will see them. There’s also word that classifieds could be purchased in News Feeds, though they won’t be called classifieds. And that’s where Facebook stands: a wildly popular destination, a part of our culture, with ever-increasing functionalities, the biggest of big data, revenue strategies that appeal to marketers without souring the user experience, new challenges as a now public company, ongoing privacy concerns, and innovations that carry Facebook far beyond its own borders. Anyone care to write a “this is the end of Facebook” headline? @mikestiles Photo via stock.xchng

    Read the article

  • Including Specific Characters with Google Web Fonts

    - by S.K.
    I'm using the Open Sans web font from Google Web Fonts on my website. I only need the basic latin subset, but I do use the Psi (Ψ) character quite often as well, and I would like to use the Open Sans version of that character without having to include the entire greek subset. I looked at this help page, which shows how to embed only specific characters using the text parameter, but there's no mention of adding individual characters on top of a subset. I tried doing the following to combine both font requests, but it didn't end up working. <link href='http://fonts.googleapis.com/css?family=Open+Sans:400,400italic,700&subset=latin' rel='stylesheet' type='text/css'> <link href='http://fonts.googleapis.com/css?family=Open+Sans:400,400italic,700&text=%CE%A8' rel='stylesheet' type='text/css'> Is there any way to accomplish this?

    Read the article

  • Feedback on AZGroups Feedback Form

    Next Monday is the 7th Annual day of .net with Scott Guthrie. I’ve been doing this event for many years and this year (for the first time) I’ll be asking for a feedback / eval form to be turned in at the end of the day. In true community fashion, I would like to ask your feedback on my feedback form. Please be constructive. I know that filling out forms seems silly on a one-by-one basis, but collectively the stats can really help, and I will personally read and evaluate each form, so...

    Read the article

  • Eight Geektacular Christmas Projects for Your Day Off

    - by Jason Fitzpatrick
    It’s Christmas Eve and if you’re lucky you’ve got some time off ahead of you. Let’s put that time to good use with some holiday-centered geeking out. Come on in for LEGO ornaments, Darth Vader snow flakes, and Christmas light hacks galore.

    Read the article

  • Nginx and Google Appengine Reverse Proxy Security

    - by jmq
    The scenario is that I have a Google compute node running Nginx as a reverse proxy to Google App Engine. App Engine is used to service REST calls from a single-page application (SPA). HTTPS is used from the Internet to the Nginx front end. Do I also need to secure the traffic from the Nginx reverse proxy to App Engine by turning on HTTPS there? I would like to avoid the overhead of HTTPS between the proxy and the backend. My thinking was that once the traffic has arrived at Nginx encrypted, been decrypted in Nginx, and then been sent via the reverse proxy inside Google's infrastructure, it would be secure. Is it safe in this case to not use HTTPS?

    Read the article

  • How to Save Hundreds or Thousands of Dollars on Cell Phone Service

    - by Chris Hoffman
    Cell phone contracts are bad. You get a seemingly cheap phone up front, but you more than pay for the cost of the phone over two years. Prepaid phone plans are surging in North America for a reason. Prepaid phone plans will be cheaper and more flexible than traditional contracts with big carriers for many people. However much you use your phone, there’s a good chance you can save money with a prepaid service. No More Contracts Here’s how cell phone service typically works in North America: You get a subsidized phone for “free”, $99, or $199. You sign up for a two-year contract and more than pay back the cost of that phone over the length of the contract. This is similar to leasing something or purchasing it on a credit card and paying it back over two years — you spend less up front, but you’re paying more in the long run. But this isn’t the only option. You could opt for a cheaper prepaid service that doesn’t lock you into a contract. If you don’t use your phone much, you could just pay for what you use and avoid the hefty cell phone bills. If you use your phone a lot, you could get a cheaper plan, too. Now, this certainly isn’t for everyone. If you want the latest iPhone or Galaxy smartphone every two years and require a 4G data connection, prepaid services may not be for you. On the other hand, if you don’t need the latest phone, you can save money here. You can also save a huge amount of money if you don’t use your phone much. Phone Options When you choose your prepaid or contract-free service, you’ll often be able to purchase a phone from them. You’ll generally be able to find dirt-cheap dumbphones and the cheapest, slowest Android phones for not very much money. If you are able to buy a top-of-the-line smartphone, you’ll have to pay the full, unsubsidized price. That’s $649 for either an iPhone 5S or Samsung Galaxy S4. Whatever phones the service provider offers, you could always buy a phone elsewhere — for example, you could buy an unsubsidized iPhone direct from Apple and then take it to your cell phone service of choice. Most services will allow you to get a SIM card and pop it into your existing phone rather than purchasing a phone. If you can get a hand-me-down smartphone, you can often save quite a bit of money. For example, you may have a family member upgrading from an iPhone 4S to an iPhone 5S. You could take their phone to a prepaid carrier and have a nicer phone on a cheap cell phone plan. If you brought an old smartphone to a big carrier like AT&T or Verizon, they wouldn’t give you a discount on your monthly plan. You’d have to pay the same amount of money every month as if you had gotten a subsidized phone. Google’s Nexus phones are also great options for people looking to buy smartphones and pay up-front. Google’s Nexus 4 offered a modern, almost top-of-the-line Android smartphone experience at $299 or $349 when it came out last year. Google will soon be releasing the Nexus 5 and it’s expected to be priced at $349. That’s certainly a lot more than a cheap phone, but it’s a fairly high-end smartphone at almost half the price of an iPhone 5S or Galaxy S4. Nexus phones can be purchased online from Google’s Play Store. Service Options When choosing a service, you need to consider what you actually use. If you’re someone who only uses your phone rarely, you can get plans that will allow you to pay as little as a few dollars per month. If you’re someone who’s usually in range of Wi-Fi, you may not need much data at all. 
If you want a plan with unlimited talk, texting, and data usage, you can get it for much cheaper than you’d pay on a major carrier like AT&T. The options here range from pay-as-you-go plans, like the ones offered by T-Mobile, which allow you to put a certain amount of money in and only drain that balance when you actually use minutes, texts, or data. If you only make a few calls and send a few texts per month, you’d only pay a few bucks. On the other end, Walmart’s Straight Talk service is a popular option that offers unlimited talk, texting, and data at $45 per month. Which service is right for you depends on a lot of things, including your usage and what each network’s coverage is like in your area. You’ll want to do some research of your own before choosing a service. Prepaid services also offer you even more flexibility after you choose one. If you’re not happy or a better deal comes along, you can switch — you’re not locked into your service for two years and you won’t pay an early termination fee. Image Credit: Intel Free Press on Flickr, Jon Fingas on Flickr, John Karakatsanis on Flickr, kendalkinggroup on Flickr     

    Read the article

  • AWS EC2 Oracle RDS connection to Oracle Database Instance

    - by llaszews
    Provisioning my Oracle database instance on AWS RDS was easy – just a few clicks! However, getting a connection to my Oracle cloud database was not as easy. A couple of things that are not obvious (using Oracle SQL Developer): 1. You need to set up a database security group. 2. You need to use the endpoint for the host name. This video is the best one on the internet to explain both points: http://www.youtube.com/watch?v=ocFURuX0eEw
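    Point 2 trips people up in code as well as in SQL Developer: the RDS endpoint simply goes where a host name would. A minimal JDBC sketch (endpoint, SID, and credentials are placeholders, and the Oracle JDBC driver is assumed to be on the classpath):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        // Minimal JDBC sketch: the RDS endpoint is used as the host name.
        // Endpoint, SID, and credentials below are placeholders.
        public class RdsConnect {
            public static void main(String[] args) throws Exception {
                String url =
                    "jdbc:oracle:thin:@//mydb.abc123xyz.us-east-1.rds.amazonaws.com:1521/ORCL";
                try (Connection conn = DriverManager.getConnection(url, "admin", "secret");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT sysdate FROM dual")) {
                    while (rs.next()) {
                        System.out.println("Connected, server time: " + rs.getString(1));
                    }
                }
                // Remember point 1: the DB security group must allow inbound
                // traffic from your client's IP on port 1521, or this call hangs.
            }
        }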

    Read the article

  • How to list missing partitions?

    - by celebrimbor
    I have installed Ubuntu on one partition and Crunchbang on another. As I wanted to make some continuous space, I moved the Crunchbang partition and then checked the fdisk output, which looks like this:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xc7996dfa

           Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *           63      80324      40131  de  Dell Utility
        /dev/sda4            81918  625139711  312528897   f  W95 Ext'd (LBA)
        /dev/sda5            81920  211816447  105867264  83  Linux
        /dev/sda6        299100160  341043199   20971520  83  Linux
        /dev/sda7        341045248  625139711  142047232   7  HPFS/NTFS/exFAT

    I cannot see the sda2 and sda3 partitions. How do I find them?

    Read the article
