Search Results

Search found 6515 results on 261 pages for 'half life'.

Page 107/261 | < Previous Page | 103 104 105 106 107 108 109 110 111 112 113 114  | Next Page >

  • How to Skip the Start Screen and Boot to the Desktop in Windows 8.1

    - by Mark Wilson
    For almost everyone who made the upgrade, Windows 8 proved to be something of a disappointment for one reason or another. Windows 8.1 (or Windows Blue) was released to address many of the issues users had complained about, including reintroducing the ability to boot straight to the desktop. Being able to boot to the desktop rather than the Start screen is something people have been clamoring for ever since the first preview versions of Windows 8 were unveiled. Various third-party tools and workarounds have been released to get around the problem, but the option is now built directly into the operating system. You’ll need to have downloaded and installed the update in order to proceed, but once you have done this, things are very simple. With Windows up and running after the upgrade, right-click an empty section of the taskbar and select Properties to bring up the newly renamed “Taskbar and Navigation properties” dialog. Move to the Navigation tab and look in the “Start screen” section in the lower half of the dialog. Check the box labelled “Go to the desktop instead of Start when I sign in” and click OK.

    Read the article

  • Cross-platform desktop programming: C++ vs. Python

    - by John Wells
    Alright, to start off, I have experience as an amateur Obj-C/Cocoa and Ruby w/Rails programmer. These are great, but they aren't really helpful for writing cross-platform applications (hopefully GNUStep will one day be complete enough for the first to be multi-platform, but that day is not today). C++, from what I can gather, is extremely powerful but also a huge, ugly behemoth that can take half a decade or more to master. I've also read that you can very easily not only shoot yourself in the foot, but blow your entire leg off with it, since memory management is all manual. Obviously, this is all quite intimidating. Is it correct? Python seems to provide most of the power of C++ and is much easier to pick up, at the cost of speed. How big is this sacrifice? Is it meaningful or can it be ignored? Which will have me writing fast, stable, highly reliable applications in a reasonable amount of time? Also, is it better to use Qt for your UI or instead maintain separate, native front ends for each platform? EDIT: For extra clarity, there are two types of applications I want to write: one is an extremely friendly and convenient database front end, and the other, which will no doubt come much later on, is a 3D world editor.

    Read the article

  • Order of operations to render a VBO to an FBO texture and then render the FBO texture to a full-screen quad

    - by cyberdemon
    I've just started using OpenGL with C# via the OpenTK library. I've managed to successfully render my game world using VBOs. I now want to create a pixellated effect by rendering the frame to an offscreen FBO at half my GameWindow size and then rendering that FBO to a full-screen quad. I've been looking at the OpenTK example here: http://www.opentk.com/doc/graphics/frame-buffer-objects ...but the result is a black form. I'm not sure which parts of the example code belong in the OnLoad event and which in OnRenderFrame. Can someone please tell me if the code below shows the correct order of operations?
        OnLoad {
            // VBO.
            // DataArrayBuffer    GenBuffers/BindBuffer/BufferData
            // ElementArrayBuffer GenBuffers/BindBuffer/BufferData
            // ColourArrayBuffer  GenBuffers/BindBuffer/BufferData
            // FBO.
            // ColourTexture      GenTextures/BindTexture/TexParameterx4/TexImage2D
            // Create FBO.
            // Textures           Ext.GenFramebuffers/Ext.BindFramebuffer/Ext.FramebufferTexture2D/Ext.FramebufferRenderbuffer
        }
        OnRenderFrame {
            // Use FBO buffer.
            Ext.BindFramebuffer(FBO)
            GL.Clear
            // Set viewport to FBO dimensions.
            GL.DrawBuffer((DrawBufferMode)FramebufferAttachment.ColorAttachment0Ext)
            // Bind VBO arrays.
            GL.BindBuffer(ColourArrayBuffer)
            GL.ColorPointer
            GL.EnableClientState(ColorArray)
            GL.BindBuffer(DataArrayBuffer)
            // If world changed
            GL.BufferData(DataArrayBuffer)
            GL.VertexPointer
            GL.EnableClientState(VertexArray)
            GL.BindBuffer(ElementArrayBuffer)
            // Render VBO.
            GL.DrawElements
            // Bind visible buffer.
            GL.Ext.BindFramebuffer(0)
            GL.DrawBuffer(Back)
            GL.Clear
            // Set camera to view texture.
            GL.BindTexture(ColourTexture)
            // Render FBO texture
            GL.Begin(Quads)
            // Draw texture on quad
            // TexCoord2/Vertex2
            GL.End
            SwapBuffers
        }

    Read the article

  • Events and objects being skipped in GameMaker

    - by skeletalmonkey
    Update: It turns out it's not an issue with this code (or at least not entirely). Somehow the objects I use for keylogging and player automation (basic AI that plays the game) are being 'skipped' or not loaded about half the time. These are invisible objects in a room that have basic effects such as simulating button presses or logging them. I don't know how to better explain this problem without putting up all my code, so unless someone has heard of this issue I guess I'll be banging my head against the desk for a bit. /Update
    I've been continuing work on modifying Spelunky, but I've run into a pretty major issue with GameMaker, which I hope is just me doing something wrong. I have the code below, which is supposed to write log files named sequentially. It's placed in an End Room event so that when a player finishes a level, all their keypresses are written to file. The problem is that it randomly skips files, and when it reaches about 30 logs it stops creating any new files.
        var file_name;
        file_count = 4;
        file_name = file_find_first("logs/*.txt", 0);
        while (file_name != "") {
            file_count += 1;
            file_name = file_find_next();
        }
        file_find_close();
        file = file_text_open_write("logs/log" + string(file_count) + ".txt");
        for (i = 0; i < ds_list_size(keyCodes); i += 1) {
            file_text_write_string(file, string(ds_list_find_value(keyCodes, i)));
            file_text_write_string(file, " ");
            file_text_write_string(file, string(ds_list_find_value(keyTimes, i)));
            file_text_writeln(file);
        }
        file_text_close(file);
    My best guess is that the first counting loop is taking too long and the whole thing is getting dropped. Also, if anyone can tell me of a better way to have sequentially numbered log files, that would also be great. Log files have to continue counting over multiple starts and stops of the game.

    Read the article

  • Having a problem with texturing vertices in WebGL; are the parameters off in the image?

    - by mathacka
    I'm having a problem texturing a simple rectangle in my WebGL program. I have the parameters set as follows:
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureImage);
    I'm using this image: On the properties of this image it says it's 32-bit depth, so that should take care of gl.UNSIGNED_BYTE, and I've tried both gl.RGBA and gl.RGB to see if it's not reading the transparency. It is a 32x32 pixel image, so it's a power of 2. And I've tried almost all the combinations of formats and types, but I'm not sure if this is the answer or not. I'm getting these two errors in the Chrome console:
        INVALID_VALUE: texImage2D: invalid image (index):101
        WebGL: drawArrays: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering or is not 'texture complete'. Or the texture is Float or Half Float type with linear filtering while OES_float_linear or OES_half_float_linear extension is not enabled.
    The drawArrays function is simply "gl.drawArrays(gl.TRIANGLES, 0, 6);", using 6 vertices to make a rectangle.

    Read the article

  • Back up a single table in SQL Server

    - by BuckWoody
    SQL Server doesn’t have an easy way to take a table backup, so I often use bcp (the Bulk Copy Program) to accomplish the same goal. I’ve mentioned this before, and someone told me that when they tried it they couldn’t restore the table – ah, the dangers of telling people half the information! I should have mentioned that you need to have a “format file” ready if the table does not exist at the destination. In my case I already had the table; in this person’s case they did not. The format file can be used to rebuild that table structure before the data is bcp’d in, and you can read more about it here: http://msdn.microsoft.com/en-us/library/ms191516.aspx There’s another way to back up a table, and that’s to create a Filegroup and place the table there. Then you can take a Filegroup backup to back up a single table. Of course, there are other methods of moving a single table’s data in and out, including SQL Server Integration Services and even the older Data Transformation Services, or simply using the SQLCMD or PowerShell utilities to run a query and save the output to a file. In fact, these days I’m using a PowerShell script to build INSERT statements from that query. That could also easily be modified to create the table structure (or modify one if needed) quite easily.
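    For reference, a typical round trip with bcp looks something like the sketch below; the server, database and table names are placeholders, -T uses a trusted connection and -n uses native format:
        rem Export the table's data in native format
        bcp MyDatabase.dbo.MyTable out MyTable.dat -n -T -S MYSERVER
        rem Generate a format file describing the table structure
        bcp MyDatabase.dbo.MyTable format nul -f MyTable.fmt -n -T -S MYSERVER
        rem Import into the re-created table at the destination, using the format file
        bcp MyDatabase.dbo.MyTable in MyTable.dat -f MyTable.fmt -T -S MYSERVER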

    Read the article

  • Extreme Optimization Numerical Libraries for .NET – Part 1 of n

    - by JoshReuben
    While many of my colleagues are fascinated with constructing the ultimate ViewModel or ServiceBus, I feel that this kind of plumbing code is re-invented far too many times – at some point in the near future, it will be out-of-the-box standard infrastructure. How many times have you been to a customer site and built a different variation of the same kind of code framework? How many times can you abstract Prism or reliable and discoverable WCF communication? As the bar is raised for what's bundled with the framework and more tasks become declarative, automated and configurable, information systems will expose a higher level of abstraction, forcing software engineers to focus on more advanced computer science and algorithmic tasks. I've spent the better half of the past decade building skills in .NET and expanding my mathematical horizons by working through the Schaums guides. In this series I am going to examine how these skillsets come together in the implementation provided by Extreme Optimization. Download the trial version here: http://www.extremeoptimization.com/downloads.aspx
    Overview
    The library implements a set of algorithms for linear algebra, complex numbers, numerical integration and differentiation, solving equations, optimization, random numbers, regression, ANOVA, statistical distributions and hypothesis tests. EONumLib combines three libraries in one, organized in a consistent namespace hierarchy:
    Mathematics Library - Extreme.Mathematics namespace
    Vector and Matrix Library - Extreme.Mathematics.LinearAlgebra namespace
    Statistics Library - Extreme.Statistics namespace
    System Requirements: .NET Framework 4.0
    Mathematics Library
    The classes are organized into the following namespace hierarchy:
    Extreme.Mathematics – common data types, exception types, and delegates.
    Extreme.Mathematics.Calculus - numerical integration and differentiation of functions.
    Extreme.Mathematics.Curves - points, lines and curves, including polynomials and Chebyshev approximations; curve fitting and interpolation.
    Extreme.Mathematics.Generic - generic arithmetic & linear algebra.
    Extreme.Mathematics.EquationSolvers - root finding algorithms.
    Extreme.Mathematics.LinearAlgebra - vectors, matrices, matrix decompositions, solvers for simultaneous linear equations and least squares.
    Extreme.Mathematics.Optimization – multi-dimensional function optimization + linear programming.
    Extreme.Mathematics.SignalProcessing - one- and two-dimensional discrete Fourier transforms.
    Extreme.Mathematics.SpecialFunctions

    Read the article

  • Dropbox stopped syncing a few days ago

    - by Fabio
    There's a new problem with nautilus-dropbox, and it is the worst kind for a service like Dropbox: it stopped syncing, and their tech support doesn't believe their users. The problem is:
    start Dropbox
    it downloads the index
    creates folders
    starts to download files
    stops downloading files, or the speed drops to something like 0.3 KB/s
    Even deleting everything doesn't work, nor does syncing a single folder with a few files, nor configuring a different speed limit or letting it use all your bandwidth. Nothing works for me, and I have found many users with the same problem. It's strange: I have two computers with Ubuntu 14.04 and one with Windows 7, and on one of the Linux machines it works fine, but not on the other. Both of them are x64. The answers from tech support range from "delete everything and reinstall" to "you may be blocking port 443" (yeah, sure, and if I do that half of the internet disappears, idiot), but nothing works. Does anyone have any ideas? I can't find any log file (at least a text log file like in /var/log) to understand what Dropbox is doing and what is not working. PS: sorry for my "english", it is not my language.
    Examples:
    dropbox slow on ubuntu 14.04
    Dropbox does not sync on ubuntu 14.04
    Dropbox Status (in Spanish, but it is the same thing):
        fabio@canopus:~$ dropbox status
        Sincronizando (Quedan 330 archivos., Faltan 9 días.)
        Descargando 330 archivos... (0,3 KB/s, Faltan 9 días.)
    9 days for 330 files / 100 MB

    Read the article

  • Strange characters appearing on websites - ASCII? - UNICODE?

    - by Mick
    I have created many very simple pure HTML websites over the years. Most of them appear to work fine most of the time. But there is one recurring problem which I have never quite sorted out, involving strange characters. The scenario goes like this: I create the site. I look at it in my browser and everything appears fine. I may look at it a great many times over the coming weeks or months as I make additions here and there, perhaps on a variety of browsers on a variety of PCs. Then one day I look at the page and see a random sprinkling of white question marks against dark diamond shapes. These might appear where I had expected to see hyphens or quotes or apostrophes. My immediate thought is that my browser got into some strange state because I was looking at some foreign website with strange characters, but I'm never quite sure. I'm left with that nagging feeling that perhaps half the planet is seeing my website with funny question marks all over it. So my question is: what's going on? What should I do to ensure that as many people as possible around the world can view my text as I originally intended? Should I be using those special HTML entities like &pound; for all non-alphanumeric characters? Should I worry at all? Edit: Right now I have the problem occurring on this page: http://www.fullreservebanking.com/papers.htm ... part of it looks like this: I am using Firefox 5 and the character encoding currently appears to be "UNICODE (UTF-8)". I do not remember manually setting the character encoding to anything since installation. I do occasionally look at Japanese websites for work-related reasons, though when I do so, I do not manually make any changes to Firefox settings. Edit: Now fixed. Web page altered accordingly.
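    For context on the encoding-declaration side of this, one common remedy is to make the page declare the same encoding the file is actually saved in, rather than relying on entities for every character. A minimal sketch, assuming the pages don't already carry such a declaration in <head>:
        <!-- Declare the encoding explicitly so browsers don't have to guess -->
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    With that in place, and the file genuinely saved as UTF-8, characters such as hyphens, quotes and apostrophes render consistently, and entities like &pound; remain optional rather than necessary.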

    Read the article

  • Plymouth package broken... Can I safely remove it? Any other solution to repair it?

    - by Julien Gorenflot
    I have a broken package... So far nothing horrible. The problem is that it is Plymouth, and it seems that if I remove it, I will remove half of the packages of my system... So here is my question: if I actually remove, or even purge, plymouth, will I at least have a terminal left afterwards to reinstall it? Or am I definitely doomed? Just to illustrate what I mean, here is the result of an apt-get --reinstall install plymouth:
        julien@julien-desktop:~$ sudo apt-get --reinstall install plymouth
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reinstallation of plymouth is not possible, it cannot be downloaded.
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         plymouth : Depends: libdrm-nouveau1 (>= 2.4.11-1ubuntu1~) but it is not installable
                    Recommends: plymouth-themes-all but it is not installable
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    ...or an apt-get -f install (well, basically it is the same):
        julien@julien-desktop:~$ sudo apt-get -f install
        [sudo] password for julien:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... failed.
        The following packages have unmet dependencies:
         plymouth : Depends: libdrm-nouveau1 (>= 2.4.11-1ubuntu1~) but it is not installable
                    Recommends: plymouth-themes-all but it is not installable
        E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
        E: Unable to correct dependencies
        julien@julien-desktop:~$
    Any idea would be very welcome...

    Read the article

  • AT&T - Customer service hahahahahahahahahahahahahahaha

    - by AreYouSerious
    Okay, I'm a separated two-time Iraq war veteran, living in Germany supporting the military. About a year and a half ago I bought an iPhone 4 off a guy from Craigslist. I thought the phone was unlocked, but when I got to Germany I realized it was not. I called AT&T and they told me that due to the contract with Apple they could not unlock any iPhone, period. After the lawsuit with Apple, they started unlocking iPhones. So today I called up their customer support and asked if they could unlock my phone. They said that they would only do it if I were a previous customer, could provide the account information from the person I had bought it from, or bought one at cost. (Note: it has a baseband that cannot be "unlocked" by software.) Hello, I already own the device, and a year and a half ago I bought it off someone who couldn't afford it... so no, I don't have the AT&T account information from him. Just another example of why I won't ever use AT&T again. And I still have this iPhone that I have to jailbreak and can't use as a phone. STAY AWAY FROM AT&T. They don't know the meaning of customer service!

    Read the article

  • How can a non-technical person learn to write a spec for small projects?

    - by Joseph Turian
    How can a non-technical person learn to write specs for small projects? A friend of mine is trying to outsource some development on a statistics project. In particular, he does a lot of work in Excel, and wants to outsource the creation of scripts to do what he now does by hand. However, my friend is extremely non-technical. He is poor at writing technical specs. When he does write a spec, it is written the way you would describe doing something in Excel (go to this cell and then copy the value to that cell). It is also overly verbose, and repeats examples several times. I'm not sure if he properly describes corner cases. The first project he outsourced was a failure. I think he overdescribed some details, but underdescribed corner cases. That, and/or the coder he hired didn't think through the corner cases and ask appropriate questions. I'm not sure. I got on IM with him and it took me half an hour to dig out a description that should have taken five minutes or less to give. I wrote the scripts for him in the end, but didn't examine why his process with the coder failed. He has asked me for help. However, I refuse to get involved, because taking his spec and translating it into clear requirements is 10x more work than executing on a clearly written spec. What is the right way for him to learn? Are there resources he could use? Are there ways he can learn from small, low-pressure practice projects with coders? Most of his scripts are statistical and data-processing oriented, e.g. take this column and run an average over it; remove these rows under these conditions. So the challenge is different from spec'ing a web app.

    Read the article

  • Blending animations for more character movements

    - by Noob Saibot
    I am making a hack-n-slash 3rd-person game, and I want the character movements to be more dynamic, not like fighting games where you have a moves list. I want to animate tons of different animations and have them "tween" between each other, because I want the controls to not be keyboard-plus-mouse: I want it to be all keyboard. That way you have up to 10 inputs (all your fingers) to blend and morph animations to create more fluid movements. In the end this will almost be similar to characters typing a phrase or string of keys rather than move-forward, mouse-look, click-to-melee. My question is: has anyone done this before, and how would someone go about trying to tween, let's say, one animation for each key on the keyboard, excluding Tab, Caps, R+Shift, L+Shift, Enter, R+Ctrl, L+Ctrl, L+Alt, R+Alt, Windows Key, and Menu? So that's all the number, letter and punctuation keys. That's 46 keys, which gives me 46P46 = 46! = 5502622159812088949850305428800254892961651752960000000000 possible orderings (used Python), shortening to half with a minimum entry of 2 keypresses. It is not humanly possible to create so many unique animations in one lifetime, but I'm guessing there is a reason this hasn't been done already. Or, if I just used 10 basic keys, maybe ASDF SPACE (right hand) and 456+0 (left-hand keypad), it would give me 3,628,800 possible unique animations.
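    As a quick sanity check of those figures, here is a minimal Python sketch; the counts below are just the factorials quoted above, nothing game-specific:
        import math

        # Orderings of all 46 usable keys: 46P46 = 46!
        all_keys = math.factorial(46)
        # -> 5502622159812088949850305428800254892961651752960000000000

        # Orderings of a 10-key subset: 10!
        ten_keys = math.factorial(10)
        # -> 3628800

        print(all_keys, ten_keys)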

    Read the article

  • Is C and Python enough?

    - by gruszczy
    I am very proficient in Python (including Django), which I use for most tasks. I am also quite confident with C; I am maintaining a small file system in userspace written in C. Yet when I browse job offers I see Java/C# everywhere and sometimes C++. I have coded professionally in C++ for half a year in a gaming company, but I don't consider myself a pro. Also, I simply despise Java and C#, which I would prefer not to touch ever. But it seems to me that I am in a very unfavorable position when it comes to my career. I am browsing careers.stackoverflow.com and I don't see any pure Python or C offers. I would like to find a new job abroad in about 6 months. If I find some Python offer, it means doing web development (not my favorite job). Does it mean that I have to quickly start improving my C++ skills if I wish to find a satisfying job? What would you suggest? EDIT: Learning new technologies is not an issue. The company I am working at is an integrator. Basically every new project requires learning new technologies, sometimes custom-made. During the last two years I was writing SQL by hand, using LDAP, writing a GUI in Qt, working on a large-scale DBMS prototype, making our internal help desk system use a GSM modem, and writing our own report system. In my previous job I had to learn everything I could about games development from the basics, because I knew nothing and chose that job only because of the challenge it posed. I am all about embracing new technologies. I have used Java in the past and simply didn't like it. It's dull and boring. It doesn't let me do anything cool. I have recently seen some C# in action and it seems similar. I don't like it. It's like German. I don't like speaking German.

    Read the article

  • How can a non-technical person learn to write a spec for small projects?

    - by Joseph Turian
    How can a non-technical person learn to write specs for small projects? A friend of mine is trying to outsource some development on a statistics project. In particular, he does a lot of work in Excel, and wants to outsource the creation of scripts to do what he now does by hand. However, my friend is extremely non-technical. He is poor at writing technical specs. When he does write a spec, it is written the way you would describe doing something in Excel (go to this cell and then copy the value to that cell). It is also overly verbose, and repeats examples several times. I'm not sure if he properly describes corner cases. The first project he outsourced was a failure. I think he overdescribed some details, but underdescribed corner cases. That, and/or the coder he hired didn't think through the corner cases and ask appropriate questions. I'm not sure. I got on IM with him and it took me half an hour to dig out a description that should have taken five minutes or less to give. I wrote the scripts for him in the end, but didn't examine why his process with the coder failed. He has asked me for help. However, I refuse to get involved, because taking his spec and translating it into clear requirements is 10x more work than executing on a clearly written spec. What is the right way for him to learn? Are there resources he could use? Are there ways he can learn from small, low-pressure practice projects with coders? [edit: Most of his scripts are statistical and data-processing oriented, e.g. take this column and run an average over it; remove these rows under these conditions. So the challenge is different from spec'ing a web app.]

    Read the article

  • The value of an updated specification

    - by Mr. Jefferson
    I'm at the tail end of a large project (around 5 months of my time, 60,000 lines of code) on which I was the only developer. The specification documents for the project were designed well early on, but as always happens during development, some things have changed. For example:
    Bugs were discovered and fixed in ways that don't correspond well with the spec
    Additional needs were discovered and dealt with
    We found areas where the spec hadn't thought far enough ahead and had to change some implementation
    Etc.
    Through all this, the spec was not kept updated, because we were lean on resources (I know that's a questionable excuse). I'm hoping to get the spec updated soon when we have time, but it's a matter of convincing management to spend the effort. I believe it's a good idea, but I need some good reasoning; I can't speak from very much experience because I'm only a year and a half out of school. So here are my questions:
    What value is there in keeping an updated spec?
    How often should the spec be updated? With every change made that might affect it, just at the end of development, or something in between?
    EDIT to clarify: this project was internal, so no external client was involved. It's our product.

    Read the article

  • Are programmers getting lazier and less competent?

    - by Skeith
    I started programming in C++ at uni and loved it. In the next term we changed to VB6 and I hated it. I could not tell what was going on: you drag a button onto a form and the IDE writes the code for you. While I hated the way VB functioned, I cannot argue with the fact that it was faster and easier than doing the same thing in C++, so I can see why it is a popular language. Now, I am not calling VB developers lazy; I'm just saying it is easier than C++, and I have noticed that a lot of newer languages are following this trend, such as C#. This leads me to think that as more businesses want quick results, more people will program like this, and sooner or later there will be no such thing as what we call programming now. Future programmers will tell the computer what they want and the compiler will write the program for them, like in Star Trek. Is this just an under-informed opinion of a junior programmer, or are programmers getting lazier and less competent in general? EDIT: A lot of answers say "why reinvent the wheel?", and I agree with this, but when there are wheels available people are not bothering to learn how to make the wheel. I can google how to do pretty much anything in any language, and half the languages do so much for you that when it comes to debugging people have no idea what their code does or how to fix the error. That's how I came up with the theory that programmers are becoming lazier and less competent, as no one cares how stuff works, just that it does, until it does not.

    Read the article

  • Why isn't my lighting working properly? Are my normals messed up?

    - by Radek Slupik
    I'm relatively new to OpenGL and I am trying to draw a 3D model (loaded from a 3ds file using lib3ds) with lighting, but about half of it is drawn in black. I set up the light as such:
        glEnable(GL_LIGHTING);
        glShadeModel(GL_SMOOTH);
        GLfloat ambientColor[] = {0.2f, 0.2f, 0.2f, 1.0f};
        glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientColor);
        glEnable(GL_LIGHT0);
        GLfloat lightColor0[] = {1.0f, 1.0f, 1.0f, 1.0f};
        GLfloat lightPos0[] = {4.0f, 0.0f, 8.0f, 0.0f};
        glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor0);
        glLightfv(GL_LIGHT0, GL_POSITION, lightPos0);
    The model is in a VBO and drawn using glDrawArrays. The normals are in a separate VBO, and they are calculated using lib3ds_mesh_calculate_vertex_normals:
        std::vector<std::array<float, 3>> normals;
        for (std::size_t i = 0; i < model->nmeshes; ++i) {
            auto& mesh = *model->meshes[i];
            std::vector<float[3]> vertex_normals(mesh.nfaces * 3);
            lib3ds_mesh_calculate_vertex_normals(&mesh, vertex_normals.data());
            for (std::size_t j = 0; j < mesh.nfaces; ++j) {
                auto& face = mesh.faces[j];
                normals.push_back(make_array(vertex_normals[j]));
            }
        }
        glBindBuffer(GL_ARRAY_BUFFER, normal_vbo_);
        glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(decltype(normals)::value_type), normals.data(), GL_STATIC_DRAW);
    The problem isn't the vertices; the model is drawn correctly when drawing it as a wireframe. I also fixed the normals in Blender using Ctrl+N. What could be the problem? Should I store the normals in a different order?

    Read the article

  • Scientists Demonstrate First-Person Shooter Games Improve Vision

    - by Jason Fitzpatrick
    Need an excuse to log a few more hours playing Call of Duty or Medal of Honor? Scientists demonstrated improved vision in test subjects after daily doses of first-person shooter games. Scientists at McMaster University took subjects who, as the result of surgery correcting congenital cataracts, had less than 20/20 vision. Subjects played Medal of Honor for a total of 40 hours over the course of 4 weeks before having their vision retested. The results? The CBC reports: The participants found improvements in detail, perception of motion and in low contrast settings. In essence, players could now read about one to one-and-a-half more lines on an optometrist’s eye chart. “We were thrilled,” Lewis said. “It’s very exciting to open up a new world of hope for these people.”

    Read the article

  • Oracle(R) Buys Pre-Paid Software Assets From eServGlobal

    - by Paulo Folgado
    Oracle to Deliver Scalable Carrier-Grade Pre-Paid Solution Based on Open, Flexible IT-Based Platform
    News Facts
    · Oracle has agreed to acquire certain pre-paid assets of eServGlobal, a provider of advanced IT-based, pre-paid charging solutions for the communications industry.
    · eServGlobal's Universal Service Platform (USP) includes a pre-paid charging application, a network-services platform and a messaging gateway. The ChargingMax, NumberMax, uVOMS, MessageMax, PromoMax Express and Social Relationship Management software currently supports more than 25 tier-one customers including the world's largest IT-based installation of pre-paid services.
    · The combination of Oracle Communications Billing and Revenue Management and the USP applications is expected to accelerate the shift from network- to IT-based pre-paid systems by providing the first convergent, open IT-based platform from a leading business software and hardware systems company.
    · Customers are expected to benefit from traditional carrier-grade, pre-paid service authorization with IT-grade flexibility that supports any service or network, is easier to deploy and maintain and delivers an overall lower total cost of ownership.
    · The transaction is expected to close in the second half of this year.
    Supporting Quote
    · "The majority of mobile phone users worldwide use pre-paid plans, and that number is growing exponentially. Oracle Communications applications combined with the pre-paid software assets from eServGlobal will provide our customers with highly available and scalable carrier-grade, pre-paid software on an open, convergent platform. This will enable our customers to deliver traditional pre-paid voice services and easily introduce hybrid pre-paid and post-paid plans with targeted pricing, promotions and service bundles that include voice, data and network services," said Liam Maxwell, vice president of products, Oracle Communications.
    Supporting Resources
    · About Oracle and eServGlobal
    · USP General Presentation
    · FAQ

    Read the article

  • Math questions at a programmer interview?

    - by anon
    So I went to an interview at Samsung here in Dallas, Texas. The way the recruiter described the job, he didn't make it sound like it was too math-oriented. The job basically involved graphics programming and C++. Yes, math is implied in graphics programming, especially shaders, but I still wasn't expecting this... The whole interview lasted about an hour and a half and they asked me nothing but math-related questions. They didn't ask me a single programming question, which I found odd. About all they did was ask me how to write certain math routines as C++ functions. What about programming philosophy questions? Design patterns? Code correctness? Constness? Exception safety? Thread safety? There are a zillion topics that they could have covered. But they didn't. The main concern I have is that they didn't ask any programming questions. This basically implies to me that any programmer who is good at math can get a job here, but they might put out terrible code. Of course, I think I bombed the interview because I haven't used any sort of linear algebra in about a year and I forget math easily if I haven't used it in practice for a while. Are any of my fellow programmers out there this way? I'm a game programmer too, so this seems especially odd. The more I learn, the more old knowledge gets "popped" out of my "stack" (memory). My question is: does this interview seem suspicious? Is this a typical interview for large corporations? During the interview they told me that Google's interview process is similar: they have multiple, consecutive interviews where the math problems get more advanced.

    Read the article

  • Fog shader camera problem

    - by MaT
    I have some difficulties with my vertex-fragment fog shader in Unity. I get a good visual result, but the problem is that the gradient is based on the camera's position: it moves as the camera moves. I don't know how to fix it. Here is the shader code.
        struct v2f {
            float4 pos : SV_POSITION;
            float4 grabUV : TEXCOORD0;
            float2 uv_depth : TEXCOORD1;
            float4 interpolatedRay : TEXCOORD2;
            float4 screenPos : TEXCOORD3;
        };

        v2f vert(appdata_base v) {
            v2f o;
            o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
            o.uv_depth = v.texcoord.xy;
            o.grabUV = ComputeGrabScreenPos(o.pos);
            half index = v.vertex.z;
            o.screenPos = ComputeScreenPos(o.pos);
            o.interpolatedRay = mul(UNITY_MATRIX_MV, v.vertex);
            return o;
        }

        sampler2D _GrabTexture;

        float4 frag(v2f IN) : COLOR {
            float3 uv = UNITY_PROJ_COORD(IN.grabUV);
            float dpth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, uv));
            dpth = LinearEyeDepth(dpth);
            float4 wsPos = (IN.screenPos + dpth * IN.interpolatedRay);
            // Here is the problem but how to fix it
            float fogVert = max(0.0, (wsPos.y - _Depth) * (_DepthScale * 0.1f));
            fogVert *= fogVert;
            fogVert = (exp (-fogVert));
            return fogVert;
        }
    Thanks a lot!

    Read the article

  • Keeping the Screen Aspect Ratio While Staying Centered

    - by David Dimalanta
    I saw and tried this suggestion on PISTACHIO BRAINSTORMIN* on how to make a good, adaptive screen ratio. For every different screen size, let's say I put a perfect circle as a Texture in LibGDX and display it on screen. Here's the blueberry image example and it's perfectly rounded:
    When I played it on the Google Nexus 7, the circle turned into a slightly oblong shape, as if it had been flattened a bit. Please observe this snapshot below and you can see the blueberry is almost, but not quite, perfectly rounded:
    Now, when I tried the suggested code for aspect ratio, the perfect circle was retained, but another problem occurred. The problem is that instead of a centered view, it has been offset to the right, leaving half the screen black. It looks like this:
    Here is my code using the suggested screen aspect ratio code:
    Class' Field
        // Ingredients Needed for Screen Aspect Ratio
        private static final int VIRTUAL_WIDTH = 720;
        private static final int VIRTUAL_HEIGHT = 1280;
        private static final float ASPECT_RATIO = ((float) VIRTUAL_WIDTH)/((float) VIRTUAL_HEIGHT);
        private Camera Mother_Camera;
        private Rectangle Viewport;
    render()
        // Camera updating...
        Mother_Camera.update();
        Mother_Camera.apply(Gdx.gl10);
        // Reseting viewport...
        Gdx.gl.glViewport((int) Viewport.x, (int) Viewport.y, (int) Viewport.width, (int) Viewport.height);
        // Clear previous frame.
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    show()
        Mother_Camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
    Was this code useful for fixing the screen aspect-ratio proportions, or is it statically dependent on the actual device's width and height?
    *see http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/#comment-317
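    For reference, the letterboxing arithmetic that the linked article builds on comes down to the calculation sketched below (shown in Python purely to illustrate the numbers; the function and variable names are made up, and in the real game this would live wherever the Viewport rectangle is computed on resize):
        # Fit a 720x1280 virtual resolution into an arbitrary screen, centred with black bars.
        VIRTUAL_W, VIRTUAL_H = 720, 1280

        def letterbox_viewport(screen_w, screen_h):
            scale = min(screen_w / VIRTUAL_W, screen_h / VIRTUAL_H)  # scale to fit without distortion
            vp_w, vp_h = VIRTUAL_W * scale, VIRTUAL_H * scale
            # Centring offsets: if x and y stay at 0, the viewport hugs one corner,
            # which looks like the half-black screen in the snapshot above.
            x, y = (screen_w - vp_w) / 2, (screen_h - vp_h) / 2
            return x, y, vp_w, vp_h

        print(letterbox_viewport(800, 1280))  # -> (40.0, 0.0, 720.0, 1280.0): 40 px bars left and right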

    Read the article

  • How to reset display settings in XFCE \ Ubuntu 12.04 and also fglrx drivers

    - by Agent24
    I recently upgraded to Ubuntu 12.04, and since I hate Unity I installed the Xubuntu package and am using XFCE instead. Since I have a Radeon HD 5770 I also installed the fglrx drivers. This all went fine (aside from the fact that the post-release update of the fglrx drivers has an error on installation, so Ubuntu thinks they're not installed when they actually are). I configured my display settings (dual monitors, a 17" CRT on VGA and a 17" LCD on DVI) in the amdcccle program and everything was perfect. THEN, 2 days ago, I accidentally clicked on the "Display" settings in the XFCE settings manager. After that, everything got screwed. Now, I normally run the CRT at 1152x864 and the LCD at 1280x1024, with the CRT as my primary monitor (with panel) and the LCD without panels etc., just to display other windows when I want to drag them over there. The problem is now that if I set my CRT to 1152x864, it stays at 1280x1024 virtually and half the stuff falls off the screen. It also puts the LCD at 1280x1024, BUT then overlays the CRT's display on top with different wallpaper in an L shape down the right-hand and bottom edges. In short, nothing makes sense and everything is FUBAR. I tried uninstalling fglrx through Synaptic, and renaming xorg.conf and also the XFCE XML file that holds the monitor settings, but it still won't make sense. Unity, on the other hand, can currently set everything normally, so the problem appears to be only with XFCE. In any case, I can't even get the fglrx drivers back: when I re-installed them, I couldn't run amdcccle anymore as it says the driver isn't installed!! Can someone help me reset my XFCE settings so the monitors aren't screwed up by some incorrect virtual desktop size, and also so I can get the fglrx drivers back and working? I really don't want to have to format and reinstall and go through all the hassle, but it looks like I may have to :(

    Read the article

  • Acceptable sound quality: stereo needed for an Android game?

    - by Thomas Calc
    I have various simple short sound effects (damage sound, dying sound, thunderbolt, fanfare, breaking) for a game that is developed for Android currently. I use OGG files: 96kbps VBR, 44.1KHz, 2 channels (that means stereo, right?). I read the other stackexchange topics about "acceptable sound quality", but they're too general, address too many things. My experience is that even with 80kbps, my effects sound OK. But I tested it on a limited number of Android devices (including a Sony Ericsson Xperia Neo and a HTC Desire HD). My questions: For mobile phones and tablets, generally, what parameters are recommended? Won't my 80kbps sounds be bad on a newer device (such as a modern tablet)? I don't hear any difference between stereo and mono (2 channels vs. 1 channel, right?), is there any noticeable difference at all for mobile phones / tablets? (in terms of the player experience) May it worth it at all? I assume that stereo sounds take much more in memory (when they're decoded to PCM), despite of the fact that the compressed OGG size is practically the same. Reacting to Roy T.'s great comment: Actually, I couldn't measure the PCM size (Android decodes OGG internally), but I thought that stereo will take more space than mono when uncompressed After throwing out one of the WAV channels in Audacity, and re-exporting it: The new WAV file size is half than before The OGG file size is practically the same as before The sound effects and game music was recorded by my friend who is an experienced hobby musician/composer, but he knows little about computers & software so he just gave me some high-quality WAV files generated via his hardware.These were stereo, but if I check them in Audacity, both channels appear to be exactly the same.Can I consider them the same (= moving to mono), or might there be some unnoticeable differences to the human eye?

    Read the article
