Search Results

Search found 4054 results on 163 pages for 'surround sound'.


  • Do I need social networks to be an expert developer? [closed]

    - by Gerald Blizzy
    This question may sound odd, but do I need Twitter, Facebook, and Google+ if I am a web developer? I see many expert developers using them for work nowadays. It seems like it's harder to stay in touch with customers, co-workers, and potential customers if you don't use social networks. Am I right? The reason I ask is that I am not a Facebook/Twitter person at all; I find them boring and annoying. I understand that LinkedIn is useful for a career, but what about Twitter and Facebook? Are they needed for a web developer's career? What I am trying to ask is: if I only use LinkedIn, my own portfolio website, Google Talk, Gmail, and something like GitHub, would I actually miss anything professionally/job-wise? My thought is that I can just have my portfolio website where I list all my projects, as well as a contacts page with my Google Talk/Gmail account. That can suit full-time jobs, freelance work, and my own projects. So email and Google Talk would be just enough. Am I right or not? Thanks in advance!

    Read the article

  • How do I restore the original color scheme, icons, and theme?

    - by katya sehgal
    I'd like to get back the original colour scheme and icon style of 12.04. I somehow lost the Ambiance theme (possibly an upgrade error). I re-installed 'light-themes' from the terminal and got it back. But the panel at the top that shows the sound, battery, and Wi-Fi indicators has changed, and I cannot get the original setting back. In windows, the close and minimize buttons have shifted to the right instead of their original left side. I had installed MyUnity and Ubuntu Tweak but deleted them. In short, I want the original settings back. Kindly help me with the commands. I have searched for solutions; there are multiple, and I need to be sure whether I should follow them. Kindly bear with me before marking this a duplicate. Discoveries: the appearance is gray and boxy as outlined here (not sure it is the same problem); there is a similar 'gray and boxy' article here; the desktop forgets its theme. I have also tried the unity --reset command. It never completes; I gave it 20 minutes.

    Read the article

  • Strategy for backwards compatibility of persistent storage

    - by Baqueta
    In my experience, trying to ensure that new versions of an application retain compatibility with data storage from previous versions can often be a painful process. What I currently do is save a version number for each 'unit' of data (be it a file, database row/table, or whatever) and ensure that the version number gets updated each time the data changes in some way. I also create methods to convert from v1 to v2, v2 to v3, and so on. That way, if I'm at v7 and I encounter a v3 file, I can do v3-v4-v5-v6-v7. So far this approach seems to be working out well, but I haven't had to make use of it extensively yet, so there may be unforeseen problems. I'm also concerned that if the objects I'm loading change significantly, I'll either have to keep around old versions of the classes or face updating all my conversion methods to handle the new class definition. Is my approach sound? Are there other/better approaches I could be using? Are there any design patterns applicable to this problem?
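
    A minimal sketch of the chained v(n)-to-v(n+1) conversion described here, assuming a generic key/value save format; the type and method names are hypothetical, not the poster's actual code:

        // Chained upgrade steps: a v3 file passes through 3->4, 4->5, ... until current.
        // All names here are illustrative assumptions.
        using System;
        using System.Collections.Generic;

        class SaveData
        {
            public int Version;
            public Dictionary<string, string> Fields = new Dictionary<string, string>();
        }

        static class SaveMigrator
        {
            const int CurrentVersion = 7;

            // One upgrade step per version transition; each mutates the data in place.
            static readonly Dictionary<int, Action<SaveData>> Upgrades =
                new Dictionary<int, Action<SaveData>>
                {
                    { 3, d => d.Fields["displayName"] = d.Fields.TryGetValue("name", out var n) ? n : "unknown" },
                    { 4, d => d.Fields["volume"] = "1.0" },   // example: field introduced in v5
                    // ...one entry per transition, up to CurrentVersion - 1
                };

            public static void UpgradeToCurrent(SaveData data)
            {
                while (data.Version < CurrentVersion)
                {
                    if (Upgrades.TryGetValue(data.Version, out var step))
                        step(data);                           // v(n) -> v(n+1)
                    data.Version++;
                }
            }
        }

    Keeping each step small and only ever writing the latest format means old converters rarely need to change; only a reshaped class definition forces revisiting them.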

    Read the article

  • (Abstract) Game engine design

    - by lukeluke
    I am writing a simple 2D game (for mobile platforms) for the first time. From an abstract point of view, I have the main character controlled by the human player, the enemies, elements that will interact with the main character, and other living elements controlled by a simple AI (both enemies and non-enemies). The main character will be totally controlled by the player; the other actors will be controlled by AI. So I have a class CActor and a class CActorLogic to start with. I would define a CActor subclass CHero (the main character, controlled with some input device). This class will probably implement some type of listener in order to capture input events. The other actors controlled by the AI will probably each be a specific subclass of CActor (a subclass per type, obviously). This seems reasonable. The CActor class should have a reference to a method of CActorLogic, which we will call something like CActorLogic::Advance() or similar. Actors should have a visual representation, so I would introduce a CActorRepresentation class with a method like Render() that will draw the actor (that is, the right frame of the right animation). Where should the animation be changed? Well, the actor logic's Advance() method should take care of checking collisions and other things. I would like to discuss the design of a game engine (actors, entities, objects, messages, input handling, visualization of object states -- that is, rendering, sound output, and so on) not from a low-level point of view but from a high-level one, as described above. My question is: is there any book or online resource that will help me organize things (using an object-oriented approach)? Thanks
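
    For reference, a rough sketch (in C#, since the original description is only pseudocode) of how the classes described above might fit together; the signatures are assumptions, not a prescribed design:

        // Actor owns its logic (decisions) and its representation (drawing).
        abstract class CActor
        {
            public CActorLogic Logic;
            public CActorRepresentation Representation;

            public virtual void Update(float dt)
            {
                Logic.Advance(this, dt);        // AI or input decides what happens
                Representation.Render(this);    // draw the current animation frame
            }
        }

        abstract class CActorLogic
        {
            // Checks collisions, switches animations, fires events, etc.
            public abstract void Advance(CActor actor, float dt);
        }

        abstract class CActorRepresentation
        {
            public abstract void Render(CActor actor);
        }

        // The hero listens to input events; enemies plug in an AI logic subclass.
        class CHero : CActor { /* an input listener feeds its logic */ }
        class CEnemy : CActor { /* its logic subclass implements the simple AI */ }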

    Read the article

  • Which Open Source Licenses can address concerns for an Open Source Game Engine?

    - by Chris
    I am on a team that is looking to open source an engine we are building. It's intended as an engine for online RPG-style games, and we're writing it to work on both desktop and Android platforms. I've been over to the OSI license list at http://opensource.org/licenses/category to check out the most common licenses. However, this will be my first time going into an open source project, and I wanted to know if the community had some insight into which licenses might be best suited. Key licensing concerns:
    - Removing or limiting our liability (most licenses already seem to cover this, but I state it for completeness).
    - We want other developers to be able to take part or all of our project and use it in their own projects, with proper accreditation to our project.
    - Licensing should not hinder someone's ability to quickly use the engine. They should be able to download a release and start using it without needing to wait on licensing issues.
    - Game content (gfx, sound, etc.) that is not part of the engine should be allowed to be licensed separately. If someone is using our engine, they can retain full copyright of their content, including engine-generated data.
    - Our primary goal is exposure, both for the project and for the individuals developing it, which is why we're going open source to start with. Are there any licenses that can require accreditation visible to players? While I'd put our primary goal as exposure, for licensing the accreditation is less of a concern.
    From what I've read through (and have been able to understand), it doesn't seem like any of the licenses cover anything that is produced by the licensed software. Are there any that state this specifically, or does simply not mentioning it leave it open for other licensing? Are there any other concerns that we should consider? Has anyone had any issues using any of these licenses?

    Read the article

  • In an Entity/Component system, can component data be implemented as a simple array of key-value pairs? [on hold]

    - by 010110110101
    I'm trying to wrap my head around how to organize components in an Entity Component System once everything in the current scene/level is loaded in memory. (I'm a hobbyist, BTW.) Some people seem to implement the Entity as an object that contains a list of "Component" objects, where components contain data organized as an array of key-value pairs and the value is serialized "somehow" (pseudocode is loosely in C# for brevity):

        class Entity
        {
            Guid _id;
            List<Component> _components;
        }

        class Component
        {
            List<ComponentAttributeValue> _attributes;
        }

        class ComponentAttributeValue
        {
            string AttributeName;
            object AttributeValue;
        }

    Others describe components as an in-memory "table": an entity acquires a component by having its key placed in the table, and the attributes of the component-entity instance are like the columns of that table:

        class Renderable_Component
        {
            List<RenderableComponentAttributeValue> _entities;
        }

        class RenderableComponentAttributeValue
        {
            Guid entityId;
            matrix4 transformation;
            // other stuff for rendering
            // everything is strongly typed
        }

    Others describe this literally as a table (and such tables sound like an EAV database schema, BTW, with the value again serialized "somehow"):

        Render_Component_Table
        ----------------------
        Entity Id | Attribute Name | Attribute Value

    which, when brought into running code, becomes:

        class Entity
        {
            Guid _id;
            Dictionary<string, object> _attributes;
        }

    My specific question is: given various components (Renderable, Positionable, Explodeable, Hideable, etc.), and given that each component has attributes with particular names (TRANSLATION_MATRIX, PARTICLE_EMISSION_VELOCITY, CAN_HIDE, FAVORITE_COLOR, etc.), should:
    - an entity contain a list of components, where each component in turn has its own array of named attributes with values serialized somehow; or
    - components exist as in-memory tables of entity references, where each "row" has "columns" for the attributes, with values that are specific to each entity instance and strongly typed; or
    - all attributes be stored in an entity as a single array of named attributes with values serialized somehow (which could have name collisions); or
    - something else?

    Read the article

  • What partition to use to keep data files in Ubuntu?

    - by Martin Lee
    I have been using Ubuntu for a few years, and usually my partition setup was the following: an Ext3 or Ext4 partition for the system itself (20 GB); a 10 GB swap partition; and a big FAT32 partition to store movies, photos, work stuff, etc. (it depends on the capacity of the disk, but usually it is whatever is left after Ext3 + swap; currently it is more than 200 GB). Does this setup sound right? I am considering switching to one big Ext3 partition now, because the problems with FAT32 in Ubuntu have not gone anywhere: for example, right now I can access my 'big' partition with a 'Data' label only through /media/_themes?END -- a pretty strange name for a partition, isn't it? Some Linux software fails to read/write on this partition; for example, if I want to play around with rebar and build/make/compile things on this FAT32 partition, it will always complain about permissions and won't work (the same goes for many other kinds of software). It is not stable either: I cannot refer to some files on this FAT32 partition, because after the next reboot it will be called not '_themes?END' but something else. On the other hand, I usually begin to run out of space on the Ext3 partition after a few months of usage. So, the question is: what is the best partition setup for an Ubuntu system? Should a FAT32 partition be used at all?

    Read the article

  • An adequate message authentication code for REST

    - by Andras Zoltan
    My REST service currently uses SCRAM authentication to issue tokens for callers and users. We have the ability to revoke caller privileges and ban IPs, as well as to impose quotas on any type of request. One thing that I haven't implemented, however, is a MAC for requests. As I've thought about it more, for some requests I think this is needed, because otherwise tokens can be stolen and, before we identify this and deactivate the associated caller account, some damage could be done to our user accounts. In many systems the MAC is generated from the body or query string of the request; however, this is difficult to implement, as I'm using the ASP.NET Web API and don't want to read the body twice. Equally importantly, I want to keep it simple for callers to access the service. So what I'm thinking is to have a MAC calculated over: the URL, possibly minus the query string; the verb; the request IP (potentially a barrier on some mobile devices, though); and the UTC date and time when the client issues the request. For the last one I would have the client send that string in a request header, of course, and I can use it to decide whether the request is 'fresh' enough. My thinking is that whilst this doesn't prevent message-body tampering, it does prevent a malicious third party from later reusing a captured request as a template for different requests. I believe only the most aggressive man-in-the-middle attack would be able to subvert this, and I don't think our services offer any information or ability valuable enough to warrant that. The services will use SSL as well, for sensitive stuff. And if I do this, then I'll be using HMAC-SHA-256 and issuing private keys for the HMAC appropriately. Does this sound like enough? Have I missed anything? I don't think I'm a beginner when it comes to security, but when working on it I am always shrouded in doubt, so I appreciate having this community to call upon!
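
    For illustration, a minimal sketch of computing such a MAC with HMAC-SHA-256 over the fields listed above; the canonical string layout and header conventions are assumptions, not part of the original design:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class RequestSigner
        {
            // Joins the signed fields with newlines so they cannot bleed into each other,
            // then returns a Base64 HMAC-SHA-256 over that canonical string.
            public static string ComputeMac(string secretKey, string verb, string url,
                                            string clientIp, string utcTimestamp)
            {
                string canonical = string.Join("\n",
                    verb.ToUpperInvariant(), url, clientIp, utcTimestamp);

                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secretKey)))
                {
                    byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(canonical));
                    return Convert.ToBase64String(hash);
                }
            }
        }

    The client would send the timestamp and the Base64 MAC as request headers; the server recomputes the MAC with the caller's key and rejects requests that are stale or do not match.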

    Read the article

  • Advice: How to convince my newly anointed team lead against rewriting the code base from scratch

    - by shan23
    I work in a pretty renowned MNC, and the module that I work in has been assigned to a new "lead". The code base is pretty huge (~130K lines or more, with interdependencies on other modules), but stable -- some parts have grown ugly over the years, but it is provably in a working state (our products have been running on it for years, even new ones). The problem is, our lead wants to rewrite the code from scratch, to encompass "finer granularity and a proactive design". I know in my gut that's not a very good idea, but how do I convince him and the rest of the team (who are pretty much all more senior than me in terms of years of experience) without sounding too pedantic myself? ("Thou shalt not rewrite", as Joel et al. have written clear articles prohibiting it.) I have a good working relationship with the person concerned and don't want to ruin it, but neither do I want to be party to a decision which would surely plague us for years to come! Any suggestions for a milder, yet effective approach? Even accounts of how you have tackled such a situation to your liking would help me a lot! EDIT: The code base I'm talking about is not a product/GUI, but kernel-level code with all the critical functionality for our product. I hope now you know why I sound so apprehensive!

    Read the article

  • What is the best type of c# timer to use with an Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3D game in Unity that will have anywhere from 1 to 200 timers running simultaneously. For this game, timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers. An object will be selected, a menu choice will then be selected, and the timer will start. Several events will occur at different intervals during the duration of the timer. The events will be confined to changing the material of the selected object and playing a 1-second sound effect like a chime or a bell. If the user wants to save or end the game before all the timers are done, the start times of the still-running timers are to be saved to an XML file, so that when the game is started again a calculation can be done to see whether each timer has since finished, in which case the game will change the materials appropriately. I am still trying to figure out what type of timer to use, and I would also appreciate any suggestions for saving and calculating times that span several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?
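
    One common approach for timers of this length (a sketch, not a definitive answer): don't keep 200 live timer objects at all, but store a UTC start time plus a duration for each timer and compare against DateTime.UtcNow from a single Update loop. The class and field names below are hypothetical:

        using System;
        using System.Collections.Generic;
        using UnityEngine;

        [Serializable]
        public class GameTimer
        {
            public string TargetObjectId;   // which object's material to change
            public DateTime StartUtc;       // write this to the XML save file
            public TimeSpan Duration;       // anywhere from 5 minutes to 4 days

            public bool IsFinished => DateTime.UtcNow - StartUtc >= Duration;
            public double Progress =>
                Math.Min(1.0, (DateTime.UtcNow - StartUtc).TotalSeconds / Duration.TotalSeconds);
        }

        public class TimerManager : MonoBehaviour
        {
            public List<GameTimer> Timers = new List<GameTimer>();

            void Update()
            {
                foreach (var t in Timers)
                {
                    // Fire material changes / chimes at whatever Progress thresholds
                    // the game defines; finished timers can then be removed.
                    if (t.IsFinished) { /* swap material, play sound, flag for removal */ }
                }
            }
        }

    Because the state is just timestamps, reloading a save only requires re-reading StartUtc and Duration and re-evaluating IsFinished; no timer has to "run" while the game is closed.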

    Read the article

  • Why doesn't my IDE do background compiling/building?

    - by MKO
    Today I develop on a fairly complex computer: it has multiple cores, SSD drives, and what not. Still, most of the time I'm programming, the computer is leisurely doing nothing. When I need to compile and run/deploy a somewhat complex project, at best it still takes a couple of seconds. Why? Now that we're living more and more in the "age of instant", why can't I press F5 in Visual Studio and launch/deploy my application instantly? A couple of seconds might not sound so bad, but it's still cognitive friction and time that adds up, and frankly it makes programming less fun. So how could compilation be instant? Well, people tend to edit files in different assemblies; what if Visual Studio (or the IDE) constantly compiled and built everything that I modified, any time it might be appropriate? Heck, if they wanted to go really advanced, they could do per-class compilation. The compilation might not work, but then it could just silently do nothing (except add error messages to the error window). Surely today's computers could dedicate a core or two to this task, and if someone found it annoying it could be disabled by an option. I know there are probably a thousand technical issues and some fancy shadow copying that would need to be resolved for this to be seamless and practical, but it sure would make programming more seamless. Is there any practical reason why this scenario isn't possible? Would the wear and tear of continually writing binaries be too much? Couldn't assemblies be held in memory until deployed/run?

    Read the article

  • Was API hooking done as needed for Stuxnet to work? I don't think so

    - by The Kaykay
    Caveat: I am a political science student and I have tried my level best to understand the technicalities; if I still sound naive, please overlook that. In the Symantec report on Stuxnet, the authors say that once the worm infects a 32-bit Windows computer that has a WinCC setup on it, Stuxnet does many things, and that it specifically hooks the function CreateFileA(). This function is the route the worm uses to actually infect the .s7p project files that are used to program the PLCs; i.e. when the PLC programmer opens a file with .s7p, control transfers to the hooked function CreateFileA_hook() instead of CreateFileA(). Once Stuxnet gains control, it covertly inserts code blocks into the PLC without the programmer's knowledge and hides them from his view. However, it should be noted that there is also another function, CreateFileW(), which does the same task as CreateFileA(), but the two work on different character sets: CreateFileA works with the ASCII character set and CreateFileW works with wide characters, i.e. the Unicode character set. Farsi (the language of the Iranians) is a language that needs the Unicode character set, not ASCII characters. I'm assuming that the developers of any famous commercial software (for example WinCC) that will be sold in many countries will take localization and/or internationalization into consideration while it is being developed, in order to make the product fail-safe; i.e. the software developers would use Unicode while compiling their code and not just ASCII. Thus, I think that CreateFileW() would have been invoked on a WinCC system in Iran instead of CreateFileA(). Do you agree? My question is: if Stuxnet hooked only the function CreateFileA(), then based on the above assumption isn't there a significant chance that it did not work at all? I think my doubt will be clarified if my assumption is proved wrong, or if the Symantec report is proved incorrect. Please help me clarify this doubt. Note: I had posted this question on the general Stack Exchange website and did not get the kind of responses I was looking for, so I'm posting it here.

    Read the article

  • Can't install Ubuntu, black screen after install

    - by Tyrone
    I tried several times, but wasn't able to make it work. I tried all recent versions of Ubuntu, but it didn't work. Then I tried acpi=off at the beginning of the installation; this way I could finish the installation, but after the restart Ubuntu didn't work -- only a black screen appeared. Before that I tried it in VirtualBox and it worked. By the way, my system is the following (I use Windows 7 currently):
    Processor: AMD Athlon II P320 (2.1 GHz, second-level cache 2 × 512 KB, HT 1600 MHz bus)
    Chipset: AMD M880G + SB850
    Memory: dual channel, 3 GB DDR3-1066
    Screen: 15.6" wide high-definition (1366 × 768) with LED backlight, AU Optronics B156XW02
    Video card: AMD Radeon HD 4250, with 336 MB video buffer in memory, support for DirectX 10.1 and UVD
    Sound system: HDA codec IDT 92HD81B1X, AMD HDMI Audio
    Hard drive: WDC WD3200BEVT-75A23T0 (298 GB, 5400 RPM, SATA 2.0)
    Optical drive: DVD±RW Optiarc AD-7585H
    Communication: Fast Ethernet (10/100 Mbit/s) Realtek RTL8102E/RTL8103E, WiFi 802.11a/b/g Broadcom BCM4310, Bluetooth 2.1 + EDR
    Card reader: 7-in-1 memory card reader with support for SD/SDHC/MMC/MS/xD and derivatives
    Interfaces/ports: 3 × USB 2.0, 1 × eSATA + USB 2.0, 15-pin VGA video connector, HDMI, RJ-45 Ethernet 10/100 Mbit/s, 2 analog mini-jacks (microphone / headphone), slot for a Kensington lock, AC adapter
    Battery: Li-Ion, 6 cells, capacity 4400 mA·h (10.8 V, 48 W·h)
    AC power adapter: 65 W
    Additional equipment: integrated web camera (1.3 megapixels)

    Read the article

  • How to highlight non-rectangular hotspots?

    - by HuseyinUslu
    So my question is highly related to "Creating non-rectangular hotspots and detecting clicks". Yet again, I have irregular hot-spots (think the game Risk). Basically, we can detect clicks on these hot-spots easily using color-key mapping, as discussed in the question above, which I don't have any problem implementing (it is also covered here in detail). The problem is about highlighting these irregular hot-spots. Let me explain the question a bit more: the color-key mapping guide above uses a world map, and the author color-maps the imaginary countries so we can detect which country the pointer is over. In the same article the author mentions outlining countries on mouse-over, but to get that effect he creates a unique border asset for each country. For the game I'm working on I'm using the same color-key mapping idea to detect hot-spots, but I didn't like that way of highlighting them. Coloring all the hot-spots is already a time-consuming job for me, as I have 25+ hot-spots for each map, and needing 25+ unique border/highlight assets, one per hot-spot, doesn't sound right. Does anyone have a better idea or suggestion for highlighting these hot-spots?
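
    One option (a sketch, not a tested solution): reuse the color-key map itself to build the highlight at runtime, so no per-country assets are needed. The example below uses System.Drawing only to keep it self-contained; a game engine would run the same per-pixel test on its own texture type, and the resulting overlay can be cached per hot-spot or done in a shader:

        using System.Drawing;

        static class HotspotHighlighter
        {
            // Returns a transparent bitmap in which only the pixels whose color
            // matches the selected hotspot's key color are tinted.
            public static Bitmap BuildOverlay(Bitmap colorKeyMap, Color hotspotKey, Color highlight)
            {
                var overlay = new Bitmap(colorKeyMap.Width, colorKeyMap.Height);
                for (int y = 0; y < colorKeyMap.Height; y++)
                {
                    for (int x = 0; x < colorKeyMap.Width; x++)
                    {
                        Color c = colorKeyMap.GetPixel(x, y);
                        if (c.R == hotspotKey.R && c.G == hotspotKey.G && c.B == hotspotKey.B)
                            overlay.SetPixel(x, y, highlight);   // a semi-transparent tint also works
                    }
                }
                return overlay;   // drawn on top of the real map while that hotspot is hovered
            }
        }

    GetPixel/SetPixel is slow, so the overlays would be generated once (at load time or on first hover) rather than every frame.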

    Read the article

  • Farmyard

    - by Richard Jones
    Moooooooo. For a while now we've been using Apple's enterprise device app distribution mechanism. This allows a user to click on a URL on their iOS device and pull down a new version of an enterprise app off of our servers. It's really nice; have a look at http://developer.apple.com/library/ios/#featuredarticles/FA_Wireless_Enterprise_App_Distribution/Introduction/Introduction.html. I've embedded this into a check on application launch: a web service is called to detect whether a newer version of the software is available; it then calls the URL to the app and a new version is deployed. You can alert users that a new app update is available by sending them a push notification (see the screenshot at the top). We send our push notifications out to users using a simple C# service. The fun part is this: you can instruct the push notification to play a sound (already embedded in the app). So our push notifications play a random farmyard noise, i.e. one from a selection of cow.wav, dogbrk.wav, duck.wav, goose.wav, horse.wav, lamb.wav, monkey.wav (left field, I know), and rooster.wav. Imagine my amusement at being able to periodically send out an update and watch our office (of about 60 people) turn into a farm for a few seconds. I've messed up a few times, with people being interrupted on customer conference calls, but people seem good-humoured about it (so far). Simple(ish) pleasures…
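
    For the curious, a sketch of how the notification payload might pick its sound (the APNs "aps" dictionary accepts a "sound" key naming a file bundled in the app; the helper names below are made up, and delivery to Apple's push gateway is omitted):

        using System;

        static class FarmyardPush
        {
            static readonly string[] Sounds =
            {
                "cow.wav", "dogbrk.wav", "duck.wav", "goose.wav",
                "horse.wav", "lamb.wav", "monkey.wav", "rooster.wav"
            };

            static readonly Random Rng = new Random();

            // Builds the JSON payload sent to APNs for one device token.
            public static string BuildPayload(string message)
            {
                string sound = Sounds[Rng.Next(Sounds.Length)];
                // Hand-built JSON for brevity; a real service would use a serializer.
                return "{\"aps\":{\"alert\":\"" + message + "\",\"sound\":\"" + sound + "\"}}";
            }
        }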

    Read the article

  • Are there any concrete examples where a parallelizing compiler would provide a value-adding benefit?

    - by jamie
    Paul Graham argues that: "It would be great if a startup could give us something of the old Moore's Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. ... The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There's a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility." But is it really impossible? Can someone provide a concrete example where a parallelizing compiler would solve a pain point? Web apps don't appear to be a problem: just run a bunch of Node processes. Real-time ray tracing isn't a problem: the programmers are writing multi-threaded, SIMD assembly language quite happily (indeed, some might complain if we made it easier!). The holy grail is to be able to accelerate any program, be it MySQL, GarageBand, or Quicken. I'm looking for a middle ground: is there a real-world problem that you have experienced where a "smart enough" compiler would have provided a real benefit, i.e. one that someone would pay for? A good answer is one where there is a process in which the computer runs at 100% CPU on a single core for a painful period of time. That time might be 10 seconds, if the task is meant to be quick. It might be 500 ms if the task is meant to be interactive. It might be 10 hours. Please describe such a problem. Really, that's all I'm looking for: candidate areas for further investigation. (Hence, ray tracing is off the list, because all the low-hanging fruit have been feasted upon.) I am not interested in why it cannot be done. There are a million people willing to point to the sound reasons why it cannot be done. Such answers are not useful.

    Read the article

  • Problems with Ubuntu and AMD A10-4655M APU

    - by Robert Hanks
    I have a new HP Sleekbook 6z with an AMD A10-4655M APU. I tried installing Ubuntu with Wubi -- the first attempt ended up with an 'AMD unsupported hardware' watermark that I wasn't able to remove (it appeared when I tried to update the drivers as Ubuntu suggested). On the second attempted install Ubuntu installed (I stayed away from the suggested drivers), but the performance was extremely poor -- as in Windows Vista poor. I am not sure what the solution is -- whether I need to wait for a kernel update in Ubuntu or whether there are other solutions -- I realise this is a new APU on the market. I would love to have Ubuntu 12.04 up and running -- Windows 7 does very well with this new processor, so Ubuntu should, well, be lightning fast. The trial on the Sleekbook with the Ubuntu 12.10 Alpha 2 release was a complete failure. I created a bootable USB; using either the 'Try Ubuntu' or 'Install Ubuntu' option resulted in the usual purple Ubuntu splash screen, followed by nothing -- as in a black screen without any hint of life. Interestingly, one can hear the Ubuntu intro sound. In case you are wondering, this same USB was subsequently tried on another computer with an Intel Atom processor and worked flawlessly. Lastly, a second trial on the Sleekbook produced the same results as described in the first paragraph. Perhaps 12.10 Beta will overcome this issue, or the finalised 12.10 release in October. I don't have the expertise to know what the cause of this behaviour is -- the issue could be something else entirely. Sadly, the Windows 7 performance is very good with this processor -- very similar to, and in some instances better than, the 2nd-generation Intel i5-based computer I use at my workplace. Whatever the cause of the poor performance with Ubuntu 12.04 or 12.10 Alpha 2, the situation doesn't bode well for Ubuntu. Ubuntu aside, the HP Sleekbook is a good performer for the price. I am certain that once the Ubuntu issue is worked on and solutions arise, the Ubuntu performance will be better than ever.

    Read the article

  • How do I change Clementine's play/pause indicator icons?

    - by MHC
    This is how the Clementine indicator displays play/pause: It's a minor detail, but I feel that the play and pause icons just don't go with the monochrome design of the panel. In order to change them I tried to locate all files associated with clementine, but to no avail. Here's the output:
    /home/user/.config/Clementine/clementine.db
    /usr/bin/clementine
    /usr/share/app-install/desktop/clementine:clementine.desktop
    /usr/share/app-install/icons/application-x-clementine.png
    /usr/share/applications/clementine.desktop
    /usr/share/doc/clementine
    /usr/share/doc/clementine/README.Debian
    /usr/share/doc/clementine/changelog.Debian.gz
    /usr/share/doc/clementine/copyright
    /usr/share/icons/hicolor/64x64/apps/application-x-clementine.png
    /usr/share/icons/hicolor/scalable/apps/application-x-clementine.svg
    /usr/share/icons/ubuntu-mono-dark/apps/24/clementine-panel-grey.png
    /usr/share/icons/ubuntu-mono-dark/apps/24/clementine-panel.png
    /usr/share/icons/ubuntu-mono-light/apps/24/clementine-panel-grey.png
    /usr/share/icons/ubuntu-mono-light/apps/24/clementine-panel.png
    /usr/share/man/man1/clementine.1.gz
    /usr/share/menu/clementine
    /usr/share/pixmaps/clementine-16.xpm
    /usr/share/pixmaps/clementine.xpm
    /var/lib/dpkg/info/clementine.list
    /var/lib/dpkg/info/clementine.md5sums
    /var/lib/dpkg/info/clementine.postinst
    /var/lib/dpkg/info/clementine.postrm
    /var/lib/menu-xdg/applications/menu-xdg/X-Debian-Applications-Sound-clementine.desktop
    Can anyone tell me where to find these icons and how to change them?

    Read the article

  • What is the aim of this email? Is this a ping/sping? [closed]

    - by mplungjan
    Hi, I received this spam in my catch-all. As webmaster of the domain it was sent to, I am really curious what the purpose of this mail is. It was sent to a non-existent user "tania" on my domain (here I used mydomain.zzz): what does the sender want to achieve? Since many mail servers have stopped backscattering, not getting a bounce would not mean anything, would it? And if this is off topic, where in the Stack Exchange network WOULD it be on topic?

    Delivered-To: [email protected]
    Received: (qmail 8015 invoked from network); 27 Jan 2011 02:32:47 -0000
    Received: from unknown (HELO p3pismtp01-021.prod.phx3.secureserver.net) ([10.6.12.26]) (envelope-sender <[email protected]>) by smtp35.prod.mesa1.secureserver.net (qmail-1.03) with SMTP for <[email protected]>; 27 Jan 2011 02:32:47 -0000
    X-IronPort-Anti-Spam-Result: At4FAAlnQE1GVjtCVGdsb2JhbACWXo4gCwEWCA0YJLwyhU8EhRc
    Received: from mx.dt3ls.com ([70.86.59.66]) by p3pismtp01-021.prod.phx3.secureserver.net with ESMTP; 26 Jan 2011 19:32:47 -0700
    Received: from 70.86.59.66 by mx.dt3ls.com (Merak 8.9.1) with ASMTP id JXF39710 for <[email protected]>; Wed, 26 Jan 2011 17:31:10 -0500
    Return-Path: [email protected]
    Status:
    Message-ID: <20110126173109.4d9d6c3f2b@1c3c>
    From: "Tech Support" <[email protected]>
    To: <[email protected]>
    Subject: Information, as instructed.
    Date: Wed, 26 Jan 2011 17:31:09 -0500
    X-Priority: 3
    X-Mailer: General-Mailer v.3
    MIME-Version: 1.0
    Content-Type: text/plain; charset="us-ascii"
    Content-Transfer-Encoding: 7bit

    Quote: I give it to you not that you may remember time, but that you might forget it now and then for a moment and not spend all your breath trying to conquer it. Because no battle is ever won he said. They are not even fought. The field reveals to a man his own folly and despair, and victory is an illusion of philosophers and fools. William Faulkner, The Sound and the Fury

    Read the article

  • Automatic Generalization

    - by Nick Harrison
    I have been interested in functional programming since college. I played around a little with Lisp back then, but I have not had an opportunity since. Now that F# ships as standard with VS 2010, I figured now is my chance. So I was reading up on it a little over the weekend when I came across a very interesting topic: F# includes a concept called "automatic generalization". As I understand it, the compiler will look at your method and analyze how you are using its parameters. It will automatically switch to a generic parameter if that is possible based on your usage. Wow! I am looking forward to playing with this. I have long been an advocate of using the most generic types possible, especially when developing library classes: use the highest-level base class that you can get away with; use an interface instead of a specific implementation. I don't advocate passing object around, but you get the idea. Tools like ReSharper, FxCop, and most static code analysis tools provide guidance to help you identify when a more generalized type is possible, but this is the first time I have heard of the compiler taking matters into its own hands. I like the sound of this. We'll see whether it is a good idea or not. What are your thoughts? Am I missing the mark on what automatic generalization does in F#? How would this work in C#? Do you see any problems with this?
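
    To make the comparison concrete, here is a rough C# illustration of the difference being described (an illustrative assumption, not taken from the F# documentation): in C# the author has to introduce the type parameter and its constraint by hand, which is roughly what the F# compiler infers for you when a function's parameters are only used in a generalizable way.

        using System;

        static class Compare
        {
            // What you might write first: works only for int.
            public static int Max(int a, int b) => a > b ? a : b;

            // The hand-generalized version a C# author must spell out explicitly:
            // the type parameter and the comparison constraint are written by hand.
            public static T Max<T>(T a, T b) where T : IComparable<T>
                => a.CompareTo(b) > 0 ? a : b;
        }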

    Read the article

  • Impossible to select folders and files with mouse (Ubuntu 12.04)

    - by François
    First post for me here (after being a regular reader for two years, though), so thank you all for the quality of the replies and help provided. My problem is apparently very simple but a tricky one. I just installed Ubuntu 12.04.1 along with the GNOME 3 Shell environment on my new desktop PC, an Acer Aspire X3995 (see config below). Everything works (more or less) so far -- I still have problems with sound, and two-finger gestures are disabled on my screen, which I think I will have to deal with through X configuration settings -- but the main problem is that I cannot select files or folders with my USB mouse. When I try to double-click on them, nothing happens (sometimes a folder or file is selected but then deselected again). Note that navigation works perfectly from the USB keyboard and from the touch screen (I am using a 23" wide touch-screen Acer monitor, T231Hbmid). The mouse also works perfectly with other menu navigation, with the only difference that the text of certain menus gets selected as if I were holding down the left button on them. So I assume the problem is related only to the mouse. Needless to say, the usual basic hardware checks have been performed (unplugging, powering off, etc.). My level is simply "advanced user", meaning that if you provide me with intelligible input I should find my way, but please don't expect too much technical or specific knowledge... :) Please let me know if you need more information on this bug. Now, fingers crossed... and thanks in advance! Ciao, François. Config of Acer Aspire X3995: Ubuntu 12.04 / GNOME 3 Shell environment / Intel Core i5 3450 / NVIDIA GeForce 605, 1 GB. Screen: Acer 23" wide TFT monitor, T231Hbmid.

    Read the article

  • Now Available: Profit November 2012

    - by user462779
    The November 2012 issue of Profit is now available. In the five years I've worked on Profit, there has been measurable interest in content related to project management. Stories featuring project management as a key component have resulted in extra clicks, likes, and RTs (for you Twitter users) from our readers. I've chatted about this with Oracle customers, partners, and experts and received an assortment of ideas about why this might be. This issue of Profit is a bit of a culmination of those conversations, and the trends that are driving interest in project management best practices. Also, two online developments for Profit: check out my newly relaunched blog, Editor's Notebook, at blogs.oracle.com/profit, where readers can get a peek at the development of each issue of Profit as it happens. We've also launched a new LinkedIn group for our social media-inclined readers. In this issue:
    Three Keys to Project Management -- What can organizations with world-class project management teach the rest of us?
    Strong Medicine -- Gilead Sciences simplifies business processes to establish a foundation for continued growth.
    Architects of Reform -- Enterprise architecture plays an essential role in establishing Oregon as a leader in healthcare reform.
    Answering the Call -- Turkcell CIO Ilker Kuruoz finds IT-powered growth and innovation to be the calling card for success.
    Projected Results -- Sound project management practices and technology can have an immediate impact on the bottom line.
    Preparing for Impact -- Plans for dealing with enterprise information will define the big data winners.
    Is one issue of Profit not enough to get you through to February? Visit the Profit archives, or follow @OracleProfit on Twitter for a daily dose of enterprise technology news from Profit.

    Read the article

  • AllSparkCube Packs 4,096 LEDs into a Giant Computer Controlled Display

    - by Jason Fitzpatrick
    LED matrix cubes are nothing new, but this 16x16x16 monster towers over the tiny 4x4x4 desktop variety. Check out the video to see it in action. Sound warning: the music starts off very loud and bass-heavy; we'd recommend turning down the speakers if you're watching from your cube. So what compels someone to build a giant LED cube driven by over a dozen Arduino shields? If you're the employees at Adaptive Computing, you do it to dazzle crowds and show off your organizational skills: "Every time I talk about the All Spark Cube people ask 'so what does it do?' The features of the All Spark are the reason it was built and sponsored by Adaptive Computing. The Cube was built to catch peoples' attention and to demonstrate how Adaptive can take a chaotic mess and inject order, structure and efficiency. We wrote several examples of how the All Spark Cube can demonstrate the effectiveness of a complex data center." If you're interested in building a monster of your own, hit up the link below for more information, schematics, and videos.

    Read the article

  • Is creating a separate pool for each individual image created from a png appropriate?

    - by Panzercrisis
    I'm still possibly a little green about object pooling, and I want to make sure something like this is a sound design pattern before really embarking upon it. Take the following code (which uses the Starling framework in ActionScript 3):

        [Embed(source = "/../assets/images/game/misc/red_door.png")]
        private const RED_DOOR:Class;
        private const RED_DOOR_TEXTURE:Texture = Texture.fromBitmap(new RED_DOOR());
        private const m_vRedDoorPool:Vector.<Image> = new Vector.<Image>(50, true);

        . . .

        public function produceRedDoor():Image
        {
            // get a Red Door image
        }

        public function retireRedDoor(pImage:Image):void
        {
            // retire a Red Door Image
        }

    Except that there are four colors: red, green, blue, and yellow. So now we have a separate pool for each color, a separate produce function for each color, and a separate retire function for each color. Additionally, there are several items in the game that follow this four-color pattern, so for each of them we have four pools, four produce functions, and four retire functions. There are more colors involved in the images themselves than just their predominant one, so trying to throw all the doors, for instance, into a single pool and then changing their color properties around isn't going to work. Also, the nonexistence of the static keyword is due to its slowness in AS3. Is this the right way to do things?

    Read the article

  • What data should be cached in a multiplayer server, relative to AI and players?

    - by DevilWithin
    In a virtual world, fully network-driven, with an arbitrary number of players and an arbitrary number of enemies, what data should be cached in server memory in order to keep AI simulation running smoothly? To explain: let's say player A sees players B to E and enemies A to G. Each of those players sees player A, but not necessarily each other; the same applies to enemies. Please think of this question from a top-down perspective. In many cases, for example when a player shoots his gun, the server handles the sound as a radial "signal" that every other entity within reach "hears" and reacts upon. Doing these searches all the time for a whole area, containing possibly a lot of unrelated players and enemies, seems to be an issue when the budget for each AI agent is so small. Should every entity cache whatever enters and exits its radius of awareness? Is there a good way to track the entities nearby without flooding memory with such caches? And what about other AI-related problems that may arise, assuming the previous one is handled well? We're talking about environments with possibly hundreds of enemies -- a swarm.
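
    One common building block for this (a sketch assuming a 2D world; all names are made up): a uniform spatial grid on the server, so a radial query like "who hears this gunshot within radius R" only scans nearby cells instead of every entity in the area. Each AI agent can then cache the result of its awareness query for a few ticks instead of re-running it every frame.

        using System;
        using System.Collections.Generic;

        class SpatialGrid<T>
        {
            readonly float _cellSize;
            readonly Dictionary<(int, int), List<(T item, float x, float y)>> _cells
                = new Dictionary<(int, int), List<(T item, float x, float y)>>();

            public SpatialGrid(float cellSize) { _cellSize = cellSize; }

            (int, int) CellOf(float x, float y) =>
                ((int)Math.Floor(x / _cellSize), (int)Math.Floor(y / _cellSize));

            public void Insert(T item, float x, float y)
            {
                var key = CellOf(x, y);
                if (!_cells.TryGetValue(key, out var list))
                    _cells[key] = list = new List<(T item, float x, float y)>();
                list.Add((item, x, y));
            }

            // Everyone within 'radius' of (x, y): only the overlapping cells are scanned.
            public IEnumerable<T> Query(float x, float y, float radius)
            {
                var (minX, minY) = CellOf(x - radius, y - radius);
                var (maxX, maxY) = CellOf(x + radius, y + radius);
                for (int cx = minX; cx <= maxX; cx++)
                    for (int cy = minY; cy <= maxY; cy++)
                        if (_cells.TryGetValue((cx, cy), out var list))
                            foreach (var (item, ix, iy) in list)
                                if ((ix - x) * (ix - x) + (iy - y) * (iy - y) <= radius * radius)
                                    yield return item;
            }
        }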

    Read the article
