Search Results

Search found 18775 results on 751 pages for 'old hardware'.

  • Question on methods in Object Oriented Programming

    - by mal
    I'm learning Java at the minute (my first language), and as a project I'm looking at developing a simple puzzle game. My question relates to the methods within a class. I have my Block class; it has many attributes, setters, getters and plain methods. There are quite a few. Then I have my main board class. At the moment it does most of the logic: positioning of sprites, collision detection, and then drawing the sprites, etc.

    As I'm learning to program as much as I'm learning to program games, I'm curious how much code is typically acceptable within a given method. Is there such a thing as having too many methods? All my draw functionality happens in one method; should I break this into a few 'sub' methods? My thinking is that if I later find that the for loop cycling through the array of sprites searching for collisions in the spriteCollision() method is inefficient, I can write a new method and just replace the old method calls with the new one, leaving the old code intact. Is it bad practice to have a method that contains one if statement, and place the call for that method in the for loop?

    I'm very much in the early stages of coding/designing and I need all the help I can get! I find it a little intimidating when people talk about throwing together a prototype in a day, too. Can't wait until I'm that good!
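
    As a point of reference for those last two questions, here is a minimal, self-contained Java sketch of extracting a one-if collision check into its own small method. All of the names (Board, Sprite, collidesWith and so on) are hypothetical placeholders, not taken from the post:

        import java.util.ArrayList;
        import java.util.List;

        // A minimal sketch of extracting a per-sprite collision check into a
        // small helper method. Every name here is a hypothetical placeholder.
        public class Board {
            static class Sprite {
                int x, y, w, h;
                Sprite(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; }
                boolean collidesWith(Sprite o) {
                    return x < o.x + o.w && o.x < x + w && y < o.y + o.h && o.y < y + h;
                }
            }

            private final List<Sprite> sprites = new ArrayList<>();
            private final Sprite player = new Sprite(0, 0, 10, 10);

            // The loop stays readable and is easy to swap out later.
            void checkCollisions() {
                for (Sprite s : sprites) {
                    handleCollisionWith(s);
                }
            }

            // A method wrapping a single if statement is normal style, not bad
            // practice: the JIT inlines small methods, and replacing the naive
            // check with a smarter algorithm later means editing only this body.
            private void handleCollisionWith(Sprite s) {
                if (player.collidesWith(s)) {
                    // react to the hit, e.g. remove the sprite or end the game
                }
            }
        }

    The same reasoning applies to the draw method: splitting it into a few well-named sub-methods costs essentially nothing at runtime and keeps each piece replaceable on its own.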

  • How do you track existing requirements over time?

    - by CaptainAwesomePants
    I'm a software engineer working on a complex, ongoing website. It has a lot of moving parts and a small team of UI designers and business folks adding new features and tweaking old ones. Over the last year or so, we've added hundreds of interesting little edge cases. Planning, implementing, and testing them is not a problem. The problem comes later, when we want to refactor or add another new feature. Nobody remembers half of the old features and edge cases from a year ago. When we want to make a new change, we notice that the code does all sorts of things, and we're not entirely sure which of those things are intentional requirements and which are meaningless side effects. Did someone last year request that the login token only be valid for 30 minutes, or did some programmer just pick a sensible default? Can we change it?

    Back when the product was first envisioned, we created some documentation describing how the site worked. Since then we've created a few additional documents describing new features, but nobody ever goes back and updates those documents when changes are requested, so the only authoritative documentation is the code itself. But the code provides no justification, no reason for its actions: only the how, never the why. What do other long-running teams do to keep track of what the requirements were, and why?

  • Change XAMPP's htdocs web root folder to another one

    - by vitto
    I'm trying to change XAMPP's default web root directory /opt/lampp/htdocs to another one, like /home/me/Dropbox/public_html, without success. I've edited the file /opt/lampp/etc/httpd.conf:

        # old line: DocumentRoot "/opt/lampp/htdocs"
        DocumentRoot "/home/me/Dropbox/public_html"
        # ...etc...
        # old line: <Directory "/opt/lampp/htdocs">
        <Directory "/home/me/Dropbox/Work/public_html">
        #
        # Possible values for the Options directive are "None", "All",
        # or any combination of:
        #   Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
        # etc...

    I did this as described in this article: Using Ubuntu One to synchronise htdocs? Then I restarted Apache and got a 403 permission error on every page I requested with the web browser. So I changed the folder and file permissions to 755, as described in this article: What file permissions should I set on web root? The problem remains the same: I get the 403 error on every page I try to reach with the web browser. I have the same problem on a Mac using XAMPP. Everything works fine if the folder remains the original /opt/lampp/htdocs. How can I change it correctly?

  • I can't hear any sounds on Ubuntu 11.10 on Dell Inspiron N5010

    - by Ahmed
    I have a problem: I can't hear any sounds, and I don't know where to start. I did the following:

        lspci -v | grep -A7 -i "audio"
        00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
                Subsystem: Dell Device 0447
                Flags: bus master, fast devsel, latency 0, IRQ 48
                Memory at fbf00000 (64-bit, non-prefetchable) [size=16K]
                Capabilities: <access denied>
                Kernel driver in use: HDA Intel
                Kernel modules: snd-hda-intel
        --
        01:00.1 Audio device: ATI Technologies Inc Manhattan HDMI Audio [Mobility Radeon HD 5000 Series]
                Subsystem: Dell Device 0447
                Flags: bus master, fast devsel, latency 0, IRQ 49
                Memory at fbe40000 (64-bit, non-prefetchable) [size=16K]
                Capabilities: <access denied>
                Kernel driver in use: HDA Intel
                Kernel modules: snd-hda-intel

    It seems that I have two sound cards. Is that normal? I also did this:

        aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: Intel [HDA Intel], device 0: STAC92xx Analog [STAC92xx Analog]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: Generic [HD-Audio Generic], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    Also, in the sound settings GUI I have two hardware profiles for sound cards, but neither of them works when I test the speakers. Where should I start searching?

  • MySQL Workbench 5.2.39 GA Released

    - by user13164789
    The MySQL Developer Tools team is announcing the next maintenance release of its flagship product, MySQL Workbench, version 5.2.39. This version contains MySQL Utilities 1.0.5, a set of command-line Python utilities for helping to perform and script various administration tasks for MySQL. A complete list of changes in this release of the Utilities can be found at: http://dev.mysql.com/doc/workbench/en/wb-utils-news-1-0-5.html

    MySQL Workbench 5.2 GA:
    - Data Modeling
    - Query (replaces the old MySQL Query Browser)
    - Administration (replaces the old MySQL Administrator)

    Please get your copy from our Download site. Sources and binary packages are available for several platforms, including Windows, Mac OS X and Linux: http://dev.mysql.com/downloads/workbench/

    Workbench documentation can be found here: http://dev.mysql.com/doc/workbench/en/index.html
    Utilities documentation can be found here: http://dev.mysql.com/doc/workbench/en/mysql-utilities.html

    In addition to the new Query/SQL Development and Administration modules, version 5.2 features improved stability and performance, especially on Windows, where OpenGL support has been enhanced and the UI was optimized to offer better responsiveness. This release also includes improvements to the scripting capabilities of the SQL Editor. You can read more about it at http://wb.mysql.com/workbench/doc/

    For a detailed list of resolved issues, see the change log: http://dev.mysql.com/doc/workbench/en/wb-change-history.html

    If you need any additional info or help, please get in touch with us. Post in our forums or leave comments on our blog pages.

    - The MySQL Workbench Team

  • Never before had a problem with Ubuntu desktop graphical display; trying to use NVIDIA GT630

    - by focaccio
    I've been using Ubuntu since 9.04 and never had a problem with Ubuntu bringing up the desktop graphical user interface. However, I am currently not able to see anything graphical past the install screens. I have an Intel DP55KG motherboard and just installed an NVIDIA GT630 graphics card (Zotac), since the old graphics card failed. I can install the server edition and see text. So I do an apt-get install ubuntu-desktop... or apt-get install kubuntu-desktop... or apt-get install xubuntu-desktop, but after the reboot there is no display; it's like something is hung up. I tried using the live Quantal DVD, and I do see the graphical prompt to try without installing, but after that the screen goes blank. I've tried two monitors and the same thing happens. There is a faint "glow" on the screen and I do not get a "no input signal" message from the monitor, so something is happening. I can install an old OEM copy of XP, so I know the video card and motherboard are at least semi-functional. Any help is appreciated. Thanks, Greg

  • Randomly Freezes - How Can I Diagnose the Problem?

    - by j0rd4n
    At random times, Ubuntu 10.04 freezes and I have to do a hard shutdown. It was upgraded from 9.10, which didn't freeze. First, is this a common problem with a quick answer, and if not, what can I do to diagnose it? I've tried checking application/kernel logs, but nothing gives me a clue as to what caused the problem. My guess is that since the OS froze, no logs could be updated. Ideas?

    SOLUTION: Solved it. My particular problem was my graphics card (integrated Radeon 9000 series). netconsole revealed I was getting the error "reserve failed for wait". After trial and error, I manually configured my video card and disabled hardware acceleration, which completely fixed the issue. Here is what I did.

    Manually create xorg.conf. Ubuntu configures X automatically and doesn't use a file, so to edit this file you have to tell Ubuntu to explicitly create one and then edit it. Here are the steps:
    1. Restart the system
    2. Hold Shift as GRUB boots
    3. Select the root terminal in the GRUB login menu
    4. Execute: X -config xorg.conf.new
    5. Copy: cp xorg.conf.new /etc/X11/xorg.conf

    Disable hardware acceleration. The following is specific to my Radeon card, but I'm sure other cards have a similar setup:
    1. Edit xorg.conf
    2. Find the "Device" section for the graphics card
    3. Uncomment the "NoAccel" option and set it to "True"
    4. Save + reboot

    Hope that helps.

  • Brightness going up to 100% on loading certain websites in Chrome

    - by picheto
    I'm using Google Chrome version 21.0.1180.89 on Ubuntu 12.04 and my laptop is a Sony VAIO VPCCW15FL (spec sheet). My video driver is the proprietary "NVIDIA accelerated graphics driver (post-release updates) (version-current updates)". After installing Ubuntu, I discovered that neither the brightness control buttons (hardware) nor the brightness slider (software) worked, and found out I could get the hardware buttons to work by installing the nvidiabl.deb package and the oBacklight script. I'm using nvidiabl-dkms 0.77 and oBacklight 0.3.8. The slider in the Ubuntu "Settings" still does not work, but I don't care.

    There is an annoying thing happening when loading certain pages in Google Chrome: the brightness goes up to 100% when loading the webpage or when leaving it (closing the tab or typing a different URL in the omnibox). However, the "brightness tooltip" (the default brightness notification) remembers the position it was set to, so if I adjust the brightness with the hardware buttons, the level gets adjusted relative to the value it was set to before going to 100%. I disabled the Flash PPAPI plugin but left the NPAPI plugin enabled, and the problem went away for pages with Flash content. Still, the same thing happens when viewing HTML5 video, or when loading, for example, the Chrome Web Store or using the Scratchpad extension. I suppose it has to do with the rendering of certain elements using the GPU, but this is just a guess. This brightness thing does not happen when using Firefox 15.0 or any other application I have used yet. Does anybody know why this may be happening and what I could do to fix it without changing browser? Thanks a lot.

  • No MAU required on a T4

    - by jsavit
    Cryptic background. One of the powerful features of the T-series servers is hardware crypto acceleration, which dramatically speeds up the compute-intensive algorithms needed to encrypt and decrypt data. Previously, administrators setting up logical domains on older T-series servers had to explicitly assign crypto resources (called "MAU" for historical reasons, from the T1 chip that had "modular arithmetic units") to domains with a significant crypto workload (say, an SSL-based web server). This could be an administrative burden, as you had to choose which domains got the crypto units and issue the appropriate ldm set-mau N mydomain commands.

    The T4 changes things. The T4 is fast. Really fast. Its clock rate and out-of-order (OOO) execution provide the single-thread performance that T-series machines previously did not have. If you have any preconceptions about T-series performance, or SPARC in general, based on the older servers (which, it must be said, were absolutely outstanding for multi-threaded applications), those assumptions are now obsolete. The T4 provides outstanding performance for all kinds of workloads, as illustrated at https://blogs.oracle.com/bestperf. While we all focused on this (did I mention the T4 is fast?), another feature of the T4 went largely unnoticed: the T4 servers have crypto acceleration "just built in", so administrators no longer have to assign crypto accelerator units to domains; it "just happens". This is way better, since you have crypto everywhere by default without having to manage it like a discrete and limited resource. It's a feature of the processor, like doing an integer add. With T4 there is no management necessary; you just have hardware crypto everywhere, all the time, seamlessly. This change hasn't been widely advertised, and some administrators have wondered why they were unable to assign a MAU to a domain as they did with T2 and T3 machines. The answer is that there is no longer any separate MAU, so you don't have to take any action at all; just leave the default of 0.

    Summary. Besides being much faster than its predecessors, the T4 also integrates hardware crypto acceleration so it's seamlessly available to applications, whether domains are being used or not. Administrators no longer have to control how crypto units are allocated; it "just happens".

  • Game crash/Screen freeze recovery (without shell or reboot)

    - by Asavar Tzeth
    I am an old Windows PC gamer, now converted into an Ubuntu (Linux) lover. I am even going so far as to attempt to replace all my games in a Windows dual-boot with Wine, and it is going well. However, even if Linux is less prone to crashing, games, especially the Windows ones (but also a few native ones), can crash. My problem is when this happens in full screen and the computer becomes non-responsive. In Windows you can solve this with Ctrl+Alt+Delete, but Ubuntu lacks this feature, and my only choice is a reboot. Is there any Ubuntu version of this feature? Of course, excepting the Ctrl+Alt+F1, find-and-kill-the-process method; that is fine if you know how to do it, but too slow and difficult for the typical gamer. I believe strongly in Ubuntu as the future gaming platform in one form or another. If this feature does not exist, then the Ubuntu team should address it as fast as possible, since it is critical for all old Windows gamers. Thank you for your time. Asavar Tzeth (Alias)

  • 10.04 Window manager not working

    - by jackg
    Using an old MX200 128MB AGP card. I log in OK. Sometimes the top and bottom bars do not appear: sometimes one, sometimes both, sometimes neither. The menus in Firefox/Thunderbird and other apps disappear when I move the pointer from the menu heading to the menu itself. I can't play YouTube videos, nor Pac-Man, so the world has ended as I know it. If I type sudo metacity --replace in a terminal, the window manager seems to work fine, but I don't know how to make this permanent. One suggested option was: System menu > Preferences > Sessions, then in the Sessions tab make sure that "automatically save changes to session" is checked. But I don't have a Sessions option in the Preferences menu. So...? There must be a line of code I can run in a terminal to get round this. I have not upgraded to Ubuntu 11.x because the graphics card is so old that I cannot get any decent screen resolutions when I do. On 10.04 I disable the Nvidia driver for the same reason and use 1024x768. Ta

  • Dim (NEARLY blank) laptop screen, secondary screen works - why?

    - by LIttle Ancient Forest Kami
    My laptop screen is (almost) black while my secondary screen is fine. I believe it to be backlight/brightness related.

    Problem description:
    - It starts when I start the laptop.
    - The system loads and works fine; just the screen has problems.
    - I can see the screen, though very faintly/dimly; it's hard to see anything which ain't very white. E.g. the starting screen has a big ThinkPad logo in white, large font: I can see it, though very dimly.
    - The second screen works very well.
    - Official backlight debugging: using the acpi settings prescribed there for ThinkPads didn't help.
    - I can see an entry in /sys/class/backlight/ and it changes when I press the brightness hotkeys (current backlight power, for instance, goes up or down).
    - acpi-off didn't help; neither did acpi_backlight=vendor.

    Hardware data:
    The laptop is a ThinkPad Edge with a glossy screen. 4 processors, 2 cores; exemplary CPU data from cat /proc/cpuinfo reports a genuine Intel i5 (M 480 @ 2.67GHz). The OS is Ubuntu Lucid, 10.04 LTS, 64-bit, with the generic Linux kernel (2.6.32-44) and GNOME 2.32.2 (though I doubt there lies the problem).

        $ lspci | grep VGA
        01:00.0 VGA compatible controller: ATI Technologies Inc M92 [Mobility Radeon HD 4500 Series]
        $ lshw -C display
        *-display
            description: VGA compatible controller
            product: M92 [Mobility Radeon HD 4500 Series]
            vendor: ATI Technologies Inc
            physical id: 0
            bus info: pci@0000:01:00.0
            version: 00
            width: 32 bits
            clock: 33MHz
            capabilities: pm pciexpress msi bus_master cap_list rom
            configuration: driver=radeon latency=0
            resources: irq:33 memory:c0000000-dfffffff(prefetchable) ioport:2000(size=256) memory:f0300000-f030ffff memory:f0320000-f033ffff(prefetchable)

    Driver:
    I was NOT running any proprietary drivers, just checked with "Hardware drivers". There is one for ATI that is suggested there, though I haven't needed it so far. UPDATE: changing the driver to the proprietary one (ATI/AMD FGLRX) didn't help.

    Tried and failed:
    - Resetting / running on power or battery / charging / getting rid of static electricity / warming up: doesn't help.
    - This is NOT a blank-screen problem, at least not according to the official Ubuntu black-screen diagnostics: I can see my screen, though barely.

    What I will try next:
    - Check the last updates I've made.
    - IIRC I am running on nomodeset already, but I will verify this.

    Any ideas how best to proceed? What is the most probable cause?

  • Ubuntu 12.04 LTS Install Problems (see post for system build details)

    - by Lokitez
    This is my first ever attempt at working with Ubuntu. I have only ever installed Windows in the past, and that may be the problem. I purchased all new hardware this week and I would really like to give Ubuntu a chance (especially since I don't want to buy another Windows license).

    First, the hardware:
    - AMD FX-8150 Zambezi 3.6GHz Socket AM3+ 125W eight-core desktop processor
    - ASUS Crosshair V Formula AM3+ AMD 990FX SATA 6Gb/s USB 3.0 ATX AMD gaming motherboard
    - Samsung 830 Series MZ-7PC128D/AM 2.5" 128GB SATA III MLC internal solid state drive (SSD) - this is my intended boot drive
    - Western Digital VelociRaptor WD5000HHTZ 500GB 10000 RPM SATA 6.0Gb/s 3.5" internal hard drive - this is a backup drive that I have installed Windows Vista on until I can get Ubuntu to work
    - G.SKILL Ripjaws X Series 16GB (2 x 8GB) 240-pin DDR3 SDRAM DDR3 1600 (PC3 12800)
    - ASUS HD7850-DC2-2GD5 Radeon HD 7850 2GB 256-bit GDDR5 PCI Express 3.0 x16

    I have downloaded and tried to install both Ubuntu 64-bit and Kubuntu 64-bit (both 12.04). Both always fail to copy a file during install, or otherwise lock up during install to the SSD. I have burned two copies of Ubuntu 12.04 and had the install fail with both. I have installed Vista onto the HDD. Is it possible to mount the Ubuntu file into

  • Facebook - Isn't this a big vulnerability risk for users? (After Password Change)

    - by Trufa
    I would like to know your opinions as programmers/developers. When I changed my Facebook password yesterday, by mistake I entered the old one and got this: [screenshot]. Am I missing something here, or is this a big potential risk for users? In my opinion this is a problem BECAUSE it is Facebook, used by, well, everyone, and the latest statistics show that 76.3% of the users are idiots [source: me]; that is more than 3/4! All kidding aside:

    - Isn't this useful information for an attacker? It reveals private information about the user!
    - It could help the attacker gain access to another site on which the user used the same password. Granted, you shouldn't use the same password twice (but remember: 76.3%!).
    - Doesn't this simply increase the attack surface? It increases the chances of getting useful information, at least.

    On a site like Facebook, a first choice for hackers and (bad) people interested in valuable personal information, shouldn't anything that increases the chance of a vulnerability be removed? Am I missing something? Am I being paranoid? Will 76.3% of the accounts be hacked after this post? Thanks in advance!! BTW, if you want to try it out, a dummy account: user: [email protected] (old) password: hunter2

  • If I send an IPA over TestFlight, can it be used to deploy to the app store?

    - by Reid Belton
    I am currently working for a small startup. I was previously under contract, now I am working for equity (no pay). The thing is, there is not yet a signed agreement in place as the details are being worked out. I may finish development before the contract is ready. I'm not currently under any contract or agreement, so the other party doesn't have any legal claim (that I know of) to the code I'm writing now, other than NDA (which just precludes me from cutting him out and releasing on my own). He already has the old code that I wrote under contract. I've made it clear to the other party that I won't submit the app or turn over the code until there's something signed to protect my interests. I've stopped pushing commits to the company repo (I'm now the only developer actively working on the project). However, I would still like to send builds over TestFlight for feedback and testing purposes. The other party has access to the developer portal and iTunes Connect for code signing, etc. Things are amicable and I don't foresee getting burnt on this, but I'm not going to put myself in that position. My concern is that if I send a finished build via TestFlight, it could be extracted and submitted to the app store without my participation. They wouldn't have the source for future maintenance and updates, of course, but it could be reverse-engineered by another developer later working from the old code base. Is this technically feasible at all? If so, is there a way I can send builds for testing while protecting my interests?

  • Spring Cleaning

    - by Tim Dexter
    I recently got a shiny new laptop; moving my shiz from old to new was not the nightmare it used to be. I have gotten into the habit of using a second hard drive in the media bay where the CD-ROM normally sits. That drive contains my life's work with BIP. I can pull it out and plug it into another machine very easily. I have been sorting through some old directories and files, archiving some, sharing others with colleagues. For instance, a little dated, but if you were looking for a list of Publisher reports available in EBS R12.1, here it is. I'm trying to track down a more recent R12 instance and will re-post the document. I also found another gem; it's a little out there in terms of usefulness, but I'm sharing it nonetheless. You can embed, or locally or remotely reference, SVG graphics (in XML format) and bring the images into the BIP outputs. Template and sample data here. There is also a nice set of templates showing page number control and page suppression; they will need some explanation, so I'll save them for another post. The list goes on, but I'll save the rest for later. Back to the clean-up!

  • Amazon EC2 vs dedicated server at Hetzner: what's the use of EC2?

    - by C-Blu
    After searching the web I still can't find the reason to use EC2. What's the point of scaling EC2? "If you expect a huge burst in traffic," they say. OK, but what if you already have a couple of sites with good traffic, and, for example, a medium reserved EC2 instance is not enough? You are paying $36.60 (medium reserved for 1 year) in the EU (Ireland), plus traffic, plus optional expenses for databases and S3 if you use them. Of course, at some point, while you are under $56.60-$66.10, you can optimize your hosting costs with Amazon EC2. But if you purchase an EX4 server from Hetzner, it will surpass your performance needs for a long time before you get massive traffic. (Am I wrong?)

    CPU: i7-2600 quad-core (3.4-3.8 GHz)
    RAM: 16 GB
    HDD: 2 x 3 TB SATA (6 Gbit/s) - I think the disk performance of a dedicated server is better than that of Amazon EBS
    Traffic: 10 TiB per month included

    This is what you get from Hetzner for $56 (minus 19% VAT), or $66 for EU residents. Please tell me, what's the reason to use Amazon? Which load won't a server from Hetzner take that Amazon Auto Scaling will? Is the maintenance of a dedicated server vs. EC2 still the same? Or won't hardware failure at Amazon ruin your EBS storage? I'm still not at the level where I need expensive hosting, but I want to know beforehand, just to be sure whether Amazon's infrastructure is better than the pure performance of Hetzner's hardware.

  • EOFs in Solaris 11

    - by nospam(at)example.com (Joerg Moellenkamp)
    Well, from comments here and elsewhere, the two worst things seemed to be the removal of 32-bit support and the removal of support for certain components. Just to set things into perspective: Solaris 10 was released in 2005; the newest class of machines not supported by it was the Ultra 1, which was released in 1995. The UltraSPARC systems not able to run Solaris 11 were released in 2001. Well, we have 2011 now. Regarding 32-bit support: I don't think "playing around with Solaris on old gear" is the problem. For a start, most people play around with virtual machines. But there is something different: 64-bit computing was introduced for x86 in 2003 (yes, it's really that old). I think this move hurts more for the people using boards with the first-gen Intel Atom "Silverthorne" as small file servers. And then, Solaris 10 won't disappear with Solaris 11.

  • Why values in my WCF data contract were suddenly wrong...

    - by mipsen
    A WCF service I provided took a very simple data contract as a parameter (containing one string and one int...) and had a very simple task to do. A .NET 3.5 client was created using the VS2008 feature "Add Service Reference". Everything worked as expected. Then a slight change came in: the client was expected to run on machines with .NET 2.0 only. So we set the Target Framework to .NET 2.0, removed the references to System.ServiceModel, System.Runtime.Serialization and the ServiceReference, and created a new reference to the service using the old "Add Web Reference". A matter of 2 minutes. When testing, the int value in the data contract arriving at the WCF service was suddenly 0, instead of the 38 we expected. What happened? When generating an old-style Web Reference against a WCF data contract, an additional boolean field named [Fieldname]Specified (e.g. AgeSpecified) is created for each value-type field, defaulting to "false". WCF inspects these boolean fields to determine whether a value was provided for the corresponding value-type field. If the "Specified" field is "false", WCF translates that to using the default value of the value-type field; for int this is 0. So we had to set the "Specified" field for the int value to "true", and everything was fine again. That was what we forgot after setting the framework version to 2.0...
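
    For illustration only, here is the presence-flag pattern sketched in Java (the real generated proxy code is C#, and every name below is a hypothetical placeholder):

        // Sketch of the generated presence-flag pattern, transposed to Java
        // purely for illustration. All names are made up.
        public class PresenceFlagDemo {
            static class PersonDto {
                int age;               // value type: it always carries *some* value on the wire
                boolean ageSpecified;  // defaults to false, i.e. "no value was provided"
            }

            // Roughly what the service does when it inspects the flag.
            static int effectiveAge(PersonDto dto) {
                return dto.ageSpecified ? dto.age : 0; // unspecified collapses to the default
            }

            public static void main(String[] args) {
                PersonDto dto = new PersonDto();
                dto.age = 38;                           // the flag was forgotten...
                System.out.println(effectiveAge(dto));  // ...so this prints 0

                dto.ageSpecified = true;                // the one-line fix
                System.out.println(effectiveAge(dto));  // prints 38
            }
        }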

  • LiveCD not booting/can't install Ubuntu 11.04

    - by user20318
    So, I got a new laptop some days ago and, as usual, I formatted it to install Ubuntu. I downloaded 11.04 and burned it to my pendrive using my old laptop (running 11.04). When I tried to boot from the live USB on my new laptop, it just showed me some weird graphics, and if I select any option (I can't see what I'm selecting), it gives me a black screen, and that is all. Then I tried to boot with this live USB on my old laptop, and it worked just fine ._. I burned a CD with Ubuntu 11.04 (64-bit) and the problem continued. Then I thought it could be my CD drive, since the laptop is new and all... but I burned a Windows 7 64-bit DVD and it worked just fine. Also, if I check the CD/pendrive inside Windows 7, all the files there are OK. Anyone have any idea what this can be? I found lots of questions about this, but none of them had the weird menus I'm getting ._. Oh, and I also get a "prefix is not set" message before the weird menu appears :S

    My system specs:
    Intel Core i5 2400
    Intel HD 3000
    4 GB DDR3

    If anyone can help, I will be really grateful ._.

  • Can I upgrade my Ubuntu version and make it the primary OS after originally installing with Wubi?

    - by Garrick Wann
    I have recently installed Ubuntu 12.04 using Wubi 12.04, and I now wish to upgrade to a full installation of Ubuntu 14.04. Before attempting to upgrade through the update center, I did some research on upgrading from a Wubi installation (alongside Windows) to a full installation making Ubuntu the primary and only OS, and found that it is in fact doable through the update center; it is just highly recommended to perform a full backup first. I have now finished backing up all the data I need to worry about, began the upgrade process through the update center, and received the following error:

        Your graphics hardware may not be fully supported in Ubuntu 14.04.
        Running the 'unity' desktop environment is not fully supported by
        your graphics hardware. You will maybe end up in a very slow
        environment after the upgrade. Our advice is to keep the LTS version
        for now. For more information see
        https://wiki.ubuntu.com/X/Bugs/UpdateManagerWarningForUnity3D
        Do you still want to continue with the upgrade?

    My questions are as follows:
    A. Isn't 14.04 an LTS version?
    B. What are your recommendations to ensure my graphics driver is installed correctly and I'm not stuck with a bad config/install?

  • Why do VMs need to be "stack machines" or "register machines" etc.?

    - by Prog
    (This is an extremely newbie-ish question.) I've been studying a little about virtual machines. It turns out a lot of them are designed very similarly to physical or theoretical computers. I read that the JVM, for example, is a 'stack machine'. What that means (and correct me if I'm wrong) is that it stores all of its 'temporary memory' on a stack, and operates on this stack for all of its opcodes. For example, the source code 2 + 3 will be translated to bytecode similar to:

        push 2
        push 3
        add

    My question is this: JVMs are probably written in C/C++ and such. If so, why doesn't the JVM simply execute the C code 2 + 3? I mean, why does it need a stack, or in other VMs 'registers', like in a physical computer? The underlying physical CPU takes care of all of this. Why don't VM writers simply execute the interpreted bytecode with 'usual' instructions in the language the VM is programmed in? Why do VMs need to emulate hardware, when the actual hardware already does this for us? Again, very newbie-ish questions. Thanks for your help.
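
    One way to see the issue concretely: by the time the VM runs, 2 + 3 is no longer source code that any C compiler can see; it is data (bytecode) read at run time, so the interpreter needs a uniform place of its own to hold intermediate values. Here is a toy stack interpreter in Java, with a made-up instruction set (these are not real JVM opcodes):

        import java.util.ArrayDeque;
        import java.util.Deque;

        // A toy stack machine. The opcodes are invented for this sketch and
        // are not the JVM's real instruction set.
        public class ToyVm {
            static final int PUSH = 0, ADD = 1, PRINT = 2;

            static void run(int[] code) {
                Deque<Integer> stack = new ArrayDeque<>();
                int pc = 0;
                while (pc < code.length) {
                    int op = code[pc++];
                    if (op == PUSH) {
                        stack.push(code[pc++]);                // operand follows the opcode
                    } else if (op == ADD) {
                        stack.push(stack.pop() + stack.pop()); // host CPU does the actual add
                    } else if (op == PRINT) {
                        System.out.println(stack.pop());
                    }
                }
            }

            public static void main(String[] args) {
                // The bytecode from the question: push 2, push 3, add (then print).
                run(new int[] { PUSH, 2, PUSH, 3, ADD, PRINT });
            }
        }

    The physical CPU is indeed doing the real additions, but only inside the interpreter loop; the stack is the VM's own bookkeeping for a program it first sees at run time. A JIT compiler is precisely the refinement where the VM translates such bytecode into native instructions once it has it in hand.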

  • Cannot submit change of address to subdomain in Google Webmaster Tools?

    - by RCNeil
    I am pointing several domains to one URL, a URL which happens to include a subdomain. ALL of the domains use 301 redirects to point to this new address. One of the older domains (which used to be a site) is a 'property' in Webmaster Tools, as is the new site (the one with the subdomain). When registering a 'Change of Address' for the old site with Webmaster Tools, it suggests the following method:

    1. Set up your content on your new domain. (done)
    2. Redirect content from your old site using 301 redirects. (done)
    3. Add and verify your new site to Webmaster Tools. (done)

    Then, directly below that, to proceed, it says: "Tell us the URL of your new domain: Your account doesn't contain any sites we can use for a change of address. Add and verify the new site, then try again." I have already submitted and verified the new site. The only reason I can fathom for this error is that the new site includes a subdomain. Although I don't foresee getting punished for this, as I am correctly 301-redirecting traffic anyway, I'm curious as to why the Change of Address submission isn't working for me. Has anyone else had experience with this?

  • Dell PowerEdge 840 2.4GHz 64-bit quad core

    - by newb64bit
    I am having an issue where I have changed the boot order to CD-ROM and turned off HD boot altogether, and still my system is unable to detect Ubuntu and claims no boot device is found. Some additional information: when this same CD is inserted and the Dell is booted into Windows 2003 Server (which is what is installed on this machine), it detects the CD drive but not the CD at all (it keeps asking me to insert a disc). I have also created a bootable flash drive using LinuxLive USB Creator, and when this is selected in the boot order I am again told no boot device was detected. I was speaking to Dell and they suggested that perhaps the Ubuntu installation has no drivers for the hardware on this Dell, hence its failure to detect the Ubuntu CD. Now, I don't know too much about computers, but this last bit confused me. If the system detects the hardware (when it is booting it sees the CD-ROM, and in the BIOS it sees when the flash drive is connected), then shouldn't it be able to read what is on those drives? However, if there is some firmware or software that needs to be installed, could someone please tell me where to find the correct drivers for Ubuntu and the Dell PowerEdge to work together? Should I be installing the desktop version or the server edition? Also, 32-bit or 64-bit? Thank you in advance.

  • How to make other semantics behave like SV_Position?

    - by object
    I'm having a lot of trouble with shadow mapping, and I believe I've found the problem. When passing vectors from the vertex shader to the pixel shader, does the hardware automatically change any of the values based on the semantic? I've compiled a barebones pair of shaders which should illustrate the problem.

    Vertex shader:

        struct Vertex
        {
            float3 position : POSITION;
        };

        struct Pixel
        {
            float4 position : SV_Position;
            float4 light_position : POSITION;
        };

        cbuffer Matrices
        {
            matrix projection;
        };

        Pixel RenderVertexShader(Vertex input)
        {
            Pixel output;
            output.position = mul(float4(input.position, 1.0f), projection);
            // We simply pass the same vector in screen space through different semantics.
            output.light_position = output.position;
            return output;
        }

    And a simple pixel shader to go along with it:

        struct Pixel
        {
            float4 position : SV_Position;
            float4 light_position : POSITION;
        };

        float4 RenderPixelShader(Pixel input) : SV_Target
        {
            // At this point, (input.position.z / input.position.w) is a normal depth value.
            // However, (input.light_position.z / input.light_position.w) is 0.999f or similar.
            // If the primitive is touching the near plane, it very quickly goes to 0.
            return (0.0f).rrrr;
        }

    How is it possible to make the hardware treat light_position the same way position is treated between the vertex and pixel shaders?

    EDIT: Aha! (input.position.z) without dividing by W is the same as (input.light_position.z / input.light_position.w). Not sure why this is.
