Search Results

Search found 7706 results on 309 pages for 'checked'.


  • apt-get does not work with proxy

    - by tommyk
    For the command sudo apt-get update I get the following error:

        W: Failed to fetch http://ch.archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/binary-i386/Packages.gz
        407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )

    I am running Ubuntu 10.10, installed on Windows XP using VirtualBox. For internet access I am using a proxy server that requires authentication. I tried to use the gnome-network-proxy tool to set the proxy settings system-wide. After that, /etc/environment had been updated with an http_proxy variable in the format http://my_proxy:port/; there was no authentication data. I checked this with Firefox: the browser asked me for a login and password and everything worked fine. That was unfortunately not the case for apt-get. I have also tried to do as described here; unfortunately it does not work. Might it be somehow related to the fact that the proxy is in a Windows domain? Any ideas? EDIT: My proxy name is http-proxy. Is '-' a special character here?
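    apt can take its proxy, including credentials, from its own configuration, independent of /etc/environment. A minimal sketch, assuming placeholder credentials and port (the file name is conventional, not mandatory):

        # /etc/apt/apt.conf.d/95proxy
        Acquire::http::Proxy "http://username:password@http-proxy:8080/";

    Note that apt only speaks basic proxy authentication; if the ISA server insists on NTLM, a local authenticating proxy such as cntlm is the common workaround. The '-' in http-proxy is not a special character in hostnames.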

    Read the article

  • Request Tracker 4.x on Ubuntu 12.04

    - by rihatum
    I have an Ubuntu 12.04 server installed on my machine. I am trying to install request-tracker4. Here's what I have done so far: a) Installed request-tracker4 via "sudo apt-get install request-tracker4". b) I then tried configuring RT_SiteConfig.pm in /etc/request-tracker4, but ran into problems populating the MySQL database. c) I then did sudo dpkg-reconfigure request-tracker4. d) That solved my problems with populating and setting up MySQL. e) Now I am trying to set up RT under www.mydomain.com/rt. I have read various how-tos and Best Practical's own guides, but I am not much of an expert in Apache configuration, so I am stuck. My current Ubuntu 12.04 server setup: Apache2 and FastCGI installed (checked in /etc/apache2/mods-enabled); the web server document root is the default /var/www/; the web user is www-data. My question: 1) What should I put in the Apache configuration, and where, to start using RT via the web interface? I have seen two files in /etc/request-tracker4/: apache2-fastcgi.conf and apache2-fcgid.conf. I even tried making a symlink with ln -s apache2-fastcgi.conf /etc/apache2/conf.d, but when I tried opening that file as root from the conf.d directory it said "too many levels". Any Request Tracker experts on Ubuntu? :-) Your help will be very useful and appreciated. Thanks. Please let me know if you need further info!
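    The "too many levels" error ("Too many levels of symbolic links") usually means the symlink was created with a relative target, so inside conf.d it points at itself; an absolute target avoids that. A hedged sketch using the packaged config (paths follow the Ubuntu request-tracker4 layout, so verify them locally):

        sudo ln -s /etc/request-tracker4/apache2-fastcgi.conf /etc/apache2/conf.d/request-tracker4.conf
        sudo a2enmod fastcgi
        sudo service apache2 restart

    After the restart, RT should answer at the URL path configured in RT_SiteConfig.pm ($WebPath, for a /rt prefix).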

    Read the article

  • Session management error: None of the authentication protocols specified are supported

    - by JBWhitmore
    The title is the first error that sent me on a mission to fix things. Motivation: I was trying to install the new Enthought Python Distribution when the error above first showed up. The install finished, and it looked like it flagged dcopserver problems a few more times:

        Please check that "dcopserver" program is running!
        Could not read network connection list: ~/home/user/.DCOPserver_host__0

    When running ipython from the distribution, it claims that readline (the ability to arrow up through history or tab-complete) is not available on my system. It is, though: if I run the ipython that's sitting in /usr/bin/ipython, all readline features work perfectly. So I tried to fix the install by fixing what I thought could be causing the problems. Bad things that are happening that I want fixed: when restarting I get the error "Could not update ICEauthority file /home/username/.ICEauthority", and readline doesn't work with Enthought's ipython. Things I have tried: changed the owner of my ~/.ICEauthority to be me; changed the owner of my home directory (and all nested files and folders) to be me; double-checked that /var/lib/gdm was owned by Gnome (yep); attempted to reinstall DCOP and the kbuildsycoca stuff (fail). I've removed nautilus; rebooted; reinstalled; rebooted; removed ubuntu-desktop; rebooted; reinstalled; rebooted. Any suggestions on how to fix the Bad Things would be greatly appreciated! Computer: Ubuntu 10.04 x86
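    For the .ICEauthority error specifically, the usual cause is the file having been created or chowned by root (often via a graphical app run under sudo); ownership and permissions both matter. A hedged sketch (replace username throughout):

        sudo chown username:username /home/username/.ICEauthority
        sudo chmod 600 /home/username/.ICEauthority
        # if more of the home directory is affected:
        sudo chown -R username:username /home/username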

    Read the article

  • Getting MTP to work with a Galaxy Tab 2 7.0?

    - by Wouter
    I'm trying to get MTP with the Galaxy Tab 2 7.0 working on my Ubuntu installation, so that I can access the files. I tried to do what is described here: http://www.omgubuntu.co.uk/2011/12/how-to-connect-your-android-ice-cream-sandwich-phone-to-ubuntu-for-file-access However, I fail at executing the following commands: mtp-detect | grep idVendor and mtp-detect | grep idProduct. This fails:

        [20:42|0] $ mtp-detect | grep idVender
        Device 0 (VID=04e8 and PID=6860) is a Samsung GT-P7310/P7510/N7000/I9100/Galaxy Tab 7.7/10.1/S2/Nexus/Note.
        PTP_ERROR_IO: failed to open session, trying again after resetting USB interface
        LIBMTP libusb: Attempt to reset device
        LIBMTP PANIC: failed to open session on second attempt
        Unable to open raw device 0
        [20:44|0] $ mtp-detect | grep idProduct
        Device 0 (VID=04e8 and PID=6860) is a Samsung GT-P7310/P7510/N7000/I9100/Galaxy Tab 7.7/10.1/S2/Nexus/Note.
        PTP_ERROR_IO: failed to open session, trying again after resetting USB interface
        LIBMTP libusb: Attempt to reset device
        LIBMTP PANIC: failed to open session on second attempt
        Unable to open raw device 0

    My guess was that the idVendor is the same as the VID (04e8) and the idProduct is the same as the PID (6860). I continued to work with those values and completed the tutorial. When finished, I ran android-connect, which returned:

        fuse: bad mount point `/media/GalaxyTab': Transport endpoint is not connected

    Does anybody have a clue what to do? I also want to note that when I connect my Galaxy Tab 2 7.0, I still get a pop-up from Ubuntu that a device was connected. I can also still see the folder structure; the problem, however, is that all the folders show 0 bytes and have no subfolders. I can only see the folders in the root. P.S. I also checked a similar question and tried what is described in this answer: http://askubuntu.com/a/88630/27480
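    Two hedged observations: the greps return nothing because the terminal shows "idVender" (a typo) and because mtp-detect prints the IDs as VID/PID anyway; lsusb shows the same values directly. The "Transport endpoint is not connected" message usually means a stale FUSE mount left behind by an earlier attempt. A diagnostic sketch (the mount point is the tutorial's, assumed here):

        lsusb | grep -i samsung            # VID:PID appears as 04e8:6860
        fusermount -u /media/GalaxyTab     # clear any stale FUSE mount
        sudo mtpfs -o allow_other /media/GalaxyTab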

    Read the article

  • Hybrid Graphics on Windows 7/Ubuntu 12.04 Dual Boot

    - by Noob.
    Alright, so here's the situation: I am using an ASUS UL80VT with two graphics adapters, integrated Intel graphics and an NVIDIA G210M. I was running an Ubuntu 12.04 / Windows 7 dual boot (on separate partitions). The machine worked perfectly, including the display drivers, without me needing to install anything special or change any settings. However, my hard drive was corrupted and I lost all my data yesterday, so after it was replaced I installed Ubuntu 12.04 64-bit again, after installing Windows 7. I booted up Ubuntu after installation and noticed it was using Unity 2D by default. Gnome 3.4 wasn't working properly either, so I guessed that the NVIDIA G210M driver wasn't installed/working and the OS was using the integrated graphics instead. I checked the "Additional Drivers" tool, but there were no proprietary drivers listed there, so I went to the NVIDIA website, downloaded the driver directly and installed it. I restarted, but there was no change. After this, I read somewhere that I should change my SATA mode in the BIOS to "Compatible" rather than "Enhanced". This worked and fixed the problem (both Unity and Gnome were working perfectly), but then when I tried booting Windows 7, I received a BSOD. So I changed it back to Enhanced, and once again the NVIDIA G210M graphics isn't working on Ubuntu, but on Windows 7 it is. I do not want to keep changing from Enhanced to Compatible every time I reboot into Ubuntu, and neither do I want to simply use just one OS. Note that the NVIDIA G210M and the integrated graphics both work perfectly on Windows 7. Also, I don't care about switching between them; I just want to be able to use the NVIDIA one. What can I do so that both Windows 7 and Ubuntu work and the NVIDIA G210M works on Ubuntu?
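    A small diagnostic sketch (not a fix) to confirm which kernel driver is bound to each GPU under the Enhanced setting, since "no proprietary drivers listed" plus a manual .run install can leave the card on nouveau or on nothing at all:

        lspci -k | grep -EA3 'VGA|3D'
        # each device lists "Kernel driver in use:" (i915, nouveau, or nvidia)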

    Read the article

  • Configuring Transmission for faster download

    - by Luis Alvarado
    I have tested the following torrent clients on the same PC with the same torrent/magnet links: Transmission, KTorrent, Deluge, qBittorrent and Vuze. After 7 days of testing I noticed that the only one that took longer to start downloading and to hold an optimal/maximum download speed was Transmission. It was the slowest of them all at downloading the same content (I tested 8 torrents and 4 magnet links from different sites) and the one that took longest to start downloading, or to resume after a pause/resume event. The other 4 took less than 2 seconds to start downloading, for example, and downloaded the same content in 50% to 80% less time. I think Transmission has the same downloading/resuming capabilities as the other torrent clients, so it may just be some configuration I need to do to get the same speed and effect as the others. In my tests all torrent clients were used with their default configurations; no changes were made. They were tested on the same PC, with the same network connection, in the same time periods. So I am thinking that Transmission just needs a little bit of configuration tuning. I also set all of them to use the same port, and checked the router for any blocking and anything related to the network. What options can I change so that Transmission resumes a download faster (grabs the seeds faster) and keeps a fast download all the time (stays with the seeds that offer the best connection, for example)? Both of which, by the look of it, are features the rest of the torrent clients already have.
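    Transmission's tunables live in ~/.config/transmission/settings.json, edited while the client is closed (it rewrites the file on exit). A hedged sketch of the settings most relevant to finding peers quickly; the key names are Transmission's own, but the numbers are only illustrative, not recommendations:

        {
          "dht-enabled": true,
          "pex-enabled": true,
          "peer-limit-global": 400,
          "peer-limit-per-torrent": 120,
          "upload-slots-per-torrent": 20
        }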

    Read the article

  • Microsoft, Hotmail, Live, MSN, Outlook: unable to send emails, and no support received from Microsoft in the 3 months we have been asking

    - by HugeNut
    OK, this is something unbelievable. We have a website where users sign up and receive links to confirm they signed up, BUT: 1) Microsoft blocked our IP (no one with a Microsoft email account can receive our emails). 2) We tried contacting Microsoft, submitting the detailed form about our problem. 3) We posted 3 times in their community about our problem. 4) We tweeted them about our problem. 5) We tried finding some telephone support number (the few there are aren't helping at all). Do you think we solved it? The answer is NO :/ We are still unable to send emails from our IP to Microsoft email accounts, and have been for 3 months. Our emails are fine: we checked all the email headers following Microsoft's guidelines, but it seems that is not enough. Checking our IP reputation, everything seems OK; indeed we can send email easily to any other provider (Gmail, Yahoo, etc.). Do you know any other way to try to get help? FULL ERROR RETURNED BY MICROSOFT:

        host mx1.hotmail.com[65.55.37.120] said: 550 SC-001 (COL0-MC4-F28)
        Unfortunately, messages from 94.23.***** weren't sent. Please contact your
        Internet service provider since part of their network is on our block list.
        You can also refer your provider to
        http://mail.live.com/mail/troubleshooting.aspx#errors. (in reply to MAIL
        FROM command)

    We are running NGINX + a PHP mailer on a Virtual Private Server (no hosting or shared hosting).
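    One commonly checked prerequisite before Microsoft delists an IP is DNS-level sender authentication; if the sending domain has no SPF record yet, that alone may contribute to the SC-001 block. A hedged sketch of an SPF TXT record (the domain and server IP are placeholders; the elided IP from the error belongs here):

        example.com.  IN TXT  "v=spf1 ip4:<your-server-ip> -all"

    Signing with DKIM and enrolling the IP in Microsoft's Smart Network Data Services (SNDS) are the other commonly suggested steps.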

    Read the article

  • Should my dropdown of recently used items show items I no longer have access to

    - by Dan Hibbert
    We are implementing a client for our document management system. Part of this is the check-in screen, where one of the fields a user chooses is the folder the document should be checked into. In our original system this was represented with a combobox where a user could hand-type a folder path or select a path from a list of the 5 folders they'd recently used for check-ins. It is possible that between the time they used the folder and the time they are doing the new check-in, the user will no longer have access to the folder. At present we still show the folder as an option and then, if the user chooses that folder, display an error message when the user submits the check-in. We are thinking of removing the recently used folders the user doesn't have access to (we'll make a check when the form is instantiated), because why show an option if we know it will cause a failure (and another dialog message the user has to OK)? However, the opposite opinion is that if we remove those folders, users will think the system has "forgotten" their recent choices and will lose trust in what they are using. I'd like to get some opinions on the better user experience for this problem.

    Read the article

  • SMART Status Data Interpretation - Disk Utility

    - by Mah
    Last week my external hard disk (a Seagate Barracuda 1.5TB in a custom enclosure) showed signs of failure (Disk Utility SMART Pre-failure status, several bad sectors) and I decided to replace it. I bought a new HDD (Seagate Barracuda 2TB) and connected it to my Ubuntu box with a SATA-to-USB cable that could not report SMART status. I copied all the contents of the old HDD to the new one (one partition with rsync, the other with parted cp) and then gently swapped the old HDD for the new one inside my aluminum enclosure. For obscure reasons, after reconnecting the new HDD through the old enclosure the Linux box could not detect my partitions. I recovered the partitions with testdisk and restarted the computer. After the restart I checked the SMART status of the new HDD and I get this:

        Read Error Rate
        Normalized 108   Worst 99   Threshold 6   Value 16737944

    I got a high value on the Seek Error Rate as well. Wondering why this happens, I copied a 2 GB directory from one partition to the other and rechecked the SMART status (5 minutes later). This time I got the following:

        Read Error Rate
        Normalized 109   Worst 99   Threshold 6   Value 24792504

    As you see, there has been an increase in the error rate. I am unable to interpret these numbers. Is my new hard disk already dying? What are the acceptable values in these fields for Seagate hard disks? And then why is the assessment still good? While I could get temperature and airflow temperature data from my old HDD, I cannot fetch them for the new one. I noticed that my old HDD sometimes got really hot. Is it possible that the enclosure is killing the hard disks due to high temperature?... Thanks
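    On Seagate drives the raw Read Error Rate and Seek Error Rate values are widely reported to pack a count of total operations (not just errors) into one number, so huge and growing raw values are normal; what matters is the normalized value staying far above the threshold, as 108/109 versus 6 does here. A hedged way to watch the attributes directly (the device name is an assumption):

        sudo apt-get install smartmontools
        sudo smartctl -A /dev/sdb     # normalized value, worst, threshold and raw per attribute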

    Read the article

  • Cannot install openjdk on Hardy Heron

    - by infaustus
    I know that Hardy Heron is very old, but don't ask why Hardy... I've tried:

        root@vz10931:/etc/apt# apt-get install openjdk-6-jre
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run `apt-get -f install' to correct these:
        The following packages have unmet dependencies:
          openjdk-6-jre: Depends: libasound2 (> 1.0.14) but it is not going to be installed
                         Depends: libgif4 (>= 4.1.6) but it is not going to be installed
                         Depends: libxtst6 but it is not going to be installed
                         Depends: openjdk-6-jre-headless (>= 6b18-1.8.3-0ubuntu1~8.04.2) but it is not going to be installed
          vim: Depends: vim-common (= 1:7.1-138+1ubuntu3.1) but 2:7.3.154+hg~74503f6ee649-2ubuntu3 is to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    My sources.list:

        deb http://pl.archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
        deb-src http://pl.archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
        deb http://pl.archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
        deb-src http://pl.archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
        deb-src http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse

    And:

        root@vz10931:/etc/apt# ls -l sources.list.d/
        total 0

    Please help. When I tried apt-get install -f before, I ended up having to install a new system because everything crashed. Edit: I checked which Java packages I have installed:

        root@vz10931:/var/www/mailer# dpkg --list | grep java
        iU  sun-java6-bin  6.24-1build0.8.04.1  Sun Java(TM) Runtime Environment (JRE) 6 (ar
        iU  sun-java6-jdk  6.24-1build0.8.04.1  Sun Java(TM) Development Kit (JDK) 6
        iU  sun-java6-jre  6.24-1build0.8.04.1  Sun Java(TM) Runtime Environment (JRE) 6 (ar

    but when I try to start a Java program with java -jar program.jar, an error appears: -bash: java: command not found
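    The "iU" status in the dpkg listing means those sun-java6 packages are unpacked but never configured, which is why no java binary is on the PATH. A hedged sequence to let dpkg finish the job (the vim version conflict suggests mixed releases in the package database, so this may surface further errors):

        dpkg --configure -a                # finish configuring half-installed packages
        apt-get -f install                 # let apt resolve the broken dependencies
        update-alternatives --config java  # register which java the command should run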

    Read the article

  • A Virtual Dilemma

    - by antony.reynolds
    Solving a Gotcha with VirtualBox Guest Additions

    I was just building a new virtual machine based on an existing image that didn't have the VirtualBox Guest Additions installed. The Guest Additions allow tight integration between the guest OS and the host environment, providing seamless mouse transfer and the ability to take advantage of the full video screen size. The Guest Additions need to be linked with the kernel, which requires the kernel-devel package to be installed. After installing this package and then trying to add the Guest Additions, it failed, suggesting that I might not have the kernel-devel package that I had just installed. After a little thought I finally realized what had happened. When I grabbed the kernel-devel package I hadn't checked the version of my kernel. The kernel-devel I downloaded didn't match the revision of the kernel I was running! Hence my problems. I upgraded the kernel to the same revision as my kernel-devel package and rebooted. I had installed dkms, so I was pleased to see that my VBox Additions successfully built and the mouse and screen now worked as expected. So now you know my embarrassing story for the day :-)
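    The moral, as a hedged sketch for an RPM-based guest (the package name follows the kernel-devel convention mentioned above):

        uname -r                                   # revision of the running kernel
        yum install "kernel-devel-$(uname -r)"     # headers matching that exact revision
        # then rebuild: /etc/init.d/vboxadd setup, or reinsert the Guest Additions CD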

    Read the article

  • Differentiating between Hard and Soft Dependencies - Fedora Yum [closed]

    - by Sujit
    I will ask this with an example: I have installed gnash-plugin on Fedora 64-bit with yum. It pulled in the following packages:

        Installing : agg-2.5-9.fc13.x86_64                  1/6
        Installing : gtkglext-libs-1.2.0-10.fc12.x86_64     2/6
        Installing : boost-thread-1.44.0-7.fc14.x86_64      3/6
        Installing : boost-date-time-1.44.0-7.fc14.x86_64   4/6
        Installing : 1:gnash-0.8.8-4.fc14.x86_64            5/6
        Installing : 1:gnash-plugin-0.8.8-4.fc14.x86_64     6/6

    Now, I tested the plugin and I didn't like it. I want to remove all the above packages that got installed with the plugin, as I am no longer going to need them. How can I do this? I checked yum's remove-with plugin, but it pulls in all the packages that currently depend on those packages. I understand the thought process behind showing which packages get affected, but I am wondering if there is any way of looking at the history: what got installed when I installed a certain package. Before gnash-plugin was there, Firefox was running fine, but after the installation Firefox now depends on this new plugin. Has anyone worked on differentiating hard dependencies (hard meaning the program will break if that package is not there) from soft dependencies (soft meaning the program is not fatally affected)?
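    Yum does keep exactly this per-transaction history, which answers the "what came in with this install" part; a hedged sketch (the transaction ID shown is an assumed example):

        yum history list gnash-plugin   # find the transaction that installed it
        yum history info 42             # list every package that transaction added
        yum history undo 42             # remove precisely what that transaction added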

    Read the article

  • Tracking state of a one time event on a big website

    - by Mattis
    Assume a website with 250 million active users. I add a new feature to the website. Once a user visits, I want to use a short tutorial to teach them how to use said feature. I only want them to complete the tutorial once (or actively click it away). What is the smart way to code the verification check for this? How do I track the progress in the database? Having a separate table with something like NewTutorial_completed = 1 for user_id = 21312315 would just snowball. It also feels intuitively bad to check for every one-time event, for every user, on every page view. While writing the question I got one idea: have a separate event log that is checked periodically for any new action the user needs to see or perform. I push events to this log, and once they are completed they are removed from it. No need to store NewTutorial_completed = 1-type variables this way. I am sure this is a common problem. I would appreciate any input on what best practice is.
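    A minimal sketch of that event-log idea in SQL (table and column names are illustrative): rows exist only while an event is pending, so the common case of "nothing to show" is a single indexed lookup, and completed events cost no storage at all:

        CREATE TABLE pending_user_event (
            user_id    BIGINT      NOT NULL,
            event_name VARCHAR(64) NOT NULL,  -- e.g. 'new_feature_tutorial'
            PRIMARY KEY (user_id, event_name)
        );

        -- on page view: anything this user still needs to see?
        SELECT event_name FROM pending_user_event WHERE user_id = ?;

        -- on completion (or "don't show again"): drop the row
        DELETE FROM pending_user_event WHERE user_id = ? AND event_name = ?;

    Seeding a row per user at feature launch is the write-heavy part; batching the inserts, or only inserting for users active since launch, are the usual mitigations.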

    Read the article

  • Samba share not on network after upgrading to Ubuntu 12.04 LTS

    - by Sylvain Huard
    I just upgraded an old Ubuntu box to 12.04 LTS (machine named A-Ubuntu). This was an upgrade, not a format/re-install; all the accounts and config were preserved. The basic setup is a local network with two Ubuntu machines (let's say A-Ubuntu and B-Ubuntu) and a Mac (C-MAC). Before the upgrade, all of them could see each other by name, not only by IP address. The local network has a D-Link router where everything is connected by wired RJ-45 Ethernet (not Wi-Fi). Since the A-Ubuntu upgrade, we can't see this machine's name on the network, and its name is no longer in the machine list in the D-Link router; we can see its IP address only. I can't access A-Ubuntu from the other two by name, but I can ping it by address (192.168.0.109). From A-Ubuntu, I can connect to and see the shared Samba folders on B-Ubuntu and C-MAC. But from B-Ubuntu and C-MAC, I can't connect to A-Ubuntu. Correct me if I'm wrong, but this tells me that Samba should be fine and the real problem is that A-Ubuntu does not advertise its name on the network, so the D-Link does not have it in its table and nobody else finds it. After a lot of googling, I see that this is the job of avahi and mDNS. Those packages are running. I checked multiple config files for samba, avahi and mdns to see whether they look like the examples on the web and like what I find on the working B-Ubuntu machine; they are the same. I restarted the samba and avahi services multiple times, and removed the firewall to make sure it does not block the hostname broadcast. I rebooted multiple times to make sure the changes I was making took effect. Still, I can't see the A-Ubuntu name on the network. Any idea what it can be? Where should I look next?
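    One hedged lead: name browsing from Windows-style gear (including many home routers) comes from Samba's NetBIOS daemon nmbd rather than from Avahi/mDNS, and on 12.04 smbd and nmbd are separate upstart jobs, so each path is worth verifying on its own:

        avahi-resolve -n A-Ubuntu.local   # does mDNS announce the name?
        nmblookup -A 192.168.0.109        # which NetBIOS names does the box claim?
        testparm -s | grep -i netbios     # effective NetBIOS name from smb.conf
        sudo restart nmbd                 # upstart job, separate from smbd on 12.04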

    Read the article

  • Standards for how developers work on their own workstations

    - by Jon Hopkins
    We've just come across one of those situations that occasionally comes up when a developer goes off sick for a few days mid-project. There were a few questions about whether he'd committed the latest version of his code or whether there was something more recent on his local machine we should be looking at, and we had a delivery to a customer pending, so we couldn't wait for him to return. One of the other developers logged on as him to look, and found a mess of workspaces, many seemingly of the same projects, with timestamps that made it unclear which one was "current" (he was prototyping some bits on versions of the project other than his "core" one). Obviously this is a pain in the neck; however, the alternative (which would seem to be strict standards for how each developer works on their own machine, to ensure that any other developer can pick things up with a minimum of effort) is likely to break many developers' personal workflows and lead to inefficiency on an individual level. I'm not talking about standards for checked-in code, or even general development standards; I'm talking about how a developer works locally, a domain generally considered (in my experience) to be almost entirely under the developer's own control. So how do you handle situations like this? Is this one of those things that just happens and you have to deal with, the price you pay for developers being allowed to work in the way that best suits them? Or do you ask developers to adhere to standards in this area: use of specific directories, naming standards, notes on a wiki, or whatever? And if so, what do your standards cover, how strict are they, how do you police them, and so on? Or is there another solution I'm missing? [Assume for the sake of argument that the developer cannot be contacted to talk through what he was doing here. Even if he could, knowing and describing which workspace is which from memory isn't going to be simple and flawless, and sometimes people genuinely can't be contacted; I'd like a solution which covers all eventualities.]

    Read the article

  • How do I install Red5 using apt-get? Getting sub-process error

    - by Dalen
    This is a copy of a question by someone on another forum that never got satisfactorily answered. I encountered the same error a few days ago on Ubuntu 13.04 Desktop. It seems like Red5 is installed but cannot be run for some reason. Can anyone explain what is going on here? Why should dpkg fail? I mean, this is a checked repo; it should work fine.

        apt-get install red5-server
        Selecting previously deselected package red5-server.
        (Reading database ... 53491 files and directories currently installed.)
        Unpacking red5-server (from .../red5-server_0.9.1-4squeeze1_all.deb) ...
        Setting up red5-server (0.9.1-4squeeze1) ...
        Starting Flash streaming server : red5-server failed!
        invoke-rc.d: initscript red5-server, action "start" failed.
        dpkg: error processing red5-server (--configure):
          subprocess installed post-installation script returned error exit status 1
        configured to not write apport reports
        Errors were encountered while processing:
          red5-server
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The logfile error.log in /usr/share/red5/log was completely empty. The other logs were not, but according to them there were no problems at all.
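    The dpkg message only says the init script's start action returned non-zero; running the pieces by hand usually exposes the real cause (a missing or incompatible Java runtime is a common one for Red5). A hedged sketch:

        sudo sh -x /etc/init.d/red5-server start   # trace exactly where startup fails
        java -version                              # Red5 needs a working JRE on the PATH
        sudo dpkg --configure red5-server          # retry configuration once fixed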

    Read the article

  • Should Professional Development occur on company time?

    - by jshu
    As a first-time part-time software developer at a small consulting company, I'm struggling to organise time to further my own software development knowledge, whether that's reading a book, keeping up with the popular questions on StackOverflow, researching a technology we're using in depth, or following the front page of Hacker News. I can see results borne from my self-allocated study time, but listing and demonstrating the skills and knowledge gained through professional development is difficult. The company does not have any defined PD policy, and there's a lot of pressure to get something deliverable done now! when working for consultants. I've checked what my coworkers do, and they don't appear to allocate any time to self-improvement; they just work at the problems they're given, looking up specific MSDN references, code samples, and the like as they need them. I realise that PD policy is going to vary across companies of different sizes and cultures, and a company like my own is probably a bit of an edge case. I'd love to hear views and experiences from more seasoned developers, especially those who have to make the PD policy choices in their team or company. I'd also like to learn about the more radical approaches to PD, even if they're completely out there; it's always interesting to see what other people are trying. Not quite a summary, but what I'm trying to ask: Is it common or recommended for companies to allocate PD time? Whose responsibility is it to ensure a developer's knowledge and skills are up to date? Should a part-time work schedule imply a lower ratio of PD time to work? How can a developer show non-developer coworkers that reading blogs and books is net productive? Is reading blogs and books actually net productive (references welcomed)? Is writing blogs effective as a way of PD (a recent theme on Hacker News)? This is sort of a broad question because I don't know exactly which questions I need to ask here, so any thoughts on relevant issues I haven't addressed are very welcome.

    Read the article

  • Partial upgrade on 12.04, how to stop nagging after locking to a working NVIDIA & xorg

    - by alsk
    How do I stop the update manager from offering updates and upgrades that would potentially break my working 2D and 3D graphics? I finally got 12.04 working as it should with the nvidia-173 drivers, by downgrading xorg and locking the versions: on my 32-bit Athlon64 system with an (Albatron) NVIDIA GeForce FX5700XT, locked (pinned) to xorg 1:7.6-7ubuntu7, xserver-xorg-core 2:11.1-0obuntu10.07, nvidia-173 173.14.35-0ubuntu0.2. An annoying thing left is that every time updates are checked, I get a warning about a partial upgrade and the ambiguous options "Partial Upgrade" and "Close". Ambiguous in the sense that if I click Close, I get the option to update a few packages, which has been OK, while "Partial Upgrade" would like to update my kernel to 3.2, alter xorg, remove nvidia-173, update mesa, etc. This is not what I call appropriate after locking xorg and the NVIDIA drivers to working versions. One may say it is correct according to package-management logic, but to me as a user it makes little sense. The last Ubuntu that worked without a big mess for me was 10.10, hence I will not put 12.10 on my "production" system until I can be sure it will not trash the system again. P.S. Is there a recommended way to keep an NVIDIA GeForce FX working with 3D on Ubuntu... in future?
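    Expressing the same lock to apt itself (rather than only in the package manager GUI) usually stops the partial-upgrade nagging, since apt then no longer treats the newer versions as candidates. A hedged sketch; the version patterns are placeholders and must be adjusted to match the locally held versions exactly:

        # /etc/apt/preferences.d/hold-graphics
        Package: xserver-xorg-core
        Pin: version <currently-held-xorg-version>
        Pin-Priority: 1001

        Package: nvidia-173
        Pin: version 173.14.35*
        Pin-Priority: 1001

    Alternatively, echo "nvidia-173 hold" | sudo dpkg --set-selections marks a package as held at the dpkg level.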

    Read the article

  • Strange behavior in Gnome after update from 13.04 to 13.10

    - by WayneBrady
    I did an automatic upgrade of my Ubuntu system from 13.04 to 13.10 today. Since then, I have been in really big trouble. I use a VNC server with Gnome Classic; after the upgrade, my Gnome was gone. So I tried everything: checked the xstartup file of the VNC server, and so on. Right now I have reached a point where I can't find the answer. The logfile says that gnome-session-fallback is missing, even directly after I installed it with apt-get (I tried it several times: installing, uninstalling, and so on). I have no way to use it, as you can see in this terminal copy:

        root@ip-xxx:~/.vnc# apt-get install gnome-session-fallback
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          gnome-session-fallback
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/2,914 B of archives.
        After this operation, 247 kB of additional disk space will be used.
        Selecting previously unselected package gnome-session-fallback.
        (Reading database ... 210977 files and directories currently installed.)
        Unpacking gnome-session-fallback (from .../gnome-session-fallback_1%3a3.6.2-0ubuntu15_all.deb) ...
        Setting up gnome-session-fallback (1:3.6.2-0ubuntu15) ...
        root@ip-xxx:~/.vnc# gnome-session-fallback
        The program 'gnome-session-fallback' is currently not installed. You can install it by typing:
        apt-get install gnome-session-fallback

    If you have some idea, please give me a hint... Thank you!
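    A 2,914-byte package that installs cleanly yet provides no binary of its own name is typically a transitional dummy; in the 13.10 cycle the fallback session was reportedly renamed to "flashback". A hedged way to see what the package really ships and what replaced it:

        dpkg -L gnome-session-fallback            # files it actually installs
        apt-cache depends gnome-session-fallback  # what the dummy pulls in
        apt-cache search flashback                # the renamed session, if packaged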

    Read the article

  • 2D Collision masks for handling slopes

    - by JiminyCricket
    I've been looking at the example at http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel and am trying to figure out how to adjust the sprite once a collision has been detected. As David suggested at "XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision", I made a few sensor points (feet, sides, bottom center, etc.) and can easily detect when these points actually collide with non-transparent portions of a second texture (a simple slope). I'm having trouble with the algorithm for how I would actually adjust the sprite position based on a collision. Say I detect a collision with the slope at the sprite's right foot. How can I scan the slope texture data to find the Y position to place the sprite's foot at so it is no longer inside the slope? The way the data is stored as a 1D array in the example is a bit confusing; should I try to store it as a 2D array instead? For test purposes, I'm thinking of just using the slope texture's alpha itself as a primitive and easy collision mask (no grass bits or anything besides a simple non-linear slope). Then, as in the example, I find the coordinates of any collisions between the slope texture and the sprite's sensors and mark these special sensor collisions as having occurred. Finally, in the case of moving up a slope, I would scan for the first transparent pixel above the right-foot collision point (in the texture's Y values at that X) and set that as the new height of the sprite. I'm also a little unclear on when I should make these adjustments. Collisions are checked on every Game.Update(), so would I quickly change the position of the sprite before the next Update is called? I also noticed several people mention that it's best to separate collision checks horizontally and vertically; why is that exactly? I'm open to any suggestions if this is an inefficient or inaccurate way of handling this. I wish MSDN had provided an example of something like this; I didn't know it would be so much more complex than NES-Mario-style pure box platforming!
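    The 1D layout is just row-major order, so there is no need to copy into a 2D array: the pixel at (x, y) lives at index y * width + x. A hedged C# sketch of the "scan up for the surface" step (names are illustrative, and the color data is assumed to come from Texture2D.GetData<Color> as in the tutorial):

        // colorData has slopeTexture.Width * slopeTexture.Height entries, row-major
        int SurfaceYAbove(Color[] colorData, int width, int x, int startY)
        {
            for (int y = startY; y >= 0; y--)         // walk up the column at this X
            {
                if (colorData[y * width + x].A == 0)  // first fully transparent pixel
                    return y;                         // the foot can rest just here
            }
            return 0;                                 // column is solid to the top
        }

    Resolving the X and Y axes separately is what keeps a horizontal push up a slope from being confused with a fall: move on one axis, resolve collisions, then move on the other.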

    Read the article

  • Duplicate content appearing for multi lingual sites

    - by Rocky Singh
    I have a site with a default URL, say "http://www.blahblah.com/" (which defaults to English). The site supports multiple languages: I have a few links on my home page, say "English", "French", "Spanish", etc., and on clicking these links the user is redirected to:

        http://www.blahblah.com/en-us/ (English)
        http://www.blahblah.com/fr-ca/ (French)
        http://www.blahblah.com/spanish-culture/ (Spanish)

    Based on the culture in the URL, I show end users the content in their desired language. Now, this is how my site is set up. The issue I have is with SEO. I noticed (I checked via Google Webmaster Tools) that Google considers my pages duplicates, like:

        1. http://www.blahblah.com/documents/ and http://www.blahblah.com/en-us/documents/
        2. http://www.blahblah.com/news/ and http://www.blahblah.com/en-us/news/

    and similarly all the pages are considered duplicate content in Google Webmaster Tools. I am worried about this, since I think my site is being penalized in ranking because of it. Could you give me some idea how to overcome this situation?
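    The standard remedy is to declare one of each pair canonical, so the bare URL and its en-us twin are folded together rather than competing; a hedged sketch for the documents page (whichever variant you prefer becomes the canonical target):

        <link rel="canonical" href="http://www.blahblah.com/en-us/documents/" />

    Pairing this with hreflang alternates for the fr-ca and spanish-culture versions tells Google the remaining variants are translations, not duplicates.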

    Read the article

  • Google Webmaster tools Incorrect rel-alternate-hreflang implementation warning message

    - by Noam
    I'm getting this warning message in Google Webmaster Tools: "Incorrect rel-alternate-hreflang implementation. In particular, there seems to be a problem with missing or incorrect bi-directional linking (when page A links with hreflang to page B, there must be a link back from B to A as well)." The message seems pretty straightforward, but when checking their example pages, I'm not finding anything wrong. I'm using alternates for translations of the main site menu, titles, etc. In each page I have this:

        <link rel="alternate" hreflang="en" href="http://mydomain.com/page" />
        <link rel="alternate" hreflang="jp" href="http://ja.mydomain.com/page" />
        <link rel="alternate" hreflang="ko" href="http://ko.mydomain.com/page" />
        <link rel="alternate" hreflang="th" href="http://th.mydomain.com/page" />
        <link rel="alternate" hreflang="es" href="http://es.mydomain.com/page" />
        <link rel="alternate" hreflang="pt" href="http://pt.mydomain.com/page" />

    I've double-checked that this exists in all 6 pages. This is the first time I've seen this message, although I implemented this at least 6 months ago and the implementation hasn't changed. Is there any way to check a specific set of pages for these things? Am I missing something in my implementation? We're auto-redirecting people to their language based on location, and giving them an option to change it manually. I've also just found out about the suggestion for the Vary HTTP header. Is that relevant and important here?
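    Two hedged things to check: "jp" is a country code, not a language code (the ISO 639-1 code for Japanese is "ja", which is what hreflang expects), and an automatic geo-redirect can prevent Googlebot from ever fetching an alternate page to confirm its return link, which reads as "missing bi-directional linking". The corrected Japanese line would be:

        <link rel="alternate" hreflang="ja" href="http://ja.mydomain.com/page" />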

    Read the article

  • How to open the JavaScript console in different browsers?

    - by Šime Vidas
    Updated on October 7th, 2012.

    Chrome: Press CTRL + SHIFT + J to open the "Console" tab of the Developer Tools. Alternative method: press CTRL + SHIFT + I or F12 to open the Developer Tools, then press ESC (or click on "Show console" in the bottom right corner) to slide the console up. Note: in Chrome's dev tools there is a full "Console" tab, but the smaller "slide-up" console can be opened while any of the other tabs is active.

    Safari: Press CTRL + ALT + I to open the Web Inspector, then proceed as in Chrome's step 2 (Chrome and Safari have pretty much identical dev tools). Note: this only works if the "Show Develop menu in menu bar" check box in the Advanced tab of the Preferences menu is checked!

    IE9: Press F12 to open the developer tools, then click the "Console" tab.

    Firefox: Press CTRL + SHIFT + K to open the Web Console, or, if Firebug is installed (recommended), press F12 to open Firebug and click on its "Console" tab.

    Opera: Press CTRL + SHIFT + I to open Dragonfly, then click on the "Console" tab.

    Read the article

  • Where or what are the instructions for installing FMOD Ex for Linux to use in g++?

    - by Andrey
    I'm looking for instructions on how to install FMOD. I want to do extra credit for my computer graphics assignment: sound effects. A teammate wants me to go with something simple, and he suggested that I use FMOD Ex. (If you guys can think of something better, do suggest it, but so far FMOD looks more promising than SDL, OpenAL, etc.) Right now I'm having a really hard time finding instructions for installing the latest version of FMOD (the audio content creation tool) on Ubuntu 12.04 LTS (32-bit) so that I can use it with g++ and OpenGL. I checked out this YouTube video, http://www.youtube.com/watch?v=avGxNkiAS9g, but it's for Windows. Then there is an Ubuntu Forums thread which redirected me to this page, https://wiki.debian.org/FMOD, which has some dated instructions. I've downloaded FMOD Ex v4.44.24, which I believe is the latest version. Now I'm looking at eight files, not knowing what to do: libfmodex.so; libfmodex64.so; libfmodex64-4.44.24.so; libfmodex-4.44.24.so; libfmodexL.so; libfmodexL64.so; libfmodexL64-4.44.24.so; libfmodexL-4.44.24.so. Please help. Thanks in advance.
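    For orientation, per FMOD's own naming: the "64" files are the 64-bit builds, the "L" files are the logging builds for debugging, and the unsuffixed .so files correspond to the versioned ones. Since FMOD Ex ships prebuilt, installing on a 32-bit system is mostly copying the plain library into the linker path; a hedged sketch (the include path depends on where the API archive was unpacked):

        sudo cp libfmodex-4.44.24.so /usr/local/lib/
        sudo ln -s /usr/local/lib/libfmodex-4.44.24.so /usr/local/lib/libfmodex.so
        sudo ldconfig
        g++ main.cpp -I/path/to/fmodex/api/inc -lfmodex -o demo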

    Read the article

  • Unavailable packages repository

    - by bitmask
    I'm running Ubuntu 11.10 (Oneiric) on this machine, and suddenly apt is unable to update properly. If I ask it to update its package information by running apt-get update (or by telling the update manager to "check"), it succeeds for about 120 packages (more precisely, I get about 120 Ign/Hit notes) and then says it cannot find the universe sources and restricted amd64 packages:

        Hit http://de.archive.ubuntu.com oneiric-backports/multiverse Translation-en
        Hit http://de.archive.ubuntu.com oneiric-backports/restricted Translation-en
        Hit http://de.archive.ubuntu.com oneiric-backports/universe Translation-en
        Err http://de.archive.ubuntu.com oneiric/universe Sources
          404 Not Found [IP: 141.30.13.20 80]
        Err http://de.archive.ubuntu.com oneiric/restricted amd64 Packages
          404 Not Found [IP: 141.30.13.20 80]
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/universe/source/Sources 404 Not Found [IP: 141.30.13.20 80]
        W: Failed to fetch http://de.archive.ubuntu.com/ubuntu/dists/oneiric/restricted/binary-amd64/Packages 404 Not Found [IP: 141.30.13.20 80]
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    I manually checked the de server and cannot find anything wrong with the stuff it's complaining about; it also looks pretty much like, say, the us mirror. But oddly enough, the IP it lists seems to point to a Debian package server, which obviously does not contain Ubuntu packages. So, is this a local problem that I can fix somehow (and if so, how?), or is there actually some server down right now?
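    If the mirror name really is resolving to the wrong host, comparing DNS answers and temporarily switching to the main archive will confirm it; a hedged sketch:

        host de.archive.ubuntu.com      # compare the answer with the IP in the error
        sudo sed -i.bak 's/de\.archive\.ubuntu\.com/archive.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update

    If the main archive works, the problem was the mirror (or a DNS cache/proxy in between) rather than the local apt state; the .bak file allows restoring the original sources.list later.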

    Read the article
