Search Results

Search found 10025 results on 401 pages for 'state restoration'.

Page 229 of 401

  • Atheros wireless not working

    - by Chandru1
    I have been struggling ever since I installed Ubuntu 10.10 and still cannot get my wifi working. Here is what I tried. First I checked with the ifconfig command that the wireless interface exists, and it shows up as wlan0. Next, as root, I ran "iwlist wlan0 scanning", which reported no scan results. Then I visited https://help.ubuntu.com/community/WifiDocs/Driver/Atheros to see what problem my laptop might have. I do have an ath5k chipset, and while following the instructions in that link I found that the blacklist-ath_pci.conf file has this written in it: "For some Atheros 5K RF MACs, the madwifi driver loads but fails to correctly initialize the hardware, leaving it in a state from which ath5k cannot recover. To prevent this condition, stop madwifi from loading by default. Use Jockey to select one driver or the other." (Ubuntu: #315056, #323830). I am not that good at Linux but I have given it a try. I am desperate to have my wifi working and would be glad if this community could help.

    ADDED: For anyone who wants to know which driver I am using, this is the output:

      network
        description: Wireless interface
        product: AR2413 802.11bg NIC
        vendor: Atheros Communications Inc.
        physical id: 3
        bus info: pci@0000:0a:03.0
        logical name: wlan0
        version: 01
        serial: 00:19:7d:d3:0c:fd
        width: 32 bits
        clock: 33MHz
        capabilities: pm bus_master cap_list ethernet physical wireless
        configuration: broadcast=yes driver=ath5k driverversion=2.6.35-24-generic firmware=N/A latency=168 link=no maxlatency=28 mingnt=10 multicast=yes wireless=IEEE 802.11bg
        resources: irq:18 memory:d0000000-d000ffff

    Some more information and output on what I have done:

      lsmod | grep ath
      ath5k                 130083  0
      mac80211              231541  1 ath5k
      ath                     8153  1 ath5k
      cfg80211              144470  3 ath5k,mac80211,ath
      led_class               2633  1 ath5k
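
    A few generic checks that are often suggested for ath5k "no scan results" symptoms (a sketch, not a confirmed fix for this particular laptop):

      sudo rfkill list                                 # make sure the radio isn't soft- or hard-blocked
      sudo modprobe -r ath5k && sudo modprobe ath5k    # reload the driver
      sudo iwlist wlan0 scan                           # retry the scan
      dmesg | grep -i ath5k                            # look for initialization errors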

    Read the article

  • Install modern browser on Maverick?

    - by feklee
    I tried installing Chrome from the official repository, but I get:

      $ sudo apt-get install google-chrome-stable
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Some packages could not be installed. This may mean that you have requested
      an impossible situation or if you are using the unstable distribution that
      some required packages have not yet been created or been moved out of Incoming.
      The following information may help to resolve the situation:

      The following packages have unmet dependencies:
       google-chrome-stable : Depends: gconf-service but it is not installable
                              Depends: libgconf-2-4 (>= 2.31.1) but it is not installable
                              Depends: libgtk2.0-0 (>= 2.24.0) but 2.22.0-0ubuntu1 is to be installed
                              Depends: libnspr4 (>= 1.8.0.10) but it is not installable
                              Depends: libnss3 (>= 3.14.3) but it is not installable
                              Depends: libstdc++6 (>= 4.6) but 4.5.1-7ubuntu2 is to be installed
                              Depends: libx11-6 (>= 2:1.4.99.1) but 2:1.3.3-3ubuntu1 is to be installed
      E: Broken packages

    Note: This is neither my system, nor do I want to do a full system upgrade. Any modern browser will do. A Flash plugin is also needed, if not included in the browser.
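
    A possibly simpler route on an end-of-life release like Maverick, assuming the universe/multiverse repositories are still reachable (package names as they existed in 10.10; a sketch, not a tested recipe):

      sudo apt-get install chromium-browser          # Chromium from universe, instead of the Chrome .deb
      sudo apt-get install flashplugin-installer     # Adobe Flash plugin from multiverse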

    Read the article

  • The Importance of Collaboration, Analytics, and Mobile Technologies for Modern HR

    - by HCM-Oracle
    It was 17 years ago that a McKinsey study uncovered the “war for talent”. Today, it is no point of contention that a strong talent-centric strategy may be the most important focus for organizations. A talent-centric organization aims at recruiting, retaining, and developing the best talent. The best employees will be able to adapt to new responsibilities and come up with solutions to problems, which are important skills in today’s dynamic work environment, and arguably more important in this recessionary climate. The notion of hiring and retaining talented employees for organizational sustainability and competitive advantage is not a new concept. But can organizations consider themselves “talent-centric” without up-to-date collaboration tools, HR analytics, and mobile technologies in pursuit of attracting, hiring, and retaining the best talent?

    Attend the Upcoming Webcast

    A webcast on June 19th at 3pm EST will reveal more results of the study. Based on original research done in collaboration between Oracle HCM and HCI, we unveil new findings that explore how critical collaboration, analytic insights, and mobile technology are for supporting a talent-centric work environment. You will learn:

      - What are the benefits of being talent-centric?
      - How does collaboration via social networks, analytics with predictive insights, and mobile technologies support the talent-centric strategy of an organization?
      - What is the state of play for these technologies?

    Register Here

    Read the article

  • How Social Can You Get?

    - by Oracle OpenWorld Blog Team
    By Karen Shamban

    Get Social at Oracle OpenWorld
    What: Social Plaza @ Oracle OpenWorld
    When: Tuesday, October 2, Noon–8:00 p.m.
    Where: Mint Plaza, Fifth Street between Mission and Market
    Hashtag: #oow

    Join Oracle’s social media masters—plus hundreds of your soon-to-be-closest friends—at this all-social social. There will be two bars open from 4:30 p.m. until closing, with drink coupons to go 'round. Also on hand will be one of those famous San Francisco gourmet food trucks—check them out also starting at 4:30 p.m. While you’re there, mug at the Social PhotoBooth—maybe while you're drinking a mug. Social Plaza is going to showcase artists at work—face-to-face. Indie music faves Golden State will be stopping by to play, dance-rageous DJ Brandon Arnovick will be spinning discs, and artist Melanie Alves will be painting the scene—and scenes—right there. Into fashion? Check out the 20 local fashion designers who will be displaying, discussing, and selling their unique designs. And if you want to add to your t-shirt collection, we’ve got a live print screening planned ... the first 300 socializers will get a t-shirt free. But hey—be sure to mix some technology with your social efforts. Watch Larry Ellison’s keynote live from Social Plaza, drink in hand. See you there.

    Read the article

  • Cannot FTP without simultaneous SSH connection?

    - by Lucas
    I'm trying to set up an old box as a backup server (running 10.04.4 LTS). I intend to use 3rd-party software on my PC to periodically connect to my server via FTP(S) and to mirror certain files. For some reason, all FTP connection attempts fail UNLESS I'm simultaneously connected via SSH. For example, if I use PuTTY to test the connection to port 21, the system hangs and times out. I get:

      220 Connected to LeServer
      USER lucas
      331 Please specify the password.
      PASS [password]
      <cursor>

    However, when I'm simultaneously logged in (in another session) everything works:

      220 Connected to LeServer
      USER lucas
      331 Please specify the password.
      PASS [password]
      230 Login successful.

    Basically, this means that my software will never be able to connect on its own, as intended. I know that the correct port is open because it works (sometimes) and nmap gives me:

      Starting Nmap 5.00 ( http://nmap.org ) at 2012-03-20 16:15 CDT
      Interesting ports on xx.xxx.xx.x:
      Not shown: 995 closed ports
      PORT    STATE SERVICE
      21/tcp  open  ftp
      22/tcp  open  ssh
      53/tcp  open  domain
      139/tcp open  netbios-ssn
      445/tcp open  microsoft-ds
      Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds

    My only hypothesis is that this has something to do with iptables. Maybe it's allowing only established connections? I don't think that's how I set it up, but maybe? Here are my iptables rules for INPUT:

      lucas@rearden:~$ sudo iptables -L INPUT
      Chain INPUT (policy DROP)
      target                    prot opt source    destination
      fail2ban-ssh              tcp  --  anywhere  anywhere     multiport dports ssh
      ufw-before-logging-input  all  --  anywhere  anywhere
      ufw-before-input          all  --  anywhere  anywhere
      ufw-after-input           all  --  anywhere  anywhere
      ufw-after-logging-input   all  --  anywhere  anywhere
      ufw-reject-input          all  --  anywhere  anywhere
      ufw-track-input           all  --  anywhere  anywhere
      ACCEPT                    tcp  --  anywhere  anywhere     tcp dpt:ftp

    I'm using vsftpd. Any thoughts/resources on how I could fix this?
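
    One hypothesis worth testing, sketched under the assumption that connection tracking is interfering with the FTP control/data handshake (standard module and tool names, not a confirmed diagnosis for this box):

      sudo modprobe nf_conntrack_ftp                      # load the FTP connection-tracking helper
      echo "nf_conntrack_ftp" | sudo tee -a /etc/modules  # keep it loaded across reboots
      sudo ufw allow 21/tcp                               # make sure ufw itself permits FTP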

    Read the article

  • After force-installing a 32-bit deb failed, how can I install the 64-bit version?

    - by TryTryAgain
    I tried "dpkg -i --force-architecture google-earth-stable_i386.deb" and it failed. But now when I try to install the amd64 .deb it fails, saying:

      dpkg: error processing google-earth-stable_current_amd64.deb (--install):
       google-earth-stable: 6.2.2.6613-r0 (Multi-Arch: no) is not co-installable with
       google-earth-stable:i386 6.2.2.6613-r0 (Multi-Arch: no) which is currently installed
      Errors were encountered while processing:
       google-earth-stable_current_amd64.deb

    Somehow it thinks the i386 version is installed, even though no Google Earth files or directories exist. "sudo dpkg --configure -a" outputs:

      dpkg: dependency problems prevent configuration of google-earth-stable:i386:
       google-earth-stable:i386 depends on lsb-core (= 3.2).
      dpkg: error processing google-earth-stable:i386 (--configure):
       dependency problems - leaving unconfigured
      Errors were encountered while processing:
       google-earth-stable:i386

    So it does exist in some capacity. "sudo apt-get -f install" does nothing out of the ordinary:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.

    The weird thing is that Synaptic doesn't show any Google Earth package as available, let alone installed; nothing under the broken filter either. I have also tried "sudo apt-get autoremove" and "sudo apt-get autoclean". So, my question: how can I get rid of this issue?
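
    A cleanup path that is sometimes suggested when dpkg has a package stuck half-installed like this (a sketch using standard dpkg force options; use with care):

      sudo dpkg --remove --force-remove-reinstreq google-earth-stable:i386
      # or, if the unmet dependency still blocks removal:
      sudo dpkg --purge --force-depends google-earth-stable:i386
      sudo apt-get -f install
      sudo dpkg -i google-earth-stable_current_amd64.deb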

    Read the article

  • Project Corndog: Viva el caliente perro!

    - by Matt Christian
    During one of my last semesters in college we were required to take a class called Computer Graphics, which tried (quite unsuccessfully) to teach us a combination of mathematics, OpenGL, and 3D rendering techniques. The class itself was horrible, but one little gem of an idea came out of it. See, the final project in the class was to team up and create some kind of demo or game using techniques we learned in class. My friend Paul and I teamed up and developed a top-down shooter that, given the stringent timeline, was much less a game and much more a bunch of 3D objects floating around a screen.

    The idea itself, however, I found clever and unique, and I decided it was time to spend some time developing a proper version of it. Project Corndog, as it is tentatively named, casts you as a freshly fried corndog who broke free from the shackles of fair-food slavery in a quest to escape the state fair you were born in. Obviously it's quite a serious game, with undertones of racial prejudice, immoral practices, and cheap food sold at high prices.

    The game itself is a top-down shooter in the style of 1942 (NES). As a delicious corndog you will have to fight through numerous enemies including hungry babies, carnies, and the corndog serial-killer himself: the corndog eating champion! Other, more engaging and frighteningly realistic enemies await as the only thing between you and freedom.

    Project Corndog is being developed in Visual Studio 2008 with XNA Game Studio 3.1. It is currently hosted on Google Code and will be made available as an open source engine in the coming months.

    Read the article

  • What would you do to improve the working of a small Development team?

    - by Omar Kooheji
    My company is having a reshuffle and I'm applying for my boss's job as he's moved up the ladder. The new role would give me a chance to move our development team into the 21st century, and I'd like to make sure that:

      1. I can provide sensible suggestions in the interview to get the job, so I can fix the team.
      2. If I get the job, I can actually enact some changes that improve the lives of the developers and their output.

    I want to know what I can suggest to improve the way we work, because I think it's a mess, but every time I've suggested a change it's been shot down because any time spent implementing the change would be time not spent developing software. Here is the state of play at the moment:

      - My team consists of 3-4 developers (mainly Java, but I do some .NET work)
      - Each member of the team usually works on 2-3 projects at a time
      - We are each responsible for the entire life cycle of a project, from design to testing
      - Usually only one person works on a project (although the odd project will have more than one person working on it)
      - Projects tend to be bespoke to a single customer, or are really heavily reliant on a particular customer environment
      - We have 2-3 "products" which we evolve to meet customer requirements
      - We use SVN for source control
      - We don't do continuous integration (I'd like to start)
      - We use a really basic bug tracker for internal issue tracking (I'd like to move to an issue/task management system)
      - Any changes that bring a sudden dip in revenue generation will probably be rejected; the company isn't structured for development

    Most of the rest of the technical team's jobs can be broken down to "install this piece of hardware, configure that piece of hardware", and once a job is done it's done and you never have to look at it again. This mentality has crept into the development team because it's part of the company culture.

    Read the article

  • Vehicle: Boat accelerating and turning in Unity

    - by Emilios S.
    I'm trying to make a player-controllable boat in Unity and I'm running into problems with my code.

      1. I want the boat to accelerate and decelerate steadily instead of instantly moving at the speed I set.
      2. I want the player to be unable to steer the boat unless it is moving.
      3. If possible, I want to simulate the vertical floating of a boat during its movement (it going up and down).

    My current code (C#) is this:

      using UnityEngine;
      using System.Collections;

      public class VehicleScript : MonoBehaviour
      {
          public float speed = 10;
          public float rotationspeed = 50;

          // Update is called once per frame
          void Update ()
          {
              // Forward movement
              if (Input.GetKey(KeyCode.I))
                  transform.Translate (Vector3.left * speed * Time.deltaTime);
              // Backward movement
              if (Input.GetKey(KeyCode.K))
                  transform.Translate (Vector3.right * speed * Time.deltaTime);
              // Left movement
              if (Input.GetKey(KeyCode.J))
                  transform.Rotate (Vector3.down * rotationspeed * Time.deltaTime);
              // Right movement
              if (Input.GetKey(KeyCode.L))
                  transform.Rotate (Vector3.up * rotationspeed * Time.deltaTime);
          }
      }

    In the current state of my code, when I press the specified keys, the boat simply moves at 10 units/sec instantly, and also stops instantly. I'm not really sure how to implement the behaviours above, so any help would be appreciated. Just to clarify, I don't necessarily need the full code for those features; I just want to know what functions to use in order to achieve the desired effects. Thank you very much.

    Read the article

  • BPM 11gR1 now available on Amazon EC2

    - by Prasen Palvankar
    The new Oracle BPM 11gR1, including the latest Oracle SOA Suite 11gR1 Patchset-2, is now available as an Amazon Machine Image (AMI). This is a fully configured image which requires absolutely no installation and lets you get hands-on experience with the software within minutes. The image has all the required software installed and configured, and includes the following:

      - Oracle 11g Database Standard Edition
      - Oracle SOA Suite 11gR1 Patch-set 2
      - Oracle BPM 11gR1
      - Oracle Webcenter with BPM Process Spaces
      - Oracle Universal Content Management
      - Oracle JDeveloper with SOA and BPM plugins

    Note: Use of this AMI requires acceptance of Oracle Technology Network (OTN) terms of use.

    To use this AMI, follow these steps:

      1. Log in to your Amazon account and browse to the Amazon AWS Console. If this is the first time you are using Amazon Web Services, please visit https://aws.amazon.com/ec2/ for information on Amazon Elastic Cloud Computing and how to get started with Amazon EC2.
      2. Make sure the security group you will be using to launch the instance allows the following ports to be opened: 22 (SSH), 1521, 7001, 8001, 8888, 9001.
      3. Click on AMIs.
      4. Change the viewing filters to 64-bit and enter soa-bpm in the search box. You should see the following AMI: 083342568607/oracle-soa-bpm-11gr1-ps2-4.1-pub
      5. Select the AMI and click on Launch or Spot Request. For more information on spot requests, please visit the Amazon EC2 link above.
      6. Accept all the defaults and launch the instance.
      7. When the instance state changes to running, copy the assigned public host name and connect to it using either PuTTY or the SSH command. For PuTTY usage, refer to this document.
      8. Once you are connected to the instance using PuTTY or SSH, you will be presented with the terms of use. Accept the terms of use to proceed. This will prompt you to set passwords for your oracle OS login as well as for VNC. Note that the instance will not be usable until you have accepted the terms of use.
      9. The instance is now ready to use. The SOA/BPM and other servers are started automatically once you accept the terms of use. Initial startups can take about 5-10 minutes.

    If you would like to use the JDeveloper installed in the AMI, you can access it using either VNC or NX. You can get the NX client from NoMachine. /home/oracle/README.txt contains all the URLs that you can use to access the Enterprise Manager, BPM Composer, BPM Workspace, Webcenter, etc.
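
    Connecting once the instance is running might look something like this (the key file, user name, and host are illustrative placeholders; the post itself only says to connect with PuTTY or SSH and read /home/oracle/README.txt):

      ssh -i my-keypair.pem oracle@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
      cat /home/oracle/README.txt    # lists the EM, BPM Composer, BPM Workspace, Webcenter URLs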

    Read the article

  • Sending Emails via Google SMTP - after some time quit working

    - by Chris
    On a website I use PHPMailer to send automated registration emails, etc., and also a newsletter tool (which loops through the emails and sends them one by one). I also configured and confirmed @mydomain addresses under the Gmail settings, so I can send from @mydomain addresses without the Gmail address being displayed. Furthermore, I authorized the website to send mails with this link: https://accounts.google.com/DisplayUnlockCaptcha

    Now, after two months where everything worked perfectly fine, users suddenly started not receiving emails anymore, and most recently emails are not even being sent anymore. Also, I received many error messages like this:

      Technical details of permanent failure:
      Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 5.4.1 [email protected]: Recipient address rejected: Access Denied (state 13).

    When I check at this link: https://toolbox.googleapps.com/apps/checkmx/ it reports two non-critical errors:

      - Relayhost configuration detected.
      - There SHOULD be a valid SPF record.

    So, the questions I have are:

      - Does anybody have any hint why it stopped working, and what the error messages mean?
      - What can I do to fix it?
      - Where do I set an SPF record (cPanel?)?
      - What is a relayhost and how do I fix that?

    It is about 1000-1400 mails a day (Gmail's limit is 2000). Also, what can I do wrong when setting up an SPF record? I've heard there are some testing tools for that. Thank you so much already in advance for your help!
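
    For the SPF question specifically, a common setup when all mail is relayed through Google's servers is a DNS TXT record like the one sketched below; you can check what is currently published with dig (the domain is a placeholder):

      dig +short TXT mydomain.tld
      # a typical SPF value when relaying exclusively through Google:
      #   "v=spf1 include:_spf.google.com ~all"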

    Read the article

  • Boot failure after update and system crash 11.10

    - by Alubuntu
    I'm using a Dell XPS M1330 laptop, Ubuntu 11.10 32-bit, single boot (Linux is the only OS), and VirtualBox (with Windows XP in a virtual machine). The system crashed while I was working with a very heavy image in GIMP (and VirtualBox was on). During the same session I made an automatic system update, but I don't know exactly what was updated. After the crash the system doesn't boot. It always freezes on the terminal screen, but at different stages: "Starting CUPS printing spooler/server", "Checking battery state", "mountall: plymouth command failed", etc. Sometimes it indicates [failed] for some of the processes and sometimes they're all [ok]. I did a boot info summary with boot-repair and it gave me this report: http://paste.ubuntu.com/1050743/

    This is not a new install; I have been using it for almost 6 months. Though, since last month it couldn't halt: it would start shutting down and then stop at a black screen with the fan on and the on/off light still lit (I used to finish the shutdown by forcing a halt with a long press of the power button). I don't know if this has anything to do with the boot failure. Is there any way I can solve this issue (the boot failure), or run some sort of system check to find out what the cause of the problem is? I wasn't sure if this was the right place to ask the question, so I also asked it at https://answers.launchpad.net/ubuntu/+question/201004
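
    One first step that is commonly tried after a crash like this is a forced filesystem check from a live CD/USB; the device name below is only a guess and should be confirmed first (for example against the boot-repair report or "sudo fdisk -l"):

      sudo fsck -f /dev/sda1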

    Read the article

  • Closer Aero-snap in ubuntu

    - by Andrew Redd
    Before I get screamed at for duplicating a question: I've read "windows 7-like snap window maximize and vertical feature" and http://www.omgubuntu.co.uk/2009/11/get-aero-snap-in-ubuntu/ . There are two problems with that solution that I am trying to get around:

      1. It's not sensitive to dragging a window.
      2. It's not intelligent about TwinView monitors.

    The first problem is the more pressing. I have the Compiz settings with wmctrl, but this is not sensitive to dragging windows: if a window has focus and I place my mouse in the panel, the window gets maximized even though I'm not dragging it. A good solution would be sensitive to the state of the mouse (clicked, right-clicked, middle-clicked); an ideal solution would be sensitive to whether a window is being dragged or not.

    Second is a minor annoyance, to me at least. With the commands as they are listed, they are equivalent to maximizing the windows, since I have a TwinView monitor setup. Is there any way to add these sensitivities to the commands?
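
    For reference, the wmctrl commands used in that kind of setup are usually along these lines (a sketch for a single 1920-pixel-wide screen; the arguments to -e are gravity,x,y,width,height and would need adjusting for a TwinView layout):

      # "snap" the active window to the left half of the screen
      wmctrl -r :ACTIVE: -b remove,maximized_horz
      wmctrl -r :ACTIVE: -b add,maximized_vert
      wmctrl -r :ACTIVE: -e 0,0,0,960,-1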

    Read the article

  • Dirt Cheap DSLR Viewfinder Improves Outdoor DSLR LCD Visibility

    - by Jason Fitzpatrick
    If the excitement you felt about having a DSLR capable of shooting video wore off the second you took it outside and realized you needed an expensive add-on viewfinder to use it in sunlight, this cheap DIY viewfinder is for you. The digital video capabilities of new DSLR cameras are amazing and are changing the way people interact with movie production. What's not awesome, however, is how the LCD screen gets completely washed out in bright conditions, and you almost always have to buy a $50+ aftermarket accessory to make the LCD functional under those conditions. Courtesy of the Frugal Film Maker, we have the following video tutorial showing us how to turn a plastic container, a cheap dollar-store magnifying glass, a headphone ear cover, and some glue and hair ties into a dirt cheap LCD viewfinder. You'll never have to squint or miss a shot because of bright lighting conditions again; even better, you'll only spend a few bucks on the whole project. For step-by-step instructions in print form, hit up the link below.

    Homemade DSLR Viewfinder [Instructables via Make]

    Read the article

  • Why does modx-based site start using different domains for some content?

    - by naxa
    Situation: I have a modx site on a VPS with multiple domain and subdomain names. The modx site should use what I call the 'primary' domain name's 'primary' subdomain, i.e. www.intendedname.tld . The problem is that, as time passes, the site mysteriously starts using another subdomain for links to content like videos, images, and even pages and (internal) links. The other subdomains don't serve this content, of course. If I clear the modx cache, the original state is restored. However, the problem comes back again later.

    The VPS has a domain registered and multiple A records pointing to the VPS's IP as subdomains. There is the 'primary' one, which is intended to be used as the public content server; the other ones are like docs. and test., etc. On top of that, I have the dynamic-DNS client from no-ip installed on the machine with a dynamic domain name bound to it, which gives a completely different domain name. I originally used it for ssh login and to serve a completely different site. An nginx server is put to good use to rewrite the different subdomains to the right places.

    Edit: The modx templates use <base href="[[++site_url]]" />.

    Current attempt to fix: The current 'solution' is to also use the rewrite to send everything to the 'primary' domain and subdomain. In the nginx config file for the site, a server block dedicated to this task uses (unsurprisingly) the rewrite directive to rewrite requests for the unexpected server_name entries (i.e. the other subdomains). With this, the main site basically works (sort of), but it renders all the other functions (docs) useless. Before this rewrite was set, the 'solution' was to clear the modx cache on a regular basis. The original modx content is not getting corrupted, only the files in the cache are. What can I do to find out what the actual problem is and fix it?

    Read the article

  • Flash removal and installation issue

    - by Theo
    I'm having this issue trying to uninstall and/or upgrade the Adobe Flash player plug-in. Here's what I've run in the terminal:

      $ sudo apt-get install -f
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following packages were automatically installed and are no longer required:
        linux-headers-3.0.0-13-generic libgladeui-1-11 linux-headers-3.0.0-19-generic
        linux-headers-3.0.0-13 linux-headers-3.0.0-19 erlang-base
      Use 'apt-get autoremove' to remove them.
      The following packages will be REMOVED:
        adobe-flashplugin
      0 upgraded, 0 newly installed, 1 to remove and 2 not upgraded.
      1 not fully installed or removed.
      After this operation, 10.2 MB disk space will be freed.
      Do you want to continue [Y/n]? y
      (Reading database ... 375840 files and directories currently installed.)
      Removing adobe-flashplugin ...
      update-alternatives: error: no alternatives for iceape-flashplugin.
      update-alternatives: error: no alternatives for iceape-flashplugin.
      dpkg: error processing adobe-flashplugin (--remove):
       subprocess installed pre-removal script returned error exit status 2
      No apport report written because MaxReports is reached already
      postinst called with argument `abort-remove'
      dpkg: error while cleaning up:
       subprocess installed post-installation script returned error exit status 1
      Errors were encountered while processing:
       adobe-flashplugin
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    Please advise if you can. Let me know if there is any other info you need.
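
    A workaround that is often suggested when a package's own pre-removal script is what fails (a sketch using the standard dpkg maintainer-script location; proceed with care):

      # make the failing prerm script a no-op, then retry the removal
      echo '#!/bin/sh' | sudo tee /var/lib/dpkg/info/adobe-flashplugin.prerm
      sudo dpkg --remove adobe-flashplugin
      sudo apt-get install -f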

    Read the article

  • Google earth will not reinstall

    - by chad
    I was trying to perform the fix found at this link: http://www.omgubuntu.co.uk/2012/01/how-to-make-google-earth-look-native-in-ubuntu. It requires you to delete certain files from the /opt/google/earth/free folder and then add some new ones that you download. I deleted the files, but the links to download the new ones were unusable. I was using gksudo nautilus, so trash was disabled, meaning I could not restore the files I had deleted. I then tried to go to the Google Earth website and reinstall it. I downloaded the .deb, but when I tried to install it, it gave me an error message saying "cannot install ia32-libs". I tried installing ia32-libs via the terminal and it gave me this error:

      chad@chad-Lenovo-G570:~$ sudo apt-get install ia32-libs
      [sudo] password for chad:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Some packages could not be installed. This may mean that you have requested
      an impossible situation or if you are using the unstable distribution that
      some required packages have not yet been created or been moved out of Incoming.
      The following information may help to resolve the situation:

      The following packages have unmet dependencies:
       ia32-libs : Depends: ia32-libs-multiarch
      E: Unable to correct problems, you have held broken packages.

    How do I fix this? Now I am stuck without a functioning Google Earth. How can I fix this?

    Read the article

  • Keeping multiple root directories in a single partition

    - by intuited
    I'm working out a partition scheme for a new install. I'd like to keep the root filesystem fairly small and static, so that I can use LVM snapshots to do backups without having to allocate a ton of space for the snapshot. However, I'd also like to keep the total number of partitions small. Even with LVM, there's inevitably some wasted space, and it's still annoying and vaguely dangerous to allocate more. So there seem to be a couple of different options:

      1. Have the partition that will contain bulky, variable files, like /srv, /var, and /home, be the root partition, and arrange for the core system state — /etc, /usr, /lib, etc. — to live in a second partition. These files can (I think) be backed up using a different backup scheme, and I don't think LVM snapshots will be necessary for them.
      2. The opposite: put the big variable directories on the second partition, and have the essential system directories live on the root FS.

    Either of these options requires that certain directories be pointers of some variety to subdirectories of a second partition. I'm aware of two different ways to do this: symlinks and bind mounts. Is one better than the other for this purpose? Is there another option? Do any of the various Ubuntu installation media/strategies support this style of partition layout?
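
    A minimal sketch of the bind-mount variant of option 2, assuming the bulky directories live on a second filesystem mounted at /bulk (the mount points are just examples):

      sudo mount --bind /bulk/home /home
      sudo mount --bind /bulk/srv  /srv
      # the persistent equivalent in /etc/fstab would be lines of the form:
      #   /bulk/home  /home  none  bind  0  0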

    Read the article

  • Clientside anticheating in multiplayer game 1vs1

    - by garnav
    I'm developing a simple card game with a matchmaking system that will put you against another human player. This will be the only game mode available: 1v1 against another human, no AI. I want to prevent cheating as much as possible. I have already read a lot of similar questions here, and I already know that I cannot trust the client and have to make all verifications server-side. I intend to have a server (I need one for the matchmaking anyway) and I intend to make some verifications server-side, but if I want to check everything server-side, my server has to keep track of the state of all current games and check every action, and I don't have the money/infrastructure to support that server.

    My idea is to make clients check and verify some of the actions made by their opponent, and if they find an illegal action, notify the server of the possible cheating and have the server verify it. This will still require my server to keep track of all current games, but it will save resources by only checking things that cannot be checked client-side (like card order in the deck) and only checking other things when they are actually reported as wrong. (Clients can only check what they can see without it enabling cheating on their side! For example, they can't check whether a played card was really in the opponent's hand, because that would require them to know all the cards in that hand.)

    Summing up, my questions are:

      - Is this a viable approach?
      - Will I actually save resources doing this, or is the extra complexity in the server and client for exchanging these messages not worth it?
      - Do you know any game that has successfully or unsuccessfully tried a similar approach?

    Thanks all for reading and answering.

    Read the article

  • Approaches to timed puzzle elements

    - by ndg
    I'm working on a side-scrolling game that has a number of timed puzzle elements. As a simple example: I have a number of moving platforms that have been set up to transition in a pattern. Ideally I'd like to ensure that as the player first approaches them, they are in an ideal state, whereby the player can witness the full transition and more experienced players (i.e. speedrunners) can complete the puzzle immediately without having to wait for the current transition to complete. The issue, in a nutshell, is that because these platforms begin transitioning at the start of the level, it's impossible to correctly calculate when the player is likely to stumble upon them. I've done a fair bit of Googling but haven't managed to turn up any decent resources on solving a problem like this.

    The obvious solution is to only begin updating the objects when the player (or, more likely, the camera) first encounters them, but this becomes difficult in more complicated situations. It seems like potentially the easiest way of handling this is to have an invisible trigger volume that tells any puzzle elements located inside of it that the player has 'arrived' upon first colliding with the player. But this would mean I'd have to logically group puzzle elements, which could become messy in a hurry. Take, for instance, a puzzle that appears to the right of the screen. It may take the player a number of seconds to reach it, and it would look strange if the elements involved were to remain stationary until then; but by the time the player arrives, it's likely things will be 'out of sync'. I'm posting here in the hope that others know of, or have implemented, a decent solution to this problem.

    Read the article

  • Error when upgrading initscripts using dist-upgrade on a cd image using uck

    - by InkBlend
    I was using UCK to customize an Ubuntu 11.10 image, and ran a dist-upgrade on it (from the console) to try to update all of the packages on it. The upgrade worked successfully for all but two packages, so I tried again and got the same error message. This is what happened:

      # sudo apt-get dist-upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Calculating upgrade... Done
      0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      2 not fully installed or removed.
      After this operation, 0 B of additional disk space will be used.
      Do you want to continue [Y/n]? y
      Setting up initscripts (2.88dsf-13.10ubuntu4.1) ...
      guest environment detected: Linking /run/shm to /dev/shm
      rmdir: failed to remove `/run/shm': Device or resource busy
      Can't symlink /run/shm to /dev/shm; please fix manually.
      dpkg: error processing initscripts (--configure):
       subprocess installed post-installation script returned error exit status 1
      No apport report written because MaxReports is reached already
      dpkg: dependency problems prevent configuration of ifupdown:
       ifupdown depends on initscripts (>= 2.88dsf-13.3); however:
        Package initscripts is not configured yet.
      dpkg: error processing ifupdown (--configure):
       dependency problems - leaving unconfigured
      No apport report written because MaxReports is reached already
      Errors were encountered while processing:
       initscripts
       ifupdown
      E: Sub-process /usr/bin/dpkg returned an error code (1)
      #

    How can I fix this? Is the problem with the ISO, with dpkg, or can I just not upgrade some packages with UCK? I am using Ubuntu 11.10 and UCK 2.4.5 to customize an Ubuntu 11.10 image.
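
    One way people typically do the "please fix manually" step inside the UCK chroot, sketched under the assumption that something (often a leftover mount from the host) is holding /run/shm busy:

      mount | grep /run/shm        # see what is mounted there
      umount /run/shm              # unmount it if it is a leftover bind/tmpfs mount
      rmdir /run/shm && ln -s /dev/shm /run/shm
      dpkg --configure -a          # let initscripts and ifupdown finish configuring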

    Read the article

  • Keep your Root Authorities up to date

    - by John Breakwell
    Originally posted on: http://geekswithblogs.net/Plumbersmate/archive/2013/06/20/keep-your-root-authorities-up-to-date.aspx

    By default, Windows will automatically update its internal list of trusted root authorities as long as the Update Root Certificates function is installed. This should be enabled by default and takes manual intervention to remove. With this component enabled, the following happens: if you are presented with a certificate issued by an untrusted root authority, your computer will contact the Windows Update Web site to see if Microsoft has added the CA to its list of trusted authorities. If it has been added to the Microsoft list of trusted authorities, its certificate will automatically be added to your trusted certificate store.

    If the component is not installed and a certificate from an untrusted CA is encountered, then the following text will be seen:

    This is an inconvenience for the person browsing the site, as they need to click to continue. Applications, though, will be unable to proceed and will throw an exception. Example:

      ERROR_WINHTTP_SECURE_FAILURE 12175 (0x00002F8F)
      One or more errors were found in the Secure Sockets Layer (SSL) certificate sent by the server.

    If you look at the certificate's properties, you can see the "Issued by:" value. This must match a Trusted Root Certificate Authority in the current user's certificate store.

    So turn on automatic updating of trusted root authority certificates. For Windows Vista and above, this option is controlled through Group Policy. See the "To Turn Off the Update Root Certificates Feature by Using Group Policy" section of the following Technet article: Certificate Support and Resulting Internet Communication in Windows Vista.

    If Windows Update is a blocked site, then download and deploy the latest pack of root certificates from Microsoft: Update for Root Certificates For Windows XP [May 2013] (KB931125).

    Failing that, find a machine that has the latest root certificates installed and export them from there:

      1. Open up the Certificates console.
      2. Right-click the required Trusted Root Certificate Authority certificate.
      3. Choose Export from "All Tasks" to open up the Certificate Export Wizard.
      4. Choose an export file format – DER should be fine.
      5. Provide a file name and complete the export.
      6. Move the file to the machine that's missing the certificate.
      7. Right-click the file and choose "Install Certificate" to open up the Certificate Import Wizard.
      8. Allow the wizard to automatically select the certificate store and complete the import.

    On a side note, for troubleshooting certificate issues it can be helpful to clear the SSL state:

    Read the article

  • Duck checker in Python: does one exist?

    - by elliot42
    Python uses duck typing, rather than static type checking. But many of the same concerns ultimately apply: does an object have the desired methods and attributes? Do those attributes have valid, in-range values? Whether you're writing constraints in code, writing test cases, validating user input, or just debugging, inevitably somewhere you'll need to verify that an object is still in a proper state: that it still "looks like a duck" and "quacks like a duck." In statically typed languages you can simply declare "int x", and anytime you create or mutate x, it will always be a valid int.

    It seems feasible to decorate a Python object to ensure that it is valid under certain constraints, and that every time that object is mutated it is still valid under those constraints. Ideally there would be a simple declarative syntax to express "hasattr length and length is non-negative" (not in those words; not unlike Rails validators, but less human-language and more programming-language). You could think of this as an ad-hoc interface/type system, or you could think of it as an ever-present object-level unit test.

    Does such a library exist to declare and validate constraints/duck-checking on Python objects? Is this an unreasonable tool to want? :) (Thanks!)

    Contrived example:

      rectangle = {'length': 5, 'width': 10}

      # We live in a fictional universe where multiplication is super expensive.
      # Therefore any time we multiply, we need to cache the results.
      def area(rect):
          if 'area' in rect:
              return rect['area']
          rect['area'] = rect['length'] * rect['width']
          return rect['area']

      print area(rectangle)
      rectangle['length'] = 15
      print area(rectangle)  # compare expected vs. actual output!

      # imagine the same thing with object attributes rather than dictionary keys.

    Read the article

  • Windows Phone 7 Prototype 002: Animated Page Transitions + Writeable Bitmaps

    Motion is a key part of WP7 application development. Without motion, the WP7 UI is just a bunch of text, and not nearly as exciting. To delight users, you can add some transitions between pages. The sample app includes some storyboards to animate between two pages. Other people have noted that you can just use the TransitioningContentControl from the Silverlight Toolkit. Peter Torr also had a nice animating frame control in his Mix demo code (his blog has some other great code samples for WP7 app dev). I took some of those concepts and the code from the TransitioningContentControl to make a new animating frame control.

    In this prototype, the frame takes a snapshot of the old content and the new content using writeable bitmaps, animates the snapshots, and then replaces them with the actual page. The benefit is smoother animation on pages with lots of controls; otherwise, if you have a large panorama, it might not animate that cleanly. Like the other solutions based on the TransitioningContentControl, you can centralize all the animations in one place and not have to handle them on each individual page. Peter's code also had a nice snippet for choosing the animation based on the navigation direction, so you could just have a forward/backward animation and not have to do anything on each page. You could also add some more advanced transitions using pixel shaders, or make a default no-transition state if you wanted some specific animation on a page where individual controls transitioned out differently, like some of the WP7 shell apps.

    Sample code: 100% guaranteed to work on my emulator.

    Read the article

  • Recommended main loop style

    - by Frootmig-H
    I've just begun attempting an FPS with jMonkeyEngine, but I'm currently stuck on the best way to implement the main loop, especially with regards to non-instantaneous user actions. By that, I mean things like reloading a weapon: the user starts the action, it continues for a while with an animation and some sound, and when it completes, the game state updates. (I should mention that it's not technically a loop, it's an update method, called as often as possible. Is that different? Me no understand terminology.) So far I've considered:

    Animation driven
      1. Player presses reload.
      2. Start reload animation.
      3. If the user starts another action, abort the animation and start the new action.
      4. When the animation_complete event is received (jMonkeyEngine provides this), update ammo counters.

    Event driven
      1. Player presses reload.
      2. Start reload animation.
      3. Queue up an out-of-thread method to be called at time t + (duration of reload animation).
      4. If the user starts another action, cancel both the animation and the queued method.
      5. When the queued method executes, update ammo.
      This avoids relying on the animation event (jMonkeyEngine has a particular quirk), but brings in the possibility of threading problems.

    'Blocking' (not sure of the correct term)
      1. Player presses reload.
      2. Start reloading animation.
      3. reloading = true; reloadingStartTime = now
      4. while (reloading && ((now - reloadingStartTime) < reloadingDuration)) { if the user starts another action, break and cancel reloading }
      5. Update ammo counters; reloading = false.

    My main concern is that actions can interrupt each other. Reloading can be interrupted by firing, or by dropping or changing weapon; crouching can be interrupted by running; etc. What's the recommended way to handle this? What are the advantages and disadvantages of each method? I'm leaning towards event-driven, even though it requires more care; failing that, blocking.

    Read the article
