Search Results

Search found 20065 results on 803 pages for 'practice problems'.

  • Collision Detection Problems

    - by MrPlosion
    So I'm making a 2D tile-based game, but I can't quite get the collisions working properly. I've taken the code from the Platformer Sample and implemented it in my game, as seen below. One problem I'm having is that, for some strange reason, I can't move to the left while I'm on the ground. I'm pretty sure this problem comes from the HandleCollisions() method, because when I stop running it I can move around smoothly with no problems. Another problem is that when I'm close to a tile the character jitters very strangely. I will try to post a video if necessary. Here is the HandleCollisions() method:

        void HandleCollisions()
        {
            Rectangle bounds = BoundingRectangle;
            int topTile = (int)Math.Floor((float)bounds.Top / World.PixelTileSize);
            int bottomTile = (int)Math.Ceiling((float)bounds.Bottom / World.PixelTileSize) - 1;
            int leftTile = (int)Math.Floor((float)bounds.Left / World.PixelTileSize);
            int rightTile = (int)Math.Ceiling((float)bounds.Right / World.PixelTileSize) - 1;

            isOnGround = false;

            for(int x = leftTile; x <= rightTile; x++)
            {
                for(int y = topTile; y <= bottomTile; y++)
                {
                    if(world.Map[y, x].Collidable)
                    {
                        Rectangle tileBounds = new Rectangle(x * World.PixelTileSize, y * World.PixelTileSize,
                                                             World.PixelTileSize, World.PixelTileSize);
                        Vector2 depth = RectangleExtensions.GetIntersectionDepth(bounds, tileBounds);

                        if(depth != Vector2.Zero)
                        {
                            // Resolve along the axis of least penetration.
                            if(Math.Abs(depth.Y) < Math.Abs(depth.X))
                            {
                                isOnGround = true;
                                position = new Vector2(position.X, position.Y + depth.Y);
                            }
                            else
                            {
                                position = new Vector2(position.X + depth.X, position.Y);
                            }

                            bounds = BoundingRectangle;
                        }
                    }
                }
            }
        }

    Thanks.
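    For reference, the helper used above comes from the XNA Platformer Sample the poster mentions. A minimal sketch of that extension method, reconstructed from the sample (so treat the details as an assumption rather than the poster's exact code):

        // Lives in a static class (RectangleExtensions in the sample).
        public static Vector2 GetIntersectionDepth(this Rectangle rectA, Rectangle rectB)
        {
            // Half sizes of both rectangles.
            float halfWidthA = rectA.Width / 2.0f;
            float halfHeightA = rectA.Height / 2.0f;
            float halfWidthB = rectB.Width / 2.0f;
            float halfHeightB = rectB.Height / 2.0f;

            // Centers of both rectangles.
            Vector2 centerA = new Vector2(rectA.Left + halfWidthA, rectA.Top + halfHeightA);
            Vector2 centerB = new Vector2(rectB.Left + halfWidthB, rectB.Top + halfHeightB);

            // Current and minimum non-intersecting distances between centers.
            float distanceX = centerA.X - centerB.X;
            float distanceY = centerA.Y - centerB.Y;
            float minDistanceX = halfWidthA + halfWidthB;
            float minDistanceY = halfHeightA + halfHeightB;

            // If we are not intersecting at all, return (0, 0).
            if (Math.Abs(distanceX) >= minDistanceX || Math.Abs(distanceY) >= minDistanceY)
                return Vector2.Zero;

            // Signed intersection depths.
            float depthX = distanceX > 0 ? minDistanceX - distanceX : -minDistanceX - distanceX;
            float depthY = distanceY > 0 ? minDistanceY - distanceY : -minDistanceY - distanceY;
            return new Vector2(depthX, depthY);
        }

    Since HandleCollisions() resolves along the axis of least penetration, a depth whose sign points the wrong way would push the player into a tile instead of out of it, which would be consistent with the stuck-on-one-side symptom described above.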

  • Is using built-in sorting considered cheating in practice tests?

    - by user10326
    I am using one of the practice online judges, where a practice problem is posed, you submit an answer, and you are told whether it is accepted based on test inputs. My question is the following: in one of the practice tests I needed to sort an array as part of the solution algorithm. If it matters, the problem was: find 2 numbers in an array that add up to a specific target. As part of my algorithm I sorted the array, but instead of implementing sorting as part of the same method I used Java's built-in quicksort. To do that I had to use the fully qualified name: java.util.Arrays.sort(array); Since I had to use the fully qualified name, I am wondering if this is a kind of "cheating" (I mean, perhaps an online judge does not expect this). Is it? In a formal interview (since these tests, as I understand, are practice for interviews) would this be acceptable?
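    For context, a minimal sketch of the sort-then-scan approach the poster describes (the method and variable names here are illustrative, not from the original post):

        import java.util.Arrays;

        public class TwoSum {

            // Returns true if any two numbers in the array add up to target.
            static boolean hasPairWithSum(int[] array, int target) {
                Arrays.sort(array); // the built-in sort in question
                int lo = 0, hi = array.length - 1;
                while (lo < hi) {
                    int sum = array[lo] + array[hi];
                    if (sum == target) return true;
                    if (sum < target) lo++; // need a bigger sum
                    else hi--;              // need a smaller sum
                }
                return false;
            }

            public static void main(String[] args) {
                System.out.println(hasPairWithSum(new int[] {3, 8, 1, 5}, 9)); // true: 1 + 8
            }
        }

    The sort dominates at O(n log n) and the two-pointer scan afterwards is linear, which is why reusing the library sort is the natural move here.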

  • Latest (5 June 14) Updates to 10.04 Causing Multiple Problems

    - by user291780
    Apologies: the questions are very short, but the background isn't. I received a routine notification from the update manager a few days ago (I believe June 5th). I took a look and there was lots of Linux stuff, headers, etc., nothing obviously unusual. I'd received and applied a more extensive package set, kernel and all, a few weeks ago with no problem. On June 6 I pushed the upgrade button on the June 5th batch; nothing unusual happened, it needed a reboot, which I did after a full power down, and it came up fine. Gedit worked, the calculator worked. I started up Firefox, it came up, I selected the Bookmarks menu, and blam: it hesitated and then greyed out, and when I tried to close it I got the "process not responding" message. Undaunted, I tried to fire up Google Chrome... nothing on the screen or process bar. I fired up the system monitor and indeed there were some "sleeping" Chrome processes "running". I powered down several times, but the same problems persist.

    Similar but worse story when I tried to fire up one of my virtual machines: VirtualBox came up fine, but when I tried to start a virtual machine I got a progress popup I'd never seen before, which showed no progress past 20%. I uninstalled Oracle VirtualBox, reinstalled the latest and greatest, same result. I was also unable to log out or shut down once the virtual machine exhibited this behavior; I powered down manually, end of story. I've never seen such a bad result after an update. I'm running Ubuntu 10.04 LTS (Lucid Lynx), as I have been for a number of years. Please don't reply with "why don't you run some other version of Ubuntu"; that doesn't answer the questions below.

    Questions:
    1. Will there be a subsequent update that fixes this, and if so, when?
    2. If not, is there a way for me to get back to where I was before this disaster?

  • Install Problems on ASUS X401A Notebook

    - by tired_of_trying
    Okay... I have tried many approaches to installing Ubuntu 12.xx on my new Asus notebook, with varying degrees of failure. First: I'm not a newbie, but I'm as frustrated as one! Install background:

    Install from USB DVD drive: The install went well. I rebooted the machine, chose Ubuntu, and it errors out with an MBR file error (I can't remember the exact wording; something to do with a missing file). Choosing to boot Windows 7 works fine.

    Install from USB stick: Couldn't get the machine to recognize the .iso.

    Install into Oracle's VirtualBox: Got the boot splash screen, then it hangs with a zillion errors. Note: I didn't have any problems installing Ubuntu in VirtualBox on my iMac, and it runs great there.

    Install using wubi: Installed fine, but I get errors when booting Ubuntu (it doesn't find the needed wubi files). I downloaded to the C: drive and tried installing from there; no luck.

    For kicks: I tried running a Slax Linux .iso from a USB stick and it runs fine.

    Some questions: Did I use the correct .iso? (I tried 12.04.0 and 12.04.1, both 32 and 64 bit. I simply downloaded them from the download link and didn't use or look for an alternate version.) Do I need to do something special when burning the .iso to disc? What? I did read tons of posts, but had no luck finding the solution. Any help is appreciated... thanks
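    As a hedged aside (not from the original post): one cheap check before suspecting the burner or the USB writer is to verify the downloaded image against the checksums published next to the release, for example:

        # Filename is an example; use whichever image was actually downloaded.
        md5sum ubuntu-12.04.1-desktop-amd64.iso

        # Or, with the release's MD5SUMS file saved alongside the image:
        md5sum -c MD5SUMS 2>&1 | grep ubuntu-12.04.1-desktop-amd64.iso

    A corrupted download produces exactly this kind of scattershot failure across different install methods, so ruling it out first saves time.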

  • Problems with ATI proprietary driver in Ubuntu 12.10: window trails and screen does not update

    - by crystallero
    I have problems with the ATI proprietary driver (fglrx). I have an iMac (mid 2011) with a Radeon HD 6900M [1002:6720]. I did not have any problem under Ubuntu 12.04, but since I updated to 12.10 I get some annoying graphics corruption. The worst is that sometimes the screen does not update with new information. It happens a lot when I change between tabs in Chrome or Sublime Text; it usually gets updated when I scroll the page. Sometimes, when I type, I have to wait a little bit to see the new characters. And I get trails when I move windows too (like a part of the window); after a while, the trail disappears. I tried installing fglrx, fglrx-updates, and the new beta driver downloaded from ATI (12.11 Beta, 11/16/2012), with no luck: the same thing happens with all of them. I tried to mess with the Compiz config, but it didn't fix anything. The open source driver does not suffer this problem, but I need the performance of the proprietary driver. Do you have a clue? Thanks.

  • Ext JS 4.2.1 loading controller - best practice

    - by Hown_
    I am currently developing an Ext JS application with many views/controllers/... I am wondering what the best practice is for loading the JS controllers/views/and so on. Currently I have my application defined like this:

        // enable javascript cache for debugging, otherwise Chrome breakpoints are lost
        Ext.Loader.setConfig({ disableCaching: false });

        Ext.require('Ext.util.History');
        Ext.require('app.Sitemap');
        Ext.require('app.Error');

        Ext.define('app.Application', {
            name: 'app',
            extend: 'Ext.app.Application',
            views: [
                // TODO: add views here
                'app.view.Viewport',
                'app.view.BaseMain',
                'app.view.Main',
                'app.view.ApplicationHeader',
                //administration
                'app.view.administration.User'
                ...
            ],
            controllers: [
                'app.controller.Viewport',
                'app.controller.Main',
                'app.controller.ApplicationHeader',
                //administration
                'app.controller.administration.User',
                ...
            ],
            stores: [
                // stores in there..
            ]
        });

    Somehow this forces the client to load all my views and controllers at startup, and of course calls the init methods of all controllers. I need to load data every time I change my view, and now I can't load it in my controller's init function. I would have to do something like this, I assume:

        init: function () {
            this.control({
                '#administration_User': {
                    afterrender: this.onAfterRender
                }
            });
        },

    Is there a better way to do this? Or just another event? The main thing I am asking myself, though, is whether it is best practice to load all the JavaScript at startup. Wouldn't it be better to load only the controllers/views/... that the client needs right now? Or should I load all the JS at startup? If I do want to load the controllers dynamically, how could I do this? I assume I would have to remove them from my application arrays (views, controllers, stores), create an instance when I need one, and maybe set the view in the controller's init?! What's best practice??
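    For what it's worth, a hedged sketch of one common Ext JS 4.x pattern (the controller name is from the post; the method called on it is hypothetical): leave rarely-used controllers out of the application's controllers[] array and create them on demand through getController(), which instantiates a controller and calls its init() the first time it is requested:

        // In some controller that is already loaded, e.g. a menu handler:
        onAdministrationClicked: function () {
            // Created, registered and init()ed on first use; cached afterwards.
            var userCtrl = this.application.getController('administration.User');
            userCtrl.showUserView(); // hypothetical method on that controller
        }

    The per-view data-loading question is usually handled separately, by fetching in an event like afterrender (as the init snippet above does) or by reloading the store when the view is activated.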

  • Drive Genius says "Invalid catalog btree reverse link in node": is this fixable?

    - by bencnscp
    No obvious problems with my Mac OS X 10.5 system, but Drive Genius 1.5.1 says "Invalid catalog btree reverse link in node" when doing a Verify. The Repair and Rebuild options fail. I did some googling, and the consensus was: 1. Drive Genius is slightly better than DiskWarrior. 2. This problem is likely not fixable. My assumed solution is that I should do a full backup and reformat.

  • Why is /dev/rfcomm0 giving PySerial problems?

    - by Travis G.
    I am connecting my Ubuntu box to a wireless readout setup over Bluetooth. I wrote a Python script to send the serial information through /dev/rfcomm0. The script connects fine and works for a few minutes, but then Python starts using 100% CPU and the messages stop flowing through. I can open rfcomm0 in a serial terminal and communicate through it by hand just fine, and when I open it through a terminal it seems to work indefinitely. Also, I can swap the Bluetooth receiver for a USB cable, change the port to /dev/ttyUSB0, and I don't get any problems over time. It seems either I'm doing something wrong with rfcomm0 or PySerial doesn't handle it well. Here's the script:

        import psutil
        import serial
        import string
        import time

        sampleTime = 1
        numSamples = 5
        lastTemp = 0
        TEMP_CHAR = 't'
        USAGE_CHAR = 'u'
        SENSOR_NAME = 'TC0D'

        gauges = serial.Serial()
        gauges.port = '/dev/rfcomm0'
        gauges.baudrate = 9600
        gauges.parity = 'N'
        gauges.writeTimeout = 0
        gauges.open()
        print("Connected to " + gauges.portstr)

        filename = '/sys/bus/platform/devices/applesmc.768/temp2_input'

        def parseSensorsOutputLinux(output):
            return int(round(float(output) / 1000))

        while(1):
            usage = psutil.cpu_percent(interval=sampleTime)
            gauges.write(USAGE_CHAR)
            gauges.write(chr(int(usage)))  # write the first byte
            #print("Wrote usage: " + str(int(usage)))
            sensorFile = open(filename)
            temp = parseSensorsOutputLinux(sensorFile.read())
            gauges.write(TEMP_CHAR)
            gauges.write(chr(temp))
            #print("Wrote temp: " + str(temp))

    Any thoughts? Thanks.

    EDIT: Here is the revised code, using PyBluez instead of PySerial:

        import psutil
        import string
        import time
        import bluetooth

        sampleTime = 1
        numSamples = 5
        lastTemp = 0
        TEMP_CHAR = 't'
        USAGE_CHAR = 'u'
        SENSOR_NAME = 'TC0D'

        gaugeSocket = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
        gaugeSocket.connect(('00:06:66:42:22:96', 1))

        filename = '/sys/bus/platform/devices/applesmc.768/temp2_input'

        def parseSensorsOutputLinux(output):
            return int(round(float(output) / 1000))

        while(1):
            usage = psutil.cpu_percent(interval=sampleTime)
            gaugeSocket.send(USAGE_CHAR)
            gaugeSocket.send(chr(int(usage)))  # write the first byte
            sensorFile = open(filename)
            temp = parseSensorsOutputLinux(sensorFile.read())
            gaugeSocket.send(TEMP_CHAR)
            gaugeSocket.send(chr(temp))

    It seems either Ubuntu must be closing /dev/rfcomm0 after a certain time or my Bluetooth receiver is messing things up. Even when the BluetoothError arises, the "connected" light on the receiver stays illuminated, and it is not until I power-cycle the receiver that I can reconnect. I'm not sure how to approach this problem. It's odd that the connection works fine for a few minutes (a seemingly random amount of time) and then seizes up. In case it helps, the Bluetooth receiver is a BlueSmirf Silver from SparkFun. Do I need to be trying to maintain the connection from the receiver end or something?
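    A hedged sketch, not from the original post, of one way to make the sender survive drops: catch bluetooth.BluetoothError around the writes and reconnect with a simple backoff. Whether the BlueSmirf actually accepts a reconnection without a power cycle is exactly the open question above:

        import time
        import bluetooth

        GAUGE_ADDR = ('00:06:66:42:22:96', 1)  # address taken from the post

        def connect_with_retry():
            # Keep trying until the receiver accepts an RFCOMM connection.
            while True:
                sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
                try:
                    sock.connect(GAUGE_ADDR)
                    return sock
                except bluetooth.BluetoothError:
                    sock.close()
                    time.sleep(2)  # simple backoff before retrying

        def send_sample(sock, tag, value):
            # Returns a (possibly new) socket, reconnecting if the link dropped.
            try:
                sock.send(tag)
                sock.send(chr(value))
                return sock
            except bluetooth.BluetoothError:
                sock.close()
                return connect_with_retry()

    The main loop would then call sock = send_sample(sock, USAGE_CHAR, int(usage)) instead of writing to the socket directly.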

  • Lenovo Thinkpad L430 overheating due to fan problems

    - by Dirk B.
    This is the same question as "Fan not working on thinkpad L430, laptop overheating", but that question has been marked as a duplicate, which it is not, and I cannot reopen it. I'm having problems controlling the fan of my Lenovo Thinkpad L430. The fan doesn't start: without any fan control installed, the fan just doesn't run. If I run stress, it does run a little, but nowhere near the speed it should. After a while, the laptop just overheats and stops. I tried to install tp-fancontrol and enabled thinkpad_acpi fancontrol=1, but to no avail: if I try to set the fan speed manually, it doesn't start up. In Windows there's a program called TPFanControl. It turns out that this laptop uses a different scheme to control the fan than other Thinkpads: the level runs from 0 to 255, with max = 0 and min = 255. Now I'm looking for a fan control program that works for Linux. Does anyone know if one actually exists? Anyone with any experience of fan control on an L430?

    Update: sudo pwmconfig gives the following output:

        # pwmconfig revision 5857 (2010-08-22)
        This program will search your sensors for pulse width modulation (pwm)
        controls, and test each one to see if it controls a fan on
        your motherboard. Note that many motherboards do not have pwm
        circuitry installed, even if your sensor chip supports pwm.

        We will attempt to briefly stop each fan using the pwm controls.
        The program will attempt to restore each fan to full speed
        after testing. However, it is ** very important ** that you
        physically verify that the fans have been to full speed
        after the program has completed.

        Found the following devices:
           hwmon0 is acpitz
           hwmon1/device is coretemp
           hwmon2/device is thinkpad

        Found the following PWM controls:
           hwmon2/device/pwm1
        hwmon2/device/pwm1 is currently setup for automatic speed control.
        In general, automatic mode is preferred over manual mode, as
        it is more efficient and it reacts faster. Are you sure that
        you want to setup this output for manual control? (n) y

        Giving the fans some time to reach full speed...
        Found the following fan sensors:
           hwmon2/device/fan1_input     current speed: 0 ... skipping!

        There are no working fan sensors, all readings are 0.
        Make sure you have a 3-wire fan connected.
        You may also need to increase the fan divisors.
        See doc/fan-divisors for more information.

    Update: if you need it, lspci is available here
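    A hedged aside, not from the original post: the documented thinkpad_acpi module parameter for enabling manual fan writes is fan_control=1 (with an underscore), after which the fan is driven through /proc/acpi/ibm/fan. Whether the L430's embedded controller actually honors these levels is exactly what is in question here, so treat this as something to try rather than a known fix:

        # Reload the module with manual fan control enabled
        sudo modprobe -r thinkpad_acpi
        sudo modprobe thinkpad_acpi fan_control=1

        # Standard level interface: 0-7, auto, full-speed
        echo level full-speed | sudo tee /proc/acpi/ibm/fan
        cat /proc/acpi/ibm/fan   # shows current status, speed and level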

  • Bootloader Problems: GRUB Won't Load Windows 7

    - by user108805
    I sent this to [email protected] and still got no response; I thought I could get a faster solution here. I am running Windows 7 64-bit and Ubuntu 12.04 LTS on separate partitions. The message I sent is below. Boot-Repair URL: http://paste.ubuntu.com/1365163/

    Originally I was unable to access Ubuntu after a Windows update (Ubuntu was installed using wubi). Rather than logging into Ubuntu from the Windows 7 bootloader, it led to the grub command prompt, and no matter what I did there, it would not log me into Linux. As a result I uninstalled Ubuntu from the Add/Remove Programs application in Windows 7. I then re-installed Ubuntu 12.04 LTS using a live CD on USB; this time, however, I created a partition. I restarted and got the GRUB bootloader, which loads Ubuntu 12.04 LTS with no problems; however, when I select Windows (listed as "Windows 7 (loader)"), it just refreshes the GRUB bootloader instead of loading Windows 7.

    I then used the Windows 7 repair disk to run bootrec /fixmbr and bootrec /fixboot. This led to no bootloader coming up when I started my computer; instead I got a blank black screen with a flashing white cursor. I went on to run bootrec /rebuildbcd and bootrec /scanos. These did nothing to change the situation; when I ran bootrec /scanos it said that no Windows 7 installations were present. After this I decided to reinstall Windows 7, only for that to change nothing either. Afterwards I did a boot-repair, after which I again get the GRUB bootloader, which loads Ubuntu 12.04 LTS but still will not load Windows 7. I also did a sudo update-grub, which recognized Windows 7 as installed but still didn't fix the issue of loading it.

    While running Ubuntu I have no problem accessing my Windows 7 partition, which is formatted as NTFS. It shows all the files and folders reflecting that the re-install did take place, and it also shows all of my old applications and folders in the Windows.old folder. I am completely stuck at this point and have no clue what I should do next. Any help you can offer will be greatly appreciated. Thank you --gap
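    For reference, a hedged sketch of what a manual GRUB 2 chainload entry for Windows usually looks like; the partition location here is an assumption (the Boot-Repair paste would show the real one). Such an entry goes in /etc/grub.d/40_custom and is picked up by sudo update-grub:

        menuentry "Windows 7 (chainload)" {
            insmod part_msdos
            insmod ntfs
            # Assumed: first disk, first partition. Point this at the
            # partition that actually holds the Windows bootloader.
            set root=(hd0,msdos1)
            chainloader +1
        }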

  • Problems uploading a package to Launchpad

    - by user74513
    I'm having a lot of problems uploading my showdown project to a PPA. I've set up my PGP keys and my public SSH key on Launchpad correctly. I've packaged my C++ project with debuild, producing a source package; lintian gave me only these two warnings, which I think are OK for the showdown rules:

        W: massren source: native-package-with-dash-version
        W: massren source: binary-nmu-debian-revision-in-source 1.0-0extras12.04.1~ppa2

    Producing a binary package works too, and the package installs without problems on my Ubuntu 12.04 machine; I only have a few more lintian warnings about the fact that I'm installing in /opt/extras.ubuntu.com/. I'm uploading with:

        dput ppa:gabrielegreco/massren massren_1.0-0extras12.04.1~ppa2_source.changes

    When I upload with dput I get no errors, the signatures seem OK, and the public key seems to be accepted too (since the upload goes on without asking for passwords):

        Checking signature on .changes
        gpg: Signature made Mon 02 Jul 2012 10:00:38 AM CEST using RSA key ID 49982576
        gpg: Good signature from "Gabriele Greco "
        Good signature on /home/gabry/no-backup/massren_1.0-0extras12.04.1~ppa2_source.changes.
        Checking signature on .dsc
        gpg: Signature made Mon 02 Jul 2012 10:00:33 AM CEST using RSA key ID 49982576
        gpg: Good signature from "Gabriele Greco "
        Good signature on /home/gabry/no-backup/massren_1.0-0extras12.04.1~ppa2.dsc.
        Uploading to ppa (via ftp to ppa.launchpad.net):
          Uploading massren_1.0-0extras12.04.1~ppa2.dsc: done.
          Uploading massren_1.0-0extras12.04.1~ppa2.tar.gz: done.
          Uploading massren_1.0-0extras12.04.1~ppa2_source.changes: done.
        Successfully uploaded packages.

    At the moment I'm not receiving responses from the Launchpad site, and the upload does not show up on the PPA page. Previous attempts gave me response e-mails with different kinds of errors:

        File massren_1.0-0extras12.04.1~ppa1.tar.gz mentioned in the changes has a checksum mismatch.
        1503fa155226cbc4aba2f8ba9aa11a75 != 294a5e0caf3fe95b0b007a10766e9672

    Or, more cryptic:

        GPG verification of /srv/launchpad.net/ppa-queue/incoming/upload-ftp-20120629-163320-001135/~gabrielegreco/massren/ubuntu/massren_1.0-0extras12.04.1~ppa1.dsc failed:
        Verification failed 3 times: ["(7, 58, u'No data')", "(7, 58, u'No data')", "(7, 58, u'No data')"]
        Further error processing not possible because of a critical previous error.

    Any idea how I can solve this problem? I'm new to Ubuntu packaging, so I may be missing a step... Is there an alternative to dput (i.e., a manual upload)?
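    A hedged note, based on how Launchpad generally treats source uploads rather than on anything in the post: a checksum mismatch usually means a tarball with the same name and version was already accepted once with different contents, and the usual way out is to bump the version and rebuild the source package before uploading again:

        # New changelog entry with a bumped version (dch is in devscripts)
        dch -v 1.0-0extras12.04.1~ppa3 "Rebuild for PPA upload"

        # Rebuild the signed source package; -sa forces inclusion of the orig tarball
        debuild -S -sa

        # Upload the fresh .changes
        dput ppa:gabrielegreco/massren ../massren_1.0-0extras12.04.1~ppa3_source.changes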

  • Multiple Problems Installing 12.04, now can't use Windows

    - by user87997
    First I tried using the 32-bit wubi.exe installer from the main Ubuntu website. It worked fine, dual booting with Windows 7 and all, but I got errors when I tried installing several applications. After searching for a while, I found that someone else had solved the problem by uninstalling the 32-bit version and installing 64-bit Ubuntu. Apparently there is no wubi.exe installer for the 64-bit version, so I used LinuxLive to put the iso file onto a USB drive. I changed the boot order in the BIOS to check the USB first. It did, and I got into the Ubuntu installer just fine. Everything was working, but then I got an error that GRUB could not be installed. I chose "install manually later" or something like that, and immediately the installer said it was done and ready for a reboot. At this point my USB stick was still in the computer, so the computer rebooted... right back into the installer on the USB.

    I looked up what was going on, and someone said in a thread they had solved it by selecting "Try Ubuntu" and then installing via a shortcut on the desktop. I assumed that Ubuntu simply hadn't installed and it would be safe to try again, so I did. It finished installing; this time I chose a different partition that wasn't being used. The thread also said to reinstall GRUB to the mounted drive, so I did that. Next I took out my USB stick and rebooted.

    Now I get stuck on the GNU GRUB loader (v1.99 or something, I believe it says at the top). I can't do anything, and it doesn't detect Windows 7 OR Ubuntu. When I check partitions, I have two 43 GB partitions that both have the same files in them (I'm assuming those are the two Ubuntu installations). I can only run Ubuntu off my USB stick and can't run Windows 7 at all; however, from within Ubuntu the Windows 7 filesystem and files can still be seen. I have no idea what to do now. I used Ubuntu in the past (9.xx) and never had these sorts of problems! Please help. And sorry for the wall of text.

  • Wifi problems with Ubuntu 13.10

    - by Sparrowhawk4
    I should probably say straight off that I am brand new to Ubuntu. I installed 13.10 alongside Windows 7 a couple of days ago and I've managed to clear up all the little problems, apart from the wifi. My internet is fine over an ethernet cable. I can connect to the wifi, and it tells me that the signal is good, but for the vast majority of the time I can't load anything, and the system monitor tells me that I am sending and receiving nothing. The problem is somewhere in Ubuntu: the network works fine on Windows 7, my Android phone, and all of my flatmates' laptops. I know I'm not the only one with this problem, but the fixes I've seen either don't work or I don't understand them well enough to carry them out. Help?

    EDIT: output of the lspci command, as requested:

        00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07)
        00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
        00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
        00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03)
        00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03)
        00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03)
        00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
        00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03)
        00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 03)
        00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
        00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
        00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
        00:1d.3 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03)
        00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93)
        00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03)
        00:1f.2 SATA controller: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] (rev 03)
        00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03)
        00:1f.6 Signal processing controller: Intel Corporation 82801I (ICH9 Family) Thermal Subsystem (rev 03)
        02:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8191SEvA Wireless LAN Controller (rev 10)
        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02)

  • Debian dependency problems / partially installed

    - by Michael
    I tried to install curl support for PHP 5 on my Debian squeeze machine, and I've been having problems ever since. After trying to install curl I got dependency issues, which I tried to solve by removing whatever started them. From one thing came another, and I'm currently looking at ~29 issues when I run apt-get upgrade. These vary from unable-to-configure and dependency errors to unable-to-remove errors. I tried apt-get upgrade -f and installing packages with the dpkg command. I tried removing with purge and force. I manually removed stuff to try to fix it. I tried running dpkg --configure -a. I have to say I'm still pretty new to Linux, so I'm out of ideas and can't seem to find an answer online that matches my problems. Here's part of the apt-get upgrade output:

        Reading package lists...
        Building dependency tree...
        Reading state information...
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        29 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Setting up libgeoip1 (1.4.7~beta6+dfsg-1) ...
        Bus error
        dpkg: error processing libgeoip1 (--configure):
         subprocess installed post-installation script returned error exit status 135
        Setting up libisc62 (1:9.7.3.dfsg-1~squeeze3) ...
        Bus error
        dpkg: error processing libisc62 (--configure):
         subprocess installed post-installation script returned error exit status 135
        dpkg: dependency problems prevent configuration of libdns69:
         libdns69 depends on libgeoip1 (>= 1.4.7~beta6+dfsg); however:
          Package libgeoip1 is not configured yet.
         libdns69 depends on libisc62; however:
          Package libisc62 is not configured yet.
        dpkg: error processing libdns69 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libisccc60:
         libisccc60 depends on libisc62; however:
          Package libisc62 is not configured yet.
        dpkg: error processing libisccc60 (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of libisccfg62:
         libisccfg62 depends on libdns69; however:
          Package libdns69 is not configured yet.
        .. continues
        Errors were encountered while processing:
         libgeoip1 libisc62 libdns69 libisccc60 libisccfg62 libbind9-60 liblwres60
         bind9-host libavahi-core7 libdaemon0 avahi-daemon libexif12 libffi5 libgomp1
         libgphoto2-port0 libgphoto2-2 libperl5.10 libsensors4 libsnmp15 libhpmud0
         libieee1284-3 libnss-mdns libossp-uuid16 libpq5 libv4l-0 libsane libsane-hpaio
         libssh2-1 python-gobject

    And the output of dpkg --configure -a:

        Setting up libpq5 (8.4.8-0squeeze2) ...
        Bus error
        dpkg: error processing libpq5 (--configure):
         subprocess installed post-installation script returned error exit status 135
        Setting up libperl5.10 (5.10.1-17squeeze2) ...
        Bus error
        dpkg: error processing libperl5.10 (--configure):
         subprocess installed post-installation script returned error exit status 135
        Setting up libffi5 (3.0.9-3) ...
        Bus error
        dpkg: error processing libffi5 (--configure):
         subprocess installed post-installation script returned error exit status 135
        Setting up libexif12 (0.6.19-1) ...
        .. continues

    Suggestions are really welcome; I really don't know what to do. Michael.

  • What are developers' problems with helpful error messages?

    - by Moo-Juice
    It continues to astound me that, in this day and age, products with years of use under their belt, built by teams of professionals, still fail to provide helpful error messages to the user. In some cases, the addition of just a little extra information could save a user hours of trouble.

    A program that generates an error generated it for a reason. It has everything at its disposal to tell the user as much as it can about why something failed, and yet providing information to aid the user seems to be a low priority. I think this is a huge failing.

    One example is from SQL Server. When you try to restore a database that is in use, it quite rightly won't let you. SQL Server knows what processes and applications are accessing it, so why can't it include information about the process(es) using the database? I know not everyone passes an Application Name attribute on their connection string, but even a hint about the machine in question could be helpful.

    Another candidate, also SQL Server (and MySQL), is the lovely "string or binary data would be truncated" error message and its equivalents. A lot of the time, a simple perusal of the generated SQL statement and the table shows which column is the culprit. This isn't always the case, though, and if the database engine picked up on the error, why can't it save us that time and just tell us which damned column it was? On this example, you could argue that there may be a performance hit to checking it and that this would impede the writer. Fine, I'll buy that. How about this: once the database engine knows there is an error, it does a quick comparison after the fact between the values that were going to be stored and the column lengths, then displays that to the user.

    ASP.NET's horrid Table Adapters are also guilty. Queries can be executed and one can be given an error message saying that a constraint somewhere is being violated. Thanks for that; time to compare my data model against the database, because the developers are too lazy to provide even a row number or example data. (For the record, I'd never use this data-access method by choice; it's just a project I have inherited!)

    Whenever I throw an exception from my C# or C++ code, I provide everything I have at hand to the user. The decision has been made to throw it, so the more information I can give, the better. Why did my function throw an exception? What was passed in, and what was expected? It takes me just a little longer to put something meaningful in the body of an exception message, and it does nothing but help me while I develop, because I know my code throws things that are meaningful.

    One could argue that complicated exception messages should not be displayed to the user. Whilst I disagree with that, it is an argument that can easily be appeased by having a different level of verbosity depending on your build. Even then, the users of ASP.NET and SQL Server are not your typical users: they would prefer something full of verbosity and yummy information, because they can track down their problems faster.

    Why do developers think it is okay, in this day and age, to provide the bare minimum amount of information when an error occurs? It's 2011 guys, come on.
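    To make the point concrete, a small hedged sketch (the names are illustrative, not from the original post) of the kind of context-rich exception the author advocates:

        using System;

        public static class OrderValidator
        {
            // Throws with enough context that the failure can be diagnosed
            // without re-running the scenario under a debugger.
            public static void ValidateQuantity(string orderNo, int quantity, int maxPerOrder)
            {
                if (quantity < 1 || quantity > maxPerOrder)
                {
                    throw new ArgumentOutOfRangeException(
                        nameof(quantity),
                        quantity,
                        $"Order '{orderNo}': quantity must be between 1 and {maxPerOrder}, " +
                        $"but was {quantity}. Check the line items posted by the client.");
                }
            }
        }

    The extra string costs seconds to write and nothing measurable on the failure path, which is exactly the trade-off the essay argues for.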

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture of what I have:

    As you can see above, the image has lighting, but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generated backwards, in the wrong direction, I think. But the problem is a little deeper: I'm just plotting the shadow onto the screen, which I know is wrong; I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct, so there has to be something wrong with my shader code somewhere. Here's what my code looks like:

    Vertex shader:

        uniform mat4 LightModelViewProjectionMatrix;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            Normal = normalize(gl_NormalMatrix * gl_Normal);
            LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz);
            LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex;
            LightCoordinate.xy = (LightCoordinate.xy * 0.5) + 0.5;
            gl_Position = ftransform();
            gl_TexCoord[0] = gl_MultiTexCoord0;
        }

    Fragment shader:

        uniform sampler2D DiffuseMap;
        uniform sampler2D ShadowMap;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0]));

            // Directional lighting

            // Build ambient lighting
            vec4 AmbientElement = gl_LightSource[0].ambient;

            // Build diffuse lighting
            float Lambert = max(dot(Normal, LightDirection), 0.0); // max(abs(dot(Normal, LightDirection)), 0.0);
            vec4 DiffuseElement = gl_LightSource[0].diffuse * Lambert;

            vec4 LightingColor = DiffuseElement + AmbientElement;
            LightingColor.r = min(LightingColor.r, 1.0);
            LightingColor.g = min(LightingColor.g, 1.0);
            LightingColor.b = min(LightingColor.b, 1.0);
            LightingColor.a = min(LightingColor.a, 1.0);
            LightingColor *= Texel;

            // Everything up to this point is PERFECT

            // Shadow mapping
            // ------------------------------
            vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w;
            float DistanceFromLight = texture2D(ShadowMap, ShadowCoordinate.st).z;
            float DepthBias = 0.001;
            float ShadowFactor = 1.0;
            if (LightCoordinate.w > 0.0)
            {
                ShadowFactor = DistanceFromLight < (ShadowCoordinate.z + DepthBias) ? 0.5 : 1.0;
            }
            LightingColor.rgb *= ShadowFactor;

            //gl_FragColor = LightingColor;
            // Yes, I know the line below is wrong, but the line above
            // (gl_FragColor = LightingColor;) produces the wrong effect
            gl_FragColor = LightingColor * texture2D(ShadowMap, ShadowCoordinate.st);
        }

    I wanted to make sure the coordinates were correct for the shadow map, so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong: the shadows SHOULD be opposite (look at how the image is; the shaded areas from normal lighting face the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct; nothing else going in is unusual. When I view from the light's view and get the MVP matrices for it, they're correct.

    EDIT: Added an image so you can see what happens when I use the correct command at the end of the GLSL; that's the image when the last line is just gl_FragColor = LightingColor; Maybe someone has some idea of what I screwed up?
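    For comparison, a hedged sketch of the conventional depth-compare formulation (one common variant; bias and scale handling differ between implementations). Note that it applies the 0.5 scale/bias to all three components after the perspective divide, whereas the vertex shader above biases only xy before the divide, so its z is compared in a different range than the stored depth:

        // Sketch only: assumes LightCoordinate arrives un-biased from the vertex shader.
        vec3 shadowCoord = (LightCoordinate.xyz / LightCoordinate.w) * 0.5 + 0.5;
        float storedDepth = texture2D(ShadowMap, shadowCoord.xy).r; // single-channel depth texture
        float shadowFactor = (shadowCoord.z <= storedDepth + 0.001) ? 1.0 : 0.5; // lit : shadowed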

  • Problems with texture orientation in space

    - by frankie
    I am currently drawing textures in 3D space and have some problems with their orientation. I'd like my textures to always be oriented with their front face toward the user. My desired result looks like this:

    Note that the text size stays unchanged as we rotate the world, and the text stays oriented front-face to the user. Right now I can draw text in 3D space, but it is not oriented toward the front; it rotates with the world. I got those results with the following shaders:

    Vertex shader:

        uniform vec3 Position;

        void main()
        {
            gl_Position = vec4(Position, 1.0);
        }

    Geometry shader:

        layout(points) in;
        layout(triangle_strip, max_vertices = 4) out;

        out vec2 fsTextureCoordinates;

        uniform mat4 projectionMatrix;
        uniform mat4 modelViewMatrix;
        uniform sampler2D og_texture0;
        uniform float og_highResolutionSnapScale;
        uniform vec2 u_originScale;

        void main()
        {
            vec2 halfSize = vec2(textureSize(og_texture0, 0)) * 0.5 * og_highResolutionSnapScale;
            vec4 center = gl_in[0].gl_Position;
            center.xy += (u_originScale * halfSize);

            vec4 v0 = vec4(center.xy - halfSize, center.z, 1.0);
            vec4 v1 = vec4(center.xy + vec2(halfSize.x, -halfSize.y), center.z, 1.0);
            vec4 v2 = vec4(center.xy + vec2(-halfSize.x, halfSize.y), center.z, 1.0);
            vec4 v3 = vec4(center.xy + halfSize, center.z, 1.0);

            gl_Position = projectionMatrix * modelViewMatrix * v0;
            fsTextureCoordinates = vec2(0.0, 0.0);
            EmitVertex();

            gl_Position = projectionMatrix * modelViewMatrix * v1;
            fsTextureCoordinates = vec2(1.0, 0.0);
            EmitVertex();

            gl_Position = projectionMatrix * modelViewMatrix * v2;
            fsTextureCoordinates = vec2(0.0, 1.0);
            EmitVertex();

            gl_Position = projectionMatrix * modelViewMatrix * v3;
            fsTextureCoordinates = vec2(1.0, 1.0);
            EmitVertex();
        }

    Fragment shader:

        in vec2 fsTextureCoordinates;
        out vec4 fragmentColor;

        uniform sampler2D og_texture0;
        uniform vec3 u_color;

        void main()
        {
            vec4 color = texture(og_texture0, fsTextureCoordinates);
            if (color.a == 0.0)
            {
                discard;
            }
            fragmentColor = vec4(color.rgb * u_color.rgb, color.a);
        }

    Any ideas how to get my desired result?

    EDIT 1: I made an edit to my geometry shader and got part of the label drawn on screen at a corner, but it is not rotating:

        ..........
        vec4 centerProjected = projectionMatrix * modelViewMatrix * center;
        centerProjected /= centerProjected.w;

        vec4 v0 = vec4(centerProjected.xy - halfSize, 0.0, 1.0);
        vec4 v1 = vec4(centerProjected.xy + vec2(halfSize.x, -halfSize.y), 0.0, 1.0);
        vec4 v2 = vec4(centerProjected.xy + vec2(-halfSize.x, halfSize.y), 0.0, 1.0);
        vec4 v3 = vec4(centerProjected.xy + halfSize, 0.0, 1.0);

        gl_Position = og_viewportOrthographicMatrix * v0;
        ..........
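    A hedged sketch of the usual screen-aligned billboard trick, in the spirit of the EDIT above (uniform names follow the post's shaders; u_viewportSize is an assumed uniform): project the anchor point once, then offset the corners in clip space by the pixel half-size converted to NDC units and pre-multiplied by w, so the offsets survive the perspective divide and the quad ignores the world rotation:

        // Geometry shader fragment, sketch only (one corner shown).
        vec4 centerClip = projectionMatrix * modelViewMatrix * center;
        vec2 halfSizeNdc = halfSize * 2.0 / u_viewportSize;          // pixels -> NDC units
        gl_Position = centerClip + vec4(-halfSizeNdc * centerClip.w, 0.0, 0.0);

    The remaining three corners follow the same pattern with the signs flipped; keeping the anchor's original z and w preserves depth testing against the rest of the scene.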

  • Problems implementing a screen space shadow ray tracing shader

    - by Grieverheart
    Here I previously asked about the possibility of ray tracing shadows in screen space in a deferred shader, and several problems were pointed out. One of the most important is that only visible objects can cast shadows, and objects between the camera and the shadow caster can interfere. Still, I thought it'd be a fun experiment.

    The idea is to calculate the view-space coordinates of each pixel and cast a ray to the light. The ray is then traced pixel by pixel toward the light, and its depth is compared with the depth at each pixel along the way. If a pixel is in front of the ray, a shadow is cast at the original pixel.

    At first I thought I could use the DDA algorithm in 2D to calculate the distance 't' (in p = o + t*d, where o is the origin and d the direction) to the next pixel, and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that 't' would be the same in both the 2D and 3D equations. Unfortunately, this is not the case, since the projection matrix is 4D; some tweak needs to be done to make this work this way. I would like to ask if someone knows of a way to do what I described above, i.e. from a 2D ray in texture-coordinate space, get the 3D ray in view space.

    I did implement a simple version of the idea, which you can see in the following video: video here. Shadows may seem a bit pixelated, but that's mostly because of the size of the step in 't' I chose. And here is the shader:

        #version 330 core

        uniform sampler2D DepthMap;
        uniform vec2 projAB;
        uniform mat4 projectionMatrix;

        const vec3 light_p = vec3(-30.0, 30.0, -10.0);

        noperspective in vec2 pass_TexCoord;
        smooth in vec3 viewRay;

        layout(location = 0) out float out_AO;

        vec3 CalcPosition(void){
            float depth = texture(DepthMap, pass_TexCoord).r;
            float linearDepth = projAB.y / (depth - projAB.x);
            vec3 ray = normalize(viewRay);
            ray = ray / ray.z;
            return linearDepth * ray;
        }

        void main(void){
            vec3 origin = CalcPosition();
            if(origin.z < -60) discard;

            vec2 pixOrigin = pass_TexCoord; // tex coords
            vec3 dir = normalize(light_p - origin);
            vec2 texel_size = vec2(1.0 / 600.0);

            float t = 0.1;
            ivec2 pixIndex = ivec2(pixOrigin / texel_size);
            out_AO = 1.0;

            while(true){
                vec3 ray = origin + t * dir;
                vec4 temp = projectionMatrix * vec4(ray, 1.0);
                vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5;
                ivec2 newIndex = ivec2(texCoord / texel_size);

                if(newIndex != pixIndex){
                    float depth = texture(DepthMap, texCoord).r;
                    float linearDepth = projAB.y / (depth - projAB.x);

                    if(linearDepth > ray.z + 0.1){
                        out_AO = 0.2;
                        break;
                    }
                    pixIndex = newIndex;
                }
                t += 0.5;

                if(texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0)
                    break;
            }
        }

    As you can see, here I just increment 't' by some arbitrary factor, calculate the 3D ray, and project it to get the pixel coordinates, which is not really optimal. I would like to optimize the code as much as possible and compare it with shadow mapping and see how it scales with the number of lights.

    PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full-screen quad.
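    A hedged pointer on the 2D-to-3D mapping asked about above (this is standard perspective-interpolation math, not from the post): view-space depth is not linear along the screen-space ray, but its reciprocal is. If z0 is the depth at the ray origin and z1 the depth at any second projected point on the ray, then at screen-space parameter s between them

        1/z(s) = (1 - s) * (1/z0) + s * (1/z1)

    so a 2D DDA can step s pixel by pixel and recover the ray depth as z(s) = 1 / ((1 - s)/z0 + s/z1), without re-projecting the 3D ray at every iteration.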

  • AngularJs ng-cloak Problems on Large Pages

    - by Rick Strahl
    I've been working on a rather complex and large Angular page. Unlike a typical AngularJs SPA-style 'application', this particular page is just that: a single page with a large amount of data on it that has to be visible all at once. The problem is that when this large page loads, it flickers and displays template markup briefly before kicking into its actual content rendering. This is what Angular's ng-cloak is supposed to address, but in this case I had no luck getting it to work properly.

    This application is a shop floor app where workers need to see all related information in one big screen view, so some of the benefits of Angular's routing and view-swapping features couldn't be applied. Instead, we decided to have one very big view, but lots of ng-controllers and directives to break out the logic for code separation. For code separation this works great: there are a number of small controllers that deal with their own individual and isolated application concerns. For HTML separation we used partial ASP.NET MVC Razor views, which made breaking out the HTML into manageable pieces super easy and made migration of this page from a previous server-side Razor page much easier. We were also able to leverage most of our server-side localization without a lot of changes, as a bonus. But as a result of this choice, the initial HTML document that loads is rather large, even without any data loaded into it, resulting in a fairly large DOM tree that Angular must manage.

    Large Page and Angular Startup

    The problem on this particular page is that there's quite a bit of markup: 35k's worth without any data loaded, in fact. It's a large HTML page with a complex DOM tree, and there are quite a lot of Angular {{ }} markup expressions in the document. Angular provides the ng-cloak directive to try to hide the element it cloaks, so that you don't see the flash of these markup expressions when the page initially loads, before Angular has a chance to render the data into the markup expressions.

        <div id="mainContainer" class="mainContainer boxshadow"
             ng-app="app" ng-cloak>

    Note the ng-cloak attribute on this element, which here is an outer wrapper of most of this large page's content. ng-cloak is supposed to prevent displaying the content below it until Angular has taken control and is ready to render the data into the templates. Alas, with this large page the end result unfortunately is a brief flicker of un-rendered markup. It's brief, but plenty ugly, right? And depending on the speed of the machine this flash gets more noticeable, since slow machines take longer to process the initial HTML DOM.

    ng-cloak Styles

    ng-cloak works by temporarily hiding the marked-up element, which it does by essentially applying a style like this:

        [ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak], .ng-cloak, .x-ng-cloak {
            display: none !important;
        }

    This style is inlined as part of AngularJs itself. If you look at the angular.js source file, you'll find this at the very end of the file:

        !angular.$$csp() && angular.element(document)
          .find('head')
          .prepend('<style type="text/css">@charset "UTF-8";[ng\\:cloak],[ng-cloak],' +
                   '[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,' +
                   '.ng-hide{display:none !important;}ng\\:form{display:block;}' +
                   '.ng-animate-block-transitions{transition:0s all!important;-webkit-transition:0s all!important;}' +
                   '</style>');

    This is meant to initially hide any elements that contain the ng-cloak attribute or one of the other Angular directive permutations. Unfortunately, on this particular web page ng-cloak had no effect; I still see the flicker.

    Why doesn't ng-cloak work?

    The problem is, of course, timing. Angular actually needs to get control of the page before it starts doing anything, even processing the ng-cloak attribute (or style, etc.). Because this page is rather large (about 35k of non-data HTML), it takes a while for the DOM to actually plow through the HTML. With the Angular <script> tag defined at the bottom of the page, after the HTML DOM content, there's a slight delay, which causes the flicker. For smaller pages the initial DOM load/parse cycle is so fast that the markup never shows, but with larger content pages it may show and become an annoying problem.

    Workarounds

    There are a number of simple ways around this issue, and some of them are hinted at in the Angular documentation.

    Load Angular Sooner

    One obvious thing that would help is loading Angular at the top of the page, BEFORE the DOM loads, which would give it control much earlier. The old ng-cloak documentation actually recommended putting the angular.js script tag into the header of the page (apparently this was recently removed), but generally it's not a good practice to load scripts in the header, for page load performance. This is especially true if you load other libraries like jQuery, which should be loaded prior to Angular so that Angular can use jQuery rather than its own jqLite subset. This is not something I'd normally like to do, and also something I'd likely forget in the future and end up right back here :-).

    Use ng-include for Child Content

    Angular supports nesting of child templates via the ng-include directive, which essentially delay-loads HTML content. This helps by moving a lot of the template content out of the main page, so Angular gets control a lot sooner and can hide the template content. In the application in question, I realize in hindsight that it might have been smarter to break this page out with client-side ng-include directives instead of the MVC Razor partial views we used to break up the page sections. Razor partial views give that nice separation as well, but in the end Razor puts Humpty Dumpty (i.e. the HTML) back together into a single, rather large HTML document: Razor provides the logical separation but still produces a large physical result document. On the other hand, Razor ended up being helpful for a few security-related blocks, where server-side template logic simply excludes certain parts of the UI the user is not allowed to see. That's something you can't really do with client-side exclusion like ng-hide/ng-show, since client-side content is always there, whereas the server can simply not send it to the client. Another reason I'm not a huge fan of ng-include is that it adds another HTTP hit to a request, as templates are loaded from the server dynamically as needed. Given that this page was already heavy with resources, adding another 10 separate ng-include requests wouldn't be beneficial :-). ng-include is a valid option if you start from scratch and partition your logic. And if you don't have complex pages, having completely separate views that are swapped in as they are accessed is even better, but we didn't have that option, since the information has to be on screen all at once.

    Avoid using {{ }} Expressions

    The biggest issue ng-cloak attempts to address isn't so much displaying the original content as displaying empty {{ }} markup expressions embedded in the content. It gives you the dreaded "now you see it, now you don't" effect, where you sometimes see three separate rendering states: markup junk, empty views, then views filled with data. If you remove {{ }} expressions from the page, you remove most of the perceived double-draw effect, as you effectively start with a blank form and go straight to a filled form. To do this you can forgo {{ }} expressions and replace them with ng-bind directives on DOM elements. For example, you can turn:

        <div class="list-item-name listViewOrderNo">
            <a href='#'>{{lineItem.MpsOrderNo}}</a>
        </div>

    into:

        <div class="list-item-name listViewOrderNo">
            <a href="#" ng-bind="lineItem.MpsOrderNo"></a>
        </div>

    to get identical results, but because the {{ }} expression has been removed there's no double-draw effect for this element. Again, not a great solution. The {{ }} syntax reads cleaner and is more fluent to type, IMHO. In some cases you may also not have an outer element to attach ng-bind to, which then requires you to artificially inject DOM elements into the page. This is especially painful if you have several consecutive values like {{Firstname}} {{Lastname}}, for example. It's an option, though, especially if you think of this issue up front and don't have a ton of expressions to deal with.

    Add the ng-cloak Styles manually

    You can also explicitly define the CSS styles that Angular injects via code in your application's own style sheet. By doing so, the styles become immediately available and are applied right when the page loads: no flicker. I use the minimal:

        [ng-cloak] { display: none !important; }

    which works for:

        <div id="mainContainer" class="mainContainer dialog boxshadow"
             ng-app="app" ng-cloak>

    If you use one of the other attribute combinations, add the corresponding CSS selectors as well, or use the full style shown earlier. Angular will still load its version of the ng-cloak styling and override those settings later, but your own rule does the trick of hiding the content before that CSS is injected into the page. Adding the CSS in your own style sheet works well, and is IMHO by far the best option.

    The nuclear option: Hiding the Content manually

    Using the explicit CSS is the best choice, so the following shouldn't ever be necessary. But I'll mention it here as it gives some insight into how you can hide/show content manually on load for other frameworks or in your own markup-based templates. Before I figured out that I could explicitly embed the CSS style into the page, I tried to figure out why ng-cloak wasn't doing its job. After wasting an hour getting nowhere, I finally decided to just manually hide and show the container. The idea is simple: initially hide the container, then show it once Angular has done its initial processing and removed the template markup from the page.

    To do this I used:

        <div id="mainContainer" class="mainContainer boxshadow"
             ng-app="app" style="display:none">

    Notice the display:none style that explicitly hides the element initially on the page. Then, once Angular has run its initialization and effectively processed the template markup on the page, you can show the content. For Angular, this 'ready' event is the app.run() function:

        app.run(function ($rootScope, $location, cellService) {
            $("#mainContainer").show();
            ...
        });

    This effectively removes the display:none style and the content displays. By the time app.run() fires, the DOM is ready to be displayed, with filled data or at least empty data; Angular has gotten control.

    Edge Case

    Clearly this is an edge case. In general, initial HTML pages tend to be reasonably sized, and the load time for the HTML and Angular is fast enough that there's no flicker between the rendering states. This only becomes an issue as the initial page gets rather large. Regardless, if you have an Angular application it's probably a good idea to add the CSS style to your application's CSS (or a common shared one) just to make sure that content is always hidden. You never know how slow a browser somebody might be running, and while your super-fast dev machine might not show any flicker, grandma's old XP box very well might...

    © Rick Strahl, West Wind Technologies, 2005-2014
    Posted in Angular, JavaScript, CSS, HTML

  • Help installing mrcwa and solving f2py problems on Ubuntu 14.04 LTS

    - by user288160
    I am sorry if this is the wrong section but I am starting to get desperate, please someone help me... I need to install the program mrcwa-20080820 (sourceforge.net/projects/mrcwa/) because a summer project that I am involved. I need to use it together with anaconda (store.continuum.io/cshop/anaconda/), I already installed Anaconda and apparently it is working. When I type: conda --version I got the expected answer. conda 3.5.2 If I tried to import numpy or scipy with python or simple type f2py there are no errors. So far so good. But when I tried to install this program sudo python setup.py install I got these errors: running install running build sh: 1: f2py: not found cp: cannot stat ‘mrcwaf.so’: No such file or directory running build_py running install_lib running install_egg_info Removing /usr/local/lib/python2.7/dist-packages/mrcwa-20080820.egg-info Writing /usr/local/lib/python2.7/dist-packages/mrcwa-20080820.egg-info Obs: I am trying to use intel fortran 64-bits and Ubuntu 14.04 LTS. So I was checking f2py and tried to execute the program hello world f2py -c -m hello hello.f from here: cens.ioc.ee/projects/f2py2e/index.html#usage and I had some problems too: running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building extension "hello" sources f2py options: [] f2py:> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c creating /tmp/tmpf8P4Y3/src.linux-x86_64-2.7 Reading fortran codes... Reading file 'hello.f' (format:fix,strict) Post-processing... Block: hello Block: foo Post-processing (stage 2)... Building modules... Building module "hello"... Constructing wrapper function "foo"... foo(a) Wrote C/API module "hello" to file "/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 /hellomodule.c" adding '/tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c' to sources. adding '/tmp/tmpf8P4Y3/src.linux-x86_64-2.7' to include_dirs. 
copying /home/felipe/.local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.c -> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7 copying /home/felipe/.local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.h -> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7 build_src: building npy-pkg config files running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize IntelFCompiler Found executable /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgfortran customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize NAGFCompiler customize VastFCompiler customize CompaqFCompiler Could not locate executable fort customize IntelItaniumFCompiler customize IntelEM64TFCompiler customize IntelEM64TFCompiler customize IntelEM64TFCompiler using build_ext building 'hello' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC creating /tmp/tmpf8P4Y3/tmp creating /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3 creating /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 compile options: '-I/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 -I/home/felipe/.local/lib/python2.7/site-packages/numpy/core/include -I/home/felipe/anaconda/include/python2.7 -c' gcc: /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c In file included from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1761:0, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.h:13, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c:17: /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp] #warning "Using deprecated NumPy API, disable it by " \ ^ gcc: /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c In file included from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1761:0, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.h:13, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c:2: /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp] #warning "Using deprecated NumPy API, disable it by " \ ^ compiling Fortran sources Fortran f77 compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FI -fPIC -xhost -openmp -fp-model strict Fortran f90 compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FR -fPIC -xhost -openmp -fp-model strict Fortran fix compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FI -fPIC -xhost -openmp -fp-model strict compile options: '-I/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 -I/home/felipe/.local /lib/python2.7/site-packages/numpy/core/include 
-I/home/felipe/anaconda/include/python2.7 -c' ifort:f77: hello.f /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -shared -shared -nofor_main /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.o /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.o /tmp/tmpf8P4Y3/hello.o -L/home/felipe/anaconda/lib -lpython2.7 -o ./hello.so Removing build directory /tmp/tmpf8P4Y3
Please help me; I am new to Ubuntu and Python. I really need this program, and my advisor is waiting for an answer. Thank you very much, Felipe Oliveira.
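For what it's worth, the build log points at a PATH problem rather than a broken f2py: "sh: 1: f2py: not found" appears only under sudo because sudo resets PATH (the secure_path setting in /etc/sudoers), so Anaconda's bin directory is no longer on it even though the login shell finds f2py fine. Two hedged workarounds, assuming Anaconda lives in /home/felipe/anaconda as the log suggests:

# keep the invoking user's PATH (and thus Anaconda's f2py) for the install step
sudo env "PATH=$PATH" python setup.py install

# or skip sudo entirely and install into the per-user site-packages
python setup.py install --user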

    Read the article

  • Problems with opening CHM Help files from Network or Internet

    - by Rick Strahl
As a publisher of a Help Creation tool called Html Help Builder, I’ve seen a lot of problems with help files that won't properly display actual topic content and display an error message for topics instead. Here’s the scenario: You go ahead and happily build your fancy, schmanzy Help File for your application and deploy it to your customer. Or alternately you've created a help file and you let your customers download it off the Internet directly or in a zip file. The customer downloads the file, opens the zip file and copies the help file contained in it to disk. She then opens the help file and finds the following unfortunate result: the help file comes up with all topics in the tree on the left, but a 'Navigation to the webpage was cancelled' or 'Operation Aborted' error in the Help Viewer's content window whenever you try to open a topic. The CHM file obviously opened, since the topic list is there, but the Help Viewer refuses to display the content. Looks like a broken help file, right? But it's not - it's merely a Windows security 'feature' that tries to be overly helpful in protecting you. The reason this happens is that files downloaded off the Internet - including ZIP files and the CHM files contained in those zip files - are marked as coming from the Internet and so can potentially be malicious, so they do not get browsing rights on the local machine: they can’t access local Web content, which is exactly what help topics are. If you look at the URL of a help topic you see something like this: mk:@MSITStore:C:\wwapps\wwIPStuff\wwipstuff.chm::/indexpage.htm which points at a special Microsoft URL moniker that in turn points to the CHM file and a relative path within that HTML Help file. Try pasting a URL like this into Internet Explorer and you'll see the help topic pop up in your browser (along with a warning, most likely). Although the URL looks weird, it still equates to a call into the local computer zone, the same as if you had navigated to a local file in IE, which by default is not allowed. Unfortunately, unlike Internet Explorer where you have the option of clicking a security toolbar, the CHM viewer simply refuses to load the page and you get an error page as shown above.
How to Fix This - Unblock the Help File
There's a workaround that lets you explicitly 'unblock' a CHM help file. To do this:
- Open Windows Explorer
- Find your CHM file
- Right click and select Properties
- Click the Unblock button on the General tab
Clicking the Unblock button basically tells Windows that you approve this Help File and allows topics to be viewed. Is this insecure? Not unless you're running a really old version of Windows (XP pre-SP1). In recent versions of Windows, Internet Explorer pops up various security dialogs or fires script errors when potentially malicious operations are accessed (like loading ActiveX controls), so it's relatively safe to run local content in the CHM viewer. Since most help files either contain no script or only run pure JavaScript that doesn't access web resources, this works fine without issues.
How to avoid this Problem
As an application developer there's a simple solution around this problem: always install your Help Files with an Installer. The above security warning pops up because Windows can't validate the source of the CHM file. However, if the help file is installed as part of an installation, the installation and all files associated with it - including the help file - are trusted. For help files you ship loose, the unblock step can also be scripted, as sketched below.
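If you want to spare customers the right-click dance, the block is nothing more than an NTFS alternate data stream named Zone.Identifier, and on recent versions of Windows (PowerShell 3.0 or later) it can be removed from a script. A minimal sketch, reusing the sample help file name from above:

# remove the mark-of-the-web so the CHM viewer will render topics again
Unblock-File -Path .\wwipstuff.chm

# equivalent: delete the Zone.Identifier stream directly
Remove-Item -Path .\wwipstuff.chm -Stream Zone.Identifier

Note that this is just the scripted equivalent of clicking Unblock for a file the user has decided to trust, not a way around the security model.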
A fully installed Help File of an application works just fine because it is trusted by Windows.
Summary
It's annoying as all hell that this sort of obtrusive marking is necessary, but it's admittedly a necessary evil because of Microsoft's use of the insecure Internet Explorer engine that drives the CHM help engine's topic viewer. Because help files view local content and script is allowed to execute in CHM files, there's potential for malicious code hiding in a CHM file, and the above precautions are supposed to avoid any issues. © Rick Strahl, West Wind Technologies, 2005-2012

    Read the article

  • Collision Detection problems in Voxel Engine (XNA)

    - by Darestium
I am creating a Minecraft-like terrain engine in XNA and have had some collision problems for quite some time. I have checked and changed my code based on other people's collision code and I still have the same problem. It always seems to be off by about a block. For instance, if I walk across a bridge which is one block high, I fall through it. Also, if you walk towards a "row" of blocks like this: You are able to stand "inside" the leftmost one, and you collide with nothing on the rightmost side (where there is no block and which is not visible in this image). Here is all my collision code:

private void Move(GameTime gameTime, Vector3 direction)
{
    float speed = playermovespeed * (float)gameTime.ElapsedGameTime.TotalSeconds;
    Matrix rotationMatrix = Matrix.CreateRotationY(player.Camera.LeftRightRotation);
    Vector3 rotatedVector = Vector3.Transform(direction, rotationMatrix);
    rotatedVector.Normalize();
    Vector3 testVector = rotatedVector;
    testVector.Normalize();
    Vector3 movePosition = player.position + testVector * speed;
    Vector3 midBodyPoint = movePosition + new Vector3(0, -0.7f, 0);
    Vector3 headPosition = movePosition + new Vector3(0, 0.1f, 0);
    if (!world.GetBlock(movePosition).IsSolid &&
        !world.GetBlock(midBodyPoint).IsSolid &&
        !world.GetBlock(headPosition).IsSolid)
    {
        player.position += rotatedVector * speed;
    }
    //player.position += rotatedVector * speed;
}

...

public void UpdatePosition(GameTime gameTime)
{
    player.velocity.Y += playergravity * (float)gameTime.ElapsedGameTime.TotalSeconds;
    Vector3 footPosition = player.Position + new Vector3(0f, -1.5f, 0f);
    Vector3 headPosition = player.Position + new Vector3(0f, 0.1f, 0f);
    // If the block below the player is solid the Y velocity should be zero
    if (world.GetBlock(footPosition).IsSolid || world.GetBlock(headPosition).IsSolid)
    {
        player.velocity.Y = 0;
    }
    UpdateJump(gameTime);
    UpdateCounter(gameTime);
    ProcessInput(gameTime);
    player.Position = player.Position + player.velocity * (float)gameTime.ElapsedGameTime.TotalSeconds;
    velocity = Vector3.Zero;
}

and the one and only function in the camera class:

protected void CalculateView()
{
    Matrix rotationMatrix = Matrix.CreateRotationX(upDownRotation) * Matrix.CreateRotationY(leftRightRotation);
    lookVector = Vector3.Transform(Vector3.Forward, rotationMatrix);
    cameraFinalTarget = Position + lookVector;
    Vector3 cameraRotatedUpVector = Vector3.Transform(Vector3.Up, rotationMatrix);
    viewMatrix = Matrix.CreateLookAt(Position, cameraFinalTarget, cameraRotatedUpVector);
}

which is called when the rotation variables are changed:

public float LeftRightRotation
{
    get { return leftRightRotation; }
    set { leftRightRotation = value; CalculateView(); }
}
public float UpDownRotation
{
    get { return upDownRotation; }
    set { upDownRotation = value; CalculateView(); }
}

World class:

public Block GetBlock(int x, int y, int z)
{
    if (InBounds(x, y, z))
    {
        Vector3i regionalPosition = GetRegionalPosition(x, y, z);
        Vector3i region = GetRegionPosition(x, y, z);
        return regions[region.X, region.Y, region.Z].Blocks[regionalPosition.X, regionalPosition.Y, regionalPosition.Z];
    }
    return new Block(BlockType.none);
}
public Vector3i GetRegionPosition(int x, int y, int z)
{
    int regionx = x == 0 ? 0 : x / Variables.REGION_SIZE_X;
    int regiony = y == 0 ? 0 : y / Variables.REGION_SIZE_Y;
    int regionz = z == 0 ? 0 : z / Variables.REGION_SIZE_Z;
    return new Vector3i(regionx, regiony, regionz);
}
public Vector3i GetRegionalPosition(int x, int y, int z)
{
    int regionx = x == 0 ? 0 : x / Variables.REGION_SIZE_X;
    int X = x % Variables.REGION_SIZE_X;
    int regiony = y == 0 ? 0 : y / Variables.REGION_SIZE_Y;
    int Y = y % Variables.REGION_SIZE_Y;
    int regionz = z == 0 ? 0 : z / Variables.REGION_SIZE_Z;
    int Z = z % Variables.REGION_SIZE_Z;
    return new Vector3i(X, Y, Z);
}

Any ideas how to fix this problem? EDIT 1: Graphic of the problem: EDIT 2: GetBlock, Vector3 version:

public Block GetBlock(Vector3 position)
{
    int x = (int)Math.Floor(position.X);
    int y = (int)Math.Floor(position.Y);
    int z = (int)Math.Ceiling(position.Z);
    Block block = GetBlock(x, y, z);
    return block;
}

Now, the thing is, I tested the theory that the Z is always "off by one", and by ceiling the value it actually works as intended. Although it could still be much more accurate (when you go down holes you can see through the sides, and I doubt it will work with negative positions). It also does not feel clean to floor the X and Y values and just ceiling the Z. I am surely still doing something incorrectly.
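Not from the original question, but a common cause of exactly this one-block offset is mixing rounding conventions: the world-to-block conversion should floor every axis the same way, since Math.Ceiling (and a plain int cast, which truncates toward zero) shifts the lookup for some positions and misbehaves for negative coordinates. A hedged sketch of GetBlock along those lines, assuming the same World API as above:

public Block GetBlock(Vector3 position)
{
    // Floor uniformly on all axes: 3.7 -> 3, and -0.2 -> -1
    // (an int cast would give 0 for -0.2, breaking negative coordinates).
    int x = (int)Math.Floor(position.X);
    int y = (int)Math.Floor(position.Y);
    int z = (int)Math.Floor(position.Z);
    return GetBlock(x, y, z);
}

If flooring Z then makes collisions feel shifted the other way, the offset is probably in how blocks are placed or rendered along the Z axis, and the compensating fix belongs there rather than in GetBlock.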

    Read the article

  • 13.10 - Weird WiFi connection problems - WMP300N - Broadcom BCM4321

    - by user1898041
Just installed 13.10 on my desktop and I really like it. After having problems getting the wifi to work, I installed it connected to the internet with an ethernet cable and added in the 3rd party software and updates as per the installation procedure. After installation was completed, I saw the wifi icon in the upper right hand corner, but it was not seeing any wifi networks. Some Googling brought me to use the 'Additional Drivers' application. It found the WMP300N Broadcom BCM4321 based PCI wifi card and installed the proprietary Broadcom STA wireless driver, which may have been installed before; I'm not sure. Here is the weird part: when I start my system, wifi seems to be in some sort of suspended state where the system sees that the card exists but the card will not detect any wifi networks. It will work after booting once I open the 'Additional Drivers' application and then start Firefox. I know it seems weird, but this is the process I've got down to get the card to recognize wifi networks. After those applications are open for a few seconds, the card starts to function like normal (although maintaining the wifi connection is a problem, but most likely a separate issue). The reason this is a problem is that this is supposed to just be a headless box managed through SSH. Here are the readouts from the common network diagnosis programs BEFORE I open 'Additional Drivers' and Firefox. All commands were done with sudo. lspci 00:00.0 Host bridge: Intel Corporation 82G35 Express DRAM Controller (rev 03) 00:01.0 PCI bridge: Intel Corporation 82G35 Express PCI Express Root Port (rev 03) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02) 00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02) 00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02) 00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 02) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92) 00:1f.0 ISA bridge: Intel Corporation 82801IR (ICH9R) LPC Interface Controller (rev 02) 00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02) 01:00.0 VGA compatible controller: NVIDIA Corporation GT216 [GeForce GT 220] (rev a2) 01:00.1 Audio device: NVIDIA Corporation High Definition Audio Controller (rev a1) 02:00.0 Ethernet controller: Qualcomm Atheros Attansic L1 Gigabit Ethernet (rev b0) 03:00.0 IDE interface: JMicron Technology Corp. JMB368 IDE controller 05:00.0 Network controller: Broadcom Corporation BCM4321 802.11b/g/n (rev 01) 05:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. 
VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0) - lshw *-network description: Ethernet interface product: Attansic L1 Gigabit Ethernet vendor: Qualcomm Atheros physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: b0 serial: 00:22:15:00:a8:12 capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1 driverversion=2.1.3 latency=0 link=no multicast=yes port=twisted pair resources: irq:46 memory:feac0000-feafffff memory:feaa0000-feabffff *-network description: Wireless interface product: BCM4321 802.11b/g/n vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:05:00.0 logical name: eth1 version: 01 serial: 00:23:69:d8:2b:16 width: 32 bits clock: 33MHz capabilities: bus_master ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=6.30.223.141 (r415941) latency=64 multicast=yes wireless=IEEE 802.11abg resources: irq:16 memory:febfc000-febfffff - ifconfig eth0 Link encap:Ethernet HWaddr 00:22:15:00:a8:12 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) eth1 Link encap:Ethernet HWaddr 00:23:69:d8:2b:16 inet6 addr: fe80::223:69ff:fed8:2b16/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:16 Base address:0xc000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:24 errors:0 dropped:0 overruns:0 frame:0 TX packets:24 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1856 (1.8 KB) TX bytes:1856 (1.8 KB) - iwconfig eth1 IEEE 802.11abg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=200 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off - iwlist scan eth1 No scan results - Here are the various commands AFTER I open 'Additional Drivers' and 'FireFox' lspci 00:00.0 Host bridge: Intel Corporation 82G35 Express DRAM Controller (rev 03) 00:01.0 PCI bridge: Intel Corporation 82G35 Express PCI Express Root Port (rev 03) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02) 00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02) 00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02) 00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 02) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel 
Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92) 00:1f.0 ISA bridge: Intel Corporation 82801IR (ICH9R) LPC Interface Controller (rev 02) 00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02) 01:00.0 VGA compatible controller: NVIDIA Corporation GT216 [GeForce GT 220] (rev a2) 01:00.1 Audio device: NVIDIA Corporation High Definition Audio Controller (rev a1) 02:00.0 Ethernet controller: Qualcomm Atheros Attansic L1 Gigabit Ethernet (rev b0) 03:00.0 IDE interface: JMicron Technology Corp. JMB368 IDE controller 05:00.0 Network controller: Broadcom Corporation BCM4321 802.11b/g/n (rev 01) 05:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0) - lshw *-network description: Ethernet interface product: Attansic L1 Gigabit Ethernet vendor: Qualcomm Atheros physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: b0 serial: 00:22:15:00:a8:12 capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1 driverversion=2.1.3 latency=0 link=no multicast=yes port=twisted pair resources: irq:46 memory:feac0000-feafffff memory:feaa0000-feabffff *-network description: Wireless interface product: BCM4321 802.11b/g/n vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:05:00.0 logical name: eth1 version: 01 serial: 00:23:69:d8:2b:16 width: 32 bits clock: 33MHz capabilities: bus_master ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=6.30.223.141 (r415941) ip=192.168.1.103 latency=64 multicast=yes wireless=IEEE 802.11abg resources: irq:16 memory:febfc000-febfffff - ifconfig eth0 Link encap:Ethernet HWaddr 00:22:15:00:a8:12 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) eth1 Link encap:Ethernet HWaddr 00:23:69:d8:2b:16 inet addr:192.168.1.103 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::223:69ff:fed8:2b16/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:85 errors:0 dropped:0 overruns:0 frame:11901 TX packets:132 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:52641 (52.6 KB) TX bytes:19058 (19.0 KB) Interrupt:16 Base address:0xc000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:76 errors:0 dropped:0 overruns:0 frame:0 TX packets:76 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:6084 (6.0 KB) TX bytes:6084 (6.0 KB) - iwconfig eth1 IEEE 802.11abg ESSID:"BU" Mode:Managed Frequency:2.447 GHz Access Point: 00:26:F2:1F:81:02 Bit Rate=54 Mb/s Tx-Power=200 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off Link Quality=59/70 Signal level=-51 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0 - iwlist scan A LOT OF SSIDs FOUND! - I'd like to have this problem fixed, but I'm not quite sure where to go. Been Googling a lot and can't seem to find anyone else with this problem.
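One pattern consistent with these symptoms, offered as a guess rather than a confirmed diagnosis: with the proprietary Broadcom STA (wl) driver, the in-kernel b43/ssb/bcma drivers can claim the BCM4321 at boot and leave wl half-initialized until something pokes the device again. A quick test from the wired connection:

# unload any competing drivers (some may not be loaded), then reload wl
sudo modprobe -r b43 ssb bcma wl
sudo modprobe wl

If scanning starts working right after that, blacklisting the competing modules should make it stick across reboots: add lines like "blacklist b43", "blacklist ssb" and "blacklist bcma" to a new file under /etc/modprobe.d/ (the file name, e.g. blacklist-bcm43.conf, is arbitrary), add wl to /etc/modules, then run sudo update-initramfs -u. For a headless SSH-managed box that is usually enough to bring wifi up without a login session.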

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
Part II – Solving Big Problems with Oracle R Enterprise In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet.  We demonstrated the calculations against sample data for a small set of accounts.  While this worked fine, in the real world the problem is much bigger because the amount of data is much bigger - so much bigger that our approach in the previous post won't scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer:
- The actual data that needs to be used lives in a database, not in a spreadsheet
- The actual data is much, much bigger - too big to fit into the normal R memory space and too big to want to move across the network
- The overall process needs to run fast - much faster than a single processor
- The actual data needs to be kept secured - another reason to not want to move it from the database and across the network
- The process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes
In this post, we will show how we moved from the sample data environment to working with full-scale data.  This post is based on actual work we did for a financial services customer during a recent proof-of-concept. Getting started with the Database At this point, we have some sample data and our IRR function.  We were at a similar point in our customer proof-of-concept exercise - we had sample data but we did not have the full customer data yet.  So our database was empty.  But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the).  The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create().  The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table: At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA.  So, we proceeded to test our R function working with database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function.  This worked.  No SQL coding required! Going from Crawling to Walking Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts.  In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series.  Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply().  You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1 Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR").
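The ore.groupApply call itself appeared as a screen shot in the original post; a rough sketch of the shape of that call, assuming the myIRR package exposes a SimpleMWRR() function that takes one account's rows (the real code may differ in its details):

# split IRR_DATA by ACCOUNT inside the database, apply our function to each
# group in an embedded R engine on the server, and combine the results
res <- ore.groupApply(
    IRR_DATA,              # ore.frame proxy for the IRR_DATA table
    IRR_DATA$ACCOUNT,      # grouping column
    function(dat) {
        library(myIRR)
        data.frame(ACCOUNT = dat$ACCOUNT[1], IRR = SimpleMWRR(dat))
    })
irr_results <- ore.pull(res)   # the combined result is small enough to pull locally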
The interesting thing about ore.groupApply is that the calculation is not actually performed in my desktop R environment from which I am running.  What actually happens is that ore.groupApply uses the Oracle database to perform the work.  And the Oracle database is what actually splits the IRR_DATA table by ACCOUNT.  Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function.  Then the Oracle database combines all the individual results from the calls to the R function. This is significant because now the embedded R engine only needs to deal with the data for a single account at a time.  Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care.  Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program.  Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server - this is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once. From Walking to Running ore.groupApply is rather nice, but it still has the drawback that I run this from a desktop R instance.  This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation.  But this is not an issue for ORE.  Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations.  That is extremely exciting, and it is the way we actually did these calculations in the customer proof. Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as “rqGroupEval”.  To use rqGroupEval via SQL, there is a bit of simple setup needed.  Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database’s pipeline table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table.  The next step is to define our R function to the database.  We do that via a call to ORE’s rqScriptCreate. Now we can test it.  The SQL you use to run rqGroupEval uses the Oracle database pipeline table function syntax.  The first argument to irr_dataGroupEval is a cursor defining our input.  You can add additional where clauses and subqueries to this cursor as appropriate.  The second argument is any additional inputs to the R function.  The third argument is the text of a dummy select statement.  The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return.  The fourth argument is the column of the input table to split/group by.  
The final argument is the name of the R function as you defined it when you called rqScriptCreate(). The Real-World Results In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example.  For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so.  In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that.  And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.  For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine.  As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations.  To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features.  Let’s look at the SQL used in the customer proof: Notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function.  That is all we need to do to enable Oracle to use parallel R engines. Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran this during the proof of concept (hint: you might need to right-click on these images to view them full-screen and see the entire image): From the above, you can notice a few things (numbers 1 thru 5 below correspond with highlighted numbers on the images above):
- The SQL completed in 110 seconds (1.8 minutes)
- We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
- We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
- We ran with 72 degrees of parallelism spread across 4 database servers
- Most of our 110 seconds was spent in the “External Procedure call” event
- On average, we performed 8,200 executions of our R function per second (911k accounts / 110s)
- On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts)
- On average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods)
- On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s)
R + Oracle R Enterprise: Best of R + Best of Oracle Database This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time.  While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues).  It also showed that we could calculate 5 time periods of rate of returns for almost a million individual accounts in less than 2 minutes. 
In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world.  In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • Common Live Upgrade problems

    - by user12611829
As I have worked with customers deploying Live Upgrade in their environments, several problems seem to surface over and over. With this blog article, I will try to collect these troubles, as well as suggest some workarounds. If this sounds like the beginnings of a Wiki, you would be right. At present, there is not enough material for one, so we will use this blog for the time being. I do expect new material to be posted on occasion, so if you wish to bookmark it for future reference, a permanent link can be found here. Live Upgrade copies over ZFS root clone This was introduced in Solaris 10 10/09 (u8), and the root of the problem is a duplicate entry in the source boot environment's ICF configuration file. Prior to u8, a ZFS root file system was not included in /etc/vfstab, since the mount is implicit at boot time. Starting with u8, the root file system is included in /etc/vfstab, and when the boot environment is scanned to create the ICF file, a duplicate entry is recorded. Here's what the error looks like. # lucreate -n s10u9-baseline Checking GRUB menu... System has findroot enabled GRUB Analyzing system configuration. Comparing source boot environment file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. Updating boot environment description database on all BEs. Updating system configuration files. Creating configuration for boot environment . Source boot environment is . Creating boot environment . Creating file systems on boot environment . Creating file system for in zone on . The error indicator ----- /usr/lib/lu/lumkfs: test: unknown operator zfs Populating file systems on boot environment . Checking selection integrity. Integrity check OK. Populating contents of mount point . This should not happen ------ Copying. Ctrl-C and cleanup If you weren't paying close attention, you might not even know this is an error. The symptoms are lucreate times that are way too long due to the extraneous copy, or (the one that alerted me to the problem) the root file system filling up - again thanks to a redundant copy. This problem has already been identified and corrected, and a patch (121431-58 or later for x86, 121430-57 for SPARC) is available. Unfortunately, this patch has not yet made it into the Solaris 10 Recommended Patch Cluster. Applying the prerequisite patches from the latest cluster is a recommendation from the Live Upgrade Survival Guide blog, so an additional step will be required until the patch is included. Let's see how this works. # patchadd -p | grep 121431 Patch: 121429-13 Obsoletes: Requires: 120236-01 121431-16 Incompatibles: Packages: SUNWluzone Patch: 121431-54 Obsoletes: 121436-05 121438-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWluu SUNWlur # unzip 121431-58 # patchadd 121431-58 Validating patches... Loading patches installed on the system... Done! Loading patches requested to install. Done! Checking patches that you specified for installation. Done! Approved patches will be installed in this order: 121431-58 Checking installed patches... Executing prepatch script... Installing patch packages... Patch 121431-58 has been successfully installed. See /var/sadm/patch/121431-58/log for details Executing postpatch script... Patch packages installed: SUNWlucfg SUNWlur SUNWluu # lucreate -n s10u9-baseline Checking GRUB menu... System has findroot enabled GRUB Analyzing system configuration. INFORMATION: Unable to determine size or capacity of slice . 
Comparing source boot environment file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. INFORMATION: Unable to determine size or capacity of slice . Updating boot environment description database on all BEs. Updating system configuration files. Creating configuration for boot environment . Source boot environment is . Creating boot environment . Cloning file systems from boot environment to create boot environment . Creating snapshot for on . Creating clone for on . Setting canmount=noauto for in zone on . Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev. Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev. Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev. File propagation successful Copied GRUB menu from PBE to ABE No entry for BE in GRUB menu Population of boot environment successful. Creation of boot environment successful. This time it took just a few seconds. A cursory examination of the offending ICF file (/etc/lu/ICF.3 in this case) shows that the duplicate root file system entry is now gone. # cat /etc/lu/ICF.3 s10u8-baseline:-:/dev/zvol/dsk/panroot/swap:swap:8388608 s10u8-baseline:/:panroot/ROOT/s10u8-baseline:zfs:0 s10u8-baseline:/vbox:pandora/vbox:zfs:0 s10u8-baseline:/setup:pandora/setup:zfs:0 s10u8-baseline:/export:pandora/export:zfs:0 s10u8-baseline:/pandora:pandora:zfs:0 s10u8-baseline:/panroot:panroot:zfs:0 s10u8-baseline:/workshop:pandora/workshop:zfs:0 s10u8-baseline:/export/iso:pandora/iso:zfs:0 s10u8-baseline:/export/home:pandora/home:zfs:0 s10u8-baseline:/vbox/HardDisks:pandora/vbox/HardDisks:zfs:0 s10u8-baseline:/vbox/HardDisks/WinXP:pandora/vbox/HardDisks/WinXP:zfs:0 Solaris 10 9/10 introduces new autoregistration file This one is actually mentioned in the Oracle Solaris 10 9/10 release notes. I know, I hate it when that happens too. Here's what the "error" looks like. # luupgrade -u -s /mnt -n s10u9-baseline System has findroot enabled GRUB No entry for BE in GRUB menu Copying failsafe kernel from media. 61364 blocks miniroot filesystem is Mounting miniroot at ERROR: The auto registration file does not exist or incomplete. The auto registration file is mandatory for this upgrade. Use -k argument along with luupgrade command. autoreg_file is path to auto registration information file. See sysidcfg(4) for a list of valid keywords for use in this file. The format of the file is as follows. oracle_user=xxxx oracle_pw=xxxx http_proxy_host=xxxx http_proxy_port=xxxx http_proxy_user=xxxx http_proxy_pw=xxxx For more details refer "Oracle Solaris 10 9/10 Installation Guide: Planning for Installation and Upgrade". As with the previous problem, this is also easy to work around. Assuming that you don't want to use the auto-registration feature at upgrade time, create a file that contains just autoreg=disable and pass the filename on to luupgrade. Here is an example. # echo "autoreg=disable" > /var/tmp/no-autoreg # luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline System has findroot enabled GRUB No entry for BE in GRUB menu Copying failsafe kernel from media. 61364 blocks miniroot filesystem is Mounting miniroot at ####################################################################### NOTE: To improve products and services, Oracle Solaris communicates configuration data to Oracle after rebooting. 
You can register your version of Oracle Solaris to capture this data for your use, or the data is sent anonymously. For information about what configuration data is communicated and how to control this facility, see the Release Notes or www.oracle.com/goto/solarisautoreg. INFORMATION: After activated and booted into new BE , Auto Registration happens automatically with the following Information autoreg=disable ####################################################################### Validating the contents of the media . The media is a standard Solaris media. The media contains an operating system upgrade image. The media contains version . Constructing upgrade profile to use. Locating the operating system upgrade program. Checking for existence of previously scheduled Live Upgrade requests. Creating upgrade profile for BE . Checking for GRUB menu on ABE . Saving GRUB menu on ABE . Checking for x86 boot partition on ABE. Determining packages to install or upgrade for BE . Performing the operating system upgrade of the BE . CAUTION: Interrupting this process may leave the boot environment unstable or unbootable. The Live Upgrade operation now proceeds as expected. Once the system upgrade is complete, we can manually register the system. If you want to do a hands off registration during the upgrade, see the Oracle Solaris Auto Registration section of the Oracle Solaris Release Notes for instructions on how to do that. Technocrati Tags: Oracle Solaris Patching Live Upgrade

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >