Search Results

Search found 18729 results on 750 pages for 'edit'.


  • Why is my CPU being used while doing nothing?

    - by Jop
    I have installed Ubuntu GNOME in BIOS mode on my MacBook (BIOS mode so that the proprietary NVIDIA drivers work; I need them for gaming). For some reason, a lot of CPU is being used while not really doing anything. It usually swings between 20-30% on both cores. But when I look at the list of processes and sort by CPU usage, I do not see anything special. No processes intensively doing anything. How can I fix this?

    EDIT: Output of the top command:

      jop@jop-MacBook:~$ top
      top - 17:08:02 up 41 min,  2 users,  load average: 0,51, 0,69, 0,95
      Tasks: 202 total,   2 running, 200 sleeping,   0 stopped,   0 zombie
      %Cpu(s): 11,9 us,  5,8 sy,  0,0 ni, 80,3 id,  0,5 wa,  0,0 hi,  1,5 si,  0,0 st
      KiB Mem:   7908316 total,  2919940 used,  4988376 free,   153248 buffers
      KiB Swap:  3906244 total,        0 used,  3906244 free,  1326544 cached

        PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
       3785 root  20   0  195m  82m  26m S 22,9  1,1 2:43.77 Xorg
       4429 jop   20   0 1543m 150m  60m S  7,3  1,9 1:26.26 compiz
       4198 jop   20   0  633m  21m  11m S  1,7  0,3 0:04.96 unity-panel-ser
       7425 jop   20   0  564m  18m  12m S  1,7  0,2 0:00.84 gnome-terminal
       7019 jop   20   0  806m  89m  46m S  1,0  1,2 0:10.01 chrome
       7323 jop   20   0  966m  93m  23m S  1,0  1,2 0:06.85 chrome
       6742 root  20   0     0    0    0 S  0,7  0,0 0:00.43 kworker/0:3
          3 root  20   0     0    0    0 S  0,3  0,0 0:06.01 ksoftirqd/0
       7008 root  20   0     0    0    0 S  0,3  0,0 0:00.27 kworker/1:3
       7302 jop   20   0  972m  96m  28m S  0,3  1,2 0:06.32 chrome
       7310 jop   20   0  382m  63m  39m S  0,3  0,8 0:00.34 chrome
       7498 jop   20   0 24840 1600 1120 R  0,3  0,0 0:00.22 top
          1 root  20   0 27176 2944 1412 S  0,0  0,0 0:01.58 init
          2 root  20   0     0    0    0 S  0,0  0,0 0:00.00 kthreadd
          5 root   0 -20     0    0    0 S  0,0  0,0 0:00.00 kworker/0:0H
          6 root  20   0     0    0    0 S  0,0  0,0 0:00.00 kworker/u4:0
          7 root  rt   0     0    0    0 S  0,0  0,0 0:02.04 migration/0

    Even when Xorg isn't as busy as it was when I copied this output, overall CPU usage is higher than what the processes add up to.
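
    A few standard tools can surface load that a per-process sort hides (a busy thread inside a process, short-lived processes, interrupt time). A minimal sketch using stock utilities; nothing here is specific to this machine:

      # Show individual threads, so a single busy thread inside Xorg or compiz stands out
      top -H -b -n 1 | head -n 20

      # Biggest accumulated CPU users since boot
      ps aux --sort=-%cpu | head -n 10

      # The 1,5% "si" (software interrupt) time can point at a driver; watch which counters grow
      watch -n 1 cat /proc/interrupts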

  • Thinkpad brightness steps error using FN+Home/End

    - by petermolnar
    I've run into the following problem: normally my T400 (Lenovo ThinkPad) has 16 steps of brightness, and Windows utilizes them correctly. After a fresh install (plus minor tweaks) of Mint 12 (which is based on Ubuntu 11.10) I only had 6 steps, which was way too few. Listing /sys/class/backlight showed three entries. I removed the acpi-tools package, one of them disappeared - and I now have 10 steps! Therefore I think that if I can reduce the entries to one, I'm going to have 16 steps, since the stepping will be 1 instead of 2 (or 3).

      /sys/class/backlight/
        intel_backlight -> ../../devices/pci0000:00/0000:00:02.0/drm/card0/card0-LVDS-1/intel_backlight
        thinkpad_screen -> ../../devices/virtual/backlight/thinkpad_screen

    The problem is that I'm unable to trace back which configs / daemons / kernel options trigger these two. More strangely, I discovered some odd behaviour. I monitored

      watch -n1 "cat /sys/class/backlight/thinkpad_screen/actual_brightness"

    and

      watch -n1 "cat /sys/class/backlight/intel_backlight/actual_brightness"

    while changing the brightness with FN+Home/End combinations from max to min. The outcome is the following:

      brightness   intel     thinkpad
      ----------   -------   --------
      MAX          2408475   7
       |           1955115   5
       |           1435640   3
       |           1246740   1
       |           1086175   0
       |           1010615   6
       |            859495   4
       |            689485   2
       v            481695   0
      MIN           217235   0

      brightness   intel     thinkpad
      ----------   -------   --------
      MIN           217235   0
       |            481695   2
       |            689485   4
       |            859495   6
       |           1010615   7
       |           1086175   1
       |           1246740   3
       |           1435640   5
       v           1955115   7
      MAX          2408475   0

    When stepping from MIN to MAX, there's no difference between the last two steps. Also, the OSD icon (Cinnamon desktop, default theme) goes from full to min in 4 steps and from full to min once again in 4 steps. So... it seems that the intel entry is working correctly, showing correct values. The thinkpad entry, however, twists things and even shows incorrect values. Does anyone have any idea how to get rid of the thinkpad entry?

    System data: Linux Mint 12, 3.0.0-16 kernel, Lenovo ThinkPad T400, Cinnamon 1.4 desktop. For any additional info, please tell me what you need.

    EDIT: I'm sorry, I forgot to mention that I added acpi_backlight=vendor to the GRUB cmdline as well; this behaves semi-better than the default.
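
    One way to confirm that intel_backlight is the interface actually driving the panel is to write to it directly. A small sketch using the paths from the listing above (the half-brightness value is just illustrative):

      # Read the hardware maximum, then set half brightness through the intel interface
      max=$(cat /sys/class/backlight/intel_backlight/max_brightness)
      echo $((max / 2)) | sudo tee /sys/class/backlight/intel_backlight/brightness

      # If the panel responds here but not via thinkpad_screen, the thinkpad entry
      # is the one to suppress (e.g. by keeping the vendor ACPI interface from loading)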

  • 2D Animation Smoothness - Delta time vs. Kinematics

    - by viperld002
    I'm animating a sprite in 2D with key frames of rotation and xy-positions. I've recently had a discussion with someone who said that when the device (an iPad, using cocos2d) hits a performance bump due to whatever else the user may be doing, lag will arise, and that the best way to fight it is to not use actual positions, but velocities, accelerations and torques with kinematics. His message is to evaluate the positions and rotations from these speeds at the current point in time.

    I've never experienced a situation where I've heard of using kinematics to stem lag in 2D animations, and I'm not sure how effective it could be. It also seems like overkill. The application is not networked, so it's all running on a local device. The desired effect is that the animation always plays as closely as it can to the target frame rate. Wouldn't the technique suffer the same problems as just using the time since the last frame, or a fixed time step, since the kinematics would also require some time value to perform the calculation? What techniques could you suggest to best achieve the desired effect?

    EDIT 1: Thank you for your responses, they are very illuminating. I want to clarify my question before choosing an answer, however, to make sure that this post really serves its purpose.

    I have a sprite of a ball, and a text file with 3 arrays worth of information (rotation, translations x, translations y), with each unit of information existing as a key frame to be stepped through (0 to 49 and back to 0 to replay it again). I have this playing by interpolating from the current key frame to the next, every n units of time. The animation is visibly correct when compared to a video I was given of it, and it is smooth because of the interpolation between the key frames. This is the existing state of the project. There are no physics simulated; this is only a static animation of a ball moving in a way an artist specifically designed.

    Should I, instead of rotation in degrees and translations by positions in space, derive velocities, accelerations and torques to express this static animation as a function of time? As in, position now = foo(time now), where foo uses kinematics.

  • Cloning a dual boot system from HDD to SSD

    - by Alex
    I'm planning on replacing my laptop's HDD with a 256GB SSD, but I have a dual-boot (12.04 and Windows 7) setup and I'd like to be able to directly migrate Ubuntu over without having to reinstall and lose all of my settings. GParted reports the following partition setup on my HDD. I am, of course, able to modify it if necessary.

      /dev/sda1 (NTFS)           66.92 MB of 200.00 MB used
        I'm honestly not sure what this partition is for. Maybe for Windows 7 system files? I'm hesitant to mess with it. (Edit: it turns out it is a partition for Windows recovery files in the event of OS corruption, so I don't want to remove it. Plus it also appears to be a major pain to remove anyway.)

      /dev/sda2 (NTFS, boot)     116.35 GB of 339.06 GB used
        This partition is the C: drive on my Windows installation. I don't use it on my Ubuntu installation, except that it is the boot partition and thus has grub on it.

      /dev/sda4 (extended)
        /dev/sda5 (ext4)         14.49 GB of 91.34 GB used
        /dev/sda6 (linux-swap)   5.92 GB
        These are my Ubuntu partitions. sda5 contains my documents and all of the files I use on Ubuntu, and (as far as I know) the system files for Ubuntu itself (it's the partition I created when prompted by the Live-DVD installer). sda6 is, of course, the swap partition, which I only need for hibernation (6GB of RAM).

      /dev/sda3 (NTFS)           9.89 GB of 14.75 GB used
        This is an annoying partition that Lenovo created to store some drivers and files that I might need later on. For example, it allows me to use OneKeyRecovery for a quick factory recovery if absolutely necessary (not sure if that'll work on an SSD). It also contains not-so-important files for bloatware installation.

    In total, my HDD only has about 150GB of files on it, so it should fit comfortably on the SSD. The problem is, I want to exactly migrate my files, partitions, OSes, MBR, etc. from my HDD to my SSD, and I'm not quite sure how to do this. I've seen CloneZilla referenced before, but I'm not all too experienced, and its documentation quite frankly reads a bit like a foreign language to me. So, put simply, is there any way I can exactly clone this HDD to an SSD without a massive headache?

    Also, if it matters, I'll probably be using an external hard drive case (as recommended in online tutorials) to externally attach the SSD to my laptop during the cloning process, due to the lack of two hard drive slots in the machine.
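
    Because the 339GB Windows partition is bigger than the whole SSD, a raw disk-to-disk dd cannot work here; the partitions have to be shrunk first. One command-line route is sketched below. It is a sketch, not a recipe: /dev/sdb is an assumption (verify with lsblk), partclone comes from the partclone package, everything here is destructive if pointed at the wrong disk, and it assumes sda2 has already been shrunk in GParted so that every partition ends inside the SSD's 256 GB:

      sudo dd if=/dev/sda of=/dev/sdb bs=1M count=1      # boot code + grub core.img + primary table
      sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb     # recreate the full table, incl. logical partitions
      sudo partclone.ntfs -b -s /dev/sda1 -o /dev/sdb1   # clone each filesystem, used blocks only
      sudo partclone.ntfs -b -s /dev/sda2 -o /dev/sdb2
      sudo partclone.ntfs -b -s /dev/sda3 -o /dev/sdb3
      sudo partclone.ext4 -b -s /dev/sda5 -o /dev/sdb5
      sudo mkswap /dev/sdb6    # swap holds no data; re-check its UUID in /etc/fstab afterwards

    Clonezilla's device-device mode automates essentially this sequence, with the same shrink-first caveat; and if grub doesn't come up on the first boot from the SSD, reinstalling grub from the live DVD usually finishes the job.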

  • WIF, ADFS 2 and WCF – Part 1: Overview

    - by Your DisplayName here!
    A lot has been written already about passive federation and integration of WIF and ADFS 2 into web apps. The whole active/WS-Trust feature area is much less documented or covered in articles and blogs. Over the next few posts I will try to compile all relevant information about the above topics - but let's start with an overview.

    ADFS 2 has a number of endpoints under the /services/trust base address that implement the WS-Trust protocol. They are grouped by the WS-Trust version they support (/13 and /2005), the client credential type (/windows*, /username*, /certificate*) and the security mode (*transport, *mixed and message). You can see the endpoints in the MMC console under the Service/Endpoints page. In other words, you use one of these endpoints (which one exactly depends on your configuration / system setup) to request tokens from ADFS 2.

    The bindings behind the endpoints are more or less standard WCF bindings, but with SecureConversation (establishSecurityContext) disabled. That means that whenever you need to programmatically talk to these endpoints, you can (easily) create client bindings that are compatible. Another option is to use the special bindings that come with WIF (in the Microsoft.IdentityModel.Protocols.WSTrust.Bindings namespace). They are already pre-configured to be compatible with the ADFS endpoints. The downside of these bindings is that you can't use them in configuration. That's definitely a feature request of mine for the next version of WIF.

    The next important piece of information is the so-called Federation Service Identifier. This is the value that you (at least by default) have to use as a realm/appliesTo whenever you are requesting a token from ADFS (e.g. in an IdP -> R-STS scenario). Or (even more) technically speaking, ADFS 2 checks for this value in the audience URI restriction in SAML tokens. You can get to this value by clicking "Edit Federation Service Properties" in the MMC when the Service tree node is selected.

    OK - I will come back to this basic information in the following posts. Basically I want to go through the following scenarios:

      - ADFS in the IdP role
      - ADFS in the R-STS role (with a chained claims provider)
      - Using the WCF bindings for automatic token issuance
      - Using WSTrustChannelFactory for manual token handling

    Stay tuned…

  • Ideas for time-keeping in a web-based RPG?

    - by ashy_32bit
    I'm assigned the task of doing the preliminary research for a web-based MMO RPG. Now my biggest problem here is "web based" vs. "MMO RPG". I did some research about time-keeping systems, and I'm totally confused as to how exactly something as real-time as an MMO RPG can work on a pull-only (unidirectional) platform like HTTP. I know there is also a turn-based alternative to time keeping, but can it work in an MMO setting?

    EDIT: Take a battle for example. Player A (human) wants to attack player B (also human) in the open. How does it work when player A issues the "attack" command on player B? How do I inform player B that he is being attacked? And then how exactly does the battle go on between the two over an HTTP-based communication channel? To my knowledge this is impossible unless you resort to another technology (HTTP is one-way; that is, you can only ask the server and get a response, and the server can't update you unless asked to. This is very well known and simply explained). So I thought maybe I could somehow change the whole timekeeping model from real-time to a more non-real-time model (towards a turn-based RPG, for example) and somehow work around the whole problem of "interactivity".

    EDIT 2: It is not that I don't want to use any server-side technologies. For sure it is not going to work client-side-only, even for the most trivial of multi-player games, let alone an RPG. So sure, there would be a (probably complex) server-side component to it (the so-called game engine, I suppose). The problem is not the technology that implements the logic (game mechanics) but the communication technology, and how it limits the game mechanics' abilities (like how real-time or turn-based it is going to be). HTTP is a request-response protocol, meaning you get served only if you ask for it (explicitly send a GET or POST request to the server). An HTTP server cannot inform you if anything of interest happens in the game world unless you refresh the page (as some suggested) or you use some bi-directional tech (totally different animals) like Flash, WebSockets, HTML5, etc.

    So maybe the question is: is it possible to implement an MMORPG using only HTML5/PHP and no periodic page refreshes? If so, what would be the rules to make it an MMO RPG? Can't explain it any clearer. Sorry :D

  • System freezes while not in use, how do I fix this?

    - by PHLAK
    Bear with me, the following is a bit long-winded. I have Ubuntu 10.10 Desktop 64-bit installed on my laptop, and up until a few weeks ago it had been running great. Then one day, while I was not using the laptop, it froze. I was logged in as my user but had locked the screen and closed the lid. I didn't notice that it had frozen until I opened the lid and wiggled the mouse to try and log in. The screen remained black and I got no response. I immediately tried Alt + F2, F3, F4, etc. but got no response. The only thing I could do was hold the power button to power off the machine. The freezing has happened as quickly as within 10-20 minutes of the system being logged off and the lid closed, and as long as 4-6 hours later. My machine is NOT configured to go into standby when plugged in, and this has happened both on AC power and battery.

    Troubleshooting I have performed: I uninstalled the programs I knew I had installed between when it was working fine and when it started having problems. Those programs were CrashPlan, Shutter and Conky. After uninstalling ALL of these programs, the freezing still occurs.

    Next, I decided to SSH into the machine from my desktop and leave an htop and a tail of the syslog running. Here are screenshots of the last thing shown on both when the system froze: htop, syslog. Here is a dump of my syslog after another freeze. The freeze happened at 9:14 and I didn't notice it until about 10 minutes later and rebooted, hence the 10-minute gap from 9:14 to 9:24. In the above syslog dump I noticed a lot of "NVRM: os_raise_smp_barrier(), invalid context!" and, upon investigating that message, learned it was from the proprietary Nvidia driver I had installed. Thinking this could be part of the problem, I uninstalled the Nvidia driver and reverted to using the Nouveau driver. The computer still froze after a few hours.

    Lastly, thinking the problem could be caused by overheating, I used compressed air to blow out any dust in the CPU vents and all other openings on the laptop. None of the above troubleshooting has helped and the freezing still occurs. What other steps can I take to troubleshoot and/or fix this problem?

    Note: Yesterday X started to eat up a lot of CPU power and eventually froze my system while I was forwarding an X session over SSH (from another PC to my laptop). I'm unsure if this is related or not, as it doesn't match any of the symptoms of the problem above. Aside from this, the system has never frozen while in use, even under heavy load.

    EDIT: I just ran Memtest86+ and it made it through two passes without any errors. Just eliminating possible causes here.
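
    When a machine freezes with nothing useful left in syslog, it can help to log state continuously and force it to disk so the last lines survive the hang. A minimal sketch, assuming the lm-sensors package for temperature readings; run it as root so it can write under /var/log:

      # Append load and temperatures every minute; sync so the final stanza
      # survives a hard freeze that never flushes its buffers
      while true; do
          { date; uptime; sensors; } >> /var/log/freeze-watch.log
          sync
          sleep 60
      done

    After the next freeze, the last stanza in the log shows what the machine was doing (and how hot it was) moments before it died.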

  • Alfa AWUS036H USB wireless adapter not recognized

    - by GFiasco
    The Alfa AWUS036H USB wireless adapter will not be recognized by my netbook (Ubuntu 14.04, Asus X201E). As I understand it, the drivers should already be built in to this version of Linux, but I tried a make/make install of the latest Realtek drivers (as mentioned on How do I install drivers for the Alfa AWUS036H USB wireless adapter?) and it didn't work. I then followed the advice of this thread (ALFA AWUS036NH driver) and did a make/make install of the most up-to-date backport of the drivers, but that didn't work either.

    At this point I tried a series of commands from this thread (http://ubuntuforums.org/showthread.php?t=2187780) in an attempt to identify the problem, but at no point could I get the laptop to recognize the USB adapter. I have also troubleshot the USB cable itself, tried both the USB 2.0 and 3.0 ports on the laptop, have never received an error message about needing to update the firmware, and have seemingly successfully installed all manner of Realtek driver variants which were supposed to make the adapter work. (I also tried to delete/clean up after each install, in the hope I wasn't making things worse.)

    Not sure what I should do next. Please let me know if I need to post any more information. Thanks very much for your help.

    EDIT: Before inserting the Alfa USB adapter:

      :~$ lsusb
      Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
      Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 001 Device 004: ID 0bda:570c Realtek Semiconductor Corp.
      Bus 001 Device 026: ID 13d3:3393 IMC Networks
      Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
      Bus 003 Device 002: ID 03f0:3112 Hewlett-Packard
      Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    After inserting the Alfa USB adapter (USB 3.0 port, no change):

      :~$ lsusb
      Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
      Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 001 Device 004: ID 0bda:570c Realtek Semiconductor Corp.
      Bus 001 Device 026: ID 13d3:3393 IMC Networks
      Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
      Bus 003 Device 002: ID 03f0:3112 Hewlett-Packard
      Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Ran tail -f /var/log/syslog, inserted the device, no recognition (the last entry is dated 16:17:01, an hour ago). Going to check on an Ubuntu 14.04 laptop and a Windows XP desktop; I'll update after. Thanks for your help to this point.
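
    If lsusb shows nothing new at all, watching the kernel/udev event stream while plugging the adapter in distinguishes an electrical problem from a driver one. A quick sketch with stock tools:

      # In one terminal, watch events as you plug the adapter in:
      sudo udevadm monitor --kernel --udev

      # In another, follow the kernel log:
      tail -f /var/log/kern.log

      # No output on either = the failure is below the driver layer
      # (cable, port power, or the adapter itself).
      # If an event does appear: the AWUS036H is an RTL8187L device,
      # handled in-kernel by the rtl8187 module, so check it loaded:
      lsmod | grep rtl8187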

  • Unity Dash and top toolbar won't open after updating to 12.10

    - by pgrytdal
    Today, I updated to Ubuntu 12.10. After restarting, as the updater suggested, the toolbar at the top of the screen and the dash won't load. I seem to be missing other features as well, like Alt+Tab to switch windows, etc. I am able to access the Terminal by typing Ctrl+Alt+T, which is how I was able to access Firefox. How do I fix this problem?

    Edit (2:10 PM on 10/19/12): As Chris Carter suggested, I'm including the results of the terminal command lspci (sorry, I don't know how to format this between backticks):

      00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge
      00:01.0 PCI bridge: Acer Incorporated [ALI] AMD RS780/RS880 PCI to PCI bridge (int gfx)
      00:04.0 PCI bridge: Advanced Micro Devices [AMD] RS780/RS880 PCI to PCI bridge (PCIE port 0)
      00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2)
      00:07.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 3)
      00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode]
      00:12.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
      00:12.1 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0 USB OHCI1 Controller
      00:12.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller
      00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
      00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller
      00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 3a)
      00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA)
      00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 LPC host controller
      00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge
      00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor HyperTransport Configuration (rev 40)
      00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Address Map
      00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor DRAM Controller
      00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Miscellaneous Control
      00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Link Control
      01:05.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RS780M/RS780MN [Mobility Radeon HD 3200 Graphics]
      01:05.1 Audio device: Advanced Micro Devices [AMD] nee ATI RS780 HDMI Audio [Radeon HD 3000-3300 Series]
      03:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe (rev 10)
      09:00.0 Network controller: Atheros Communications Inc. AR928X Wireless Network Adapter (PCI-Express) (rev 01)
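
    Two things often bring back a 12.10 session whose panel and dash vanished after an upgrade: resetting a stale compiz profile and relaunching the shell by hand (its error output usually names the failing piece). A sketch - note the dconf reset wipes your compiz/Unity settings:

      # From the Ctrl+Alt+T terminal: reset compiz settings (Unity runs as a
      # compiz plugin, and a stale profile after upgrades is a common culprit)
      dconf reset -f /org/compiz/

      # Then relaunch Unity detached from this terminal, logging any errors
      setsid unity > /tmp/unity.log 2>&1 &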

  • How do you handle objects that need custom behavior, and need to exist as an entity in the database?

    - by Scott Whitlock
    For a simple example, assume your application sends out notifications to users when various events happen. So in the database I might have the following tables:

      TABLE Event
        EventId    uniqueidentifier
        EventName  varchar

      TABLE User
        UserId  uniqueidentifier
        Name    varchar

      TABLE EventSubscription
        EventUserId
        EventId
        UserId

    The events themselves are generated by the program. So there are hard-coded points in the application where an event instance is generated, and it needs to notify all the subscribed users. So the application itself doesn't edit the Event table, except during initial installation, and during an update where a new Event might be created.

    At some point, when an event is generated, the application needs to look up the Event and get a list of Users. What's the best way to link the event in the source code to the event in the database?

    Option 1: Store the EventName in the program as a fixed constant, and look it up by name.
    Option 2: Store the EventId in the program as a static Guid, and look it up by ID.

    Extra Credit

    In other similar circumstances I may want to include custom behavior with the event type. That is, I'll want subclasses of my Event entity class with different behaviors, and when I look up an event, I want it to return an instance of my subclass. For instance:

      class Event
      {
          public Guid Id { get; }
          public string EventName { get; }   // maps to the varchar EventName column
          public ReadOnlyCollection<EventSubscription> EventSubscriptions { get; }

          public void NotifySubscribers()
          {
              foreach(var eventSubscription in EventSubscriptions)
              {
                  eventSubscription.Notify();
              }
              this.OnSubscribersNotified();
          }

          public virtual void OnSubscribersNotified() {}
      }

      class WakingEvent : Event
      {
          private readonly IWaker waker;

          public WakingEvent(IWaker waker)
          {
              if(waker == null) throw new ArgumentNullException("waker");
              this.waker = waker;
          }

          public override void OnSubscribersNotified()
          {
              this.waker.Wake();
              base.OnSubscribersNotified();
          }
      }

    So, that means I need to map WakingEvent to whatever key I'm using to look it up in the database. Let's say that's the EventId. Where do I store this relationship? Does it go in the event repository class? Should the WakingEvent declare its own ID in a static member or method? ...and then, is this all backwards? If all events have a subclass, then instead of retrieving events by ID, should I be asking my repository for the WakingEvent, like this?

      public T GetEvent<T>() where T : Event
      {
          ... // what goes here?
      }

    I can't be the first one to tackle this. What's the best practice?

  • How can I set the date format to my country setting?

    - by Jamina Meissner
    I am German, but I use only English software. Hence, I am also using English Ubuntu. It's not because I don't know how to install German Ubuntu; it's because I prefer to work in an English software environment. However, I would like to keep the date & time format in the German format, just as I use a German keyboard layout in English Ubuntu.

    I can set the time format to 24h time. But how can I set the date format to the German format? It is irritating for me to have the month before the day number: in other words, instead of "Oct 14 15:16" I want it to display "14 Okt 15:16" or (if only English is available) "14 Oct 15:16" or "14th Oct 15:16". At the least, the number of the day should be displayed before the month.

    In Windows, it was no problem to choose time/date/currency settings according to a chosen country. Where can I do this in Ubuntu? The best would be if I could freely enter the date/time format myself with variables (DD.MM hh.mm.ss etc.). I found answers for Ubuntu 11.04, but not for Ubuntu 12.04. I am using Ubuntu 12.04, 64-bit. Keep in mind that I am a beginner, so I'd like to be able to do this via the GUI if possible.

    EDIT: I found the answer in a forum. Go to System Settings... and choose Language Support. There are two tabs, Language and Regional Formats; you are on the Language tab by default. On the Language tab, click Install / Remove Languages. A window with a list of languages opens. Mark the language(s) you want to add for your time/date/currency format and click Apply Changes. Ubuntu will now download and install the additional language files, as well as help files of other applications in this language, so don't be irritated. (Do not change the language for menus and windows on the Language tab if you only want to change the date/time/unit format.) When Ubuntu has finished applying the changes, switch to the Regional Formats tab. There you can choose from the dropdown list the language for your preferred date/time/currency/unit format. Log out and log in again for the changes to take effect.
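
    The same result is available from the terminal by setting only the LC_TIME locale while leaving messages in English. A sketch, assuming the German language pack is installed (the GUI steps above install it):

      # Keep English as the display language, German for date/time formats
      sudo update-locale LC_TIME=de_DE.UTF-8

      # After logging out and back in:
      locale | grep LC_TIME     # should print LC_TIME=de_DE.UTF-8
      date                      # should now show the day before the month, e.g. "14. Okt"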

  • How do I use IIS7 rewrite to redirect requests for (HTTP or HTTPS):// (www or no-www) .domainaliases.ext to HTTPS://maindomain.ext

    - by costax
    I have multiple domain names assigned to the same site and I want all possible access combinations redirected to one domain. In other words, whether the visitor uses http://domainalias.ext or http://www.domainalias.ext or https://www.domainalias3.ext or https://domainalias4.ext or any other combination, including http://maindomain.ext, http://www.maindomain.ext, and https://www.maindomain.ext, they are all redirected to https://maindomain.ext.

    I currently use the following code to partially achieve my objectives:

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="CanonicalHostNameRule" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^MAINDOMAIN\.EXT$" negate="true" />
                </conditions>
                <action type="Redirect" redirectType="Permanent" url="https://MAINDOMAIN.EXT/{R:1}" />
              </rule>
              <rule name="HTTP2HTTPS" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTPS}" pattern="off" ignoreCase="true" />
                </conditions>
                <action type="Redirect" redirectType="Permanent" url="https://MAINDOMAIN.EXT/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>

    ...but it fails to work in all instances: it does not redirect to https://maindomain.ext when the user enters https://(www.)domainalias.ext.

    So my question is: are there any programmers here familiar with IIS7 rewrite who can help me modify my existing code to cover all possibilities and reroute all my domain aliases, loaded by themselves or with www in front, over HTTP or HTTPS, to my main domain over HTTPS? The logic would be: if the entire URL does NOT start with https://maindomain.ext, then REDIRECT to https://maindomain.ext/(plus_whatever_else_that_followed).

    Thank you very much for your attention; any help would be appreciated.

    NOTE TO MODS: If my question is not in the correct format, please edit or advise. Thanks in advance.

  • What is 'Ubuntu Unity' (for the Desktop)?

    - by Martin
    Ok, so there's the buzz about Canonical (wanting to) switch, for new Ubuntu versions, from the default GNOME desktop to their own Unity shell. (I hope that's accurate.) It seems I cannot totally fathom what Unity actually is, for looking at its homepage, it currently is firmly targeted at netbooks and the somewhat different usage model on these.

    Is it a classical desktop? -- Taskbar? Shortcuts? Is the difference between Ubuntu(GNOME)+Unity more or less pronounced than the difference between Ubuntu and Kubuntu? Will "my parents" be able to get the interface if they've been using the classical GNOME desktop so far?

    Edit: I would not like to split this up into more specific questions, as What is Unity? is exactly what the people I set up Ubuntu boxes for will ask me if they hear that the newer Ubuntu version is using that instead of the Desktop -- and it might well happen that someone phrases it like that :-) I will certainly not give them the link to the homepage, as the explanation there does not lay out whether it is a desktop or something more or something less. (It does not for me - therefore I'm asking here.)

      Unity is designed for netbooks and related touch-based devices. It includes [...] that makes it fast and easy to access [...] while removing screen elements that are rarely used in mobile and netbook computing.

    (emphasis mine) -- the explanation there doesn't even mention the desktop PC!

      Unity has a vertical task management panel on the left-hand side and a menu panel at the top of the screen. [...]

    This sounds like a re-themed normal desktop.

      Clicking on an icon will give the target application focus if it is already running or launch it if it is not already running.

    Aha. Sounds like Windows 7.

      If you click the icon of an application that already has focus, Unity will activate an Expose-style view of all the open windows associated with that application.

    No clue what that's supposed to be.

    So it would really be nice if someone could explain, for non desktop-design-terms experts, what Unity is.

  • No wireless connection using a conceptronic c54i (RT2561/RT61 rev B)

    - by jrosell
    Detected but not working: a new install of Ubuntu 11.10 using a Conceptronic C54Ri. As the documentation says, it uses Ralink drivers... Any ideas why my wireless does not work?

      $ lspci -nn | grep -i 'ralink'
      01:05.0 Network controller: Ralink corp. RT2561/RT61 rev B 802.11g

      $ ifconfig
      eth0   Link encap:Ethernet  HWaddr 00:1e:90:e5:af:13
             inet addr:192.168.0.197  Bcast:192.168.0.255  Mask:255.255.255.0
             inet6 addr: fe80::21e:90ff:fee5:af13/64 Scope:Link
             UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
             RX packets:28361 errors:0 dropped:0 overruns:0 frame:0
             TX packets:16858 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:39812172 (39.8 MB)  TX bytes:1633405 (1.6 MB)
             Interrupt:43 Base address:0xc000

      lo     Link encap:Local Loopback
             inet addr:127.0.0.1  Mask:255.0.0.0
             inet6 addr: ::1/128 Scope:Host
             UP LOOPBACK RUNNING  MTU:16436  Metric:1
             RX packets:80 errors:0 dropped:0 overruns:0 frame:0
             TX packets:80 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:0
             RX bytes:6608 (6.6 KB)  TX bytes:6608 (6.6 KB)

      $ iwconfig wlan0
      wlan0  IEEE 802.11abg  ESSID:off/any
             Mode:Managed  Access Point: Not-Associated  Tx-Power=0 dBm
             Retry long limit:7  RTS thr:off  Fragment thr:off
             Power Management:off

      $ lsmod | grep rt
      rt61pci        27493  0
      crc_itu_t      12627  1 rt61pci
      rt2x00pci      14202  1 rt61pci
      rt2x00lib      48114  2 rt61pci,rt2x00pci
      mac80211      272785  2 rt2x00pci,rt2x00lib
      cfg80211      172392  2 rt2x00lib,mac80211
      eeprom_93cx6   12653  1 rt61pci
      parport_pc     32114  1
      parport        40930  3 ppdev,parport_pc,lp

    The kernel log shows:

      [ 2497.816989] phy0 -> rt2x00pci_regbusy_read: Error - Indirect register access failed: offset=0x0000308c, value=0xffffffff
      [ 2497.827112] phy0 -> rt2x00pci_regbusy_read: Error - Indirect register access failed: offset=0x0000308c, value=0xffffffff
      [ 2497.837430] phy0 -> rt2x00pci_regbusy_read: Error - Indirect register access failed: offset=0x0000308c, value=0xffffffff
      [ 2497.847528] phy0 -> rt2x00pci_regbusy_read: Error - Indirect register access failed: offset=0x0000308c, value=0xffffffff
      [ 2497.847632] phy0 -> rt61pci_wait_bbp_ready: Error - BBP register access failed, aborting.
      [ 2497.847637] phy0 -> rt61pci_set_device_state: Error - Device failed to enter state 4 (-5).

      $ sudo lshw -C network
        *-network DISABLED
             description: Wireless interface
             product: RT2561/RT61 rev B 802.11g
             vendor: Ralink corp.
             physical id: 5
             bus info: pci@0000:01:05.0
             logical name: wlan0
             version: 00
             serial: fa:b8:14:58:62:35
             width: 32 bits
             clock: 33MHz
             capabilities: pm cap_list ethernet physical wireless
             configuration: broadcast=yes driver=rt61pci driverversion=3.0.0-12-generic firmware=0.8 latency=0 link=no multicast=yes wireless=IEEE 802.11abg
             resources: irq:16 memory:fdef8000-fdefffff

      $ iwlist scan
      lo     Interface doesn't support scanning.
      eth0   Interface doesn't support scanning.
      wlan0  Failed to read scan data : Network is down

      $ uname -mr
      3.0.0-12-generic i686

    Edit 1:

      $ rfkill list all
      0: phy0: Wireless LAN
         Soft blocked: no
         Hard blocked: no

    On reboot, sudo lshw -C network reports the network as ok. However, WPA keeps asking for the wireless key.
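
    The 0xffffffff register reads and Tx-Power=0 dBm suggest the radio never initialises, rather than a configuration problem. Two cheap things to rule out before blaming the hardware - a sketch with stock tools:

      # Make sure the rt61 firmware is actually present and loads cleanly
      sudo apt-get install linux-firmware
      dmesg | grep -i 'rt61\|firmware'

      # Reload the driver and watch whether the register errors return
      sudo modprobe -r rt61pci
      sudo modprobe rt61pci
      dmesg | tail -n 20

    If the BBP register errors reappear immediately, a full power-off (not just a reboot) sometimes resets the card; failing that, the errors point at the card or its PCI slot rather than the driver.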

  • What economic books would you suggest for learning about economic valuation of goods and simulations thereof?

    - by Rushyo
    I'm looking to create an economic model for a game based on procedurally created goods. Every natural resource and produced good would be procedurally generated, with certain goods assigned certain uses: Fakesium might be used for the production of Weapon A and produced in Fakesium factories which use Dilithium and Widgets as reagents, where Widgets are in turn the product of Foo and Bar.

    The problem is not creating the resources and their various production utilities, but getting the game's AI empires and merchants to (addendum: somewhat) correctly value the goods according to their scarcity, utility and production costs. I need to create a simulation of goods which allows the various game factions to assign a common value denominator (credits) to each resource, depending on how much it is worth to that empire. I see the simulation being something like: "I have a high requirement for Weapon A. Since I don't have much Fakesium, which is needed for Weapon A, I must have a high demand for Fakesium. If I can acquire Fakesium, devalue it. If not, increase its value - and also increase demand for Dilithium and Widgets too."

    This is very naive, because it may be much, much cheaper for the empire to simply purchase Dilithium and Widgets directly rather than purchasing Fakesium, for example. Another example: two different resources might allow the creation of Weapon A (Fakesium and Lieron), so we'd need to consider that. I've been scratching my head over the problem and it keeps growing. By the time the player joins the world, I'd expect enough iterations of this process to have occurred that prices would have largely normalised, and it would then only trigger rarely, to compensate for major changes (e.g. if the player blows up the world's only Foo mine!).

    Could anyone suggest resources (books, largely) which outline this style of modelling, preferably in the context of simulations? Since this problem would never occur outside fantasy worlds, I figured this is probably the most likely place to find people who have encountered similar problems, and I'm sure there are people who know of good places for games developers to start looking at less specific economic theory too. Additionally, does anyone know of any developers with blogs whose games or research applications perform similar modelling?

    EDIT: I think I should underline that I'm not looking for optimal solutions. I'm looking to make the actors impulsive, making rudimentary decisions based on fuzzy inputs about what they care about or don't. I'm aiming to understand the problem area better, not derive answers. All the textbooks I've found seem to be about real-world economics or how to solve complex theoretical problems, neither of which is terribly relevant to the actors' decision making.

  • Why would I learn C++11, having known C and C++?

    - by Shahbaz
    I am a programmer in C and C++, although I don't stick to either language and write a mixture of the two. Sometimes having code in classes, possibly with operator overloading, or templates and the oh-so-great STL is obviously a better way. Sometimes the use of a simple C function pointer is much, much more readable and clear. So I find beauty and practicality in both languages. I don't want to get into the discussion of "if you mix them and compile with a C++ compiler, it's not a mix anymore, it's all C++" - I think we all understand what I mean by mixing them. Also, I don't want to talk about C vs. C++; this question is all about C++11.

    C++11 introduces what I think are significant changes to how C++ works, but it has introduced many special cases that change how different features behave in different circumstances, placing restrictions on multiple inheritance, adding lambda functions, etc. I know that at some point in the future, when you say C++, everyone will assume C++11, much as when you say C nowadays you most probably mean C99. That makes me consider learning C++11. After all, if I want to continue writing code in C++, I may at some point need to start using those features, simply because my colleagues have.

    Take C, for example. After so many years, there are still many people learning and writing code in C. Why? Because the language is good. By good I mean that it follows many of the rules for creating a good programming language. So besides being powerful (which, easy or hard, almost all programming languages are), C is regular and has few exceptions, if any. C++11, however - I don't think so. I'm not sure the changes introduced in C++11 are making the language better.

    So the question is: why would I learn C++11?

    Update: My original question, in short, was: "I like C++, but the new C++11 doesn't look good because of this and this and this. However, deep down something tells me I need to learn it. So I asked this question here so that someone would help convince me to learn it." However, the zealous people here can't tolerate pointing out a flaw in their language and were not at all constructive in this manner. After the moderator edited the question, it became more like "So, how about this new C++11?", which was not at all my question. Therefore, in a day or two I am going to delete this question if no one comes up with an actual convincing argument.

    P.S. If you are interested in knowing what flaws I was talking about, you can edit my question and see the previous edits.

  • Download files from a SharePoint site using the RSSBus SSIS Components

    - by dataintegration
    In this article we will show how to use a stored procedure included in the RSSBus SSIS Components for SharePoint to download files from SharePoint. While the article uses the RSSBus SSIS Components for SharePoint, the same process will work for any of our SSIS Components.

    Step 1: Open Visual Studio and create a new Integration Services Project.
    Step 2: Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
    Step 3: Add an RSSBus SharePoint Source to the Data Flow Task.
    Step 4: In the RSSBus SharePoint Source, add a new Connection Manager, and add your credentials for the SharePoint site.
    Step 5: Now from the Table or View dropdown, choose the name of the Document Library that you are going to back up and close the wizard.
    Step 6: Add a Script Component to the Data Flow Task and drag an output arrow from the 'RSSBus SharePoint Source' to it.
    Step 7: Open the Script Component, go to edit the Input Columns, and choose all the columns.
    Step 8: This will open a new Visual Studio instance, with a project in it. In this project add a reference to the RSSBus.SSIS2008.SharePoint assembly available in the RSSBus SSIS Components for SharePoint installation directory.
    Step 9: In the 'ScriptMain' class, add the System.Data.RSSBus.SharePoint namespace and go to the 'Input0_ProcessInputRow' method (this method's name may vary depending on the input name in the Script Component).
    Step 10: In the 'Input0_ProcessInputRow' method, you can add code to use the DownloadDocument stored procedure. Below we show the sample code:

      String connString = "Offline=False;Password=PASSWORD;User=USER;URL=SHAREPOINT-SITE";
      String downloadDir = "C:\\Documents\\";
      SharePointConnection conn = new SharePointConnection(connString);
      SharePointCommand comm = new SharePointCommand("DownloadDocument", conn);
      comm.CommandType = CommandType.StoredProcedure;
      comm.Parameters.Clear();
      String file = downloadDir + Row.LinkFilenameNoMenu.ToString();
      comm.Parameters.Add(new SharePointParameter("@File", file));
      String list = Row.ServerUrl.ToString().Split('/')[1].ToString();
      comm.Parameters.Add(new SharePointParameter("@Library", list));
      String remoteFile = Row.LinkFilenameNoMenu.ToString();
      comm.Parameters.Add(new SharePointParameter("@RemoteFile", remoteFile));
      comm.ExecuteNonQuery();

    After saving your changes to the Script Component, you can execute the project and find the downloaded files in the download directory.

    SSIS Sample Project: To help you with getting started using the SharePoint Data Provider within SQL Server SSIS, download the fully functional sample package. You will also need the SharePoint SSIS Connector to make the connection. You can download a free trial here.

    Note: Before running the demo, you will need to change your connection details in both the 'Script Component' code and the 'Connection Manager'.

  • Nvidia GT218 repository drivers don't work

    - by user1042840
    I upgraded all packages with the sudo apt-get upgrade command on my Ubuntu 10.04 box, and I now have Ubuntu 12.04 with kernel 3.2.0-29-generic-pae. I have two monitors and the following GPU:

      01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [NVS 300] (rev a2)

    After upgrading to 12.04, I somehow lost my previous setup with one common workspace stretched across two monitors. When Ubuntu starts, only one monitor is on. I can see this message on the active monitor:

      Not optimum mode. Recommended mode: 1680x1050 60Hz

    I used the Nvidia proprietary drivers on 10.04, but now jockey-text --list shows:

      xorg:nvidia_current - NVIDIA accelerated graphics driver (Proprietary, Disabled, Not in use)
      xorg:nvidia_current_updates - NVIDIA accelerated graphics driver (post-release updates) (Proprietary, Enabled, Not in use)

    When I run sudo nvidia-settings it says: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run `nvidia-xconfig` as root), and restart the X server." I ran nvidia-xconfig and rebooted, but jockey-text --list says the same after the reboot: Not in use. The same with nvidia-current: Enabled, but Not in use. I also tried nvidia-173, but I ended up in a tty immediately at startup, so I removed it.

    I used to have some problems with the Nvidia proprietary drivers on 10.04 (I had to put paths to EDID files in /etc/X11/xorg.conf explicitly), but the resolution was as recommended and both monitors were working. If I understand correctly, the nouveau drivers are now used by default, because the resolution is still quite high, definitely not 800x600. xrandr shows:

      xrandr: Failed to get size of gamma for output default
      Screen 0: minimum 320 x 400, current 1600 x 1200, maximum 1600 x 1200
      default connected 1600x1200+0+0 0mm x 0mm
         1600x1200       66.0*
         1280x1024       76.0
         1024x768        76.0
         800x600         73.0
         640x480         73.0
         640x400          0.0
         320x400          0.0
         1680x1050_60.00 (0x4f)  146.2MHz
               h: width  1680 start 1784 end 1960 total 2240 skew 0 clock 65.3KHz
               v: height 1050 start 1053 end 1059 total 1089          clock 60.0Hz

    However, colors seem a bit faded and blurry with the nouveau drivers, the mouse cursor is invisible if it's placed inside a Firefox window, and only one monitor is working. I like open source, and if possible I'd prefer to use the nouveau drivers, but a few things would have to be fixed first. I'm also curious why the nvidia-current drivers from the repository don't work now. I read it has something to do with the new X11 server in Ubuntu 12.04 - is that true?
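
    "Enabled" but "Not in use" usually means the kernel module never loaded: either the DKMS build failed for the new kernel, or nouveau grabbed the card first. A few checks (a sketch; package names as shipped in 12.04):

      dkms status                          # should list nvidia-current built for 3.2.0-29-generic-pae
      lsmod | grep -e nouveau -e nvidia    # which driver actually loaded?

      # If nouveau loaded, make sure the blacklist the nvidia package installs
      # is baked into the initramfs, then reboot
      grep -r nouveau /etc/modprobe.d/
      sudo update-initramfs -u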

  • Contract / Project / Line-Item hierarchy design considerations

    - by Ryan
    We currently have an application that allows users to create a Contract. A contract can have 1 or more Projects. A project can have 0 or more sub-projects (which can have their own sub-projects, and so on) as well as 1 or more Lines. Lines can have any number of sub-lines (which can have their own sub-lines, and so on). Currently, our design contains circular references, and I'd like to get away from that. It looks a bit like this:

      public class Contract
      {
          public List<Project> Projects { get; set; }
      }

      public class Project
      {
          public Contract OwningContract { get; set; }
          public Project ParentProject { get; set; }
          public List<Project> SubProjects { get; set; }
          public List<Line> Lines { get; set; }
      }

      public class Line
      {
          public Project OwningProject { get; set; }
          public Line ParentLine { get; set; }     // a single parent, mirroring ParentProject
          public List<Line> SubLines { get; set; }
      }

    We're using the M-V-VM "pattern" and use these models (and their associated view models) to populate a large "edit" screen where users can modify their contracts and the properties on all of the objects. Where things start to get confusing for me is when we add, for example, a Cost property to the Line. The issue is reflecting, at the highest level (the contract), changes made at the lowest level.

    I'm looking for some thoughts on how to change this design to remove the circular references. One thought I had was that the contract would have a Dictionary<Guid, Project> containing ALL projects (regardless of their level in the hierarchy). The Project would then have a Guid property called "Parent" which could be used to search the contract's dictionary for the parent object. The same logic could be applied at the Line level. Thanks! Any help is appreciated.

  • How to optimise mesh data

    - by Wardy
    So I have some procedurally generated mesh data and I want to reduce it down to its minimum number of verts. In case it matters, this is a Unity project.

    Working from a simple example, let's assume a typical flat surface of points, 2 by 3. The point/vertex at [1,1] is used in many triangles. I've generated mesh for a voxel-type engine that adds verts to a list based on face visibility, and now I want to remove all the duplicates. Can anyone come up with an efficient way of doing this? Because what I have is sooo bad it's not even funny (and I don't even think it's logically correct)...

      private void Optimize()
      {
          Vector3 v;
          Vector3 v2;
          for (int i = 0; i < Vertices.Count; i++)
          {
              v = Vertices[i];
              for (int j = i + 1; j < Vertices.Count; j++)
              {
                  v2 = Vertices[j];
                  if (v.x == v2.x && v.y == v2.y && v.z == v2.z)
                  {
                      for (int ind = 0; ind < Indices.Count; ind++)
                      {
                          if (Indices[ind] == j)
                          {
                              Indices[ind] = i;
                          }
                          else if (Indices[ind] > j && Indices[ind] > 0)
                              Indices[ind]--;
                      }
                      Vertices.RemoveAt(j);
                      Uvs.RemoveAt(j);
                      Normals.RemoveAt(j);
                  }
              }
          }
      }

    EDIT: Ok, I managed to get this (code sample above updated) to render an "optimised" set of verts, but the UV data is all wrong now, which would make sense, because I'm basically just removing any UV vector that represents a UV coord for a removed vert and not actually considering what I need to do to "fix the tri", so to speak. The code now seemingly does work, but it's quite time consuming; still looking to optimise further.

  • How can I get my ATI / AMD drivers to work with any kernel above 3.2.0.x?

    - by TorakTu
    How can I get my ATI / AMD drivers to work with any kernel above 3.2.0.x?

    WHAT DID WORK
    Installed the original AMD64 version of the Ubuntu 12.04 ISO image. Burned a DVD and installed; this gave kernel 3.2.0-23 to begin with. Got 5.1 surround sound working. Got the ATI (now AMD) video drivers for my Radeon HD R6870 video card installed from AMD's website. fglrxinfo came up and reported as normal.

    THE PROBLEM
    Kernel 3.2.0.x kept locking up, so I tried higher kernel versions. But the ATI / AMD drivers do not install on any kernel above 3.2.0.x.

    WHAT I HAVE TRIED
    I have gone over this tutorial many times (https://help.ubuntu.com/community/BinaryDriverHowto/ATI) and it doesn't work on ANY kernel except 3.2.0.x. The problem I am having is that the ATI / AMD drivers work for 12.04 Precise with kernels 3.2.0-23 and -24, but the computer kept locking up. Although all my games would work, the lock-ups were random and constant. So I looked all over the web for 3 days trying to find an answer, and the advice for the lock-up issue was simply to update the kernel. So I did. I have tried many kernels - all of them, no lock-ups. BUT the restricted AMD drivers from the AMD website will not install. And none of the open-source AMD drivers have EVER installed, no matter what kernel or version I tried.

    EXAMPLE OUTPUT OF 3D-TYPE ERRORS

      javax.media.opengl.GLException: glXGetConfig failed: error code GLX_NO_EXTENSION
        at com.sun.opengl.impl.x11.X11GLDrawableFactory.glXGetConfig(X11GLDrawableFactory.java:651)
        at com.sun.opengl.impl.x11.X11GLDrawableFactory.xvi2GLCapabilities(X11GLDrawableFactory.java:350)
        at com.sun.opengl.impl.x11.X11GLDrawableFactory.chooseGraphicsConfiguration(X11GLDrawableFactory.java:174)
        at javax.media.opengl.GLCanvas.chooseGraphicsConfiguration(GLCanvas.java:520)
        at javax.media.opengl.GLCanvas.<init>(GLCanvas.java:131)
        at haven.HavenPanel.<init>(HavenPanel.java:68)
        at haven.HavenPanel.<init>(HavenPanel.java:78)
        at haven.MainFrame.<init>(MainFrame.java:182)
        at haven.MainFrame.main2(MainFrame.java:306)
        at haven.MainFrame.access$100(MainFrame.java:34)
        at haven.MainFrame$7.run(MainFrame.java:360)
        at java.lang.Thread.run(Thread.java:722)

    And of course this is what fglrxinfo shows:

      X Error of failed request:  BadRequest (invalid request code or no such operation)
        Major opcode of failed request:  139 (ATIFGLEXTENSION)
        Minor opcode of failed request:  66 ()
        Serial number of failed request:  13
        Current serial number in output stream:  13

    EDIT: I forgot to mention that I DID look at this post over the last few days and it did not help.
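
    When the proprietary driver "will not install" on a newer kernel, the DKMS build log usually says exactly which kernel interface broke, which is quicker than reinstalling blindly. A sketch - the paths follow the standard DKMS layout, and the fglrx version directory will differ per release:

      # After a failed install, find the compile error for the running kernel
      dkms status
      tail -n 40 /var/lib/dkms/fglrx/*/build/make.log

      # Building distro packages instead of installing directly surfaces
      # failures at build time and keeps removal clean
      sudo sh amd-driver-installer-*.run --buildpkg Ubuntu/precise
      sudo dpkg -i fglrx*.deb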

  • Why does my code dividing a 2D array into chunks fail?

    - by Borog
    I have a 2D array representing my world. I want to divide this huge thing into smaller chunks to make collision detection easier. I have a Chunk class that consists only of another 2D array with a specific width and height, and I want to iterate through the world, create new Chunks and add them to a list (or maybe a Map with coordinates as the key; we'll see about that).

      world = new World(8192, 1024);
      Integer[][] chunkArray;

      for(int a = 0; a < map.getHeight() / Chunk.chunkHeight; a++)
      {
          for(int b = 0; b < map.getWidth() / Chunk.chunkWidth; b++)
          {
              Chunk chunk = new Chunk();
              chunkArray = new Integer[Chunk.chunkWidth][Chunk.chunkHeight];
              for(int x = Chunk.chunkHeight*a; x < Chunk.chunkHeight*(a+1); x++)
              {
                  for(int y = Chunk.chunkWidth*b; y < Chunk.chunkWidth*(b+1); y++)
                  {
                      // Yes, the tileMap actually is [height][width] -- I'll have
                      // to fix that somewhere down the line -.-
                      chunkArray[y][x] = map.getTileMap()[x*a][y*b]; // TODO: attach to chunk
                  }
              }
              chunkList.add(chunk);
          }
      }
      System.out.println(chunkList.size());

    The two outer loops get a new chunk at a specific row and column. I do that by dividing the overall size of the map by the chunk size. The inner loops then fill a new chunkArray and attach it to the chunk. But somehow my maths is broken here. Let's assume chunkHeight = chunkWidth = 64. For the first array I want to start at [0][0] and go until [63][63]. For the next I want to start at [64][64] and go until [127][127], and so on. But I get an out-of-bounds exception and can't figure out why. Any help appreciated!

    Actually, I think I know where the problem lies: chunkArray[y][x] can't work, because y goes from 0-63 only in the first iteration. Afterwards it goes from 64-127, so of course it is out of bounds. Still no nice solution, though :/

    EDIT:

      if (y < Chunk.chunkWidth && x < Chunk.chunkHeight)
          chunkArray[y][x] = map.getTileMap()[y][x];

    This works for the first iteration... now I need the generally correct formula.

  • Zenoss Setup for Windows Servers

    - by Jay Fox
    Recently I was saddled with standing up Zenoss for our enterprise. We're running about 1200 servers, so manually touching each box was not an option. We use LANDesk for a lot of automated installs and patching - more about that later.

    The steps below may not necessarily have to be completed in this order - it's just the way I did it.

    STEP ONE:
    Set up a standard AD user. We want to do this so there's minimal security exposure. Call the account whatever you want; "domain/zenoss" for our examples.

    STEP TWO:
    Make the following local groups accessible by your zenoss account:

      Distributed COM Users
      Performance Monitor Users
      Event Log Readers (which doesn't exist on pre-2008 machines)

    Here's the PowerShell script I used to set up access to these local groups:

      # Created to add an Active Directory account to local groups.
      # Must be run from an elevated prompt, with permissions on the remote machine(s).
      # The txt file should contain the names of the machines that need the account added, one per line.
      # The script will process machines line by line.
      foreach($i in (gc c:\tmp\computers.txt)){
          # Add the user to the first group
          $objUser=[ADSI]("WinNT://domain/zenoss")
          $objGroup=[ADSI]("WinNT://$i/Distributed COM Users")
          $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
          # Add the user to the second group
          $objUser=[ADSI]("WinNT://domain/zenoss")
          $objGroup=[ADSI]("WinNT://$i/Performance Monitor Users")
          $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
          # Add the user to the third group - group doesn't exist on < Server 2008
          #$objUser=[ADSI]("WinNT://domain/zenoss")
          #$objGroup=[ADSI]("WinNT://$i/Event Log Readers")
          #$objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
      }

    STEP THREE:
    Set up security on the machine's namespace so our domain/zenoss account can access it. The default namespace for Zenoss is root/cimv2. Here's the PowerShell script:

      # Grant the account defined below access to the WMI namespace.
      # Has to be run as an account with permissions on the remote machine.
      function get-sid
      {
          Param ($DSIdentity)
          $ID = new-object System.Security.Principal.NTAccount($DSIdentity)
          return $ID.Translate( [System.Security.Principal.SecurityIdentifier] ).toString()
      }
      $sid = get-sid "domain\zenoss"
      $SDDL = "A;;CCWP;;;$sid"
      $DCOMSDDL = "A;;CCDCRP;;;$sid"
      $computers = Get-Content "c:\tmp\computers.txt"
      foreach ($strcomputer in $computers)
      {
          $Reg = [WMIClass]"\\$strcomputer\root\default:StdRegProv"
          $DCOM = $Reg.GetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction").uValue
          $security = Get-WmiObject -ComputerName $strcomputer -Namespace root/cimv2 -Class __SystemSecurity
          $converter = new-object system.management.ManagementClass Win32_SecurityDescriptorHelper
          $binarySD = @($null)
          $result = $security.PsBase.InvokeMethod("GetSD",$binarySD)
          $outsddl = $converter.BinarySDToSDDL($binarySD[0])
          $outDCOMSDDL = $converter.BinarySDToSDDL($DCOM)
          $newSDDL = $outsddl.SDDL += "(" + $SDDL + ")"
          $newDCOMSDDL = $outDCOMSDDL.SDDL += "(" + $DCOMSDDL + ")"
          $WMIbinarySD = $converter.SDDLToBinarySD($newSDDL)
          $WMIconvertedPermissions = ,$WMIbinarySD.BinarySD
          $DCOMbinarySD = $converter.SDDLToBinarySD($newDCOMSDDL)
          $DCOMconvertedPermissions = ,$DCOMbinarySD.BinarySD
          $result = $security.PsBase.InvokeMethod("SetSD",$WMIconvertedPermissions)
          $result = $Reg.SetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction",$DCOMbinarySD.binarySD)
      }

    STEP FOUR:
    Get the SID for our zenoss account:

      # Given an AD user, get the SID
      $objUser = New-Object System.Security.Principal.NTAccount("domain", "zenoss")
      $strSID = $objUser.Translate([System.Security.Principal.SecurityIdentifier])
      $strSID.Value

    STEP FIVE:
    Modify the Service Control Manager to allow access to the zenoss AD account. This command can be run from an elevated command line or through PowerShell:

      sc sdset scmanager "D:(A;;CC;;;AU)(A;;CCLCRPRC;;;IU)(A;;CCLCRPRC;;;SU)(A;;CCLCRPWPRC;;;SY)(A;;KA;;;BA)(A;;CCLCRPRC;;;PUT_YOUR_SID_HERE_FROM_STEP_FOUR)S:(AU;FA;KA;;;WD)(AU;OIIOFA;GA;;;WD)"

    In step two the script plows through a txt file, processing each computer listed on each line. For the other scripts, I ran them on each machine using LANDesk. You can probably edit those scripts to process a text file as well.

    That's what got me off the ground monitoring the machines using Zenoss. Hopefully this is helpful for you. Watch the line breaks when copying the scripts.

  • How do we provide valid time estimates during Sprint Planning without doing "too much" design?

    - by Michael Edenfield
    My team is getting up to speed with Scrum, but most of us are more familiar with non-agile or "pseudo-"agile methodologies. The part that is the biggest hurdle for us is running an efficient Sprint Planning meeting where we break our backlog items into tasks and estimate hours. (I'm using the terminology from the VS2010 Scrum Template; apologies if I use the wrong word somewhere.)

    When we try to figure out how long a task is going to take, we often fall into the trap of designing the feature at the code level -- table layout, interfaces, etc -- in order to figure out how long that's going to take. I'm pretty sure this is not the appropriate place to be doing that kind of design. We should be scheduling tasks for these design meetings during the sprint. However, we are having trouble figuring out how else to come up with meaningful estimates for the tasks.

    Are there any practical habits/techniques/etc. for making a judgement call about how long a feature is going to take, without knowing how you plan to implement it? If our time estimates are going to change significantly once the design has been completed, how can we properly budget our Sprint backlog ahead of time?

    EDIT: Just to clarify, since some of the comments/answers are very valid but I think are addressing the wrong question. We know that what we're doing is not right, and that we should be building time into the sprint for this design. Conceptually all of the developers understand that. We are also bringing in a team member with Scrum experience to keep us on track if we start going off into the weeds.

    The problem is that, without going through this design process, we are finding it difficult to provide concrete time estimates for anything. We are constantly saying things like "well, if we design it this way it might take 8 hours, but if we end up having to do it this other way instead, that will take about 32, but it might not be as bad once we start trying to write it...". I also assume that this process will get better once we have some historical velocity to work from, but many of the technologies and architectural patterns we are using are new to us. But if potentially-wildly-wrong estimates are just a natural part of adopting this process, then we will just need to recondition ourselves to accept that :)

  • Unable to ping inside or outside network with default gateway 0.0.0.0

    - by agentroadkill
    I've been around here before and could usually piece everything together to more or less get myself up and running, but this time I'm truly stumped. I'm trying to connect my new 14.04 install to a network, and I'm forced to be behind my college's router. Now, I've tested the very cable that is right now plugged into my Ubuntu box on a Windows machine, a Mac OS X machine, and even my friend's Ubuntu 14.04 box, and they all connect with no problem. I've been trying to track this down for about two days, but every time I get close to it, the bug jumps to some other piece of my connection.

    Anyway, as it sits, ifconfig -a gives:

      eth2   Link encap:Ethernet  HWaddr 00:1f:bc:08:31:1d
             inet addr:10.32.51.51  Bcast:10.32.51.155  Mask:255.255.255.0
             UP BROADCAST MULTICAST  MTU:1500  Metric:1
             RX packets:0 errors:0 dropped:0 overruns:0 frame:0
             TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
             RX bytes:0  TX bytes:0

    as well as the local loopback, but I'm assuming that is not an issue here.

    sudo dhclient -v eth2 returns:

      Listening on LPF/<hardware address of my integrated NIC, above>
      Sending on   LPF/<same>
      Sending on   Socket/fallback
      DHCPREQUEST of 10.32.51.51 on eth2 to 255.255.255.255 port 67 (xid=0x6f4a66ba)
      <two more lines of the same>
      DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 3 (xid=0x156f9fb4)
      <many more of the above, with varying intervals>
      No DHCPOFFERS received.
      Trying recorded lease 10.32.51.51
      RTNETLINK answers: File exists
      bound: renewal in <large number> seconds

    If I then try ping 8.8.8.8, I get:

      connect: Network is unreachable

    /etc/resolv.conf contains only the two lines telling you not to edit it, while /etc/network/interfaces has only the loopback interface block in it. I've tried commenting out the "option rfc3442" line in /etc/dhcp/dhclient.conf, which seemed to fix this issue for many people, as well as adding the line send vendor-class-identifier "MSFT5.0" to dhclient.conf to tell the router I'm a Windows box, in case it doesn't like Linux. Finally, route -n reveals:

      Destination   Gateway   Genmask         Flags Metric Ref Use Iface
      10.32.51.0    0.0.0.0   255.255.255.0   U     0      0   0   eth2

    I would like to apologize in advance for the doubtless butchered text alignment, but I'm obviously typing this all by hand, reading from the terminal as I type commands. I'm hoping this is an interesting problem, and not something I blithely stumbled past in my (apparent) over-confidence. TIA!

    Quick addendum before posting: the activity lights on the ethernet port are lit, and one blinks during boot, but they rarely (and seemingly randomly) do so afterwards (both are dark), even while running dhclient in the foreground. When I had the Ubuntu box tethered to my MacBook earlier, I got what looked like a normal power/uplink blinking pattern, but was unable to ping one machine from the other.
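
    With no DHCPOFFER ever arriving, the stale recorded lease gets reused but no default route is installed - which is exactly what "Network is unreachable" for 8.8.8.8 means. As a quick test (a sketch; the gateway address here is an assumption, since the college's router may sit anywhere on the /24):

      # Assume the router is the usual .1 on this subnet; confirm with the college IT
      sudo ip route add default via 10.32.51.1 dev eth2
      ping -c 3 10.32.51.1     # can we even reach the gateway?
      ping -c 3 8.8.8.8        # and past it?

    If even the gateway ping fails while the link lights stay dark, the port may require 802.1X authentication or MAC registration, which would also explain why the same cable works fine on already-registered machines.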
