Search Results

Search found 996 results on 40 pages for 'jim anderson'.

Page 10 of 40

  • Ubuntu 12.04 and Nvidia GTX 550 Ti

    - by Jim
    OK, I'm currently trying to install Ubuntu Server 12.04 x64 onto the following machine: Intel Core i5-2320 3.0GHz LGA1155 6MB, 8 GB DDR3 RAM, Gigabyte Z68P-DS3 S1155 Intel Z68 DDR3 ATX motherboard, OCZ 60 GB SSD, and 3x Samsung 2TB drives in a RAID 5 array (via the motherboard). What I think is causing the issue is the video card: an EVGA GeForce GTX 550 Ti 951MHz 1GB PCI-Express HDMI FPB. Since the server CD works in text mode, I had no problem actually installing Ubuntu. Partitioned with: 1GB /boot on the SSD, 59GB / on the SSD, 10GB swap on the RAID5, and ~4TB /home on the RAID5. On a straight boot you briefly see the GRUB menu, followed by a blank screen; the keyboard and mouse blink as they are initialised, but there is no sign of life from the screen. After a bit of research (otherwise known as Google), I booted with "quiet splash nomodeset" and now have a fully working Linux distro at the command prompt. I then tried to update the Nvidia drivers with apt-get (after updating repositories etc.) and rebooted: still the same problem. I also tried reinstalling from the CD and installing said drivers during the install process, before GRUB was installed: still the same symptoms. Does anybody have any solutions? I'm at my wits' end here; I bought this machine to be a Linux server / tinkering machine and have just spent 4-5 hours trying to get a basic install working.
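
    A minimal sketch of the usual next step on a box like this, assuming the stock 12.04 repositories (the package name "nvidia-current" is an assumption; check "apt-cache search nvidia" on your machine):

        # Edit /etc/default/grub so the kernel always gets nomodeset:
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
        sudo nano /etc/default/grub
        sudo update-grub

        # Install the packaged proprietary driver, then reboot
        sudo apt-get update
        sudo apt-get install nvidia-current
        sudo reboot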

    Read the article

  • Is there a precedent for the license on a compiler restricting the kind of development you can use it for?

    - by Jim McKeeth
    It was recently let slip that the new EULA for Delphi XE3 will prohibit client/server development with the Professional edition unless you also purchase a Client Server license pack. This is not to say the Professional version will lack the features; rather, the license will specifically prohibit the developer from using the compiler for a specific class of development, even with third-party or home-grown solutions. So my question is whether there is a precedent for a compiler or similar creative tool prohibiting the class of work you can use it for, specifically a commercially licensed "professional" tool like Delphi XE3. Also, would such a restriction be legally enforceable? I know there have been educational-edition or starter-edition tools in the past that restricted their use for commercial purposes, but those were not sold as "professional" tools. I also know that a lot of computing software and equipment carries a disclaimer that it is not for use in "life support equipment" or "nuclear power", but that is more about avoiding liability than prohibiting activity. I seem to recall Microsoft putting a restriction in FrontPage that you couldn't use it to create a web site that reflected poorly on Microsoft, but they pulled that restriction before it could be tested legally.

    Read the article

  • How does PHP internally represent strings?

    - by Jim Thio
    UTF-8? UTF-16? Does a string in PHP also keep track of the encoding used for that string? A good answer would include a sample. Let's look at this script, for example. Say I do $original = "??????????????"; What actually happens? Obviously I think $original will not contain just 7 characters; each of those glyphs must be represented by several bytes. Then I do $converted = mb_convert_encoding($original, "UTF-8"). What will happen to $converted? How will $converted differ from $original? Will it be the exact same byte sequence as $original but with a different encoding? Or what?
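
    A small sketch of the underlying behaviour, which a reader can verify: PHP 5.x strings are plain byte sequences with no attached encoding, so "length" depends on which function you ask. The sample string here is an arbitrary stand-in for the one above:

        <?php
        // PHP strings are raw byte arrays; no encoding travels with them.
        $original = "日本語";                  // 3 glyphs, 9 bytes as UTF-8

        echo strlen($original);                // 9: counts bytes, not characters
        echo mb_strlen($original, "UTF-8");    // 3: counts characters per encoding

        // Conversion builds a new byte sequence in the target encoding;
        // the source encoding is the third argument (else mb_internal_encoding()).
        $converted = mb_convert_encoding($original, "UTF-16", "UTF-8");
        echo strlen($converted);               // 6: two bytes per BMP character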

    Read the article

  • Is it a good programming practice to have a class with several .h files?

    - by Jim Thio
    I suppose a class can have several different interfaces: some it shows to one class, some it shows to other classes. Are there any good reasons for that? One thing I can think of is that with one .h per class, an interface would be either public or private. What if I want some of the interface to be available to a few friend classes and some of it to be truly public? Sample:

        @interface listNewController : BadgerStandardViewViewController
            <UITableViewDelegate, UITableViewDataSource, UITextFieldDelegate,
             NSFetchedResultsControllerDelegate, UIScrollViewDelegate,
             UIGestureRecognizerDelegate>
        {
        }

        @property (nonatomic) IBOutlet NSFetchedResultsController *FetchController;
        @property (nonatomic) IBOutlet UITextField *searchBar1;
        @property (nonatomic) IBOutlet UITableView *tableViewA;

        + (listNewController *)singleton; // For easier access
        - (void)collapseAll;
        - (void)TitleViewClicked:(TitleView *)theTitleView;
        - (NSUInteger)countOfEachSection:(NSInteger)section;

        @end

    Many of those public properties and methods are only ever called by one other class, and I wonder why I need to make them available to many classes. It's in Objective-C, by the way.
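
    For reference, the common Objective-C pattern for exactly this split is a class extension for truly private members plus a separate category header that only the "friend" classes import; a minimal sketch (file and member names here are illustrative, not from the question):

        // listNewController.h : the truly public interface
        @interface listNewController : UIViewController
        + (listNewController *)singleton;
        @end

        // listNewController+Internal.h : imported only by the few "friend" classes
        @interface listNewController (Internal)
        - (void)collapseAll;
        - (NSUInteger)countOfEachSection:(NSInteger)section;
        @end

        // listNewController.m : class extension, visible to this file alone
        @interface listNewController ()
        @property (nonatomic) IBOutlet UITableView *tableViewA;
        @end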

    Read the article

  • How do I set Ctrl+Alt+something for Audacious global hotkeys?

    - by Jim
    I want to set up global hotkeys in Audacious as follows, but when I try to set them up it doesn't allow two mask (modifier) keys:

        Ctrl+Alt+Insert for play
        Ctrl+Alt+Home for pause
        Ctrl+Alt+End for stop
        Ctrl+Alt+Page Up for previous track
        Ctrl+Alt+Page Down for next track
        Ctrl+Alt+Up Arrow for volume up
        Ctrl+Alt+Down Arrow for volume down
        Ctrl+Alt+Left Arrow for skip back
        Ctrl+Alt+Right Arrow for skip forward

    Read the article

  • Ubuntu One Preferences does not show usage, name, e-mail, or current plan

    - by Jim
    Ubuntu One is correctly synchronizing selected files between two computers running Ubuntu 10.10. When I open Ubuntu One Preferences on one computer, the Account tab does not display the usage, name, e-mail, or current plan; on the other computer all of that information is shown correctly. Likewise, on the Devices tab the two computers are not shown, while they do show correctly on the other computer. Any ideas on how to fix this problem? I have already reinstalled Ubuntu One per this link: https://wiki.ubuntu.com/UbuntuOne/FAQ/HowDoICompletelyRemoveAndReinstallUbuntuOne

    Read the article

  • Is there a recommended approach for using SQL Server as an Authorization store and extending AD properties using .Net? [closed]

    - by Jim
    We are going to use SQL Server as an authorization store for our .NET Windows services and WCF services, as well as to store additional metadata about users and groups that extends the AD properties. Doing this will make the system self-service, so IT won't need to change anything for our department (for users or groups). What existing strategies or technologies, if any, are recommended for this?

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I recently was tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct ("WTF??!?") is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:

        User table:
            UserID (primary key, indexed, no dupes)
            FirstName
            LastName
            Department

        Request table:
            RequestID (primary key, indexed, no dupes)
            <...> various data fields containing request details
            UserID (foreign key associated with User table)

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
            UserID  FirstName  LastName
            234     John       Doe
            516     Jane       Doe
            123     Foo        Bar

        DepartmentsTable
            DepartmentID  Name
            1             Sales
            2             HR
            3             IT

        UserDepartmentTable
            UserDepartmentID  UserID  Department
            1                 234     2
            2                 516     2
            3                 123     1

        RequestTable
            RequestID  UserID  <...>
            1          516     blah
            2          516     blah
            3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table and numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the "speed of integer lookups". He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers...
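
    For scale, a sketch of what one lookup costs under the consultant's layout, using the table and column names exactly as given above; nothing exotic, just one extra join versus the simpler schema:

        SELECT r.RequestID, u.FirstName, u.LastName, d.Name AS Department
        FROM RequestTable r
        JOIN UsersTable u           ON u.UserID = r.UserID
        JOIN UserDepartmentTable ud ON ud.UserID = u.UserID
        JOIN DepartmentsTable d     ON d.DepartmentID = ud.Department
        WHERE r.RequestID = 3;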

    Read the article

  • Nvidia driver installation results in resolution change to 800x600 with Xubuntu 10.04 (can't change)

    - by Jim Michael
    I have a 2.8 GHz P4 and an Nvidia FX 5500 AGP graphics card, and I've installed Xubuntu 10.04. It is WAY too laggy with the default OS driver. Installing the Nvidia 173 driver, modaliases, and nvidia-settings packages via Synaptic package manager results in the following error message: "An error occurred, please run Package Manager from the right click menu or apt-get in a terminal to see what is wrong. Error: Opening the cache (E::read, still have 11898251 to read but none left, E: The package lists or status file could not be parsed or opened.) This usually means that your installed packages have unmet dependencies." When restarting the PC, the resolution drops to 800x600 (from the monitor's native 1440x900). Nvidia settings cannot be changed, either from the Xfce menu or from Nvidia X Server Settings, which gives the following error message: "You do not appear to be using the Nvidia X driver. Please edit your X configuration file (just run 'nvidia-xconfig' as root) and restart X server." Also, I can't find a file called xorg.conf in any directory.
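
    A sketch of the usual two-step recovery, under the assumption that the apt cache is simply corrupt (the error text above is the typical symptom) and that the 173 driver package did get installed:

        # Rebuild the package lists that apt says it cannot parse
        sudo rm -f /var/lib/apt/lists/*
        sudo apt-get update

        # Generate an /etc/X11/xorg.conf that loads the nvidia driver, then restart X
        sudo nvidia-xconfig
        sudo reboot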

    Read the article

  • Can anyone point me to some open source DirectX rendering engines or frameworks? [on hold]

    - by Jim
    I'm completely new to graphics API programming, but not at all new to the theory and operating principles of game engines and rendering engines. That being said, I want to run some experiments rendering very dense geometry scenes in a basic rendering engine or game engine. I don't need a lot of bells and whistles. What I need is enough control that I can implement my own scene graph algorithms and control the rendering pipeline very specifically. My ideal candidate would be a rendering engine or game engine with a modular design that is ready to go out of the box but simple enough that I can rip out some of the guts of the rendering management and implement my own. It's a tough call, because I'm right at the level where it's almost better to start from scratch, but there's no sense in having to build every basic thing such as hierarchical transforms, etc. I just want to work on rendering optimization to push dense geometry for maximum FPS. Does anyone have a suggestion for an engine or basic framework to use? I requested DirectX in the title because I figured it would likely be better supported and less likely to hit some obscure, less-documented problem, but OpenGL would be acceptable if the recommended framework is definitely better than my other options. EDIT: I should add that I really want GPU tessellation support (to add to the density of geometry detail).

    Read the article

  • Using Drupal to build a directory listing? [on hold]

    - by Jim
    I am trying to create a form inside Drupal that will let me build a directory similar to this diagram: http://i.imgur.com/EtChBbG.jpg I tried looking into HTML tables, but they are too basic for what I'm trying to do. How do I create a directory that can archive data in alphabetical order? It also has to be able to sort by letter and by other categories. Does anyone have an idea of how I should go about doing this? Thanks!

    Read the article

  • usb keyboard stopped working in terminal window after update from 13.04 to 13.10

    - by Jim
    When I start the computer, I am logged in automatically. Using the keyboard, I type Ctrl+Alt+T, which brings up a terminal window, just like it should; however, nothing happens when I attempt to type into the terminal. If I change over to the guest account, I can type into the terminal, but (different problem here; I'll ask it separately unless someone is kind enough to answer it here) my password doesn't work for sudo or anything else. Elsewhere I read that Language Support could fix it, but that won't accept my password either, and no, I haven't changed it lately. Also, my Plex Media Server seems to have disappeared. I'll re-install everything if necessary, but I sure would rather avoid that if possible.

    Read the article

  • Good reasons in C++ for NOT using symmetrical memory management (i.e. new and delete)

    - by Jim G
    I'm trying to learn C++ and programming in general. Currently I am studying open source code with the help of UML. Learning is my hobby, and a great one too. My understanding of memory allocation in C++ is that it should be symmetrical: a class is responsible for its resources, and if memory is allocated using new, it should be returned using delete in the same class. It is like a library: you, the class, are responsible for the books you have borrowed, and you return them when you are done. This, in my mind, makes sense; it makes memory management more manageable, so to speak. So far so good. The problem is that this is not how it works in the real world. In Qt, for instance, you create QObjects with new and then hand ownership of the object over to Qt. In other words, you create QObjects and Qt destroys them for you: asymmetrical memory management. Obviously the people behind Qt must have a good reason for doing this; it must be beneficial in some way. My questions are: What is the problem with Bjarne Stroustrup's idea of symmetrical memory management contained within a class? What do you gain by splitting new and delete so that you create an object and destroy it in different classes, as you do in Qt? Is it common to split new and delete in projects not involving Qt, and if so, why? Thanks for any help shedding light on this mystery!
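
    For context, a minimal sketch of the Qt idiom being asked about: the parent takes ownership at construction, so the matching delete lives in QObject's destructor rather than in the class that called new:

        #include <QObject>

        int main() {
            QObject *parent = new QObject;
            // Created here, but ownership passes to 'parent' immediately:
            QObject *child = new QObject(parent);
            (void)child;       // silence unused-variable warnings
            delete parent;     // also deletes 'child': asymmetric by design
            return 0;
        }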

    Read the article

  • Data recovery on a Samsung SV1203N HDD

    - by Jim Conace
    Let me start by saying that I am a beginner in Ubuntu, but pretty knowledgeable with Windows. I am having a strange problem trying to recover data from a Samsung SV1203N HDD (120 GB). I have tried numerous things in both Windows 7 and Ubuntu 12.04 LTS (from a flash drive, an external HDD, and a live CD). This drive has some very important data on it, and I am praying some of you Ubuntu geeks can help me get it back. Here's my problem: the HDD clicks while I am booting, but it stops when I get into Ubuntu or Windows. It refuses to be detected in the BIOS, so I can't perform any tests on it. I have tried numerous things in both Windows (repair CD, jumpers, etc.) and Ubuntu (Boot-Fix, GParted, TestDisk, PhotoRec, forcing a mount, etc.), but it all leads back to the fact that the drive is not being recognized in the BIOS. I've even tried chilling the drive in the fridge, which worked well for another drive I worked on, where I recovered all of the data in Ubuntu flawlessly. Since the drive stops clicking once I get into the OS, I assume there is hope for recovery. I am going to try an IDE/SATA-to-USB cable and replacing the logic board, but I want to exhaust all other possibilities before I do that. Any help would be greatly appreciated. Bye
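
    If the USB bridge does make the drive appear, the standard first move is a read-only image with GNU ddrescue before any repair attempts; a sketch, assuming the drive shows up as /dev/sdb (check dmesg for the real name):

        sudo apt-get install gddrescue

        # Pass 1: copy the easy sectors, logging progress so the run can resume
        sudo ddrescue -n /dev/sdb samsung.img samsung.log

        # Pass 2: go back and retry the bad areas a few times
        sudo ddrescue -r3 /dev/sdb samsung.img samsung.log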

    Read the article

  • Windows Installer using usb drive for temp purposes

    - by Douglas Anderson
    When installing apps that are built around Windows Installer, it would appear that the installer often uses my external USB hard disk (when it's connected) as the temp location while it expands and installs the application (it creates a folder off the root with a GUID name). Is there any way to change this so it always defaults to a specific drive? This appears to be the case on Windows Vista and 7; I'm not sure about previous releases. EDIT: My current environment variables look like this:

        TEMP=C:\Users\<me>\AppData\Local\Temp
        TMP=C:\Users\<me>\AppData\Local\Temp

    EDIT: I have a funny suspicion that it's using the drive with the largest available free space.
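
    One hedged experiment, not a guaranteed fix, since Windows Installer may not consult TMP/TEMP for every operation: override the variables for a single console session and launch the package from that session (paths are illustrative):

        :: Create a temp folder on the drive you want used
        mkdir C:\Temp

        :: Redirect the standard temp variables for this session only
        set TEMP=C:\Temp
        set TMP=C:\Temp

        :: Launch the package from the same session so it inherits the override
        msiexec /i "C:\path\to\installer.msi"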

    Read the article

  • Exim - send certain "local" user mail to smtp

    - by Ryan Anderson
    I'm using Exim 3 and would like to know how to hand certain local addresses to the SMTP server instead of having Exim deliver them as a local user. They are local addresses in the sense that they have the same domain as listed in 'local_domains' in exim.conf. I tried using the "require_files" option on the localuser director in exim.conf, but with no luck. Any help appreciated. Thanks, Ryan

    Read the article

  • Automatically Applying Security Updates for AWS Elastic Beanstalk

    - by Eric Anderson
    I've been a fan of Heroku since its earliest days, but I like the fact that AWS Elastic Beanstalk gives you more control over the characteristics of the instances. One thing I love about Heroku is that I can deploy an app and not worry about managing it; I am assuming Heroku ensures all OS security updates are applied in a timely manner, so I just need to make sure my app is secure. My initial research on Beanstalk shows that although it builds and configures the instances for you, after that it moves to a more manual management process, and security updates won't automatically be applied to the instances. There seem to be two areas of concern:

        1. New AMI releases. As new AMI releases hit, it seems we would want to run the latest (presumably most secure), but my research indicates you need to manually launch a new setup to see the latest AMI version and then create a new environment to use it. Is there a better, automated way of rotating your instances onto new AMI releases?

        2. In between releases, security updates will be released for individual packages, and it seems we want to apply those as well. People apparently install commands that occasionally run a yum update, but since instances are created and destroyed based on usage, a new instance will not always have the updates (i.e. in the window between instance creation and its first yum update), so occasionally you will have unpatched instances, and you will also have instances constantly patching themselves until the new AMI release is applied (a common mitigation is sketched below).

    My other concern is that these package updates may not have gone through Amazon's own review (as AMI releases do), and applying them automatically might break my app. I know Dreamhost once had a 12-hour outage because they applied Debian updates completely automatically, without any review; I want to make sure the same thing doesn't happen to me. So my question is: does Amazon provide a way to offer fully managed PaaS like Heroku, or is AWS Elastic Beanstalk really more of an install script, after which you are on your own (apart from the monitoring and deployment tools it provides)?
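
    On the package-update concern, the usual workaround is an .ebextensions file in the application bundle, so each freshly provisioned instance patches itself at creation time; a hedged sketch (file name arbitrary; this still leaves the window before the command first runs, and on older Amazon Linux the --security flag needs yum-plugin-security):

        # .ebextensions/01-security-updates.config
        commands:
          01_yum_security_update:
            command: yum update --security -y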

    Read the article

  • Running Safari from the command line adds current directory to the URL

    - by Charles Anderson
    I am trying to run the Safari browser (on Mac OS X 10.4) from the command line, as follows: /Applications/Safari.app/Contents/MacOS/Safari http://localhost/dev/myfile.html However, Safari starts up and tries to access file:///Users/charlesanderson/scripts/http://localhost/dev/myfile.html, where /Users/charlesanderson/scripts happens to be my current directory. Can someone explain why Safari does this? Firefox is much better behaved.
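
    For what it's worth, the usual way to hand a URL to an application bundle from the shell is open(1), which routes the argument through Launch Services instead of letting the binary treat it as a file path; a sketch:

        # Launch (or reuse) Safari with a real URL argument
        open -a Safari http://localhost/dev/myfile.html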

    Read the article

  • What configuration changes can I make to speed up extremely slow Windows VMs in ESXi 4.0?

    - by Shawn Anderson
    I've recently moved from VMware Server to ESXi 4.0, running on a Dell T310. My VMs have been restored, but they are running dog slow compared to VMware Server. I loaded ESXi 4.0 using only default values. Where are some areas where I can tweak the performance? Even logging onto the VMs can be extremely sluggish, and trying to install software on any of them is a new experience in pain. The host is a Dell PowerEdge T310: Xeon X3460 2.80 GHz, 32 GB RAM, 1 HD (2 TB). I have 16 VMs on this server, but only six or so will be running during my testing. I keep an eye on the Resource Allocation and Performance tabs for the host, and I never see CPU or RAM getting anywhere close to pegged. The Events tab does show some notices for video RAM issues and some hints on Windows activation issues, but nothing that would point to the sort of sluggishness I'm experiencing. The guests are: 1 Windows Server 2008 R2 (64-bit) with 4 GB RAM, 1 Windows 7 (32-bit) with 2 GB RAM, 1 Vista (32-bit) with 1 GB RAM, and 3 XP (32-bit) with 1 GB RAM each. Over to you! Thanks - Shawn

    Read the article

  • How well will ntpd work when the latency is highly variable?

    - by JP Anderson
    I have an application where we are using some non-standard networking equipment (which cannot be changed) that goes into a dormant state between traffic bursts. The network latency is very high for the first packet, since that packet essentially wakes the system, waits for it to reconnect, and then makes the first round trip. Subsequent messages (provided they arrive within the next minute or so) are much faster, but still highly latent. A typical set of pings looks like: 2500ms, 900ms, 880ms, 885ms, 900ms, 890ms, etc. Given that NTP uses several round trips before computing the offset, how well can I expect ntpd to work over this kind of link? Will the initially slow first round trip be ignored, based on the much different (and faster) following messages to/from the NTP server? Thanks and regards.
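
    A point worth knowing here: ntpd's clock filter keeps the last eight samples per peer and prefers the lowest-delay one, so a single slow wake-up exchange tends to be discarded on its own. A sketch of the conf tuning sometimes used on links like this, so fresh low-delay samples are always available (the server name is a placeholder):

        # /etc/ntp.conf
        # Poll every 16-64 s instead of the default 64-1024 s,
        # and burst at startup for a faster initial sync
        server ntp.example.com iburst minpoll 4 maxpoll 6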

    Read the article

  • nginx - Redirect specific page paths to https while keeping everything else on http (in a single server call)?

    - by Kris Anderson
    From what I've gathered so far, it's clear that if statements in nginx should be avoided at all costs. Most of the examples I've found regarding specific-page redirects involve multiple server blocks, but isn't that a bit wasteful? I'm not sure, but I would think multiple server blocks would be somewhat slower than a single one under heavy load. My current server block is this:

        server {
            listen 10.0.0.60:80;
            listen 10.0.0.60:443 default ssl;
            # other code
        }

    What I want to do is redirect certain http requests to https. For example, I want /login/ and /my-account/ to always be forced to use SSL; if you're on /help/, though, I want that served over the default http. Is there a way to accomplish this within a single server block, or is there no downside to using two server blocks? nginx seems to be under pretty active development, and a lot of the older guides I followed are from times when you couldn't listen for port 80 and 443 within the same server block. Now that nginx supports that (I'm running 1.2.4), I'm wondering if there's a "best practice" way of handling this today. Any help would be greatly appreciated. EDIT: I did find this guide: http://redant.com.au/blog/manage-ssl-redirection-in-nginx-using-maps-and-save-the-universe/ and I updated my code as follows:

        map $uri $my_preferred_proto {
            default "http";
            ~^/#/user/login "https";
        }

        server {
            listen 10.0.0.60:80;   ## listen for ipv4; this line is default and implied
            listen 10.0.0.60:443 default ssl;

            if ($my_preferred_proto = "none") {
                set $my_preferred_proto $scheme;
            }
            if ($my_preferred_proto != $scheme) {
                return 301 $my_preferred_proto://mysite.com$request_uri;
            }
        }

    It's not working, though. When I change the default to https, everything is redirected to SSL, so it does somewhat work, but the redirect of /#/user/login is not happening. Any ideas? Also, is this a good way to go about this?
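
    One detail a reader can verify: everything after # in a URL is a fragment that the browser never sends to the server, so no nginx rule can ever match /#/user/login; the map has to key on a path the server actually receives. A sketch under that assumption, using the /login/ and /my-account/ paths mentioned above:

        map $uri $my_preferred_proto {
            default          "http";
            ~^/login/        "https";
            ~^/my-account/   "https";
        }

        server {
            listen 10.0.0.60:80;
            listen 10.0.0.60:443 default ssl;

            if ($my_preferred_proto != $scheme) {
                return 301 $my_preferred_proto://mysite.com$request_uri;
            }
        }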

    Read the article

  • How do I create a read-only-root Linux that can be remounted writeable for persistent changes?

    - by Mr Anderson
    I'd like a read-only file system that runs almost entirely in RAM, but where the compact flash card or hard drive can be mounted and made writeable to make persistent changes. How do I do this on Linux? I've looked at several tutorials, but none really explain how to create such a system with the option of mounting the storage device and making persistent changes. I've looked at this so far: http://chschneider.eu/linux/thin_client/ I also looked on the old Gentoo wiki, but the article was very specific to Gentoo. I'll be using a Debian-based Linux, but it would be nice if someone could explain how to do this in fairly generic instructions that would work on any distro. Thanks.
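
    A minimal sketch of the generic pattern, assuming an ext4 root on /dev/sda1: mark the root read-only in fstab, put the chatty directories on tmpfs so they live in RAM, and remount read-write only while making a persistent change:

        # /etc/fstab
        /dev/sda1  /         ext4   ro,noatime  0 1
        tmpfs      /tmp      tmpfs  defaults    0 0
        tmpfs      /var/log  tmpfs  defaults    0 0

        # To make a persistent change later:
        mount -o remount,rw /
        # ... edit files ...
        mount -o remount,ro /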

    Read the article

  • Green System Administrator looking for helpful tips

    - by Joshua Anderson
    I have just been promoted to systems administrator for our product. We are designing an application that communicates with the cloud (Amazon EC2), and I will be in charge of maintaining all instances and their underlying components. So far this involves a set of load-balanced service instances that connect to a central DB in a multi-tenant design. I'm interested in what other sysadmins have found to be invaluable tools or practices. Any resources provided will be greatly appreciated.

    Read the article

  • Can I cancel a resize operation in GParted without causing data loss?

    - by Anderson Green
    I'm currently waiting for GParted to finish resizing a partition, but the progress bar is currently at 0, and it's been taking much longer than usual (perhaps an hour). Is it safe to cancel the resize operation? I don't want to wait days for the resize operation to complete, but I don't want to lose all of my files either. (Is there any way that I can simply pause the resize operation, attempt to recover files, and then resume the resize operation?) (An update: the operation has finally completed, and my files are still intact!)

    Read the article
