Search Results

Search found 71953 results on 2879 pages for 'work environment'.

  • Is it safe to delete "Account Unknown" entries from Windows ACLs in a domain environment?

    - by Graeme Donaldson
    It's not uncommon to see entries in Windows ACLs (NTFS files/folders, registry, AD objects, etc.) with the name "Account Unknown (SID)". Obviously these correspond to old AD users or groups which at some point had permissions manually configured on the relevant object and have since been deleted. Does anyone know if it is safe to remove these "Account Unknown" ACEs? My gut feeling is that it should be just fine, but I'm wondering if anyone has had past experiences where doing this caused trouble? Normally I just ignore these, but the company I'm working at now seems to have an abnormal number of them, most likely due to past admins' inexperience with AD/Windows and their habit of assigning permissions to user accounts rather than groups in all sorts of weird places. FWIW, our environment is not complex: a single-domain forest, 4 DCs in 3 sites, with all network connectivity and replication healthy, so I'm certain that these "Account Unknown" entries really are old accounts, and not just the result of some failure to resolve the SID to a human-readable name.
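
    A hedged PowerShell sketch for auditing these before touching anything; the share path is a placeholder, and the raw-SID test simply assumes that unresolvable domain accounts surface as S-1-5-21-* strings:

        # List ACEs whose identity no longer resolves to a name (orphaned SIDs).
        # 'D:\Shares\Finance' is an example path.
        $acl = Get-Acl -Path 'D:\Shares\Finance'
        $acl.Access | Where-Object { $_.IdentityReference -match '^S-1-5-21-' }

        # Removal would then be $acl.RemoveAccessRule($ace) followed by Set-Acl,
        # ideally after exporting the current ACL somewhere safe.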

  • What ways are there for cleaning an R environment from objects?

    - by Tal Galili
    I know I can use ls() and rm() to see and remove objects that exist in my environment. However, when dealing with an "old" .RData file, one sometimes needs to pick an environment apart to decide what to keep and what to leave out. What I would like is a GUI-like interface that would allow me to see the objects, sort them (for example, by their size), and remove the ones I don't need (for example, via a check-box interface). Since I imagine such a system is not currently implemented in R, what ways do exist? What do you use for cleaning old .RData files? Thanks, Tal
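
    Short of a GUI, a base-R sketch of the "sort by size, then prune" workflow (the object names in the rm() call are examples):

        # Sizes of everything in the global environment, largest first
        sizes <- sapply(ls(), function(x) object.size(get(x)))
        sort(sizes, decreasing = TRUE)

        # Drop the ones you have decided against keeping
        rm(list = c("big_matrix", "old_model"))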

  • Virtual environment firewall with CSF + iptables rules on VM?

    - by luison
    We are getting into virtualization with a Proxmox VE (OpenVZ + KVM) server. Our plan for the firewall is to have CSF (http://configserver.com/cp/csf.html) running on the host machine, as we've had a reasonably good experience with it in the past. Apart from that we plan simple firewall rules on the VMs (mostly OpenVZ containers sharing the host kernel) and maybe a few simple, specific fail2ban rules. I would appreciate comments from anyone with similar experiences. I understand all traffic comes via the host machine, so a combined firewall there with specific firewalling on the VM should work, although some iptables rules are hard to get working in OpenVZ containers.
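
    As a sketch of the "simple rules on the VM" half: stateful rules only work inside a container once the host whitelists the matching netfilter modules, so something like the following, where the container ID and module list are assumptions to adapt:

        # On the OpenVZ host: let container 101 use filtering and state matching
        vzctl set 101 --iptables "iptable_filter ipt_state" --save

        # Inside the container: a minimal input policy
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -j DROP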

  • How do you set up DNS in Windows Server 2008 in a Hyper-V environment?

    - by Nathan DeWitt
    I have a laptop running Server 2008 and Hyper-V. I have created a virtual machine, also running Server 2008, which I promoted to a domain controller with dcpromo. I disabled IPv6 because I had no idea how to enter a default address, and I just wanted to make a standalone MOSS dev environment. I have tried every combination of creating a virtual network on the host and then connecting to that in the VM, but I can't get the VM to communicate with the host and vice versa. No pinging, no copy and paste, nothing. Thanks. To update: my VM (which is its own DC) currently does not have a static IP. When I set the IP to static, I could not find anything that would let it talk to the host machine.
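
    One first step worth sketching: put the VM and the host on the same Hyper-V internal network with static addresses in one subnet, and point the DC at itself for DNS. The adapter name and addresses below are examples, not a recipe verified on this exact setup:

        rem Run on the VM (the DC); repeat on the host with e.g. 192.168.10.1
        netsh interface ipv4 set address name="Local Area Connection" static 192.168.10.2 255.255.255.0
        rem A DC should use its own address as its DNS server
        netsh interface ipv4 set dnsservers name="Local Area Connection" static 192.168.10.2 primary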

  • How are my DNS entries safe in a shared hosting environment?

    - by Jake
    I'm trying to understand how DNS works in a shared hosting environment. I went to my registrar and set my name servers to my host's ns1.foo.com and ns2.foo.com. I'm using a cloud hosting provider who has a web portal where I can set my DNS entries. However, I am confused by the lack of security: when I entered the entries for my domain, there was never any step to prove that I actually own that domain. What is to stop somebody else on the same hosting service (a nasty neighbor) from writing over my DNS entries and pointing my traffic to their server instead?

  • How to create a stub DNS zone to emulate my customer's production environment?

    - by Albert Widjaja
    Hi All, Is it possible to emulate my customer's production environment inside my AD domain by just creating the same domain inside my primary DNS server? Can I create a mycustomer.com DNS zone (stub) just for the sake of listing a few database servers and application servers, while the other DNS records (e.g. MX, NS) refer to the REAL entries, so that my Exchange Server email flow to mycustomer.com is unaffected? I ask because if I just create A records in my current domain for some of the servers, the FQDN is not exactly what I want. Thanks.
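
    On a Windows DNS server the stub zone itself is a one-liner; the server name and master IP below are placeholders. The caveat is that a stub zone only holds NS/SOA/glue copied from the master, so selectively overriding a few A records may instead call for a locally hosted zone containing just those records:

        rem Create a stub zone that pulls NS/SOA/glue from the customer's master
        dnscmd MyDnsServer /ZoneAdd mycustomer.com /Stub 203.0.113.10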

  • Does a Ruby on Rails Rack class get access to the entire Rails environment?

    - by Andrew Arrow
    That is, when the def call(env) method is invoked by hitting any URL, can I, inside that method, make ActiveRecord queries, use classes defined in lib, etc.? Or is it more like an irb console without the Rails env loaded? Another way to put it, with a rake task example:

        task :foo => :environment do
          # with env
        end
        task :foo2 do
          # without env
        end

    I would have thought Rack classes would NOT get the environment, so that they stay super fast and avoid the overhead of a normal Rails request. But that doesn't seem to be the case: I CAN make ActiveRecord queries inside my Rack class. So what is the advantage of Rack, then?
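
    To make the first question concrete, here is a sketch of a Rack middleware that does hit ActiveRecord; 'User' is an assumed model. Because Rails mounts middleware inside the loaded application, this runs with the full environment, which matches the behaviour described above. The advantage of staying at the Rack layer is skipping routing, controller dispatch, and view rendering, not skipping the environment itself:

        # config.middleware.use UserCounter
        class UserCounter
          def initialize(app)
            @app = app
          end

          def call(env)
            if env['PATH_INFO'] == '/user_count'
              [200, { 'Content-Type' => 'text/plain' }, ["#{User.count}\n"]]
            else
              @app.call(env)   # everything else continues down the stack
            end
          end
        end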

  • Where does Windows Vista hide the PATH environment variable?

    - by Bec
    I think that's what I need? I'm not sure. I'm trying to run a command-line program (BLAST, from NCBI), but it won't recognise the commands (blastall, formatdb, etc.), so I think I need to add the folder the binaries are in to the PATH environment variable? I think that's what I need to do? I think that's what it's called? I've been shown this a few times, but I don't need to do it often, so I keep forgetting.
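
    Yes, it's the PATH environment variable (on Vista: Control Panel > System > Advanced system settings > Environment Variables). From a command prompt, a sketch with an assumed install folder:

        rem Current session only (the BLAST folder is an example):
        set PATH=%PATH%;C:\NCBI\blast\bin

        rem Persist it for future sessions; note that setx rewrites the
        rem user-level PATH, so check the result afterwards with: echo %PATH%
        setx PATH "%PATH%;C:\NCBI\blast\bin"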

  • Would requiring .Net 4.0 act as a bar to adopting our software in a corporate environment?

    - by Sam
    We are developing a software product in .Net targeting large corporates. The product has both server and desktop client components. We anticipate that our product will be used by a small subset of workers in the corporation - probably those in the Finance function. We currently require .Net 3.5 but are considering moving to .Net 4. Could anyone with experience of managing IT in such an environment tell us whether requiring .Net 4.0 at this stage would be a bar to adopting our software? What attitudes prevail regarding the use of frameworks like .Net?

  • Choosing a monitoring system for a dynamically scaling environment: Nagios v. Zabbix

    - by wickett
    When operating in the cloud and scaling boxes automatically, there are certain monitoring issues that one experiences. Sometimes we might be monitoring 10 boxes and sometimes 100; the machines scale up and down based on demand. Right now, I think the best solution is to choose a monitoring system that allows instantiation of targets via calls to an API. But is this really the best? I like the idea of dynamic discovery, but that is also a problem in the cloud, seeing that the targets are not all in the same subnet. What monitoring solutions allow for a scaling environment like this? Zabbix currently has a draft API, but I have been unable to find a similar API for Nagios. Is there one? Does anyone have alternate suggestions besides Nagios and Zabbix?
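
    For reference, Zabbix's API is JSON-RPC over HTTP, so a freshly booted instance can register itself from a boot script along these lines; the URL, auth token, group ID, and parameter shapes are placeholders, and the draft-era API may differ from later releases:

        # Sketch: self-register a new box against the Zabbix JSON-RPC endpoint
        curl -s -X POST http://zabbix.example.com/api_jsonrpc.php \
          -H 'Content-Type: application/json-rpc' \
          -d '{"jsonrpc":"2.0","method":"host.create",
               "params":{"host":"web-42",
                         "interfaces":[{"type":1,"main":1,"useip":1,
                                        "ip":"10.0.0.42","dns":"","port":"10050"}],
                         "groups":[{"groupid":"2"}]},
               "auth":"PLACEHOLDER_TOKEN","id":1}'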

  • Is Cygwin the best Unix environment for Windows? [closed]

    - by nik
    Which Unix-like environment do you prefer on Windows? I have found Cygwin to be very comfortable on a Windows platform (usually XP). I am wondering if there is a better alternative (not because I want to move away from Cygwin). What are the features of Cygwin that you like, or what features do you find in alternatives that you miss in Cygwin? I often miss binary compatibility of applications built on Cygwin: they cannot be run directly on another Windows machine, but usually fetching a copy of cygwin1.dll suffices. There is also a collection of other tools, many of which work directly on the Windows subsystem rather than emulating Unix like Cygwin does:
    - PowerShell (I have been referred to it many times for scripting on Windows)
    - UnixUtils (suggested more often in earlier days)
    - Microsoft Windows Services for Unix

  • How to check whether the OS is running on bare metal and not in a virtualized environment created by the BIOS?

    - by Arkadi Shishlov
    Is there any software available as a Linux, *BSD, or Windows program or boot image to check (or guess with good probability) whether the environment an operating system is loaded onto is genuine bare metal and not already virtualized? Given recent information from various sources, including the supposed E. Snowden leaks, I'm curious about the security of my PCs, even those that don't have an on-board BMC. How could it be possible, and why? See for example Blue Pill and a number of related papers. With a little assistance from network card firmware, which is also loadable on popular card models, such a hypervisor could easily spy on me, rendering PGP, Tor, etc. exercises futile.
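
    For the easy half of the problem, the standard Linux first-pass checks are below; the caveat, which is exactly the Blue Pill point, is that these only catch hypervisors that announce themselves, since a hostile one can suppress all of these signals:

        # CPUID advertises a 'hypervisor' flag when virtualized (unless hidden)
        grep -q hypervisor /proc/cpuinfo && echo "hypervisor bit set"

        # DMI strings often betray VMware/KVM/VirtualBox/Hyper-V
        sudo dmidecode -s system-product-name

        # On systemd-based distros
        systemd-detect-virt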

  • How to decrease front end development time in a company/team environment?

    - by metal-gear-solid
    How do you decrease front-end development time in a company/team environment? My company has asked me to suggest ideas for making the front-end development process faster. Some problems I have noticed: the client never provides the right information the first time, and many front-end developers work on the same CSS in the same project, so everyone sometimes invents his own method, which slows the process down. Graceful degradation and progressive enhancement both take time to think through and develop; should we account for that, given that it increases project cost? And how do you estimate, just by looking at a PSD, the time needed to turn it into cross-browser-compatible XHTML/CSS? Most of the time I quote less time than it actually takes. Any other suggestions to improve work efficiency in a team (50 people) environment?

  • Is it possible to have environment variables in the working-directory path in PS1?

    - by mthpvg
    I am on Lubuntu and I am using bash. My PS1 (in .bashrc) is: PS1="\w> ". I like it because I need to paste the working directory all the time. The problem is that the path is always very long, and since I use terminator I only have half of my screen's width available to display it... it is ugly and annoying. My command prompt looks like this: /this/is/a/very/long/path/that/i/want/to/make/shorter > I'd like to set an environment variable tiavl=/this/is/a/very/long and then get: $tiavl/path/that/i/want/to/make/shorter > The goal is to have something shorter in the command prompt while still being able to copy-paste it and do: cd $tiavl/path/that/i/want/to/make/shorter. It is a bit like with $HOME: ~/path/that/i/want/to/make/shorter > I know where I am and I can copy-paste the ~. Thanks.
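
    Bash expands variables inside PS1 at prompt time (the promptvars option, on by default), so one sketch of this, with the assignment written without the leading $:

        # In .bashrc
        tiavl=/this/is/a/very/long
        # Single quotes matter: the substitution re-runs at every prompt,
        # replacing the $tiavl prefix of $PWD with the literal text '$tiavl'
        PS1='${PWD/#$tiavl/\$tiavl}> '

    Copy-pasting then works as hoped, since the displayed prefix is a real variable: cd $tiavl/path/that/i/want/to/make/shorter.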

  • How to fix “Live INT automatically logs out”

    - by ybbest
    Problem: the Live INT environment automatically logs out. I am trying to set up authentication with Windows Live ID and followed this blog post; I have a problem logging in to the Live INT web site. Whenever I try to log in (at https://login.live-int.com/login.srf, the internal Live environment to be used in a dev environment), after entering a valid email/password I get redirected to the logout page. I tried 2 different accounts (one with an existing email address, the other with a newly created @hotmail-int.com address) and 3 different browsers, so I'm sure that neither the account nor the browser is the cause of this. I also tried entering a wrong password, and in that case I get the message that the password is wrong. Solution: all you need is the unique ID in order to add the user to SharePoint, and you can get that ID without logging into the Live INT environment. I think the Live internal environment is not working correctly for some reason; the only reason I need to log in to it is to get the unique ID for the test account so that I can add the user to SharePoint. All the blogs I have come across require you to log in to get the unique ID. However, I figured out another way of getting it without logging in. Steps are below:
    1. Register a new test account in the Live internal environment.
    2. Go to the SharePoint site collection that has Live ID authentication enabled and select LiveID INT (the name may differ, depending on how you set up the authentication provider) from the dropdown.
    3. Try to log in using the internal Live account; you will get an Access Denied error showing the unique ID for the test account.
    4. Add that account to your SharePoint group; boom, it works.
    I hope this helps anyone who needs to do this in the future.

  • Bash completion doesn't work, or is ignoring what I've typed; but works for commands

    - by Neil Traft
    Bash completion seems to be ignoring what I've typed (it tries to complete, but acts as if there's nothing under the cursor). I know I saw it work on this machine earlier today, but I'm not sure what has changed. Some examples:

    cd shows all directories under my current folder:

        $ cd co<tab><tab>
        cmake/ config/ doc/ examples/ include/ programs/ sandbox/ src/ .svn/ tests/

    Commands like ls and less show all files and directories under my current folder:

        $ ls co<tab><tab>
        cmake/ config/ .cproject Doxyfile.in include/ programs/ README.txt src/ tests/
        CMakeLists.txt COPYING.txt doc/ examples/ mainpage.dox .project sandbox/ .svn/

    Even when I try to complete things from a different folder, it gives me only the results for my current folder (telling me that it is completely ignoring what I've typed):

        $ cd ~/D<tab><tab>
        cmake/ config/ doc/ examples/ include/ programs/ sandbox/ src/ .svn/ tests/

    But it seems to be working fine for commands and variables:

        $ if<tab><tab>
        if ifconfig ifdown ifnames ifquery ifup
        $ echo $P<tab><tab>
        $PATH $PIPESTATUS $PPID $PS1 $PS2 $PS4 $PWD $PYTHONPATH

    I do have this bit in my .bashrc, and I have confirmed that my .bashrc is indeed getting sourced:

        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi

    I've even tried manually executing that file, but it doesn't fix the problem:

        $ . /etc/bash_completion

    There was even one point in time where it was working for ls, but was not working for cd ... but I can't replicate that result now. Update: I also just discovered that I have terminals open from earlier that still work. I ran source .bashrc in one of them and afterwards completion was broken. Here is my .bashrc:

        # ~/.bashrc: executed by bash(1) for non-login shells.
        # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
        # for examples
        #
        # Modified by Neil Traft
        #source ~/.profile

        # Allow globs to expand hidden files
        shopt -s dotglob nullglob

        # If not running interactively, don't do anything
        [ -z "$PS1" ] && return

        # don't put duplicate lines or lines starting with space in the history.
        # See bash(1) for more options
        HISTCONTROL=ignoreboth

        # append to the history file, don't overwrite it
        shopt -s histappend

        # for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
        HISTSIZE=1000
        HISTFILESIZE=2000

        # check the window size after each command and, if necessary,
        # update the values of LINES and COLUMNS.
        shopt -s checkwinsize

        # If set, the pattern "**" used in a pathname expansion context will
        # match all files and zero or more directories and subdirectories.
        #shopt -s globstar

        # make less more friendly for non-text input files, see lesspipe(1)
        [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

        # set variable identifying the chroot you work in (used in the prompt below)
        if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then
            debian_chroot=$(cat /etc/debian_chroot)
        fi

        # Color the prompt
        export PS1="\[$(tput setaf 2)\]\u@\h:\[$(tput setaf 5)\]\W\[$(tput setaf 2)\] $\[$(tput sgr0)\] "

        # enable color support of ls and also add handy aliases
        if [ -x /usr/bin/dircolors ]; then
            test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
            alias ls='ls --color=auto'
            #alias dir='dir --color=auto'
            #alias vdir='vdir --color=auto'
            alias grep='grep --color=auto'
            alias fgrep='fgrep --color=auto'
            alias egrep='egrep --color=auto'
        fi

        # Add an "alert" alias for long running commands. Use like so:
        #   sleep 10; alert
        alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'

        # Alias definitions.
        # You may want to put all your additions into a separate file like
        # ~/.bash_aliases, instead of adding them here directly.
        # See /usr/share/doc/bash-doc/examples in the bash-doc package.
        if [ -f ~/.bash_aliases ]; then
            . ~/.bash_aliases
        fi

        # enable programmable completion features (you don't need to enable
        # this, if it's already enabled in /etc/bash.bashrc and /etc/profile
        # sources /etc/bash.bashrc).
        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi

  • What's My Problem? What's Your Problem?

    - by Jacek Ziabicki
    Software installers are not made for building demo environments. I can say this much after 12 years (on and off) of supporting my fellow sales consultants with environments for software demonstrations. When we release software, we include installation programs and procedures that are designed for use by our clients – to build a production environment and a limited number of testing, training and development environments.

    Different Objectives

    Your priorities when building an environment for client use vs. building a demo environment are very different. In a production environment, security, stability, and performance concerns are paramount. These environments are built on a specific server and rarely, if ever, moved to a different server or different network address. There is typically just one application running on a particular server (physical or virtual). Once built, the environment will be used for months or years at a time. Because of security considerations, the installation program wants to make these environments very specific to the organization using the software and the use case, encoding a fully qualified name of the server, or even the IP address on the network, in the configuration. So you either go through the installation procedure for each environment, or learn how to clone and reconfigure the software as a separate instance to build all your non-production environments. This may not matter much if the installation is as simple as clicking on the Setup program. But for enterprise applications, you have a number of configuration settings that you need to get just right – so whether you are installing from scratch or reconfiguring an existing installation, this requires both time and expertise in the particular piece of software. If you need a setup of several applications that are integrated to talk to one another, it is a whole new level of complexity. Now you need the expertise in all of the applications involved (plus the supporting technology products), and in addition to making each application work, you also have to configure the integration endpoints. Each application needs the URLs and credentials to call the integration layer, and the integration must be able to call each application. Then you have to make sure that each app has the right data so a business process initiated in one application can continue in the next. And, you will need to check that each application has the correct version and patch level for the integration to work.

    When building demo environments, your #1 concern is agility. If you can get away with a small number of long-running environments, you are lucky. More likely, you may get requests like these:
    - A dedicated environment for a demonstration that is two weeks away: how quickly can you make this available so we still have the time to build the client-specific data?
    - We are running a hands-on workshop next month, and we'll need 15 instances of the application X environment so each student can have a separate server for the exercises.
    - We cannot connect to our data center from the client site, the client's security policy won't allow our VPN to go through, so we need a portable environment that we can bring with us. Our consultants need to be able to work at the hotel, the airport, and on the airplane, so we really want an environment that can run on a laptop.
    - The client will need two playpen environments running in the cloud, accessible from their network, for a series of workshops that start two weeks from now.

    We have seen all of these scenarios and more. Here you would be much better served by a generic installation that would be easy to clone.

    Welcome to the Wonder Machine

    The reason I started this blog is to share a particular design of a demo environment, a special way to install software, that can address the above requirements, even for integrated setups. This design was created by a team at Oracle Utilities Global Business Unit, and we are using this setup for most of our demo environments. In a bout of modesty we called it the Wonder Machine. Over the next few posts – think of it as a novel in parts – I will tell you about the big idea, how it was implemented and what you can do with it. After we have laid down the groundwork, I would like to share some tips and tricks for users of our Wonder Machine implementation, as well as things I am learning about building portable, cloneable environments. The Wonder Machine is by no means a closed specification, it is under active development! I am hoping this blog will be of interest to two groups of readers – the users of the Wonder Machine we have built at Oracle Utilities, who want to get the most out of their demo environments and be able to reconfigure it to their needs – and to people who need to build environments for demonstration, testing, training, development and would like to make them cloneable and portable to maximize the reuse of their effort. Surely we are not the only ones facing this problem? If you can think of a better way to solve it, or if you can help us improve on our concept, I will appreciate your comments!

  • Like the work, like the pay, but not comfortable with the environment. Do I change company or stay patient? [closed]

    - by essbeev
    I do like the kind of work I do at our company. I also like the compensation. But lately, something in the work environment makes me uncomfortable, to such an extent that after a week off from work, even if totally exhausted by other activities, I get healthier. What move do I make so that both my career and my health get along well? How do I use this situation for betterment? Is it advisable to change companies in such a case?

  • Make it simple. Make it work.

    - by Sean Feldman
    In 2010 I worked for a business that had lots of challenges. One of those challenges was a lack of technical architecture and business-value recognition, which translated into spending an enormous amount of manpower and money on creating C++ solutions for the desktop client without using .NET, to minimize the "footprint" (#2) of the client application in deployment environments. This was an awkward experience, considering that the C++ custom code was created from scratch to make clients talk to a .NET backend, while simply having .NET as a dependency would have cut time to market by at least 50% (and I'm downplaying the estimate). Regardless, the recent Microsoft announcement about .NET vNext reminded me of that experience and how short-sighted the architecture at that company was. The investment made into a C++ client that cannot be maintained internally, because the team specializes in .NET, has created a situation where the code becomes more brutal to maintain over time while the number of developers who understand it keeps shrinking. Not only that: the ability to go cross-platform (#3) and the performance gains from native compilation (#1) would have been an immediate payback. Why am I saying all this? To make a simple point to myself and be reminded again: when working on a product that needs to get to the market, make it simple, make it work, and then see how technology is changing and how you can adapt. Simplicity will not let you down. But a complex solution will always do.

  • Was API hooking done as needed for Stuxnet to work? I don't think so

    - by The Kaykay
    Caveat: I am a political science student and I have tried my level best to understand the technicalities; if I still sound naive, please overlook that. In the Symantec report on Stuxnet, the authors say that once the worm infects a 32-bit Windows computer with WinCC set up on it, Stuxnet does many things, and that it specifically hooks the function CreateFileA(). This function is the route the worm uses to actually infect the .s7p project files that are used to program the PLCs, i.e. when the PLC programmer opens a file with .s7p, control transfers to the hooked function CreateFileA_hook() instead of CreateFileA(). Once Stuxnet gains control, it covertly inserts code blocks into the PLC without the programmer's knowledge and hides them from his view. However, it should be noted that there is also another function, CreateFileW(), which does the same task as CreateFileA(), but the two work on different character sets: CreateFileA works with the ASCII character set and CreateFileW works with wide characters, i.e. the Unicode character set. Farsi (the language of the Iranians) needs the Unicode character set, not ASCII. I'm assuming that the developers of any famous commercial software (for example, WinCC) that will be sold in many countries will take localization and/or internationalization into consideration while it is being developed, in order to make the product fail-safe, i.e. the developers would compile their code with Unicode and not just ASCII. Thus, I think that CreateFileW() would have been invoked on a WinCC system in Iran instead of CreateFileA(). Do you agree? My question is: if Stuxnet hooked only the function CreateFileA(), then based on the above assumption isn't there a significant chance that it did not work at all? I think my doubt will be clarified if my assumption is proved wrong, or the Symantec report is proved incorrect. Please help me clarify this doubt. Note: I had posted this question on the general Stack Exchange website and did not get the responses I was looking for, so I'm posting it here.
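
    For readers unfamiliar with the mechanism: a hook of this kind is a replacement function with CreateFileA's exact signature that inspects the arguments and forwards to a saved pointer to the original. A purely illustrative C sketch follows; it is not Stuxnet's code, and the import-table patching that actually installs the hook is omitted. A CreateFileW hook would differ in taking an LPCWSTR (wide) file name, which is the crux of the question above:

        #include <windows.h>
        #include <string.h>

        /* saved pointer to the genuine API */
        static HANDLE (WINAPI *real_CreateFileA)(LPCSTR, DWORD, DWORD,
            LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE) = CreateFileA;

        static HANDLE WINAPI CreateFileA_hook(LPCSTR lpFileName,
            DWORD dwDesiredAccess, DWORD dwShareMode,
            LPSECURITY_ATTRIBUTES lpSecurityAttributes,
            DWORD dwCreationDisposition, DWORD dwFlagsAndAttributes,
            HANDLE hTemplateFile)
        {
            /* the ANSI file name is visible here; a .s7p project file
               would trigger the payload before the caller sees the file */
            const char *ext = lpFileName ? strrchr(lpFileName, '.') : NULL;
            if (ext && _stricmp(ext, ".s7p") == 0) {
                /* ... tamper with the project ... */
            }
            /* forward everything to the real CreateFileA */
            return real_CreateFileA(lpFileName, dwDesiredAccess, dwShareMode,
                lpSecurityAttributes, dwCreationDisposition,
                dwFlagsAndAttributes, hTemplateFile);
        }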

  • Why does 301 redirect work for http but not for https?

    - by Tom G
    Through my domain registrar I have set up a domain, essayme.co.uk, to automatically forward to https://google.com. If I go to http://essayme.co.uk it works as expected and redirects me to https://google.com. $curl -i http://essayme.co.uk HTTP/1.1 301 Moved Permanently Cache-Control: max-age=900 Content-Type: text/html Location: https://google.com Server: Microsoft-IIS/7.5 X-AspNet-Version: 4.0.30319 X-Powered-By: ASP.NET Date: Sat, 07 Jun 2014 11:14:16 GMT Content-Length: 0 Age: 0 Connection: keep-alive However, if I go to https://essayme.co.uk it just freezes and times out. $curl -i https://essayme.co.uk curl: (7) Failed connect to essayme.co.uk:443; Operation timed out What is happening in the second case? (and, if possible, how can I get the redirect to work for https?) Problem background/clarification: I don't have an SSL certificate for the essayme.co.uk domain above, but I do for my live domain (let's call it mywebsite.com), and I was seeing the exact same problem on this domain (hence why I'm trying to debug the problem). Unfortunately I can't experiment with the live domain (as it's live) and I would like to avoid having to buy a second certificate for essayme.co.uk just for debugging (unless absolutely necessary). The problem I was seeing: my live domain, mywebsite.com (not its real name), has a valid SSL certificate. Visiting https://www.mywebsite.com displayed the webpage as expected. I had set up forwarding (like in the question above) from the naked domain (mywebsite.com) to https://www.mywebsite.com) Visiting http://mywebsite.com redirected to https://www.mywebsite.com as expected. However, visiting https://mywebsite.com would freeze and time out (as in the question above). I also tried forwarding it to http://www.otherwebsite.com as an experiment (i.e. forwarding to another site that does not use SSL), but the result was the same: Visiting http://mywebsite.com redirected to http://www.otherwebsite.com as expected. Visiting https://mywebsite.com would freeze and time out again. So I set up essayme.co.uk as an experiment to try and understand why it doesn't work.

  • How can I get my KVM switch to work? (Win7 & Ubuntu 10.10)

    - by Will W.
    I bought a KVM switch and I'm trying to use it connected to my main PC (Win7) and the new machine I just installed Ubuntu on. I hooked it up properly and tried using it. It worked when switching from the Win7 machine to the Ubuntu one, but after the (first and only) successful switch, Ubuntu just didn't seem to recognize my mouse or keyboard. Basically, the easiest way to explain what happened is that it only worked with Win7. When I switched over to Ubuntu with [scroll-lock] [scroll-lock], my keyboard and mouse were not recognized. The lights on the keyboard and mouse did come on under Ubuntu, but they didn't function, and since the keyboard wouldn't function I couldn't do a [scroll-lock] [scroll-lock] to switch back to the Win7 machine. So I was basically locked into Ubuntu with no mouse or keyboard, and I had to unplug the keyboard/mouse USBs and the monitor D-sub and plug them back into the Win7 computer to type up this thread and google the issue. It seems some people have had this issue before, but I couldn't find a fix... I am 80% sure it has to do with drivers... but there aren't any for KVM switches, at least not this one, and I was never able to find Ubuntu drivers/firmware for my mouse and keyboard (Logitech G15 and Razer DeathAdder 3500). I don't know how to fix this; perhaps someone super-savvy could write a script or work-around? I really need to get this thing working. My back is getting sore from bending over and plugging in / unplugging USBs and the D-sub over and over again, lol... and I worry that the constant plugging and unplugging will eventually damage the ports... I don't want that. There has to be some way to get this working. Can anyone help? The KVM is an IOGEAR GCS632U. Win7 x64, Ubuntu 10.10.

  • How does a segment-based rendering engine (as in Descent) work?

    - by Calmarius
    As far as I know, Descent was one of the first games that featured a fully 3D environment, and it used a segment-based rendering engine. Its levels are built from cubic segments (these cubes may be deformed, as long as they remain convex and their sides remain roughly flat). The cubes are connected by their sides. Connected sides are traversable (doors or grids may be placed on them), while unconnected sides are untraversable walls. The game is played inside this complex. Descent was software-rendered and had to be very fast to be playable on the 10-100 MHz processors of that age. Some later levels of the game are huge and contain thousands of segments, yet they still render reasonably fast, so I think the engine somehow minimized the number of cubes rendered. How did it choose which cubes to render for a given location? As far as I know it used a kind of portal rendering, but I couldn't find what technique was used in this particular kind of engine. I think the fact that the levels are built from convex quadrilateral hexahedrons can be exploited.
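
    The usual description of this class of engine is a depth-first walk starting from the cube containing the camera, recursing through connected sides and shrinking a screen-space clip region at each portal. A hedged C sketch of that idea follows; the Segment type, the helpers draw_walls and project_side, and the rectangular clipping shape are all assumptions for illustration, not Descent's actual source:

        #define MAX_SIDES 6

        typedef struct { int x0, y0, x1, y1; } Rect;
        typedef struct Segment Segment;
        struct Segment {
            Segment *neighbor[MAX_SIDES];   /* NULL for solid walls */
            int      visited_frame;
        };

        static int current_frame;

        /* assumed helpers: rasterize the segment's visible faces; project a
           side's quad to a screen-space bounding rectangle */
        extern void draw_walls(const Segment *seg, Rect clip);
        extern Rect project_side(const Segment *seg, int side);

        static Rect intersect_rect(Rect a, Rect b) {
            Rect r = { a.x0 > b.x0 ? a.x0 : b.x0, a.y0 > b.y0 ? a.y0 : b.y0,
                       a.x1 < b.x1 ? a.x1 : b.x1, a.y1 < b.y1 ? a.y1 : b.y1 };
            return r;
        }
        static int rect_empty(Rect r) { return r.x0 >= r.x1 || r.y0 >= r.y1; }

        /* 'clip' is the screen region this segment may occupy; it can only
           shrink with recursion depth, so the traversal terminates quickly */
        void render_segment(Segment *seg, Rect clip)
        {
            if (seg->visited_frame == current_frame)
                return;                       /* already drawn this frame */
            seg->visited_frame = current_frame;

            draw_walls(seg, clip);

            for (int s = 0; s < MAX_SIDES; s++) {
                Segment *next = seg->neighbor[s];
                if (!next)
                    continue;                 /* solid wall, no portal */
                Rect narrowed = intersect_rect(clip, project_side(seg, s));
                if (!rect_empty(narrowed))
                    render_segment(next, narrowed);
            }
        }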

  • Why won't my graphics work in Ubuntu 12.04 LTS?

    - by user170974
    I'm very new to Ubuntu and to Linux in general, and took the leap and formatted my PC to Ubuntu 12.04 LTS very recently :) I seem to be having some trouble getting my graphics card to run properly. I looked over what information I could find, but I still cannot get it up and running, and figured this was a good place to ask for help. The information I can find on my graphics is as follows. The terminal command lspci outputs:

        01:05.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RS880M [Mobility Radeon HD 4225/4250]
        02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Madison [Mobility Radeon HD 5650/5750 / 6530M/6550M]

    I tried using a mixture of the following links: How do I fix my installation of ATI Catalyst Video Driver in 12.04 LTS? What is the correct way to install ATI Catalyst Video Drivers (fglrx)? Ubuntu Precise Installation Guide. But it does not seem to work, since running fglrxinfo in a terminal gives:

        display: :0.0  screen: 0
        OpenGL vendor string: VMware, Inc.
        OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x301)
        OpenGL version string: 1.4 (2.1 Mesa 9.0.3)

    What am I doing wrong here? All help appreciated ;) Edit: I have tried the legacy driver from www2.ati.com/drivers/linux/amd-driver-installer-catalyst-13-4-linux-x86.x86_64.zip. I also tried the guide at https://launchpad.net/~makson96/+archive/fglrx, which caused the system to crash (black screen, no boot). Neither worked. I did, however, reinstall Ubuntu 12.04 LTS and retried both with no success. Reinstalling Ubuntu did fix the broken-dependencies problem, though.

  • Is there an NVIDIA driver (for a 7-series card) that will actually work for 12.10?

    - by DS13
    I see many similar topics on this, but I've tried all their suggestions and nothing has worked.

    ISSUE: I do a clean install of Ubuntu 12.10. It boots fine with the "nouveau" graphics driver, but graphics are very slow and choppy. The three other driver options in Ubuntu (official NVIDIA drivers) all result in a variation of the black screen on boot-up. There will be NO access to a command line/GUI in any way whatsoever (I've tried every option recommended out there, but the system is unusable at this stage). I can only reinstall and try different drivers... and I only ever get one shot at it.

    QUESTIONS: Does anyone know of an NVIDIA driver that will actually work with an NVIDIA GeForce 7350LE? Or a 7-series card in general? This is my second computer, and I'm just trying to get a working install of Ubuntu on it. I don't want to put much money into it, as I have seen Ubuntu run great on much older/less capable machines. I've got a decent Intel processor (2.3 GHz), 2 GB of RAM, a 320 GB hard drive, 32-bit architecture, and there is no other OS installed. It appears as if the graphics card is holding me back. Should I just buy a cheap (non-NVIDIA) graphics card as a replacement?

    TRIED SO FAR:
    - all drivers available in Ubuntu (all fail)
    - manual install of some different NVIDIA drivers (all fail)
    - installing the generic kernel, since the NVIDIA driver doesn't work in 12.10 (no difference)
    - every method suggested to at least get a command line after switching to an NVIDIA driver (all fail)
