Search Results

Search found 2200 results on 88 pages for 'human factors'.


  • What OpenGL version(s) to learn and/or use?

    - by zuko
    So, I'm new to OpenGL... I have general knowledge of game programming but little practical experience. I've been looking into various articles and books and trying to dive into OpenGL, but I've found the various versions and the old vs. new ways of doing things confusing.

    My first question: does anyone have figures on what percentage of gamers can run each version of OpenGL? What's the market share like for 2.x, 3.x and 4.x? I looked into the requirements for Half-Life 2, since I know Valve updated it with OpenGL to run on Mac and they usually try to hit a very wide user base, and they list a minimum of the GeForce 8 series. Nvidia's page for the 8800 GT lists support for OpenGL 2.1, which (maybe I'm wrong) sounds ancient to me since 4.x already exists, yet a driver for the 8800 GT says it supports 4.2. A bit of a discrepancy there. I've also read things like XP only supporting up to a certain version, or OS X only supporting 3.2, and all kinds of other claims. Overall, I'm just confused about how much support the various versions have and which version to learn and use.

    I'm also looking for learning resources. My search results so far have pointed me to the OpenGL SuperBible. The 4th edition has great reviews on Amazon, but it teaches 2.1. The 5th edition teaches 3.3, and a couple of reviews mention that the 4th edition is better and that the 5th doesn't properly teach the new features. So even within the learning material I'm seeing discrepancies, and I don't know where to start.

    From what I understand, 3.x started a whole new way of doing things, and various articles and reviews say to "stay away from deprecated features like glBegin(), glEnd()", yet a lot of books and tutorials I've seen use exactly that method. I've seen people saying that, basically, the new way of doing things is more complicated, yet the old way is bad. Just a side note: personally, I know I still have a lot to learn first, but I'm interested in tessellation, so I guess that factors into it as well, since as far as I understand that's only in 4.x? (By the way, my desktop supports 4.2.)
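
    For readers unsure what "deprecated" means here in practice, the contrast below is a minimal sketch (not from the question): the first fragment is the old immediate mode removed from 3.x+ core profiles, the second is the modern buffer-object path, with the required shader and vertex-attribute setup omitted for brevity.

        // Old, deprecated immediate mode (gone from 3.x+ core profiles):
        glBegin(GL_TRIANGLES);
        glVertex3f(-0.5f, -0.5f, 0.0f);
        glVertex3f( 0.5f, -0.5f, 0.0f);
        glVertex3f( 0.0f,  0.5f, 0.0f);
        glEnd();

        // Modern core-profile equivalent: upload vertices once, draw from a buffer.
        GLfloat verts[] = { -0.5f,-0.5f,0.0f,  0.5f,-0.5f,0.0f,  0.0f,0.5f,0.0f };
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
        // ...bind a shader program and describe the vertex layout here...
        glDrawArrays(GL_TRIANGLES, 0, 3);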

    Read the article

  • Object Oriented Design of a Small Java Game

    - by user2733436
    This is the problem I am dealing with: I have to make a simple game of Nim. I am learning Java using a book, and so far I have only coded programs that deal with two classes. This program would have about four classes, I guess, including the main class. My problem is that I am having a difficult time designing the classes and how they will interact with each other. I really want to think in, and use, an object-oriented approach. So the first thing I did was design the Pile class, as it seemed the easiest and made the most sense to me in terms of which methods go in it. Here is what I have for the Pile class so far:

        package Nim;

        import java.util.Random;

        public class Pile {
            private int initialSize;
            Random rand = new Random();

            public Pile() {
            }

            public void setPile() {
                initialSize = rand.nextInt(100 - 10) + 10;
            }

            public void reducePile(int x) {
                initialSize = initialSize - x;
            }

            public int getPile() {
                return initialSize;
            }

            public boolean hasStick() {
                return initialSize > 0;
            }
        }

    Now I need help designing the Player class. By that I mean I am not asking anyone to write the code for me, as that defeats the purpose of learning; I was just wondering how I would design the Player class and what would go in it. My guess is that the Player class would contain a method for choosing the computer's move and also for receiving the move the human user makes. Lastly, I am guessing the turns would be handled in the Game class. I am really lost right now, so it would be great if someone could help me think through this problem; starting with the Player class would be appreciated. I know there are solutions to this problem online, but I refuse to look at them because I want to develop my own approach to such problems, and I am confident that if I can get through this one I can solve others. I apologize if this question is a bit poor, but specifically I need help designing the Player class.
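
    Since the poster only wants a design direction, here is one possible skeleton, signatures and comments only, with all move logic left unwritten; the class shape and names are illustrative guesses, not the book's design.

        package Nim;

        // A sketch of one possible Player design, not a solution:
        public class Player {
            private final String name;
            private final boolean isComputer;

            public Player(String name, boolean isComputer) {
                this.name = name;
                this.isComputer = isComputer;
            }

            // Decide how many sticks to take from the pile this turn.
            // A human player would read input here; the computer would
            // compute (or randomize) its move.
            public int chooseMove(Pile pile) {
                return 1; // placeholder
            }

            public String getName() {
                return name;
            }
        }

        // The Game class would then own one Pile and two Players, alternate
        // turns until pile.hasStick() is false, and announce the winner.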

    Read the article

  • How should game objects be aware of each other?

    - by Jefffrey
    I find it hard to organize game objects so that they are polymorphic but at the same time not polymorphic. Here's an example: assume that we want all our objects to update() and draw(). In order to do that we define a base class GameObject with those two pure virtual methods and let polymorphism kick in:

        class World {
        private:
            std::vector<GameObject*> objects;
        public:
            // ...
            void update() {
                for (auto& o : objects) o->update();
                for (auto& o : objects) o->draw(window);
            }
        };

    The update method is supposed to take care of whatever state the specific object needs to update. The catch is that each object needs to know about the world around it. For example:

    - A mine needs to know if someone is colliding with it.
    - A soldier should know if another team's soldier is in proximity.
    - A zombie should know where the closest brain, within a radius, is.

    For passive interactions (like the first one) I was thinking that the collision detection could delegate what to do in specific cases of collision to the object itself with an on_collide(GameObject*). Most of the other information (like the other two examples) could just be queried from the game world passed to the update method. Now, the world does not distinguish objects by their type (it stores all objects in a single polymorphic container), so what an ideal world.entities_in(center, radius) will in fact return is a container of GameObject*. But of course the soldier does not want to attack soldiers from his own team, and a zombie doesn't care about other zombies, so we need to distinguish the behavior. One solution could be the following:

        void TeamASoldier::update(const World& world) {
            auto list = world.entities_in(position, eye_sight);
            for (const auto& e : list)
                if (auto enemy = dynamic_cast<TeamBSoldier*>(e))
                    { /* shoot towards enemy */ }
        }

        void Zombie::update(const World& world) {
            auto list = world.entities_in(position, eye_sight);
            for (const auto& e : list)
                if (auto enemy = dynamic_cast<Human*>(e))
                    { /* go and eat brain */ }
        }

    but the number of dynamic_cast<>s per frame could be horribly high, and we all know how slow dynamic_cast can be. The same problem also applies to the on_collide(GameObject*) delegate discussed earlier. So what is the ideal way to organize the code so that objects can be aware of other objects, and can either ignore them or take action based on their type?
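
    One common alternative, sketched below using the question's own class names, is to give GameObject a cheap, explicit tag (a kind and team identifier) so that filtering needs no RTTI at all; the enum values and method names here are illustrative assumptions, not part of the question.

        // A sketch: replace dynamic_cast with a cheap, explicit tag.
        enum class Kind { Soldier, Zombie, Human, Mine };

        class GameObject {
        public:
            virtual ~GameObject() = default;
            virtual Kind kind() const = 0;            // each subclass answers once
            virtual int  team() const { return -1; }  // -1 = no team affiliation
            virtual void update(const class World& world) = 0;
        };

        // TeamASoldier::update can now filter without any dynamic_cast:
        //   for (GameObject* e : world.entities_in(position, eye_sight))
        //       if (e->kind() == Kind::Soldier && e->team() != team())
        //           { /* shoot towards enemy */ }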

    Read the article

  • Building Enterprise Smartphone App – Part 3: Key Concerns

    - by Tim Murphy
    This is part 3 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Key Concerns of Smartphones in the Enterprise

    These are the factors that you need to be aware of and address in order to build successful enterprise smartphone applications. Most of them have nothing to do with the application itself, as you will see here.

    Managing Devices

    Managing devices is a factor that is going to affect how much your company will have to spend outside of developing the applications. How will you track the devices within the corporation? How often will you have to replace phones and, as a consequence, upgrade your applications to support the new phones? The devices can represent a significant investment of capital. If these questions are not addressed, you will find a number of hidden costs throughout the life of your solution.

    Purchase or BYOD

    We have seen the trend of Bring Your Own Device (BYOD) lately within the enterprise. How many meetings have you been in where someone is on their personal iPad, iPhone, Android phone or Windows Phone? The issue is whether you can afford to support everyone's choice in device. That is a lot to take on, even if you only support the current release of each platform. Do you go with the most popular device, or do you pick the platform that best matches your current ecosystem and distribute company-owned devices? There is no easy answer here, but you should be able to put some dollar value on both the hardware and development costs related to platform coverage.

    Asset Tracking/Insurance

    Smartphones are easier to lose or have stolen than laptops and desktops. Not only do you have your normal asset management concerns, but also the assignment of financial responsibility. You will also need to insure the devices against damage and theft, and add legal documents that spell out the responsibilities of the employees who use them.

    Personal vs. Corporate Data

    What happens when you terminate an employee? How do you recover the device? What happens when they have put personal data on the device? These are all situations that can cause loss of corporate intellectual property, or legal repercussions when reclaiming a device with personal data on it. Policies need to be put in place that protect the company from exposure to this type of loss. This can mean significant legal and procedural costs that you need to consider.

    Coming Up

    In the last installment of this series I will cover application development considerations.

    Read the article

  • Redesigning an Information System - Part 1

    - by dbradley
    Over the next few weeks or months I'd like to run a small series of articles sharing my experiences from the largest project I've worked on, and to explore some of the real-world problems I've come across and how we went about solving them. I'm afraid I can't give too many specifics on the project right now, as it's not yet complete, so you'll have to forgive me for being a little abstract in places! To start with, I'm going to run through a little of the background of the problem and the motivations for re-designing from scratch. Then I'll work through the approaches taken to understanding the requirements, designing, implementing, testing and migrating to the new system.

    Motivations for Re-designing a Large Information System

    The system is one that's been in place for a number of years and was originally designed to do a significantly different job from the one it's now being used for. This is mainly due to the product maturing as well as client requirements changing. As with most information systems, this one can be defined in four main areas of functionality:

    - Input: adding information to the system
    - Storage: persisting information in an efficient, searchable structure
    - Output: delivering the information to the client
    - Control: management of the process

    There can be a variety of reasons to re-design an existing system; a few of our own turned out to be factors such as:

    - Overall system reliability
    - System response time
    - Failure isolation and recovery
    - Maintainability of code and information
    - General extensibility to solve future problems
    - Separation of business and product concerns
    - New or improved features

    The factor that started the thought process was the desire to improve the way in which information was entered into the system. However, this alone was not the entire reason for deciding to redesign.

    Business Drivers

    Typically, all software engineers would prefer to do a project from scratch themselves. It generally means you don't have to deal with problems created by predecessors, and you can create your own absolutely perfect solution. However, the reality of working within a business is that the bottom line comes down to return on investment. For a medium-sized business such as mine, there must be real value deliverable within a reasonable timeframe for any work to be started. As a result, any long-term project will generally take a lot of effort and consideration to be approved by those in charge, and therefore it might be better to break the project down into more manageable chunks which allow more frequent deliverables, and value within a shorter timeframe. As the only thing of concern was the method for inputting information, this is where we started with requirements gathering and design. However, knowing that there might be more to the problem, and not limiting your design decisions before the requirements are understood, is key to finding the best solutions.

    Read the article

  • Oracle EBS ????“????”???????

    - by Steve He(???)
    Oracle E-Business ?????????????????????,???????????EBS?????????????????“????”,???????????????????????????????? ????????? notes ?????,?????????????????????,?notes??????????????????,????????????? note,???????? ???????????????????????????????,??????????????????????????????,“????”??????????????????????????,???????????????????????????? ?? EBS ????????“????”?????? Doc ID 1501724.1 ???? EBS “????”???? ??????Receivables Transactions?“????”: ??? "Entering / Updating Transactions"????,????????: ????? "Transaction numbers are not in sequence",????????????ID 197212.1: How To Setup Gapless Document Sequencing in Receivables. EBS?????????“????” ?:

    - Advanced Pricing
    - Applications Technology
    - Configurator
    - General Ledger
    - Human Capital Management
    - Inventory Management
    - Order Management
    - Payables
    - Process Manufacturing
    - Purchasing
    - Receivables
    - Shipping
    - Value Chain Planning

    Read the article

  • error reading keytab file krb5.keytab

    - by Banjer
    I've noticed these Kerberos keytab error messages on both SLES 11.2 and CentOS 6.3:

        sshd[31442]: pam_krb5[31442]: error reading keytab 'FILE:/etc/krb5.keytab'

    /etc/krb5.keytab does not exist on our hosts, and from what I understand of the keytab file, we don't need it. Per this Kerberos keytab introduction:

        A keytab is a file containing pairs of Kerberos principals and encrypted keys (these are derived from the Kerberos password). You can use this file to log into Kerberos without being prompted for a password. The most common personal use of keytab files is to allow scripts to authenticate to Kerberos without human interaction, or to store a password in a plaintext file.

    This sounds like something we do not need, and it is perhaps better security-wise not to have it. How can I keep this error from popping up in our system logs? Here is my krb5.conf, if it's useful:

        banjer@myhost:~> cat /etc/krb5.conf
        # This file managed by Puppet
        [libdefaults]
            default_tkt_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC
            default_tgs_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC
            preferred_enctypes = RC4-HMAC DES-CBC-MD5 DES-CBC-CRC
            default_realm = FOO.EXAMPLE.COM
            dns_lookup_kdc = true
            clockskew = 300
        [logging]
            default = SYSLOG:NOTICE:DAEMON
            kdc = FILE:/var/log/kdc.log
            kadmind = FILE:/var/log/kadmind.log
        [appdefaults]
            pam = {
                ticket_lifetime = 1d
                renew_lifetime = 1d
                forwardable = true
                proxiable = false
                retain_after_close = false
                minimum_uid = 0
                debug = false
                banner = "Enter your current"
            }

    Let me know if you need to see any other configs. Thanks.

    EDIT: This message shows up in /var/log/secure whenever a non-root user logs in via SSH or the console. It seems to occur only with password-based authentication; if I do a key-based SSH to a server, I don't see the error, and if I log in as root, I do not see it either. Our Linux servers authenticate against Active Directory, so it's a hearty mix of PAM, Samba, Kerberos, and winbind that is used to authenticate a user.
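
    Since the hosts are already AD-joined through Samba/winbind, one way to make the message go away is to give pam_krb5 the keytab it is looking for rather than suppressing the log line. A sketch, assuming the machine account join is current and Samba's net tool is installed:

        # Generate /etc/krb5.keytab from the host's AD machine account (Samba):
        net ads keytab create -U Administrator

        # Verify the principals it now contains:
        klist -k /etc/krb5.keytab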

    Read the article

  • Synchronizing audio and video using MP4Box / ffmpeg to concatenate files

    - by jdl2003
    I have two H.264-encoded MPEG-4 files that I need to concatenate. I have been using MP4Box for this task by first ensuring the files are encoded identically (I even went so far as to compare the output of h264_parse on their video tracks) and then concatenating with this command:

        MP4Box -cat file1.mp4 -cat file2.mp4 output_file.mp4

    This works, and the output file is playable, but on playback in QuickTime or VLC the second video's audio starts too soon, putting the entire second part of the concatenated file out of sync. I have tried re-encoding the output through ffmpeg with -vcodec copy and -acodec copy, but the sync issue persists. I have also tried converting to MPEG-2 format first, concatenating with a simple

        cat file1.mpg file2.mpg > output.mpg

    and re-encoding the result with ffmpeg. This was even worse. I know that I can pass commands to MP4Box to adjust the start time of the audio track, but I am trying to do this programmatically for any input video (in the same encoding, of course). I am looking for possible solutions that would not require human intervention or manual adjustments. Or, at least, an understanding of what is happening to make the second part of the concatenated video go out of sync.
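
    One alternative worth trying, assuming a reasonably recent ffmpeg build is available: ffmpeg's concat demuxer stream-copies both parts while regenerating timestamps, which is often enough to keep the audio aligned. A sketch:

        # list.txt names the parts in playback order
        printf "file 'file1.mp4'\nfile 'file2.mp4'\n" > list.txt

        # Concatenate without re-encoding; timestamps are rewritten
        ffmpeg -f concat -safe 0 -i list.txt -c copy output_file.mp4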

    Read the article

  • Performance variation

    - by Ree
    During my time spent working with multiple machines, I have noticed that the performance of the same machine doing the same tasks in the same order differs, and sometimes the difference is big enough to be noticeable. This applies to all the machines I've owned and/or maintained (old and modern). Some examples (many of which you may have noticed yourself) that are sometimes completed in different time frames:

    - POST
    - OS installation
    - Hardware tests and operations (usually executed within a customized OS such as one of the many DOS variants), HDD tests and "low level" formats
    - Software installation or other tasks (such as benchmarks) within a general-purpose OS (Windows, Linux, etc.)

    I can imagine this is caused by the fact that a machine is built from many components that have to communicate as a whole, and since the mechanical and electronic parts aren't perfect, overhead occurs. In the last example, I assume the OS's complexity and the multiple concurrently running processes have some additional effect as well. However, I'm wondering whether this hardware imperfection and overhead is really large enough to be humanly noticeable. Maybe there are other factors that are as influential or even more so? So, in short: why? To emphasize: the difference is noticeable on the same machine performing the same tasks, and this applies to ANY machine in my experience. I'm not comparing machine-to-machine performance.

    Read the article

  • Is there a way to tell what the download speed is from a site/server

    - by Memor-X
    I'm looking into which ISP I should go with at the place I'm moving into. One ISP, which I have been told good things about, has data limits (which, when breached, will drop your speed to dial-up speed) but multiple plans which, apart from the cheapest one, all have the same data limits (the cheapest has a 10GB limit). In their fine print, they say that each plan has a different port speed, and one particular part jumps out at me:

        These speeds are the NBN (National Broadband Network) port speed and not the actual Internet data speed which will vary based on numerous factors including destination you are reaching, your network equipment, network congestion etc.

    I plan to use the net to download DLC and patch updates for games (particularly the insanely large update for the Wii U), games from Steam (if I find any good ones other than this one JRPG), and development resources from free sites like Deposit Files and Mediafire. Since one plan with a 1000GB data limit is $145 with a port speed of 12Mbps/1Mbps (the cheapest), while another with the same limit is $190 with a port speed of 100Mbps/40Mbps (the expensive one), I am wondering how I can tell what the speed coming from a site is, since I don't want to waste money on speed that makes no difference (unlike memory, which I'd rather have to spare).

    NOTE: these speeds are for a fibre-optic network; my new place can only connect via fixed wireless, which I may not be able to get with this ISP, but if I can get this network, then good.

    NOTE 2: most of the resources I get from Deposit Files are about 200MB or less; if a resource pack is bigger, it's split into multiple archives (like .7z.part), while on Mediafire I have yet to see one bigger than 150MB.

    NOTE 3: one update patch for a PS3 game is close to 4GB (Disgaea 4), which I need in order to access the DLC, and on the weekend I downloaded 5GB for the Final Fantasy XIV open beta on the PS3, which took almost 5 hours.
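
    For a rough measurement of what a particular server will actually deliver to you, something like the following works from any Linux or OS X box (or Windows with curl installed); the URL is a placeholder for one of the download sites in question:

        # Download to nowhere and report the average transfer rate (bytes/sec)
        curl -o /dev/null -s -w 'avg speed: %{speed_download} bytes/sec\n' \
            http://example.com/some-large-file.zip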

    Read the article

  • Snort/Barnyard2 Logging

    - by Eric
    I need some help with my Snort/Barnyard2 setup. My goal is to have Snort send unified2 logs to Barnyard2 and then have Barnyard2 send the data to other locations. Here is my current setup:

        OS: Scientific Linux 6
        Snort version: 2.9.2.3
        Barnyard2 version: 2.1.9

        Snort command:
        snort -c /etc/snort/snort.conf -i eth2 &

        Barnyard2 command:
        /usr/local/bin/barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.log -w /var/log/snort/barnyard.waldo &

        snort.conf:
        output unified2: filename snort.log, limit 128

        barnyard2.conf:
        output alert_syslog: host=127.0.0.1
        output database: log, mysql, user=snort dbname=snort password=password host=localhost

    With this setup, Barnyard2 is showing all of the correct information in the database, and I'm using BASE to view it in the web GUI. I was hoping to be able to send the full packet data to syslog with Barnyard2, but after reading around, it seems that is impossible. So I then started modifying the snort.conf file, adding lines like "output alert_full: alert.full". This definitely gave me a lot more information, but still not the full packet data I want. So my question is: is there any way I can use Barnyard2 to send the full packet data of alerts to a human-readable file? Since I can't send it directly to syslog, I can create another process to take the data from that file and ship it off to another server. If not, what flags and/or snort.conf configuration would you recommend to get the most data possible while still handling quite a bit of traffic? In the end, these alerts will be shipped to a central server via an SSH tunnel. I'm trying to stay away from databases.
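
    One option to consider (a suggestion on my part, not something from the question): Snort's log_tcpdump output plugin writes the full packets of logged events to a pcap file, which is not human-readable directly but becomes so through tcpdump on the receiving end. A sketch:

        # snort.conf: also write full packet data for logged events to a pcap file
        output log_tcpdump: tcpdump.log

        # Render it human-readable later (or on the central server):
        tcpdump -nn -ttt -r /var/log/snort/tcpdump.log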

    Read the article

  • How can I get Thunderbird to automatically move messages?

    - by David Heffernan
    I have Thunderbird 15. I'd like to automatically move messages from one folder to another. My mail account is an IMAP account. My BlackBerry is also connected to the account, and when it sends mail, it places a copy on the IMAP server in a folder named Sent Items. I'd like those messages to be moved to my Inbox automatically. By default, message filters are applied automatically only to the Inbox. There is an extension to do this, Filter Subfolders, but it's only for TB3. What I have tried so far is:

    - Use the FiltaQuilla add-on to be able to filter messages by folder name.
    - Set the string property mail.server.default.applyIncomingFilters to true, as recommended here: http://blog.mozilla.org/bcrowder/

    But I can't get these filters to run automatically. I have a suspicion that filters only run automatically on incoming mail, and these are sent items; perhaps that's it, I just don't know. On the other hand, if I run the filters manually on that folder, they do indeed move the mail. Or perhaps the issue is that these messages are saved into the Sent Items folder marked as read; is it possible that filters are applied automatically only to unread items? If I could install an add-on that automatically ran the message filters on my folder, that would do it. Anyway, I'm at a loss now. Any suggestions are welcome. I'm not at all wedded to using filters; I just want to find a way to get these messages moved without human interaction!

    Read the article

  • How to list rpm packages/subpackages sorted by total size

    - by smci
    I'm looking for an easy way to post-process rpm -q output so it reports the total size of all subpackages matching a regexp; see the aspell* example below. (Short of scripting it in Python/Perl/awk, which is the next step.)

    (Motivation: I'm trying to remove a few GB of unnecessary packages from a CentOS install, so I'm trying to track down things that are a) large, b) unnecessary and c) not dependencies of anything useful like GNOME. Ultimately I want to pipe the output through sort -n to see what the space hogs are, before doing rpm -e.)

    My reporting command looks like [1]:

        cat unwanted | xargs rpm -q --qf '%9.{size} %{name}\n' > unwanted.size

    and here's just one example, where I'd like to see rpm's total for all aspell* subpackages:

        root# rpm -q --qf '%9.{size} %{name}\n' `rpm -qa | grep aspell`
          1040974 aspell
         16417158 aspell-es
          4862676 aspell-sv
          4334067 aspell-en
         23329116 aspell-fr
         13075210 aspell-de
         39342410 aspell-it
          8655094 aspell-ca
         62267635 aspell-cs
         16714477 aspell-da
         17579484 aspell-el
         10625591 aspell-no
         60719347 aspell-pl
         12907088 aspell-pt
          8007946 aspell-nl
          9425163 aspell-cy

    Three extra nice-to-have things:

    - List the dependencies/depending packages of each group (so I can figure out the uninstall order).
    - Group them by package group; that would be totally neat.
    - Human-readable size units like 'M'/'G' (like ls -h does). Can be done with a regexp and rounding on the size field.

    Footnote: I'm surprised up2date and yum don't add this sort of intelligence. Ideally you would want to see a tree of group-package-subpackage, with rolled-up sizes.

    Footnote 2: I see yum erase aspell* does actually produce this summary, but not in a query command.

    [1] where unwanted.txt is a text file of unnecessary packages obtained by diffing the output of:

        yum list installed | sed -e 's/\..*//g' > installed.txt
        diff --suppress-common-lines centos4_minimal.txt installed.txt | grep '>'

    and centos4_minimal.txt came from the Google doc given by that helpful blogger.
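
    A minimal awk sketch of the rollup. The second pipeline groups by "base" name by stripping the last hyphenated suffix, which is only an approximation of subpackage grouping; adjust the pattern to taste:

        # Total size of everything matching a pattern, in MB:
        rpm -qa --qf '%{SIZE} %{NAME}\n' | awk '
            $2 ~ /^aspell/ { sum += $1; n++ }
            END { printf "%d packages, %.1f MB total\n", n, sum/1024/1024 }'

        # Or roll up by base name and sort so the space hogs come last:
        rpm -qa --qf '%{SIZE} %{NAME}\n' | awk '
            { base = $2; sub(/-[a-z]*$/, "", base); total[base] += $1 }
            END { for (b in total) printf "%12d %s\n", total[b], b }' | sort -n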

    Read the article

  • Logic behind SCCM 2012 required PXE deployments

    - by Omnomnomnom
    I'm in the process of setting up Windows 7 deployment through PXE boot with Microsoft SCCM 2012. The imaging itself works very well, but I have a question about the logic behind PXE deployments. My setup is the following:

    - My Windows 7 deployment task sequence is deployed to the unknown computers collection (not required; press F12 to start installing).
    - The OSDComputerName variable is also set on the unknown computers collection, so unknown computers that are being imaged will prompt for a PC name.

    The computer then becomes known in SCCM and is added to the correct collection(s). But if I want to reinstall Windows on a known computer, things are different: I can do a required deployment of the imaging task sequence to the collection of computers, and Windows then installs through PXE, without any human interaction, keeping the original computer name. But because the initial deployment was not required, the "required PXE deployment" flag is not set, so as soon as I add a new computer to a collection with a required PXE deployment, it will start to reinstall Windows again. Alternatively, I can deploy the imaging task sequence to new unknown computers as required, so the flag gets set initially, but then it does not prompt for a computer name (it generates a name like MININT-xxx), which is also sort of what I want, because when I want to re-install a machine I want it to install without interaction. How can I solve this?

    Read the article

  • Performance of Virtual machines on very low end machines

    - by TheLQ
    I am managing a few cheap servers, as my user base isn't large enough to justify much more powerful ones, and I don't have the money lying around to invest in a server to prepare for a larger user base. So I'm stuck with the old hardware I have. I am toying with the idea of virtualizing all the current OSes with Xen (my first choice was VMware vSphere Hypervisor, AKA ESXi, but its HCL is too strict and my hardware is too old). Big reasons for doing so:

    - Ability to upgrade and scale hardware rapidly. This is most likely what I'll be doing as I distribute services, get a bigger server, centralize (electricity bills are horrible), distribute, get a bigger server, and so on. Doing this manually by reinstalling the entire OS would be a big pain.
    - Safety from me. I've made many rookie mistakes, like doing lots of risky work on a vital production server. With a VM I can just back up the state, work on my machine, test, and revert if necessary. No worries, and no OS reinstallation.
    - Safety from other factors. As I scale, servers might go down, and a backup VM can be started instantly.
    - Various other reasons.

    However, the limiting factor here is hardware, and I mean very depressing hardware. The current servers run off a Pentium 3 and a Pentium 4, with 512 MB and 768 MB of RAM respectively (the RAM can be upgraded soon, however). Is the virtualization layer small enough to run itself and a Linux OS effectively? Will performance be acceptable (50% CPU overhead for every operation isn't acceptable)? Does it leave enough RAM for the Linux OS? Is this even feasible?

    Read the article

  • ASUS N56VZ-S4209P Screen saver, Screen dimming,sleep/hibernation mode not working according to power management settings

    - by Alto Roos
    I have a problem with my ASUS N56VZ-S4209P (Windows 8, i7-3630QM) laptop. If I use the Fn+F7 command to turn off the screen, it turns itself back on after a few seconds (the time varies between 1 and 10 seconds). In addition, my screensaver doesn't activate at all, and the laptop doesn't dim its screen after X minutes or go into sleep/hibernation mode as specified in the power options, whether plugged in or on battery. As far as I can tell, this is not caused by the mouse or trackpad, since I can unplug the mouse and disable the trackpad and the problem persists. I presume it's all caused by the same underlying problem? Does anyone know of a fix? It would be greatly appreciated.

    UPDATE: I seem to have found a solution to the part of the problem where the screen turns itself back on. The Power4Gear application has a setting that lets you turn a "Presentation mode" on and off; if that setting is off, the screen no longer turns on without any human interaction. So that part of the problem is fixed.

    Read the article

  • Why would Windows use slower network interface despite route metrics?

    - by tim11g
    On my previous notebook, the Dell/Broadcom wireless adapter had an option to automatically disable wireless when a wired network was connected, so I never dealt with multiple active interfaces. My current system has an Intel wireless adapter, and they apparently haven't figured out how to turn it off when there is a wired connection. Unless I explicitly remember to disable wireless when docked, the connection stays active. That shouldn't be a problem (in theory), since the route metric should cause traffic to go over the fastest network (as indicated by the lowest metric in the routing table). Apparently not: I'm running a backup and seeing throughput of 25Mbps or so (which is consistent with 802.11g) when a perfectly good Gigabit Ethernet interface is also connected.

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0    192.168.1.254    192.168.1.104     10
                  0.0.0.0          0.0.0.0    192.168.1.254    192.168.1.109     25
                127.0.0.0        255.0.0.0         On-link         127.0.0.1    306
                127.0.0.1  255.255.255.255         On-link         127.0.0.1    306
          127.255.255.255  255.255.255.255         On-link         127.0.0.1    306

    Windows has correctly identified the Ethernet interface (.104) and assigned it the lower (preferred) metric. So the Ethernet interface should be used exclusively, right? Why is the Ethernet connection not being used? What other factors are involved? (This is with Windows 7, if it makes a difference.)
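
    One thing worth checking (a suggestion, with illustrative connection names): whether the per-interface metrics themselves favor the wired adapter, and if not, pinning them manually so the wired path always wins:

        rem Show per-interface metrics alongside the route table
        netsh interface ipv4 show interfaces

        rem Pin the metrics explicitly (names must match your connections)
        netsh interface ipv4 set interface "Local Area Connection" metric=5
        netsh interface ipv4 set interface "Wireless Network Connection" metric=50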

    Read the article

  • How do you verify a restore?

    - by Nic
    What tool(s) would you use to verify that a restored file structure is whole and complete? My environment is a Windows Server 2008 file server. (We use tape for backup, but that is inconsequential.) I am specifically looking for a tool that will:

    - Record the names of all files and folders below a specified directory
    - Optionally calculate checksums of each file encountered
    - Save this index in a human-readable format
    - Compare the index against restored data and show differences

    Some background: I recently had to replace the disks in our file server. The upgrade was scheduled to start 36 hours after the most recent full backup, so I created a differential backup. However, it turned out that one of our applications was clearing the archive bit on files saved to the server, so these files were not included in the differential backup. I was unaware of this until my users reported some files as missing. Aside from this, are there any other common methods for validating the integrity of a restore? I am frequently told that testing backups by restoring them is the only way to know that backups are working, but how do you deal with the case where it works 99% correctly and the other 1% fails silently?
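
    As a sketch of the index-and-compare approach (assuming PowerShell v2 is available on the server; the paths and function name are illustrative):

        # Build an index of paths + MD5 checksums under a root directory.
        function New-FileIndex($root, $out) {
            $md5 = [System.Security.Cryptography.MD5]::Create()
            Get-ChildItem -Path $root -Recurse |
                Where-Object { -not $_.PSIsContainer } |
                ForEach-Object {
                    $s = $_.OpenRead()
                    $hash = [BitConverter]::ToString($md5.ComputeHash($s)) -replace '-',''
                    $s.Close()
                    "{0}  {1}" -f $hash, $_.FullName.Substring($root.Length)
                } | Sort-Object | Out-File $out -Encoding utf8
        }

        New-FileIndex 'D:\Shares' 'C:\before.txt'    # before the backup
        # ...backup, disk swap, restore happen here...
        New-FileIndex 'D:\Shares' 'C:\after.txt'     # after the restore
        Compare-Object (Get-Content C:\before.txt) (Get-Content C:\after.txt)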

    Read the article

  • Deploying workstations - best practices?

    - by V. Romanov
    Hi guys,

    I've been researching the subject of workstation deployment for a while, and have found a ton of info and dozens of different methods and tools, but no "best practice" method that doesn't lack at least one feature I consider required for the solution to be perfect. I'm currently interested in Windows workstation deployment, but if the tools can be extended to Linux, that's added value. I want the deployment tools I use to be able to do the following:

    - Hardware-independent: I want my image or installation to have a minimum of hardware and driver dependency, so that I can use a single image/package for all workstations.
    - Easily updatable: I want to be able to update my image as easily as possible without redeploying/rebuilding/reimaging all configurations.
    - PXE-bootable deployment: I want the tools to be bootable off the network, so that I don't need a boot CD or USB key.
    - Scriptable for minimum human input: ideally, the tool should run automatically after being booted and perform a "default" deployment (including partitioning) unless prompted otherwise. That is: take a PC, hook it up, power on, PXE boot, and forget about it until the OS is deployed.

    I have found no single product or environment that does all this. The closest I came is Windows Deployment Services and the WIM image format. I also checked out numerous imaging and deployment tools, including Clonezilla, Ghost, g4u, WPKG and others, but most of them lack the hardware independence and updatability features. We currently have a Symantec Ghost server setup that does imaging over the network, but I'm not satisfied with it, as it has all the drawbacks listed above. Do you have suggestions for how to optimize the process of workstation deployment? How do you deploy workstations in your organization? Thanks!

    Vadim.

    Read the article

  • How expensive to run PC 24/7 or how to figure out how to determine it?

    - by jasondavis
    I realize this question is difficult to answer, as the answer differs based on the user's location, what their PC is doing, and what hardware it consists of, along with other factors, but I am hoping someone could give me a very rough estimate. I have always run many PCs in my home 24/7, and I am just now looking at it from a cost-of-electricity point of view.

    1) I live in Central Florida. Can anyone guesstimate the average monthly or daily cost of running your average PC? Intel quad-core processor, one SSD for the OS and programs, 4-5 1-2 TB hard drives in a RAID setup for data, and a 750-watt PSU. What would your guess be?

    2) Also, is there an accurate way to figure this out (non-super-technical and not confusing to a non-math person, please)? I have seen those Kill A Watt devices; do they figure this kind of thing out for you?

    3) Does a larger PSU make your PC consume more power?

    Thanks for any help; you can most likely tell I am somewhat lost about this!
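
    The arithmetic itself is simple once you know the machine's actual draw, which is exactly what a Kill A Watt measures (a 750 W supply is a ceiling, not the real consumption). A worked example under assumed numbers; both the 150 W average draw and the $0.12/kWh rate are illustrative assumptions:

        average draw (kW) x hours x rate ($/kWh) = cost
        0.150 kW x 24 h x 30 days = 108 kWh per month
        108 kWh x $0.12/kWh       = about $13 per month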

    Read the article

  • SQL Server 2008 R2 mirroring failing

    - by andriusn
    I have two Windows 2008 R2 (Amazon EC2) instances running SQL Server 2008 R2. I use 9TB of striped disks (9 x 1TB EBS volumes) for storage. One server runs as principal and the second as mirror. Both were started from the same image, with database and tlog files located on the striped disk. The mirror server has failed three times in the last two months, with these errors:

        EventID 823: The operating system returned error 2 (The system cannot find the file specified.) to SQL Server during a write at offset 0x00000048058a00 in file 'D:\TLogs***_log.ldf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

    and

        EventID 1454: Database mirroring will be suspended. Server instance 'xxxxxxxxxx' encountered error 823, state 6, severity 24 when it was acting as a mirroring partner for database '***'. The database mirroring partners might try to recover automatically from the error and resume the mirroring session. For more information, view the error log for additional error messages.

    followed by

        EventID 19019: The MSSQLSERVER service terminated unexpectedly.

    After this, rebooting the instance is necessary to restore mirroring. The first two times I thought it was hardware-related (striped disk failure) and relaunched the instance on new hardware, but the issue came back after a few weeks. It only affects mirror instances. Any help would be really appreciated. Thanks.

    Read the article

  • Slow performance on VMWare Linux server after Tomcat install

    - by Loftx
    We have a VMware ESXi 4.1 server hosting a number of Linux and Windows guests. Recently a new Linux guest was added to this server and seemed to be performing well. Tomcat and some other applications were then installed on this guest, which seems to have caused the server to run really slowly, without any obvious resource issues. The slow performance includes:

    - The time taken to bring up the password prompt over SSH is now a few seconds, when it was previously instantaneous.
    - The time taken to unzip a zip file, previously a few seconds, is now around 30 seconds.
    - The time taken to compile VMware Tools has increased by a similar factor.

    Neither the VMware console nor monitoring commands report any issues with high CPU or memory usage, but something is obviously slowing the server down. Does anyone have any ideas what may be causing this and how it can be resolved?

    Thanks, Tom

    EDIT: As per your questions, I've looked at some of the performance indicators on both the VM host and the guest. First I tried reserving the full amount of memory (3GB) for this VM; no other machines on this server have any memory reservation. The swap-in and swap-out rates for the host and guest are now both zero. Balloon memory on the guest is zero, and on the host is 3.5GB (total memory on the host is 12GB). The swap rate for the guest is also zero; swap used by the host is 200MB on average. Compression and decompression rates for the host and guest are zero. Command aborts for the host are zero. Read latency is very low: maximum 10ms, average 0.8ms. Write latency is higher: a few spikes to 170ms but mostly around 25ms. Is this bad? Queue command latency is zero. Physical disk read latency averages 5ms but is often 10ms; physical disk write latency averages 15ms but is often 20ms. I hope this helps; let me know if you need any more information.

    Read the article

  • Why are SMART error rates going down?

    - by Jeff Shattock
    I have a hard drive that's part of a Linux software RAID5 array. SMART reported that its multi_zone_error_rate was 0, then 1, then 3, so I figured I'd better start backing up more frequently and prepare to replace the drive. Now, today, the multi_zone_error_rate of that very same drive is back down to 1. It seems that two errors un-happened while I wasn't looking. I've also seen similar behaviour in the syslog on the server:

        Jun 7 21:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun 7 21:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun 7 21:01:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun 8 02:31:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
        Jun 8 03:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
        Jun 8 03:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200

    These are the drive's normalized values rather than the raw error counts that smartctl -a also reports, but the behaviour is similar: error rates changing, then undoing the change. None of these is the drive that had the multi_zone weirdness. I haven't seen any problems from the RAID; its most recent scrub (less than 24 hours ago) came back totally clean. The only thing I can think of is that the SMART reporting circuitry on the drive isn't working properly all the time. The cables are tight on the drive and board. What's going on here?
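
    To watch the normalized and raw numbers side by side over time, a small sketch (device names taken from the question; the awk field positions assume smartctl's standard attribute table layout):

        # Show normalized value, worst, and raw count for the flapping attributes
        for d in /dev/sdc /dev/sde /dev/sdg; do
            echo "== $d =="
            smartctl -A $d | awk '$2 ~ /Seek_Error_Rate|Multi_Zone_Error_Rate/ { print $2, "value="$4, "worst="$5, "raw="$10 }'
        done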

    Read the article

  • Value of Itanium over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment to run our Oracle database, currently on SUSE (potentially migrating to Red Hat). Our database is approximately 100GB and performs adequately on our current x86_64 hardware with approximately 6GB of RAM allocated to it. We are growing quickly, however, and will require more performance shortly. Given the cost of Oracle licenses, we would like to maximize the value of each license by choosing the most appropriate CPU to run the software on. The questions are:

    - Are there substantial benefits to looking at Itanium hardware? Are there any drawbacks?
    - Is there a point where Itanium starts to scale out better?
    - What are the long-term support options for Itanium?
    - Given the dominance of x86, would it be safer long term to stick with x86?
    - On average, what would be the performance benefit of implementing an Oracle database on Itanium over x86_64? Is this an issue at all, or will other factors (IO/RAM) cap out first?

    If anyone can point me towards solid documentation comparing the two platforms, with good case analysis of when to choose which, I'm more than happy to accept that as an answer.

    Read the article

  • Optimal Networking Setup for a 2-Story unit?

    - by user29336
    I am moving into a 4-bedroom, two-story unit of roughly 2,200 sq ft. I want the absolute maximum throughput possible at all focal points. We're all in internet-related industries; between gaming and web development, latency and throughput are major factors for us. Here are our main focal points:

    1) Garage (office), downstairs
    2) Each bedroom (x4), upstairs
    3) Living room, downstairs

    The fastest line we can get is Comcast 50Mb down / 5Mb up (Wideband). I am looking for the best way to achieve wireless and wired performance for our setup. Our gaming computers may be in our bedrooms, and we may also bring them down to the office every now and then for "LAN" sessions. Most wireless use will happen downstairs with our laptops, but since we may do LAN sessions, hard-wired latency may be important there too.

    My concerns: if we go wireless-only, there would be too much latency for gaming. I don't know if placing one D-Link DGL-4500 on the top floor would be enough; I currently own one (http://dlink.com/us/en/home-solutions/support/product/dgl-4500-xtreme-n-gaming-router). As far as I'm aware, wireless signals propagate best from the top down. Would this one wireless router on the top floor be enough on its own?

    My second strategy is a combination of wiring and wireless, but I'm not sure of the easiest way to do this. This is a place we're renting, so I'm not sure how much leeway we have with wiring, but we're all pretty competent: if we can't drill through a wall, we can probably "stitch" cables across the edges wherever needed. Thoughts on the optimal way to do this?

    Read the article
