Search Results

Search found 611 results on 25 pages for 'bare'.

Page 11/25 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Why has ESXi 5.0 not used the software RAID configuration on my test server?

    - by kafka
    I've got a test server which was running WS 2008 Enterprise on the bare metal. It was correctly using the software RAID 1 configuration (2 x 250 GB disks appearing as one disk), set up on the Dell PowerEdge T110 (which meets compatibility requirements) without requiring any extra setup from me. (As an aside, I'm fairly sure it's software RAID, as we didn't spec a hardware RAID controller, if that's of any importance in this situation.) I am now testing installing ESXi 5.0 on this server to run some VMs. I've successfully installed ESXi and imported a VM fine, but it's showing 2 x 250 GB disks available as datastores, when they should appear as one volume. When I boot the server there is a RAID configuration screen you can enter, and I'm guessing this is what I'll have to do at some stage, but I now need to be very careful because one disk contains data that I want mirrored onto the other disk. What is the best thing to do in this situation?

    Read the article

  • What kind of CPU/GPU integration is offered by APUs?

    - by clabacchio
    I'm truly fascinated by the idea of GPGPU and using the GPU for heavy processing. I'm seeing that APUs (Accelerated Processing Units, CPU+GPU on the same chip) are also gaining considerable popularity. Do all APUs support GPGPU? Can it be used for general processing? And is it seamless, or does it require special code (like CUDA) to have the hard work done by the GPU? I'm not interested in bare graphics performance, but more in how much the GPU can accelerate "normal" CPU work.

    Read the article

  • How to approach scrum task burn-down when tasks involve multiple people?

    - by AgileMan
    In my company, a single task can never be completed by one individual; a separate person does QA and code review for each task. This means that each individual gives their own estimate, per task, of how much time it will take to complete. The problem is, how should I approach burn-down? If I aggregate the hours together, assume the following estimate: 10 hrs dev time, 4 hrs QA, 4 hrs code review, so the task estimate is 18 hrs. At the end of each day I ask that the task be updated with "how much time is left until it is done". However, each person generally just thinks about their own part of it. Should they mark their own effort remaining and then add the other effort estimates to that (see the sketch below)? How are you handling this? UPDATE To help clarify a few things: at my organization each task within a story requires 3 people. Someone to develop the task (do unit tests, etc.), a QA specialist to review the task (they primarily do integration and regression tests), and a tech lead to do code review. I don't think there is a wrong way or a right way, but this is our way ... and that won't be changing. We work as a team to complete even the smallest slice of a story whenever possible. You cannot actually test whether something works until it is dev complete, and you cannot review the quality of the code either ... so the best you can do is split things up into small logical slices so that the bare minimum functionality can be tested and reviewed as early in the process as possible. My question to those who work this way is how to burn down a "task" when they are set up this way. Unless a task has its own sub-tasks (which JIRA doesn't allow) ... I'm not sure of the best way to track "what's left" on a daily basis.
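
    One way to make that aggregation concrete: a minimal sketch (Python, task keys and hour figures purely illustrative) that keeps a separate "remaining hours" figure per role and treats the task's burn-down value as their sum, so nobody overwrites anyone else's estimate.

        # Hypothetical burn-down helper: each task tracks remaining hours per role,
        # and the number reported to the burn-down chart is simply their sum.
        tasks = {
            "PROJ-101": {"dev": 10, "qa": 4, "review": 4},
            "PROJ-102": {"dev": 6,  "qa": 2, "review": 2},
        }

        def task_remaining(task):
            """Hours left on a single task, all roles combined."""
            return sum(task.values())

        def sprint_remaining(all_tasks):
            """Total hours left across the sprint (today's burn-down value)."""
            return sum(task_remaining(t) for t in all_tasks.values())

        # Each person updates only their own figure at stand-up...
        tasks["PROJ-101"]["dev"] = 4      # developer: 6 hours of work done
        tasks["PROJ-101"]["qa"] = 4       # QA hasn't started yet, so unchanged

        print(sprint_remaining(tasks))    # aggregate plotted on the chart

    Whether this lives in a spreadsheet, a script, or a custom field, the point is that each role keeps its own remaining estimate and the chart only ever plots the sum.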

    Read the article

  • Windows 7 [virtualized] resolutions on a MacBook Pro Retina

    - by Trevor Sullivan
    So, I was considering picking up a MacBook Pro Retina, but then I realized that Apple forces you to scale the resolution, so you don't actually see the true benefit of the 2880x1800 display. Instead, you see upscaled, pixelated icons -- I saw this for myself in an Apple store a couple of days ago. That's OK though, because the main reason I'd purchase one is to run Windows 7 on it. However, I understand that the Boot Camp drivers have not been updated to work with the MBP Retina. The alternative would be to run Windows 7 virtualized, but I haven't found any conclusive evidence indicating whether the entire 2880x1800 resolution would be available when virtualized (VMware Fusion, VirtualBox, Parallels) the same as when running Windows 7 natively. My question is: does Windows 7 see the entire 2880x1800 resolution when virtualized, the same as running it on bare metal (Boot Camp)?

    Read the article

  • Reducing CPU load to absolute minimum [on hold]

    - by user191338
    I have had a couple of things go missing (I believe stolen) in my shared apartment, and I want to run my laptop constantly with a webcam attached, running surveillance software that records or takes pictures when motion is detected. I'd like to take whatever steps are necessary to run the laptop constantly without the fan coming on, as it's quite loud and, even though the laptop will be hidden, it can be heard. So I'd like to know what steps I can take to reduce CPU load to the bare minimum needed for the laptop to boot up, run the camera software, and send images via FTP or email when necessary. I have Windows 7 installed, though I can reinstall it clean. Which Windows services can I turn off, and what more extreme disabling measures of whatever kind can I take? The OS would only need to run the camera and Wi-Fi/networking. Thanks very much for any help.

    Read the article

  • Does Windows incremental backup include system state backup?

    - by Kossel
    I'm managing my very small office server with Windows Server 2008. Since I have only one server and the user group is really small, I split the first HDD into two partitions: one (C:) for Windows and Active Directory, another (D:) for Tomcat and the database. I'm doing incremental backups of C: and D: daily to a second HDD (E:) using Windows Server Backup. Is this enough to let me fully restore my server in case of disaster? I ask because I have read that there is also a system state backup; do I also have to do that periodically in order to get AD back, or can I do a full bare-metal recovery with just the incremental/full backups?

    Read the article

  • Storing bundled AMIs at Amazon EC2

    - by Industrial
    Hi everybody, I am totally new to configuring servers and working with EC2, so please bare with me. After a lot of hair pulling I managed to get a server with Ubuntu up and running with memcached and some other goodies that would make a great package for me. I thought, however, that when storing it as an AMI with this tool I would have memcached available the next time I launched an instance based on that image. What can I do to make sure that my configuration is saved properly to an instance? Question number two: can I somehow have a command run automatically on server creation, like initiating memcache with "memcache -d -m 1700 -u root", or even a batch of them?

    Read the article

  • When is it ever ok to write your own development tools? (editor into IDE)

    - by mario
    So I'm foremost using a text editor for coding. It's a very bare-bones editor; it provides mostly just syntax highlighting. But on rare occasions I also need to debug something, and that's when I have to resort to an IDE (mostly NetBeans, but I got a fiddly Eclipse/Aptana working as a second fallback). For general use, however, IDEs don't feel workable to me. It's a visual thing, being used to console UIs, etc. And switching back and forth between a text editor and an IDE is slightly cumbersome too. That's why I'm considering extending the editor, not really into a full-fledged IDE, but at the very least integrating a debug feature. Since I'm working in PHP, it seems not that much effort. The DBGp protocol allows the debug handler to live outside the editor, so it's just minor integration work and figuring out how to shoehorn a breakpoint feature into the editor (joe, btw). And while I've also got time to do that, I'm wondering if this is really worthwhile. In this case it's not a needed development tool; it's just for convenience, and the motivation is basically just not liking the existing solution. While over time I might extend and adapt this debugger thing, it will initially be as cumbersome as Eclipse. It inevitably starts out as a poor development tool. Furthermore, there is likely not much reuse. (Okay, this is not an important point. Most such software exists without much of a use case. And obviously, similar extensions already exist for emacs and vim, so it cannot be completely pointless.) But what's a general guideline on attempting to concoct custom development tools, particularly if they are not really needed but satisfy personal preferences? (Usability enhancement not certain.)
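
    For a sense of how small that externalized debug handler can be, here is a rough sketch (Python, details simplified and not tied to any particular editor; file path and line number are made up) of the IDE side of a DBGp session: the debugger engine (e.g. Xdebug) connects to the editor, which reads length-prefixed XML messages and sends NUL-terminated commands back.

        import socket

        HOST, PORT = "", 9000          # Xdebug's conventional default port

        def read_message(conn):
            """Read one DBGp message from the engine: '<length>' NUL '<xml>' NUL."""
            buf = b""
            while not buf.endswith(b"\x00"):              # read the length field
                buf += conn.recv(1)
            length = int(buf[:-1])
            data = b""
            while len(data) < length + 1:                 # XML payload plus trailing NUL
                data += conn.recv(length + 1 - len(data))
            return data[:-1].decode("utf-8")

        def send_command(conn, cmd):
            """DBGp commands are plain text terminated by a NUL byte."""
            conn.sendall(cmd.encode("utf-8") + b"\x00")

        with socket.socket() as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()                        # the engine connects to us
            print(read_message(conn))                     # <init ...> packet
            # Hypothetical example: set a line breakpoint, then let the script run.
            send_command(conn, "breakpoint_set -i 1 -t line -f file:///var/www/index.php -n 12")
            print(read_message(conn))
            send_command(conn, "run -i 2")
            print(read_message(conn))                     # stops at the breakpoint (or ends)

    Wiring the responses into the editor's UI is the real work; the protocol handling itself stays about this small.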

    Read the article

  • Ubuntu 12.04 resolution stuck on 640x480

    - by user212483
    I am new to Ubuntu and I was trying to get my HDMI-enabled TV to work with my Ubuntu 12.04 computer, so I installed an Nvidia driver using the "Additional Drivers" program. After that didn't work, I started playing around with the dual-booted Windows 7 on my computer. I had never used that Windows install, so it was stripped down to the bare minimum; I tried to adjust the resolution (it was at the lowest setting) and tried to connect the HDMI, which didn't work. After that I came back to my Ubuntu installation, only to find that it is now stuck at 640x480 resolution. I tried to remove the driver I installed, again using the "Additional Drivers" program, but that didn't help at all. The error that showed up was:
        Could not apply the stored configuration for monitors
        none of the selected modes were compatible with the possible modes:
        Trying modes for CRTC 63
        CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0)
        CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1)
        Trying modes for CRTC 64
        CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0)
        CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1)
    Any help would be appreciated, as this is very annoying. Thanks

    Read the article

  • Normalizing the direction to check if able to move

    - by spartan2417
    I have a room with 4 walls along the x and z axes respectively. My player, who is in first person (therefore the camera), should have collision detection with these walls. I'm relatively new to this, so please bare with me. I believe the way to do this is to calculate the direction and distance to the wall from the camera and then normalize the direction. However, I can only get this far before I don't know what to do; I think you then have to work out the angle relative to the direction you're facing? Here _dx and _dz are a small buffer in front of the camera:
        float CalcDirection(float Cam_x, float Cam_z, float Wall_x, float Wall_z)
        {
            //Calculate direction and distance to obstacle.
            float ob_dirx = Cam_x + _dx - Wall_x;
            float ob_dirz = Cam_z + _dz - Wall_z;
            float ob_dist = sqrt(ob_dirx*ob_dirx + ob_dirz*ob_dirz);

            //Normalise directions
            float ob_norm = sqrt(ob_dirx*ob_dirx + ob_dirz*ob_dirz);
            ob_dirx = (ob_dirx)/ob_norm;
            ob_dirz = (ob_dirz)/ob_norm;
    Can anyone explain in layman's terms how I work out the angle?
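
    One common way to get from the normalized direction to an angle is atan2. A minimal sketch of the idea (in Python for brevity; the facing vector and the thresholds are assumptions, not taken from the question):

        import math

        def signed_angle_to_wall(cam_x, cam_z, wall_x, wall_z, facing_x, facing_z):
            # Direction and distance from the camera to the wall point
            dir_x = wall_x - cam_x
            dir_z = wall_z - cam_z
            dist = math.hypot(dir_x, dir_z)

            # atan2 gives each direction's angle around the y axis (relative to +x)
            wall_angle = math.atan2(dir_z, dir_x)
            facing_angle = math.atan2(facing_z, facing_x)

            # Signed difference, wrapped into [-pi, pi]
            diff = (wall_angle - facing_angle + math.pi) % (2 * math.pi) - math.pi
            return diff, dist

        # Example: block movement if we are heading almost straight at a nearby wall
        angle, distance = signed_angle_to_wall(0.0, 0.0, 0.4, 0.1, 1.0, 0.0)
        if abs(angle) < math.radians(30) and distance < 0.5:
            print("collision: don't move forward")

    For a plain collision check, the dot product of the facing vector and the normalized wall direction (the cosine of the same angle) is often enough, without ever converting to an angle at all.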

    Read the article

  • Are all SATA cables compatible with SATA 3?

    - by Jim Fell
    I have an HP Compaq de5700 Small Form Factor desktop computer, and I am looking to upgrade its hard drive. When I open up the box it clearly has available SATA connectors on the motherboard, but no indication as to which SATA version (1, 2, or 3). The hard drive I am considering is SATA 3. My concern is that if the motherboard also supports SATA 3 and I use an old SATA cable (v1 or v2), might there be problems? This is a bare drive, so I don't expect a cable to come with it, and I have not been able to find the manual for this machine. Thanks.

    Read the article

  • When is the right time for a programmer to join an open source project?

    - by Mahesh
    Most newcomers to programming start with basic projects. Most C++ programmers spend some time on puzzles and contests, but this is not always helpful; sometimes you have to spend time on real projects. Starting your own project is a problem for a newbie's self-learning because of the lack of mentors and peers who can look at your code and give suggestions. Joining an existing open source project can solve this, and some projects are well suited to new programmers. Besides, everybody is a newbie at some point, so I'll try to frame this question from a beginner's perspective. I looked at a few questions on Stack Overflow before asking this, like "How do I join", "Bare minimum you need", "How to get involved with open source", "What level of programming", etc. But these don't help me when it comes to self-evaluating my skills. How do I find that out? How can I check what it takes to join an open source project, and whether I'm really comfortable with a huge codebase? My question is: when should you consider yourself ready to join an open source project? How do you test whether you're ready to take on the burden of big or small open source projects? How do you test whether you can work with version control, other programmers, tight schedules, etc.?

    Read the article

  • Why do my Windows 7 background and theme settings reset to default after every reboot?

    - by JubJub
    I have the new ASUS G73. Everything is perfect, but for reasons unknown to me, after customizing the background and theme colors, rebooting resets them all back to default (the theme that originally came with the laptop). Is there a way to stop this madness? Is there some kind of configuration-reset app that runs on startup on ASUS computers? This is my first ASUS, so I don't know what to expect from the factory bloatware. No, I don't feel like re-installing bare-bones Windows just to get rid of the bloatware; everything works fine except for this one little thing :( Sniff.

    Read the article

  • When can I publish a software tool written at work?

    - by AlexMA
    I'm working on a software problem at work that is fairly generic, but I can't find a library I like to solve it, so I'm considering writing one myself (at least a bare-bones version). I'll be writing some if not all of the 1.0 version at work, since I need it for the project. If it turns out well I might want to bring the work home and polish it up just for fun, and maybe release it as an open-source project. However, I'm concerned that if I wrote the 1.0 version at work I may not be allowed to do this in a legal sense. Obviously I could ask my boss (who probably won't care), but I'm curious how other programmers have dealt with this issue and where the law stands here. My one-sentence question is: when is it okay (legally/ethically) to open-source a software tool originally written by you, for work, at work? What if you have expanded the original source significantly during off-hours? Follow-up: Suppose I write the whole thing at home on my own time and then simply use it at work; does that change things drastically? Follow-up 2: Note that I'm not trying to rip off my employer (I understand that they're paying me to build products that they own) -- I'm just wondering if there's a fair way of doing this for all involved... It would be nice if some nonprofit down the road could use my code and save themselves some time. Also, there's another issue at stake: if I write the library for a very simple, generic thing (like HTML tables in JavaScript), does that mean I can never again do so on my own time without putting myself at legal risk (even if it was a whole new fresh rewrite or a segment of a larger project)? Am I surrendering my right to write code for this sort of project for the rest of my life (without this company's permission), since the code at work might still be somewhere in my brain influencing me? This seems related to software patents, as a side note.

    Read the article

  • How Do Computers Work? [closed]

    - by Rob P.
    This is almost embarrassing to ask... I have a degree in Computer Science (and a second one in progress). I've worked as a full-time .NET developer for nearly five years. I generally seem competent at what I do. But I Don't Know How Computers Work! Please, bare with me for a second. A quick Google of 'How a Computer Works' will yield lots and lots of results, but I struggled to find one that really answered what I'm looking for. I realize this is a huge, huge question, so really, if you can just give me some keywords or some direction. I know there are components... the power supply, the motherboard, RAM, CPU, etc... and I get the 'general idea' of what they do. But I really don't understand how you go from a line of code like Console.ReadLine() in .NET (or Java or C++) and have it actually do stuff. Sure, I'm vaguely aware of MSIL (in the case of .NET), and that some magic happens with the JIT compiler and it turns into native code (I think). I'm told Java is similar, and C++ cuts out the middle step. I've done some mainframe assembly; it was a few years back now. I remember there were some instructions and some CPU registers, and I wrote code... and then some magic happened... and my program would work (or crash). From what I understand, an 'emulator' would simulate what happens when you call an instruction and update the CPU registers; but what makes those instructions work the way they do? Does this turn into an electronics question and not a 'computer' question? I'm guessing there isn't any practical reason for me to understand this, but I feel like I should be able to. (Yes, this is what happens when you spend a day with a small child. It takes them about 10 minutes and five iterations of asking 'Why?' for you to realize how much you don't know.)

    Read the article

  • DPM - Monitoring is green, Protection has error and Latest rec point is old. How do I interpret that?

    - by LosManos
    How do I read the DPM info in this case? Monitoring says Failed, but Protection shows OK while having a latest recovery point from last year. Under the Monitoring tab I have a failure for:
        Source                     | Computer     | Protection group | Start time
        Computer\System Protection | MyServerName | Recovery point   | 2014-06-09 19:00:00
    which shows me that something happened last night. But under the Protection tab everything is green. There I have:
        Protection group member | Protection status
        Protection group ..name..
          Computer: MyServerName
            Computer\System protection
              Bare metal recovery    OK
        ...
        Latest recovery point: 2013-12-12 06:32:54
    My guess is that the backup failed once last night but succeeded later, and that it then found there hasn't been any change since some time last year, left it alone, and flagged it OK. Is that the right way to interpret this?

    Read the article

  • Blocking popups and ads

    - by user74364
    I'm having a fight with ads, popups and tracking cookies, but I'm running into some issues.
        Software used: Chromium 18.0.1025.168
        Extensions used:
          Adblock Plus (Beta) 1.2
          AdBlock+ Element Hiding Helper 1.1.9.18
          Better Pop Up Blocker 2.1.6
          Ghostery 3.0.0
    With this configuration I always get this error: "Warning: This extension failed to modify a network request because the modification conflicted with another extension." I know that if I disable Better Pop Up Blocker this goes away. That's perfectly normal, since those extensions try to block the same things. The problem is, I can't live without all of them! Can anyone advise me on a good configuration? I can't live without Adblock Plus, because I hate ads. Better Pop Up Blocker is essential too (believe me, Chrome doesn't block a lot of popups, and I have a website or two that can prove that). And Ghostery is a must... I can't bare the idea of being tracked all the time by some companies. So I'm kind of lost here! Everything is needed, but they conflict with each other. I mean, there has to be a perfect combination out there; I know I'm not the only one hating the privacy issues nowadays! Really thankful for any tips, guys.

    Read the article

  • Is there a simple, flat, XML-based query-able data storage solution? [closed]

    - by alex gray
    I have been in long pursuit of an XML-based, query-able data store, and despite continued searches and evaluations, I have yet to find a solution that meets my needs, which include: data wholly contained within XML nodes, in flat text files; a "native", or at least unobtrusive, method with which to perform Create/Read/Update/Delete (CRUD) operations on the "schema" (I would consider access via HTTP, XHR, JavaScript, PHP, Bash, or Perl to be unobtrusive, depending on the complexity of the set of dependencies); server-side file-system reads and writes; and a client-side interface element, accessible in any browser without a plug-in. Some extra, preferred (but optional) requirements include: responding to simple SQL, or similar-syntax, queries, and serving the data on a bare-bones HTTPS server, with no "extra stuff", via either XMLHttpRequest, HTTP proper, or JSON. A few thoughts: What I'm looking for may be possible via some Java server implementations, but for the sake of this question, please do not suggest that, unless it meets ALL the requirements. Java, especially on the client side, is not really an option, nor is it appealing from a development viewpoint. I know walking the filesystem is a stretch, and I've heard it's possible with XPath or XSLT, but as far as I know that's not ready for primetime, nor even yet a recommendation. However, the ability to recursively traverse the filesystem is needed for such a system to be of useful facility. At this point, I have basically implemented what I described via, of all things, CGI and Bash, but there has to be an easier way. Thoughts?
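
    For a rough sense of how little machinery the flat-file approach needs, here is a minimal sketch (Python standard library only; the store path, element names and the example query are all assumptions) that recursively walks a directory of XML files and answers a simple attribute query, plus a crude update, which is essentially the CGI+Bash setup described above with a real parser behind it.

        import xml.etree.ElementTree as ET
        from pathlib import Path

        DATA_DIR = Path("/srv/xmlstore")   # hypothetical store: flat XML files, possibly nested in folders

        def query(tag, **attrs):
            """Yield (file, element) for every <tag> under DATA_DIR matching all given attributes."""
            for xml_file in DATA_DIR.rglob("*.xml"):
                root = ET.parse(xml_file).getroot()
                for node in root.iter(tag):
                    if all(node.get(k) == v for k, v in attrs.items()):
                        yield xml_file, node

        def update(tag, match, changes):
            """Crude 'U' in CRUD: rewrite matching nodes in place, file by file."""
            for xml_file in DATA_DIR.rglob("*.xml"):
                tree = ET.parse(xml_file)
                dirty = False
                for node in tree.getroot().iter(tag):
                    if all(node.get(k) == v for k, v in match.items()):
                        for k, v in changes.items():
                            node.set(k, v)
                        dirty = True
                if dirty:
                    tree.write(xml_file, encoding="utf-8")

        # Example query: every open <record> anywhere in the store
        for path, rec in query("record", status="open"):
            print(path, rec.get("id"))

    Putting query() and update() behind a small CGI or WSGI handler would cover the HTTP/XHR access requirement; whether it scales past a few thousand files is a separate question.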

    Read the article

  • Server installation logging / logbook / diary?

    - by The MYYN
    Are there field-tested ways to keep a kind of logbook for a server? Including: software installations (and de-installations), custom configurations (e.g. of a webserver, SSH daemon, etc.), personal notes, and the big picture. I am preparing a server and would like to extensively document its state and how it was established over time, so that a new person can easily see what's going on and why. The setup is not too complicated, but I would like to do it anyway. I once used something like "Maintain /etc with Mercurial" on Debian and it was nice, but I am looking for a slightly more flexible solution. Addendum: I am interested in logging and documentation first. In an ideal world, however, I would like to have a command which, in a few steps, would take me from a bare, newly installed Unix system to a functional environment with all the components set up and in place, by means of, say, an 'executable' log. But that would be very ideal, I imagine.

    Read the article

  • How to make the Microsoft Word 2010 web install version completely download itself?

    - by Paperflyer
    I installed Microsoft Word 2010 using the handy web installation feature. The way this works is that Word installs a bare minimum of functionality and any time it needs a feature that has not been downloaded yet, it will download it in-place just as needed. The thing is, this is stalling Word every few minutes. Every few minutes, Word takes a small nap while it tries to find some obscure feature somewhere. This is really annoying. Is there a way to just tell the thing to completely download itself? This was a nice functionality for the trial but it is extremely annoying in actual use.

    Read the article

  • "SIOCSIFADDR: No such device" after restoring backup

    - by Paul Tomblin
    I bought some new hardware, and tried to restore my backup on it. When I boot, I don't get a network connection. If I type "ifup eth0" on the command line, I see the messages:
        SIOCSIFADDR: No such device
        eth0: No such device
    lspci shows an ethernet controller (Intel 82546GB). ifconfig does not show any controller except loopback. I tried installing bare Debian on the machine and the network worked then, but now I want to make it like my old machine was. Googling this problem only seems to find people having this problem in VMs. I'm not in a VM.

    Read the article

  • Private Git repo using Smart HTTP with LDAP authentication

    - by ALOToverflow
    I've been crawling the interwebz and getting my hands dirty for the last few days, but I can't seem to make it all work together. I managed to get an HTTP repo working on Ubuntu 10.04 over Smart HTTP (pull and push over HTTP) for a single repo. This means that I do the initial setup over SSH to the server (git init --bare) and after that the clients can pull and push to it (git clone http://servername/allgitrepos/repo.git). Unfortunately it's impossible to add a new repo without SSHing to the server and adding it manually, i.e. git push http://servername/allgitrepos/repo2.git would fail complaining about git update-server-info (which seems to be a general error message), even though allgitrepos is readable, writable and executable for everyone. So far the repository is anonymous, so I would like to authenticate using LDAP and also use the LDAP credentials for the git commits. So, how can I push new repos to the server, and how can I use the LDAP creds for the commits? Thanks

    Read the article

  • Mount linux partition as Windows network share over internet

    - by CptEO
    I have a Linux server running RHEL 6. I have two Windows servers. All servers are connected directly to the web with an external IP, they are not in a local lan. What I would like to achieve is to setup the Linux server so that it offers a single share (the whole partition) that can be mounted as network drive within Windows. I don't want to use any 3rd party software to access the linux server because I want to use the linux server as a backup for Bare Metal Restore. In order to do so, I need to be able to access the linux partition from within the Windows Recovery Enviroment where I cannot install any 3rd party software. The linux server should only be accessible from given IP addresses (e.g. the 2 windows servers). Does anyone know if the setup I would like to have is possible?

    Read the article

  • git-receive-pack: command not found

    - by Philippe Mongeau
    I made a git repo on a local machine with "git init --bare" and added it as the remote origin for the project on my main computer over SSH: git remote add origin [email protected]:repoName.git. I was able to commit and push from my main computer to the other computer the day I created the repo, but today it didn't work. When I do "git push origin" it returns this error:
        bash: line 1: git-receive-pack: command not found
        fatal: The remote end hung up unexpectedly
    Both machines are Macs, the main one running Leopard and the server running Tiger. I think it may be related to the $PATH of git on the server, but I'm not sure. I used these instructions to create my git server: http://blog.commonthread.com/2008/4/14/setting-up-a-git-server

    Read the article

  • Using mod_rewrite to change http to https? (not redirect)

    - by PaulHanak
    This might sound a little crazy, but bare with me. I basically have an include file, let's say inc-navigation.html, that has absolute paths (http://www.pathtoimage.com/image.com) and is included on EVERY PAGE. Well, using SSL, I can't use that same include file because it is not referencing https://. What a pain! So I was thinking of using .htaccess to do a URL rewrite of all HTTP references to HTTPS when the browser requests an https page. Again, just to be clear, I don't want to "redirect", just "replace". So I have this:
        RewriteCond %{HTTPS} !=on
        RewriteRule ^http$ https
    but it doesn't seem to be working. I probably have the syntax wrong, though. :) That is, if this type of thing is even possible!?

    Read the article
