Search Results

Search found 14745 results on 590 pages for 'setting'.

  • Timesync on HyperV with CentOS 6.2

    - by WaldenL
    I've got a CentOS VM (release 6.2) running under Hyper-V. I have Integration Services installed (now part of the base system), and CentOS shows the current clocksource is hyperv_clocksource. However, the time in the VM is about 10 minutes fast after a week of uptime. My understanding of the new IC and pluggable clocksource support is that this shouldn't happen any more. Is there any additional configuration needed to get the pluggable clocksource to "work"? I know there are plenty of links about setting kernel options to PIT and similar tweaks, but those all seem to pre-date the integrated clocksource support and, as I understand it, shouldn't be needed any longer. Nor should ntpd or adjtimex.
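
    A quick way to confirm what the guest is actually using, and to measure the drift without changing anything (a diagnostic sketch, not a fix; the NTP server name is just an example):

        # Confirm the active clocksource reported by the kernel
        cat /sys/devices/system/clocksource/clocksource0/current_clocksource

        # Query (but do not set) the offset against a public NTP server
        ntpdate -q pool.ntp.org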

  • Sharing an external hard drive in Ubuntu using Samba

    - by cambraca
    /media/MYDISK is where my hard drive is mounted automatically. I created a symlink and opened up its permissions:

        ln -s /media/MYDISK /home/camilo/MYDISK
        chmod 777 /home/camilo/MYDISK

    I'm setting up smb.conf like this:

        [myshare1]
        comment = external disk
        browsable = yes
        path = /home/camilo/MYDISK
        guest ok = yes
        read only = no
        create mask = 0775

    Also, in the [global] section I tried adding the following lines:

        follow symlinks = yes
        wide links = yes
        unix extensions = no

    The problem is that when browsing the shared folder in Windows 7, I get a "\\etc\myshare1 is not accessible" error. When I point the path to a regular folder it works fine, but when I point it directly to /media/MYDISK I get the same error. EDIT: to make it more interesting, I have no graphical interface, so I need to touch the config files directly.
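
    Two low-risk checks worth running after each edit (standard Samba tooling; the service name varies across Ubuntu releases, so treat smbd below as an assumption for this system):

        testparm -s                  # parse smb.conf and show the settings Samba will actually use
        sudo service smbd restart    # reload the daemon so the new share definition takes effect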

  • Is a 1000 Hz Linux kernel necessary if I have tickless and high-resolution timers?

    - by Bob
    I am trying to improve performance on my server. I have a few processes that need low jitter (less than 10 ms variance). I have a load average of 4 maximum on an i7-920 (4 physical cores, 8 with HT). There are about 10 processes using between 40% and 90% of a core in user mode. System usage is 3% total, and total CPU usage is 80% max. Will raising the kernel tick rate from 100 Hz to 1000 Hz improve the jitter if tickless operation and high-resolution timers are already enabled? This page seems to indicate it still does something: https://lkml.org/lkml/2009/4/28/401. How about changing from voluntary preemption (PREEMPT_VOLUNTARY) to fully preemptible (PREEMPT)?
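
    Before rebuilding anything, it is worth confirming what the running kernel was actually compiled with (the config path below is the usual Debian/Ubuntu location; adjust it if your distribution stores the kernel config elsewhere):

        grep -E 'CONFIG_HZ|CONFIG_NO_HZ|CONFIG_HIGH_RES_TIMERS|CONFIG_PREEMPT' /boot/config-$(uname -r)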

  • TeamViewer - only allow domain logins

    - by BloodyIron
    I recently started a systems admin job where TeamViewer is used pretty frequently. Another admin recently left, and the concern is that they still have access to all our systems because of how TeamViewer works. I want to migrate the entire environment to domain authentication. The documentation shows that setting up Windows (domain) authentication is easy, but I want to be sure that it is the only way to authenticate to a TeamViewer session here. I cannot yet find anything which explicitly says this. We have licensing for TeamViewer 5 and 6, I think. Right now we have 7 in the environment, but I think most installs are on a trial version, so I am likely to revert to 5 or 6.

  • Nautilus file share for multiple users is not working. Only owner gets access.

    - by Niklas
    I have always had trouble setting up Samba shares on Ubuntu. In the past I've tried getting it to work by configuring /etc/samba/smb.conf but never achieved what I wanted. Last time I managed to get it working by making a share with Nautilus' built-in file sharing (which uses Samba). Now when I try to do it again it doesn't work (running Ubuntu 10.10 Desktop x64). What I'm trying to achieve is a share which is available to multiple users (those who are in the same group), not just the owner (who is also in the group). As it is now, only the owner can connect; the others get an error when they try to connect from Windows 7. All the users are in the same group and the folder permissions are 770. The files and folders have the correct group settings. As far as I can tell there are no restrictions in User Settings blocking the other users, and I ticked "make available to other users (or whatever it says)" in the file sharing dialog. What can I do?
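
    Nautilus creates these shares through Samba's usershare mechanism, so the exact definition it registered, including the per-user ACL, can be inspected from a terminal (a read-only diagnostic):

        net usershare info --long    # lists each usershare with its path, guest setting and ACL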

  • My boss is feuding with his boss. My workload is expanding. What should I do?

    - by steve
    These two have always had a somewhat shaky relationship, even when they were on the same level. The other guy was recently promoted to director and now my boss reports to him. On the surface they appear to get along when they're together, but my boss despises the man and badmouths him every chance he gets (to peers, subordinates, etc.). He believes the director is setting him up to fail. The director and upper management are holding my boss responsible for the team's not-so-great performance as of late, and he's been playing games to make my boss look bad. Due to layoffs, we don't have the manpower to deliver the results that we did before, but expectations have not been lowered, and my boss is taking the heat for it. Now he's on the warpath and starting to micromanage. He's giving everyone more work. He's forcing us mid-level guys to take responsibility for the level-one techs' performance. I'm spending less and less time coding and more time babysitting vendors, techs, etc. I'm not so sure that's a bad thing because I'm sort of burnt out on coding, but I don't really care for the idea of being responsible for others' poor performance. Isn't that the manager's job? Anyway, do you guys have any suggestions on dealing with the situation?

  • Source-control your BI Publisher reports

    - by Dmitry Nefedkin
    Version control systems (VCS) like Subversion, Git and others have been widely adopted and have become a must-have tool in any software development project. Source artifacts are checked out, modified and checked in, and the whole history of changes is tracked by the VCS. But what if the development tool stores the source/configuration artifacts not on your laptop's hard drive, but in some shared repository instead? Then we definitely need a way to export/import our artifacts from/to this repository. Oracle BI Publisher report development is based on such a shared repository model (the catalog), and starting from BI Publisher 11.1.1.5 Oracle ships the Catalog Utility, which can be used to export/import reports from the command line. To start using the BI Publisher Catalog Utility you should:

    1. Go to the file system of the server where the BI Publisher binaries are installed and locate the following file: <MW_HOME>/Oracle_BI1/clients/bipublisher/BIPCatalogUtil.zip
    2. Copy the file to your local filesystem and unzip it. I will refer to this unzipped directory as <BIP_CLIENT_DIR> below.
    3. If you do not want to pass the BI Publisher server URL, username and password on every invocation, modify the corresponding values inside <BIP_CLIENT_DIR>/config/xmlp-client-config.xml.
    4. Open a terminal window and go to <BIP_CLIENT_DIR>/bin.
    5. Make sure that the following environment variables are set: JAVA_HOME, ORACLE_HOME.
    6. Now it's time to run the utility. If you are on Linux, just run BIPCatalogUtil.sh and pass the parameters according to the utility documentation. If you are on MS Windows, the bad news is that the command script for MS Windows is missing; support.oracle.com note 1333726.1 says that a temporary solution is to "create a .cmd file by setting up a classpath and copying the same commands from the .sh script". The good news is that I have created this script already; please download it from GitHub.

    Hope you will find this utility useful in your day-to-day BI Publisher development.
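
    For orientation, a typical export invocation looks roughly like the sketch below. The environment-variable values and the catalog/report paths are placeholders, and the parameter names should be double-checked against the documentation shipped with the utility:

        export JAVA_HOME=/usr/java/jdk1.6.0                        # placeholder path
        export ORACLE_HOME=/u01/app/oracle/middleware/Oracle_BI1   # placeholder path
        cd <BIP_CLIENT_DIR>/bin
        ./BIPCatalogUtil.sh -export catalogpath="/Sales/Quarterly Report.xdo" target=/tmp/quarterly.xdo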

  • Hidden files in Nautilus after extracting ISO

    - by Luis Alvarado
    I need to first point to the image below to explain a bit about what I find strange here: I extracted the contents of an ISO. From Nautilus I could only see two folders, but from the terminal I can see the rest of the files and folders. These folders do not have the . character in front of their names to hide them from plain sight. When I try "Show hidden files" in Nautilus, Nautilus closes itself; it does not show the hidden folders or files. Somehow they are hidden without the usual dot at the beginning of the name. They have my user's permissions, but there is no way of seeing them from Nautilus. I can interact with them, but the fact that they are visible inside the ISO and disappear after extracting them is what confuses me. What permission or setting makes these folders appear hidden and prevents Nautilus from showing them? And, like I said before, trying to show them with the "Show hidden files" option crashes Nautilus and exits it, forcing me to open Nautilus again from the Launcher.
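
    One mechanism worth ruling out: Nautilus also hides any file whose name appears in a plain-text file called .hidden in the same directory, with no leading dot required on the files themselves. A quick check from the terminal (the extraction path is hypothetical):

        ls -la /path/to/extracted        # compare this listing with what Nautilus shows
        cat /path/to/extracted/.hidden   # any names listed here are hidden by Nautilus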

  • Hibernate a user account when switching to a different account in Windows 7 Home Premium (64-bit)

    - by Sukotto
    Is there any way to have Windows 7 hibernate Bob's account when switching to Mary's account and vice versa? I.e.:

    1. Bob is logged in.
    2. Bob clicks Start > Shut down > Switch user.
    3. Bob's session is saved to disk.
    4. Mary logs in.
    5. Mary's session is restored as it was when Bob's turn started.

    Both are heavy users (30+ Chrome tabs open, multiple documents, multiple spreadsheets, music playing, etc.). I would like to set up the system so that each gets full use of the computer while still having all their open apps the way they left them. I suppose I could try setting up a VM for each, but I'd rather not add anything else to the mix here if I don't have to. This is Windows 7 Home Premium 64-bit running on a Lenovo G550 laptop.

  • PXE Boot PCLinuxOS ISO

    - by DBNotCooper
    I'm in the process of trying to convert some computers at my local school into diskless browser stations. We've identified PCLinuxOS as the OS we'd like to use, due to its easy interface for creating custom ISO images (we need WINE and some custom apps installed, as well as Firefox). I've been having problems figuring out how to boot an ISO via PXE. On our network I only have access to TFTP and HTTP, so I cannot use NFS. The machines all have enough memory (4 GB) that they could hold the ISO image in a RAM drive, if that helps. Currently I'm looking at gPXE with GRUB/MEMDISK, but I don't know if that's the right solution, or even where a good resource is for setting it up. Searching the web has proved fruitless, as most of the information is either NFS-specific or out of date. The other students and I would appreciate any help! :)
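
    For reference, the MEMDISK route is usually wired up through a pxelinux.cfg entry along these lines (a sketch with hypothetical file names; note that many live distros still expect to find their root filesystem on a physical CD once the kernel boots, so this needs testing against PCLinuxOS specifically):

        LABEL pclinuxos
          KERNEL memdisk
          INITRD pclinuxos-custom.iso
          APPEND iso raw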

  • Acer Ferrari 3000 in portrait display mode

    - by Riri
    I have an Acer Ferrari 3000 notebook that I'd like to connect to an external display and run in portrait mode. The computer has an ATI Radeon 9200 graphics chipset. I can't find the setting for portrait mode and am starting to believe the graphics card doesn't actually support it. I've looked at the latest drivers and can't see that anything has changed. What are my options? Can I buy some sort of external graphics adapter? Other possible solutions will get upvotes! ;)

  • Error starting Hyper-V VM

    - by Peter Bernier
    I'm trying to start a VM on a new Hyper-V installation and I'm receiving the following error:

        The virtual machine could not be started because the hypervisor is not running.
        The following actions may help you resolve the problem:
        1) Verify that the processor of the physical computer has a supported version of
           hardware-assisted virtualization.
        2) Verify that hardware-assisted virtualization and hardware-assisted data execution
           protection are enabled in the BIOS of the physical computer. (If you edit the BIOS
           to enable either setting, you must turn off the power to the physical computer and
           then turn it back on. Resetting the physical computer is not sufficient.)
        3) If you have made changes to the Boot Configuration Data store, review these changes
           to ensure that the hypervisor is configured to launch automatically.

    My machine supports virtualization at the hardware level and it is enabled in the BIOS. Why am I receiving this error?
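
    Item 3 in the error message is the one most often overlooked: the hypervisor launch type can be disabled in the BCD store even when every BIOS setting is correct. A check and the usual remedy, from an elevated command prompt (a reboot is required afterwards):

        rem Show the current boot entry, including hypervisorlaunchtype
        bcdedit /enum {current}

        rem Restore the default so the hypervisor starts at boot, then reboot
        bcdedit /set hypervisorlaunchtype auto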

  • Supermicro SmartOS setup issue with HDDs

    - by Andrew B.
    I'm trying to set up SmartOS on my new Supermicro server (SYS-6027R-N3RF4+). I have 8 HDDs + 2 SSDs installed in the hot-swappable drive bays. I have not configured the Intel RAID controller on the machine (it shows no RAID configured, and lists all 8 HDDs). When I boot SmartOS from my USB key for the first time and get to the initial zpool creation, I only see 2 disks listed. How can I tell which two drives are being referenced? How can I get SmartOS to recognize all 8+2 drives? Is there a BIOS setting I need to adjust?
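
    Before touching the BIOS, the SmartOS shell can at least identify which two disks the installer is seeing (both tools ship with SmartOS/illumos):

        diskinfo         # one line per visible disk: type, device, vendor, size
        echo | format    # prints the disk list with its cXtYdZ controller paths, then exits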

  • Why are we as an industry not more technically critical of our peers? [closed]

    - by Jarrod Roberson
    For example: I still see people in 2011 writing blog posts and tutorials that promote setting the Java CLASSPATH at the OS environment level. I see people writing C and C++ tutorials dated 2009 and newer where the first line of code is void main(). These are examples; I am not looking for specific answers to the above questions, but for why the culture of accepting sub-par knowledge in the industry is so rampant. I see people posting these same kinds of empirically wrong suggestions as answers on www.stackoverflow.com, and they get lots of upvotes and practically no downvotes! The answers that do get lots of downvotes are usually those answering a question that wasn't asked, through lack of reading comprehension, not answers that are incorrect per se. Is our industry that ignorant as a whole? I can understand the internet in general being lazy, apathetic and uninformed, but our industry should be more on top of things like this and far more critical of people who promote bad habits, outdated techniques and stale information. If we are really an engineering discipline, why aren't people held to a higher standard as they are in other engineering disciplines? I want to know why people accept bad advice and poor practices as the norm and are not more critical of their peers in the software industry.

  • Why does the order of uniforms get changed by the compiler?

    - by Aybe
    I have the following shader. Everything works fine when setting the value of one of the matrices, but I've discovered that getting a value back is incorrect for View and Projection; they are in reverse order.

        #version 430
        precision highp float;

        layout (location = 0) uniform mat4 Model;
        layout (location = 1) uniform mat4 View;
        layout (location = 2) uniform mat4 Projection;

        layout (location = 0) in vec3 in_position;
        layout (location = 1) in vec4 in_color;

        out vec4 out_color;

        void main(void)
        {
            gl_Position = Projection * View * Model * vec4(in_position, 1.0);
            out_color = in_color;
        }

    When querying their locations they are effectively reversed. I did a small test by renaming View to Piew, which puts it before Projection when sorted alphabetically, and then the order is correct. And if I remove layout (location = ...) from the uniforms, the problem disappears!? I am starting to think that this is a driver bug, as explained in the wiki. Do you know why the order of the uniforms changes whenever the shader is compiled? (Using an AMD HD 7850.)
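
    One practical note: with explicit layout (location = N) qualifiers, the application does not need to query locations at all; it can pass the declared number straight to the glUniform* calls, which sidesteps whatever ordering the compiler reports (a sketch, with a hypothetical matrix pointer):

        // 'View' was declared with layout (location = 1), so use 1 directly
        glUniformMatrix4fv(1, 1, GL_FALSE, viewMatrixPtr);  // viewMatrixPtr: const GLfloat[16]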

  • OpenStack Keystone issue

    - by kevin
    I am having an issue with Keystone. Keystone is configured with the users nova, glance and admin, and their endpoints are also defined. When performing keystone token-get it shows a token, but commands like keystone user-list show:

        No handlers could be found for logger "keystoneclient.client"
        Unable to communicate with identity service: 404 Not Found
        The resource could not be found. (HTTP 404)

    But after setting these environment variables it worked:

        export SERVICE_ENDPOINT=http://192.168.10.15:35357/v2.0
        export SERVICE_TOKEN=token

    However, after that, keystone token-get shows:

        'Client' object has no attribute 'service_catalog'

    Why is that? How can it be fixed? Any ideas?
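
    For what it's worth, the legacy keystone client has two mutually exclusive authentication modes: the admin-token mode driven by SERVICE_TOKEN/SERVICE_ENDPOINT (which performs no real login, hence no service catalog for token-get), and the normal user mode driven by OS_* variables. A sketch with placeholder credentials:

        unset SERVICE_TOKEN SERVICE_ENDPOINT   # leave admin-token mode first
        export OS_USERNAME=admin
        export OS_PASSWORD=secret              # placeholder
        export OS_TENANT_NAME=admin
        export OS_AUTH_URL=http://192.168.10.15:5000/v2.0
        keystone token-get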

  • Strange behavior in GNOME after upgrade from 13.04 to 13.10

    - by WayneBrady
    I did an automatic upgrade of my Ubuntu system from 13.04 to 13.10 today. Since then I have been in really big trouble. I use a VNC server with GNOME Classic; after the upgrade my GNOME was gone. So I tried everything, including checking the xstartup file of vncserver. Right now I've reached a point where I can't find the answer. The log file says that gnome-session-fallback is missing, even directly after I installed it with apt-get (I tried several times: installing, uninstalling and so on). I have no way to use it, as you can see in this terminal copy:

        root@ip-xxx:~/.vnc# apt-get install gnome-session-fallback
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed:
          gnome-session-fallback
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/2,914 B of archives.
        After this operation, 247 kB of additional disk space will be used.
        Selecting previously unselected package gnome-session-fallback.
        (Reading database ... 210977 files and directories currently installed.)
        Unpacking gnome-session-fallback (from .../gnome-session-fallback_1%3a3.6.2-0ubuntu15_all.deb) ...
        Setting up gnome-session-fallback (1:3.6.2-0ubuntu15) ...
        root@ip-xxx:~/.vnc# gnome-session-fallback
        The program 'gnome-session-fallback' is currently not installed. You can install it by typing:
        apt-get install gnome-session-fallback

    If you have some idea, please give me a hint. Thank you!
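
    One thing to check: around 13.10 the fallback session was renamed, and gnome-session-fallback became a tiny transitional package that no longer ships a binary of that name (the 2,914 B download size above is consistent with that). Assuming that is what happened here, verify it and then install the renamed package:

        dpkg -L gnome-session-fallback                  # see what the package actually contains
        sudo apt-get install gnome-session-flashback    # the renamed fallback session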

  • WEP key not working in Ubuntu

    - by stupid-idiot
    This time I've got a problem connecting to the internet from my old laptop with Ubuntu. I already had some problems setting up the radio on my network card, but I think it works now, since it finds some wireless networks. The problem is that I can't connect to my router via Wi-Fi even though I'm absolutely sure the WEP key I type in is correct; it works via LAN, though. Also, I'm not able to ping myself at 127.0.0.1. I'm sorry to say it, but I don't think I can stand this for much longer. Although it's much, much faster than Windows, it's useless, since I can't download any libraries for development. If anyone has any experience with this, even indirect, please leave an answer or comment. Thanks.
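
    Two quick terminal tests. The failing 127.0.0.1 ping suggests the loopback interface is down, which is a separate problem from the WEP key; and associating manually takes NetworkManager out of the picture (the interface name, SSID and key below are placeholders):

        sudo ifconfig lo up                                  # loopback must be up for 127.0.0.1 to answer
        sudo iwconfig wlan0 essid "MySSID" key s:mywepkey    # the s: prefix means an ASCII key; omit it for hex
        sudo dhclient wlan0                                  # then request an address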

  • OpenGL CPU vs. GPU

    - by Nitrex88
    So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved). For example, in a recent project involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader and tessellating each of these triangles into quads with 400 vertices, whose height was determined by a noise function. This worked fine and looked great, but easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster! This prompted me to question the benefits of using the GPU as much as possible. So, I was wondering if someone could describe the general pros and cons of using the GPU vs. the CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" approach was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't well optimized for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!
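
    For readers trying to picture the faster variant: the grid is tessellated once on the CPU, and the only per-vertex work left on the GPU is a single height lookup, which parallelizes trivially across vertices. A minimal sketch (it assumes a noise2d function is compiled into the shader separately and an MVP matrix is supplied as a uniform):

        #version 330
        uniform mat4 uMVP;
        uniform float uAmplitude;
        in vec3 in_position;          // flat grid vertex from the CPU-side tessellation

        float noise2d(vec2 p);        // assumed: noise implementation linked in elsewhere

        void main(void)
        {
            vec3 p = in_position;
            p.y += uAmplitude * noise2d(p.xz);  // one cheap displacement per vertex
            gl_Position = uMVP * vec4(p, 1.0);
        }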

  • Installing PHP with iconv on Mac OSX 10.6 gives error: "Undefined symbols for architecture x86_64"

    - by Jason
    I am setting up PHP on my local web server with iconv, which should be there by default, but I am getting the following error:

        Undefined symbols for architecture x86_64:
          "_iconv", referenced from:
              __php_iconv_strlen in iconv.o
              _php_iconv_string in iconv.o
              __php_iconv_strpos in iconv.o
              __php_iconv_appendl in iconv.o
              _zif_iconv_substr in iconv.o
              _zif_iconv_mime_encode in iconv.o
              _php_iconv_stream_filter_append_bucket in iconv.o
              ...
           (maybe you meant: _zif_iconv_strlen, _zif_iconv_strpos, _zif_iconv_mime_decode,
            _php_iconv_string, _zif_iconv_set_encoding, _zif_iconv_get_encoding,
            _zif_iconv_mime_decode_headers, _iconv_functions, _zif_iconv_mime_encode,
            _zif_iconv_strrpos, _iconv_module_entry, _iconv_globals, _zif_iconv_substr,
            _php_if_iconv, _zif_ob_iconv_handler)
        ld: symbol(s) not found for architecture x86_64

    Now, presumably this means that my iconv is not 64-bit. I tried to download libiconv and manually reinstall it as 64-bit, like so:

        CFLAGS='-arch i386 -arch ppc -arch ppc64 -arch x86_64' \
        CCFLAGS='-arch i386 -arch ppc -arch ppc64 -arch x86_64' \
        CXXFLAGS='-arch i386 -arch ppc -arch ppc64 -arch x86_64' \
        ./configure

    But I get the following error:

        configure: error: in `/Users/jason/Downloads/libiconv-1.13.1':
        configure: error: C compiler cannot create executables
        See `config.log' for more details.

    In my config.log file, it says at the bottom: configure: exit 77. Any suggestions?
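
    A common way this particular link failure gets resolved on OS X builds of PHP, offered as a hedged suggestion rather than a confirmed fix: point configure at the system libiconv in /usr, and if the link line still misses the symbol, pass the library to make explicitly:

        ./configure --with-iconv=/usr    # plus your other configure flags
        make EXTRA_LIBS=-liconv          # forces -liconv onto the link line if it is still missing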

  • Audio recording background noise

    - by Sergey
    For a long time I've been trying to get rid of the background noise that appears in every audio recording I make with my computer. I've tried different microphones, different sound settings, different drivers. An interesting fact: the volume of this noise is the same whether I use the internal notebook microphone or an external one. I've tried really good mics, so I'm sure the problem isn't there. The laptop on which this problem appears is an HP ProBook 4720, running Windows 7. P.S. I read an answer to a similar question ("Annoying sound from microphone in headphone") and tried everything mentioned there. The only differences: I don't have a "DC Offset Cancellation" option, and when I disable "Noise Suppression" and "Acoustic Echo Cancellation", the noise only becomes more noticeable. What should I do? EDIT: An example of the background noise I'm describing: http://eos-soft.com/files/noise.wma

  • How to give virtual machine access to the Internet, but block from LAN?

    - by Pekka
    I am setting up a virtual machine using Microsoft Virtual PC on Windows 7. The VM will run Windows XP. I want to set up a public-facing server in it for web pages, Subversion and other things, and instruct the router to port-forward any requests to that virtual machine. I managed to do that (I assigned the VM to the network adapter, and it is now acting as just another DHCP client), but to increase security I would like to block the VM from the rest of the LAN, so that it accepts only incoming connections from the Internet. For this to be effective in case of a compromise, it would have to happen at the VM level as far as I can see. Can this be done?

  • nginx 1.2.3 installed but remains at 1.1.19

    - by Nyxynyxx
    I've installed nginx 1.2.3 by adding a new PPA:

        sudo add-apt-repository ppa:nginx/stable
        sudo apt-get update
        sudo apt-get install nginx

    However, nginx -v still gives me 1.1.19. What happened? Output:

        The following packages will be upgraded:
          nginx
        1 upgraded, 0 newly installed, 0 to remove and 46 not upgraded.
        Need to get 61.8 kB of archives.
        After this operation, 3,072 B of additional disk space will be used.
        Get:1 http://ppa.launchpad.net/nginx/stable/ubuntu/ precise/main nginx all 1.2.3-0ubuntu0ppa3~precise [61.8 kB]
        Fetched 61.8 kB in 0s (89.7 kB/s)
        (Reading database ... 79914 files and directories currently installed.)
        Preparing to replace nginx 1.1.19-1 (using .../nginx_1.2.3-0ubuntu0ppa3~precise_all.deb) ...
        Unpacking replacement nginx ...
        Setting up nginx (1.2.3-0ubuntu0ppa3~precise) ...
        root@precise64:/var/www/apadment# nginx -v
        nginx version: nginx/1.1.19
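
    Note that the apt output above upgrades only the nginx meta-package ("nginx all"); the actual binary lives in a dependency package such as nginx-full, and a stale binary elsewhere on the PATH, or an old running master process, would also explain the version mismatch. Some quick checks:

        dpkg -l | grep nginx    # was nginx-full/nginx-common actually upgraded too?
        which -a nginx          # list every nginx binary on the PATH
        ps aux | grep nginx     # an old master process keeps serving the old version until restarted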

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently laying the groundwork for an ASP.NET MVC application, and I'm looking into what sorts of unit tests I should be prepared to write. I've seen multiple places where people essentially say "don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test." I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to find out half an hour later when my integration tests break; I want to know immediately.

    Sample scenario: let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the customer-retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test that hits the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify that the expected results are returned.

    What I'd like to do: to unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified.

    What I'm stuck doing: to test the UI, I write an integration test: set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify that the resulting page contains the appropriate values for the instance I specified.

    I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) remove the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged in a unit test that gets run multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or, if not bad, then not the expected way to get said feedback.)
