Search Results

Search found 68249 results on 2730 pages for 'sudo work'.


  • (How) does deleting open files on Linux and a FAT file system work?

    - by lxgr
    It's clear to me how deleting open files works on filesystems that use inodes - unlink() just decrements the link count, and once the count reaches zero and the last file handle to the file is closed, the inode is removed. But how does it work when using a file system that doesn't use inodes, like FAT32, with Linux? Some experiments suggest that deleting open files is still possible (unlike on Windows, where the unlink call wouldn't succeed), but what happens when the file system is uncleanly unmounted? How does Linux mark the files as unlinked, when the file system itself doesn't support such an operation? Is the directory entry deleted right away but kept in memory (which would guarantee deletion after unmounting in any case, but would leave the on-disk file system in an inconsistent state in the meantime)? Or is the deletion only recorded in memory and written out when the last file handle is closed, avoiding possible corruption but restoring the deleted files after an unclean unmount?
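
    The inode-based behaviour described above is easy to reproduce from a shell, which makes the FAT32 question concrete: the data stays reachable through the open descriptor even though the name is gone. A minimal sketch (the file path is just an example):

      # Minimal sketch: an unlinked file remains readable while a descriptor is open.
      echo "hello" > /tmp/unlink-demo.txt
      exec 3< /tmp/unlink-demo.txt   # keep an open read descriptor on the file
      rm /tmp/unlink-demo.txt        # remove the directory entry (unlink)
      cat <&3                        # still prints "hello" - the data lives until fd 3 closes
      exec 3<&-                      # close the descriptor; only now is the storage freed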

    Read the article

  • How can I watch full size video on 1 monitor and work on another?

    - by jasondavis
    Here is my situation. I have 3 monitors hooked up to my new PC, running off 2 video cards. When I watch a video on one of the monitors and make it go full screen on that monitor it is great; however, as soon as I click anywhere on one of the other 2 monitors, the video drops out of full-screen mode and goes back to its original size. This happens when watching a Flash or Silverlight based video in Google Chrome, as well as when I watch video from a player such as iTunes. Is it possible to make a video play full screen on one of my monitors and still work in the other two screens without losing full-screen mode on the one monitor?

    Read the article

  • How does Visual Studio's source control integration work with Perforce?

    - by Weeble
    We're using Perforce and Visual Studio. Whenever we create a branch, some projects will not be bound to source control unless we use "Open from Source Control", but other projects work regardless. From my investigations, I know some of the things involved. In our .csproj files there are these settings: SccProjectName, SccLocalPath, SccAuxPath and SccProvider. Sometimes they are all set to "SAK", sometimes not; it seems things are more likely to work if they say "SAK". In our .sln file there are settings for many of the projects: SccLocalPath#, SccProjectFilePathRelativizedFromConnection# and SccProjectUniqueName# (the # is a number that identifies each project). SccLocalPath is a path relative to the solution file. Often it is ".", sometimes it is the folder that the project is in, and sometimes it is ".." or "..\..", and it seems to be bad for it to point to a folder above the solution folder. The relativized one is a path from that folder to the project file; it is missing entirely if SccLocalPath points to the project's folder. If SccLocalPath has ".." in it, this path might include folder names that are not the same between branches, which I think causes problems. So, to finally get to the specifics, I'd like to know: What happens when you do "Change source control" and bind projects? How does Visual Studio decide what to put in the project and solution files? What happens when you do "Open from source control"? What's this "connection" folder that SccLocalPath and SccProjectFilePathRelativizedFromConnection refer to, and how does Visual Studio/Perforce pick it? Is there some recommended way to make the source control bindings continue to work even when you create a new branch of the solution?

    Read the article

  • Firefox 3.6.3 on Snow Leopard 10.6.3 - symbolic link to command line binary doesn't work?

    - by David Watson
    I have Firefox 3.6.3 installed on Mac OS X Snow Leopard (10.6.3) from the DMG. I can run Firefox from the terminal using /Applications/Firefox.app/Contents/MacOS/firefox-bin. However, if I create a symbolic link: sudo ln -s /Applications/Firefox.app/Contents/MacOS/firefox-bin /bin/firefox then it refuses to run, or at least to display. When I issue "firefox" from the terminal, I can see the process in top, but the GUI never appears. :/
      $ ls -lr /bin/firefox
      lrwxr-xr-x 1 root wheel 52 May 5 15:19 /bin/firefox -> /Applications/Firefox.app/Contents/MacOS/firefox-bin
    Any ideas? Thanks, David

    Read the article

  • Trying to install TeamViewer on Ubuntu 12.04

    - by Teknikk
    I recently got Ubuntu installed on my server and wanted to install TeamViewer so I could easily manage the virtual machines. However, I get errors when installing it from the app store, and I also get more detailed errors in the terminal. Error output:
      tek@tek-G53SW:~/Download$ sudo dpkg -i ipts teamviewer_linux_x64.deb
      dpkg: error processing ipts (--install):
       cannot access archive: No such file or directory
      (Reading database ... 142115 files and directories currently installed.)
      Preparing to replace teamviewer7 7.0.9360 (using teamviewer_linux_x64.deb) ...
      Unpacking replacement teamviewer7 ...
      dpkg: dependency problems prevent configuration of teamviewer7:
       teamviewer7 depends on libc6-i386 (>= 2.7); however:
        Package libc6-i386 is not installed.
       teamviewer7 depends on lib32asound2; however:
        Package lib32asound2 is not installed.
       teamviewer7 depends on lib32z1; however:
        Package lib32z1 is not installed.
       teamviewer7 depends on ia32-libs; however:
        Package ia32-libs is not installed.
      dpkg: error processing teamviewer7 (--install):
       dependency problems - leaving unconfigured
      Errors were encountered while processing:
       ipts
       teamviewer7
    I tried to install it manually, but with no luck; I have heard some others have this problem. I am running Ubuntu 12.04 x64. Error when running sudo apt-get install libc6-i386 lib32asound2 lib32z1 ia32-libs:
      tek@tek-G53SW:~/Download$ sudo apt-get install libc6-i386 lib32asound2 lib32z1 ia32-libs
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      You might want to run 'apt-get -f install' to correct these:
      The following packages have unmet dependencies:
       ia32-libs : Depends: ia32-libs-multiarch
      E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
      tek@tek-G53SW:~/Download$
    More errors:
      tek@tek-G53SW:~/Download$ sudo apt-get -f install
      [sudo] password for tek:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following packages will be REMOVED:
        teamviewer7
      0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
      1 not fully installed or removed.
      After this operation, 81.9 MB disk space will be freed.
      Do you want to continue [Y/n]? y
      (Reading database ... 142441 files and directories currently installed.)
      Removing teamviewer7 ...
      tek@tek-G53SW:~/Download$ sudo apt-get install libc6-i386 lib32asound2 lib32z1 ia32-libs
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      lib32z1 is already the newest version.
      libc6-i386 is already the newest version.
      lib32asound2 is already the newest version.
      Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
      The following information may help to resolve the situation:
      The following packages have unmet dependencies:
       ia32-libs : Depends: ia32-libs-multiarch
      E: Unable to correct problems, you have held broken packages.
      tek@tek-G53SW:~/Download$ sudo apt-get install ia32-libs-multiarch
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
      The following information may help to resolve the situation:
      The following packages have unmet dependencies:
       ia32-libs-multiarch:i386 : Depends: gstreamer0.10-plugins-good:i386 but it is not going to be installed
                                  Depends: gtk2-engines:i386 but it is not going to be installed
                                  Depends: gtk2-engines-murrine:i386 but it is not going to be installed
                                  Depends: gtk2-engines-pixbuf:i386 but it is not going to be installed
                                  Depends: gtk2-engines-oxygen:i386 but it is not going to be installed
                                  Depends: ibus-gtk:i386 but it is not going to be installed
                                  Depends: libcanberra-gtk-module:i386 but it is not going to be installed
                                  Depends: libcups2:i386 but it is not going to be installed
                                  Depends: libcupsimage2:i386 but it is not going to be installed
                                  Depends: libfontconfig1:i386 but it is not going to be installed
                                  Depends: libgail-common:i386 but it is not going to be installed
                                  Depends: libgphoto2-2:i386 but it is not going to be installed
                                  Depends: libgtk2.0-0:i386 but it is not going to be installed
                                  Depends: libnss3:i386 but it is not going to be installed
                                  Depends: libqt4-opengl:i386 but it is not going to be installed
                                  Depends: libqt4-qt3support:i386 but it is not going to be installed
                                  Depends: libqt4-scripttools:i386 but it is not going to be installed
                                  Depends: libqt4-svg:i386 but it is not going to be installed
                                  Depends: libqtgui4:i386 but it is not going to be installed
                                  Depends: libqtwebkit4:i386 but it is not going to be installed
                                  Depends: librsvg2-common:i386 but it is not going to be installed
                                  Depends: libsane:i386 but it is not going to be installed
      E: Unable to correct problems, you have held broken packages.
      tek@tek-G53SW:~/Download$

    Read the article

  • Why doesn't Perl file glob() work outside of a loop in scalar context?

    - by Rob
    According to the Perl documentation on file globbing, the <*> operator or glob() function, when used in scalar context, should iterate through the list of files matching the specified pattern, returning the next file name each time it is called or undef when there are no more files. But the iterating process only seems to work from within a loop; if it isn't in a loop, it seems to start over immediately, before all values have been read. From the Perl docs: "In scalar context, glob iterates through such filename expansions, returning undef when the list is exhausted." (http://perldoc.perl.org/functions/glob.html) "However, in scalar context the operator returns the next value each time it's called, or undef when the list has run out." (http://perldoc.perl.org/perlop.html#I/O-Operators) Example code:
      use warnings;
      use strict;

      my $filename;

      # in scalar context, <*> should return the next file name
      # each time it is called or undef when the list has run out
      $filename = <*>;
      print "$filename\n";
      $filename = <*>;     # doesn't work as documented, starts over and
      print "$filename\n"; # always returns the same file name
      $filename = <*>;
      print "$filename\n";

      print "\n";

      print "$filename\n" while $filename = <*>; # works in a loop, returns next file
                                                 # each time it is called
    In a directory with 3 files - file1.txt, file2.txt, and file3.txt - the above code will output:
      file1.txt
      file1.txt
      file1.txt
      file1.txt
      file2.txt
      file3.txt
    Note: the actual Perl script should be outside the test directory, or you will see the file name of the script in the output as well. Am I doing something wrong here, or is this how it is supposed to work?

    Read the article

  • Is it customary for software companies to forbid code authors from taking credit for their work? Do code authors have a say?

    - by J Smith
    The company I work for has decided that the source code for a set of tools they make available to customers is also going to be made available to those customers. Since I am the author of that source code, and since many source code files have my name written in them as part of class declaration documentation comments, I've been asked to remove author information from the source code files, even though the license headers at the beginning of each source file make it clear that the company is the owner of the code. Since I'm relatively new to this industry I was wondering whether it's considered typical for companies that decide to make their source code available to third parties to not allow the code authors to take some amount of credit for their work, even when it's clear that the code author is not the owner of the code. Am I right in assuming that I don't have a say on the matter?

    Read the article

  • Super simple CSS tooltip in a table, why is it not displaying and can I make it work?

    - by Kyle Sevenoaks
    Hi, I have been trying to implement many different tooltips on this page for my client; he's adamant that we have a picture of the product show up when you hover over the product name on the order page. I decided to use the super simple CSS tooltip: it's very easy to implement, does exactly what we want, and works on a dynamic page, which the others I tried didn't. I have made an example here: CSS tooltip in table example. This page used to be displayed using divs; I have since changed it to a table, as it's tabular data and easier to work with. It worked fine when it used divs, but now that it's in a table it won't display the span on hover. My questions are: Why is it not working? How can I make it work? If not, does anyone know another super easy to implement tooltip that works properly on a dynamic page? Here's the DIV tooltip for reference: DIV display tooltip. Edit: Just noticed it kinda half works in IE8. Thanks.

    Read the article

  • Linq: Why won't Group By work when Querying DataSets?

    - by jrcs3
    While playing with Linq Group By statements using both a DataSet and a Linq-to-Sql DataContext, I get different results with the following VB.NET 10 code:
      #If IS_DS = True Then
          Dim myData = VbDataUtil.getOrdersDS
      #Else
          Dim myData = VbDataUtil.GetNwDataContext
      #End If

      Dim MyList = From o In myData.Orders
                   Join od In myData.Order_Details On o.OrderID Equals od.OrderID
                   Join e In myData.Employees On o.EmployeeID Equals e.EmployeeID
                   Group By FullOrder = New With {
                       .OrderId = od.OrderID,
                       .EmployeeName = (e.FirstName & " " & e.LastName),
                       .ShipCountry = o.ShipCountry,
                       .OrderDate = o.OrderDate} _
                   Into Amount = Sum(od.Quantity * od.UnitPrice)
                   Where FullOrder.ShipCountry = "Venezuela"
                   Order By FullOrder.OrderId
                   Select FullOrder.OrderId, FullOrder.OrderDate, FullOrder.EmployeeName, Amount

      For Each x In MyList
          Console.WriteLine(
              String.Format(
                  "{0}; {1:d}; {2}: {3:c}",
                  x.OrderId, x.OrderDate, x.EmployeeName, x.Amount))
      Next
    With Linq2SQL the grouping works properly; however, the DataSet code doesn't group properly. Here are the functions that I call to create the DataSet and the Linq-to-Sql DataContext:
      Public Shared Function getOrdersDS() As NorthwindDS
          Dim ds As New NorthwindDS
          Dim ota As New OrdersTableAdapter
          ota.Fill(ds.Orders)
          Dim otda As New Order_DetailsTableAdapter
          otda.Fill(ds.Order_Details)
          Dim eda As New EmployeesTableAdapter
          eda.Fill(ds.Employees)
          Return ds
      End Function

      Public Shared Function GetNwDataContext() As NorthwindL2SDataContext
          Dim s As New My.MySettings
          Return New NorthwindL2SDataContext(s.NorthwindConnectionString)
      End Function
    What am I missing? If it should work, how do I make it work? If it can't work, why not (what interface isn't implemented, etc.)?

    Read the article

  • What would it take to get auto-revert-mode to actually work in my dired buffer?

    - by Cheeso
    Apparently auto-revert-mode is supposed to work in dired buffers. I had never heard of this, but the doc says it works. Then I read a little more and found some fine print: Auto-reverting Dired buffers currently works on GNU or Unix style operating systems. It may not work satisfactorily on some other systems. ...and... [dired buffers] do not auto-revert when information about a particular file changes (e.g. when the size changes) or when inserted subdirectories change. To be sure that all listed information is up to date, you have to manually revert using g, even if auto-reverting is enabled in the Dired buffer. source Well, uh, gee.... That doesn't sound like autorevert to me. What would it take to get auto-revert for dired to actually work? Even on (gasp) non-Unix operating systems. Could I just modify auto-revert-handler to call revert-buffer on dired buffers?

    Read the article

  • Xrandr settings don't stick; bash script fails; Nvidia; disper doesn't work

    - by bcmcfc
    I have an Nvidia graphics card. Yeah, I know. This might have to change soon. I am trying to get the resolution to set correctly on my second monitor, which is a VGA monitor with a native resolution of 1440x900. I have set up this bash script, which doesn't work:
      xrandr --newmode "1440" 106.50 1440 1528 1672 1904 900 903 909 934 -hsync +vsync
      xrandr --addmode -display VGA-1 1440
      xrandr --output VGA-1 --mode 1440
    It outputs: xrandr: cannot find mode 1440. It has worked previously when running the commands individually, but after a restart it fails. I've also just installed disper as per this question but have no idea how to use it - the help suggests the command:
      disper -d VGA-1 -r "1440x900"
    should work, but it does not. It just outputs the help text, suggesting something is wrong but not saying what. Is there any way to get these horrific Nvidia drivers working properly with a resolution that is not detected by the tools?
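
    For comparison, the usual xrandr sequence puts the output name straight after --addmode rather than behind -display. The sketch below is only an illustration, reusing the modeline and the VGA-1 output name from the script above (the fresh mode name "1440x900_60" is an assumption):

      # Sketch of the standard flow: define the mode, attach it to the output, switch to it.
      xrandr --newmode "1440x900_60" 106.50 1440 1528 1672 1904 900 903 909 934 -hsync +vsync
      xrandr --addmode VGA-1 "1440x900_60"       # --addmode takes <output> <mode>
      xrandr --output VGA-1 --mode "1440x900_60"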

    Read the article

  • How can I get sessions to work if I'm using Google App Engine + Django 1.1?

    - by user341642
    Is there a way for me to get sessions working? I know Django has built in session management, and GAE has some tools for it if you're using their watered down version of Django 0.96, but is there a way to get sessions to work if you're trying to use GAE w/ Django 1.1 (i.e. use_library() call). I assume using a db-backed session doesn't work, and a file system backed one won't work b/c we don't have access to the filesystem if we deploy to the Google production servers. This kinda worked (as in didn't crap out) when I used SessionMiddleware backed by a local-memory backed cache and a non-persistent cache (i.e. setting SESSION_ENGINE to django.contrib.sessions.backends.cache). But the session never seems to persist in this case, no matter how I set the timeouts. A new session key is generated on every page reload. Maybe this is b/c the GAE assumes complete statelessness with each request and blows away my local cache? Apologies in advance, I'm pretty new to Python. Any suggestions would be greatly appreciated.

    Read the article

  • What happens with $q.all() when some calls work and others fail?

    - by Alan
    What happens with $q.all() when some calls work and others fail? I have the following code:
      var entityIdColumn = $scope.entityType.toLowerCase() + 'Id';
      var requests = $scope.grid.data
          .filter(function (rowData, i) {
              return !angular.equals(rowData, $scope.grid.backup[i]);
          })
          .map(function (rowData, i) {
              var entityId = rowData[entityIdColumn];
              return $http.put('/api/' + $scope.entityType + '/' + entityId, rowData);
          });

      $q.all(requests).then(function (allResponses) {
          //if all the requests succeeded, this will be called, and $q.all will get an
          //array of all their responses.
          console.log(allResponses[0].data);
      }, function (error) {
          //This will be called if $q.all finds any of the requests erroring.
          var abc = error;
          var def = 99;
      });
    When all of the $http calls work, the allResponses array is filled with data. When one fails, it's my understanding that the second function will be called and the error variable given details. However, can someone help explain to me what happens if some of the responses work and others fail?

    Read the article

  • How does a vsftpd server work and how to configure it?

    - by ysap
    I was asked to configure an FTP server based on the vsftpd package. The server is running on a remote machine to which I have superuser-privileged access. Being unfamiliar with the mechanics of FTP servers, I tried to figure out how user ftp accounts are configured. The previous maintainer used a shell script, which works on a list that we maintain to track user accounts and passwords, to configure the ftp accounts. From reading the script, I see that he generates a list of usernames and passwords, and actually creates a user account on the Linux machine. This means that for each user we configure in the list, a new user account is added by the adduser command: adduser --home /home/ftp --no-create-home $user (but without a private /home/username directory - using /home/ftp instead). Each of these users can log into his account using the ssh command. This fact seems a little strange to me, as I'd think that the ftp accounts should be decoupled from the Ubuntu user accounts. As another side effect, when a user connects using a web browser, he is connected to the /home/ftp directory. However, he can then use the "Up to a higher level directory" link to go up and effectively have access to all of our system. So, the questions are: Is this really how the FTP server is supposed to work in terms of configuring ftp accounts? If not, how do I configure the vsftpd server in such a way that I have only the superuser Ubuntu account on that machine and all ftp accounts are... just FTP user accounts? Additionally, these ftp accounts should be configured in terms of how and what they are allowed to access.
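
    For concreteness, the provisioning loop described above probably looks something like the sketch below. The accounts.txt name, the colon-separated format and the extra non-interactive flags are assumptions; only the --home /home/ftp --no-create-home part is quoted from the question.

      # Sketch: create one system account per "username:password" line, all sharing
      # /home/ftp as the home directory (what the question says the old script does).
      while IFS=: read -r user pass; do
          adduser --home /home/ftp --no-create-home --disabled-password --gecos "" "$user"
          echo "$user:$pass" | chpasswd
      done < accounts.txt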

    Read the article

  • How to configure a Router (TL-WR1043ND) to work in WDS mode?

    - by LanceBaynes
    I have a WRT160NL router (192.168.1.0/24 - OpenWrt 10.04) as the AP. Its WAN port is connected to the ISP; its WLAN is working as an AP using 64-bit WEP, SSID "MYWORKINGSSID", channel 5, password "MYPASSWORDHERE"; its IP address is 192.168.1.1. OK, it's working great! But: I have a TL-WR1043ND router that I want to configure as a "WDS". (My purpose is to extend the wireless range of the original WRT160NL.) Here is how I configure the TL-WR1043ND: 1) I enable WDS bridging. 2) In the "Survey" I select my already working network. 3) I set up the encryption (exactly the same as the already working one). 4) I choose channel 5. 5) I type the SSID. 6) I disable the DHCP server on it. After I reboot the router and connect to this router (TL-WR1043ND) over wireless, I try to ping google.com. From the ping I see that I can reach this router, which is OK, but it seems that this router can't connect to the original one, the WRT160NL (so I don't get a ping reply from Google). The encryption settings/password are good - I checked them many, many times. What could be the problem? I'm thinking it could be a routing problem, but what should I add to the "Static Routing" menu? I tried to change the IP address of the TL-WR1043ND to 192.168.1.2, so if this is a routing issue then I should add a static routing rule that says: if the destination is anything, forward the packet to 192.168.1.1. p.s.: I updated the firmware to the latest version; it's still the same. p.s.2: The HW version of the TL-WR1043ND is 1.8. p.s.3: Could the problem be that I use different routers? (If I bought another TL-WR1043ND and used it instead of the WRT160NL, with the normal firmware rather than OpenWrt, would it work? Is "WDS" different on different routers?) p.s.4: I will try to check the router logs at night and paste them here! :\

    Read the article

  • How can I get WAMP and a domain name to work on a non-standard port?

    - by David Murdoch
    I have read countless articles on setting up a domain on WAMP to listen on a port other than 80; none of them are working. I've got Windows Server 2008 (Standard) with IIS 7 installed and running on port 80 (and 443). I've got WAMP installed with the following configuration:
      Listen 81
      ServerName sub.example.com:81
      DocumentRoot "C:/Path/To/www"
      <Directory "C:/Path/To/www">
          Options All MultiViews
          AllowOverride All
          # onlineoffline tag - don't remove
          Order Allow,Deny
          Allow from all
      </Directory>
    localhost:81 works with the above configuration but sub.example.com:81 does not. Just to make sure my firewall wasn't getting in the way, I have disabled it completely. My sub.example.com domain is already pointing to my server and works on IIS on port 80. Also, if I disable IIS and change the Apache port from 81 to 80, it works. Yes, I am restarting Apache after each httpd.conf change. :-) I don't need any other domain (or subdomains [I don't even care about localhost]) configured, which is why I'm not using a VirtualHost. Any ideas what is going on here? What could I be doing wrong?
    Update: Changing Listen to 80 but keeping ServerName as sub.example.com:81 causes navigation to sub.example.com:80 to work; this just doesn't seem right to me. Could ServerName be ignoring the :port part somehow? netstat -a -n | find "TCP":
      >netstat -a -n | find "TCP"
        TCP    0.0.0.0:81      0.0.0.0:0        LISTENING
        TCP    0.0.0.0:135     0.0.0.0:0        LISTENING
        TCP    0.0.0.0:445     0.0.0.0:0        LISTENING
        TCP    0.0.0.0:912     0.0.0.0:0        LISTENING
        ...
        TCP    127.0.0.1:81    127.0.0.1:49709  TIME_WAIT
        ...

    Read the article

  • How to install an SMTP/email server to work with a PHP script?

    - by jiexi
    I have this code:
      $mail->IsSMTP();
      $mail->SMTPAuth = true;
      $mail->SMTPSecure = "ssl";
      $mail->Host = "mail.craze.cc";
      $mail->Port = 465;
      $mail->Username = "username";
      $mail->Password = "pass";
      $mail->SetFrom("[email protected]", "craze.cc");
      $mail->AddReplyTo("[email protected]", "craze.cc");
      $mail->AddAddress($this->email, $this->username);
      $mail->IsHTML(false);
      $mail->Subject = "Activate Your Craze.cc Account";
      $mail->Body = $message;
    How do I configure my postfix/sendmail (or whatever server) to actually work and send the mail? This has been driving me insane! I've tried numerous times to configure these servers. I just want to be able to send emails via my PHP script... Can someone please link me to a guide to get this all going, or just provide help themselves? Maybe there is an alternative way I can use to send my email in the PHP script? Basically, I need help just getting the emails to send...
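
    As a rough sketch of a starting point (not a full guide): install an MTA and confirm that something actually answers on the SMTPS port the script uses. The package name assumes a Debian/Ubuntu server, and mail.craze.cc/465 are simply the values from the script above.

      # Sketch: install Postfix, then check that an SSL SMTP listener answers on port 465.
      sudo apt-get install postfix                  # choose "Internet Site" when the installer asks
      openssl s_client -connect mail.craze.cc:465   # a working server shows a TLS handshake and a "220" banner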

    Read the article

  • SSH X11 forwarding does not work. Why?

    - by Ole Tange
    This is a debugging question. When you ask for clarification, please make sure it is not already covered below. I have 4 machines: Z, A, N, and M. To get to A you have to log into Z first. To get to M you have to log into N first. The following works:
      ssh -X Z xclock
      ssh -X Z ssh -X Z xclock
      ssh -X Z ssh -X A xclock
      ssh -X N xclock
      ssh -X N ssh -X N xclock
    But this does not:
      ssh -X N ssh -X M xclock
      Error: Can't open display:
    The $DISPLAY is clearly not set when logging in to M. The question is why? Z and A share the same NFS home directory; N and M share the same NFS home directory. N's sshd runs on a non-standard port.
      $ grep X11 <(ssh Z cat /etc/ssh/ssh_config)
      ForwardX11 yes
      #   ForwardX11Trusted yes
      $ grep X11 <(ssh N cat /etc/ssh/ssh_config)
      ForwardX11 yes
      #   ForwardX11Trusted yes
    N:/etc/ssh/ssh_config == Z:/etc/ssh/ssh_config and M:/etc/ssh/ssh_config == A:/etc/ssh/ssh_config. /etc/ssh/sshd_config is the same for all 4 machines (apart from Port and login permissions for certain groups). If I forward M's ssh port to my local machine it still does not work:
      terminal1$ ssh -L 8888:M:22 N
      terminal2$ ssh -X -p 8888 localhost xclock
      Error: Can't open display:
    A:.Xauthority contains A, but M:.Xauthority does not contain M. xauth is installed in /usr/bin/xauth on both A and M. xauth is being run when logging in to A but not when logging in to M. ssh -vvv does not complain about X11 or xauth when logging in to A or M. Both say:
      debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null
      debug1: Requesting X11 forwarding with authentication spoofing.
      debug2: channel 0: request x11-req confirm 0
      debug2: client_session2_setup: id 0
      debug2: channel 0: request pty-req confirm 1
      debug1: Sending environment.
    I have a feeling the problem may be related to M missing in M:.Xauthority (caused by xauth not being run), or that $DISPLAY is somehow being disabled by a login script, but I cannot figure out what is wrong.
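
    One comparison that usually narrows this kind of problem down is to run the same checks in an interactive session on A (where forwarding works) and on M (where it does not), reached through the usual two hops. The sketch below is only a diagnostic suggestion; Z/A/N/M are the host aliases used in the question.

      # Sketch: run these on A and on M and compare the results.
      echo "$DISPLAY"                      # empty means sshd never set up the X11 channel
      command -v xauth                     # sshd runs xauth on the server side to install the cookie
      xauth list "$DISPLAY"                # on a working hop this shows the spoofed cookie entry
      grep -i '^X11' /etc/ssh/sshd_config  # X11Forwarding / X11UseLocalhost on this host's sshd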

    Read the article

  • Building a computer for the first time from scratch - will this really work?

    - by Nike
    Hey there. I'm building my own computer, and I just finished picking out all of my parts. Now I just want to be sure it'll all work before I order it - specifically, whether the RAM and graphics card will fit on the motherboard I chose. Below is a list of all the parts. I'm sorry, I selected the parts from a Swedish website so it might be hard to understand some of them; Google Translate will help ;) I really appreciate any help/suggestions. Thanks in advance! :) Oh, and here's the parts I mentioned: EDIT: As I'm not allowed to post more than one link here, I'll just link to my homepage: http://nike1.se/c/ I know I didn't choose the most expensive parts, but this won't be my primary computer. I'm only going to use this one for testing purposes, if that makes any sense. I'm sorry for my English, I'm just so tired now. Haha! Once again, thanks in advance! :)

    Read the article

  • What kernel modules are required for wi-fi to work?

    - by Leonid Shevtsov
    My custom-built 2.6.32 kernel cannot connect to any WPA-protected network. The kernel includes (probably?) everything that should be needed for wifi, including IPv4 network support (IPv6 is disabled), the ath5k wireless driver (which is used in the generic Ubuntu 2.6.31 kernel) and all crypto APIs. The card is being detected; however, iwlist scan returns:
      wlan0     Failed to read scan data : Network is down
    and the network-manager log says:
      <info>  (wlan0): driver supports SSID scans (scan_capa 0x01).
      <info>  (wlan0): new 802.11 WiFi device (driver: 'ath5k')
      <info>  (wlan0): exported as /org/freedesktop/NetworkManager/Devices/1
      <info>  (wlan0): now managed
      <info>  (wlan0): device state change: 1 -> 2 (reason 2)
      <info>  (wlan0): bringing up device.
      <info>  (wlan0): preparing device.
      <info>  (wlan0): deactivating device (reason: 2).
      supplicant_interface_acquire: assertion `mgr_state == NM_SUPPLICANT_MANAGER_STATE_IDLE' failed
      <info>  modem-manager is now available
      <WARN>  default_adapter_cb(): bluez error getting default adapter: The name org.bluez was not provided by any .service files
      <info>  Trying to start the supplicant...
      <info>  (wlan0): supplicant manager state: down -> idle
      <info>  (wlan0): device state change: 2 -> 3 (reason 0)
      <WARN>  nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface.
    The exact same configuration works with the generic kernel. Is anything except wifi and the crypto API needed for wi-fi to work?
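
    One way to pin this down empirically is to boot the working generic kernel, note which modules the wireless stack actually loads, and then check the custom kernel's configuration for the same pieces. The sketch below is only a diagnostic suggestion; the module and config names beyond ath5k are the usual companions of that driver, not something stated in the question, and the .config path is a placeholder for wherever the custom kernel was built.

      # Sketch: under the working generic kernel, list the wireless-related modules it loads.
      lsmod | grep -E 'ath5k|mac80211|cfg80211'
      # Then check whether the custom kernel enables the same stack (wpa_supplicant's
      # wext driver also relies on the wireless-extensions compatibility layer).
      grep -E 'CONFIG_ATH5K|CONFIG_MAC80211|CONFIG_CFG80211|CONFIG_WIRELESS_EXT|CONFIG_CFG80211_WEXT' /path/to/custom-kernel/.config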

    Read the article

  • How can I get Windows 7 to work with two Nvidia graphics cards with different drivers?

    - by Max
    This is similar to this question, but I am using more similar cards with Windows 7. I just purchased a Zotac Nvidia GeForce 7200 GS. I have a motherboard with two PCI Express x16 slots. There is already an MSI Nvidia GeForce 8800 GTS being used as the primary card, driving two LCD monitors. I would like the Zotac to output to a TV via DVI-out. Unfortunately, when Windows detects the Zotac and installs its drivers, or I manually install them, Windows stops being able to boot up. If I remove them and re-install the MSI 8800 drivers, I can boot again, but Windows can no longer see the Zotac 7200 - it shows up as a yellow triangle in Device Manager. I've read conflicting reports about this. Some people claim that Windows 7 will support multiple heterogeneous graphics card drivers, as long as they are all using the same driver API ("WDDM?"). Others say that they have to be using the exact same driver, or it won't work. Others claim that you have to use the exact same card. Which is it, exactly? I know I can run the MSI 8800 in SLI if I purchase another, but I don't need that kind of power - I just need HD-out to my television. I read somewhere that running two cards in SLI precludes you from using 100% of their output ports, so I'm not sure if that's an option. I suppose I could also run two MSI 8800s without SLI, but again, that's more power than I need (and more money than I'd like to spend). Also, I don't think this exact model is even manufactured anymore. Any ideas?

    Read the article

  • For enabling SSL for a single domain on a server with multiple vhosts, will this configuration work?

    - by user1322092
    I just purchased an SSL certificate to secure/enable only ONE domain on a server with multiple vhosts. I plan on configuring as shown below (non-SNI). In addition, I still want to access phpMyAdmin, securely, via my server's IP address. Will the below configuration work? I have only one shot to get this working in production. Are there any redundant settings?
      ---apache ssl.conf file---
      Listen 443
      SSLCertificateFile /home/web/certs/domain1.public.crt
      SSLCertificateKeyFile /home/web/certs/domain1.private.key
      SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

      ---apache httpd.conf file----
      ...
      DocumentRoot "/var/www/html"   #currently exists
      ...
      NameVirtualHost *:443          #new - is this really needed if "Listen 443" is in ssl.conf???
      ...
      #below vhost currently exists (the domain I wish to enable SSL on)
      <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerName domain1.com
          ServerAlias 173.XXX.XXX.XXX
          DocumentRoot /home/web/public_html/domain1.com/public
      </VirtualHost>

      #below vhost currently exists.
      <VirtualHost *:80>
          ServerName domain2.com
          ServerAlias www.domain2.com
          DocumentRoot /home/web/public_html/domain2.com/public
      </VirtualHost>

      #new - I plan on adding this vhost block to enable ssl for domain1.com!
      <VirtualHost *:443>
          ServerAdmin [email protected]
          ServerName www.domain1.com
          ServerAlias 173.203.127.20
          SSLEngine on
          SSLProtocol all
          SSLCertificateFile /home/web/certs/domain1.public.crt
          SSLCertificateKeyFile /home/web/certs/domain1.private.key
          SSLCACertificateFile /home/web/certs/domain1.intermediate.crt
          DocumentRoot /home/web/public_html/domain1.com/public
      </VirtualHost>
    As previously mentioned, I want to be able to access phpMyAdmin via "https://173.XXX.XXX.XXX/hiddenfolder/phpmyadmin", which is stored under "var/www/html/hiddenfolder".
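
    Since there is only one shot at this in production, a syntax check and a certificate check beforehand are cheap. The sketch below is just a suggestion (the control command may be apachectl or apache2ctl depending on the distribution; the IP and hostname are the ones from the config above).

      # Sketch: validate the config, reload, and confirm which certificate port 443 serves.
      apachectl configtest                  # expect "Syntax OK" before touching the running server
      sudo apachectl graceful               # reload without dropping existing connections
      openssl s_client -connect 173.203.127.20:443 -servername www.domain1.com < /dev/null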

    Read the article

  • How to make Project auto-estimate duration based on work?

    - by Bruno Brant
    This one has bothered me for a long while. I like to do estimates by thinking about how much time a certain task will take (I'm in the IT business) - so, let's say, it takes 12 hours to build a program. Now, let's say I tell Project that my start date is today. If I allocate one resource to this task, it means that the task will last 1.5 days, implying that it will end tomorrow. But right now, that is not what it's doing. I say that the task will take 1 hour, and when I add a resource to it, it allocates the resource on a [13%] basis, which means that the duration is still fixed... Project is trying to make the task last for a day. I have, on many occasions, accomplished this. What I do is build a plan based on these rough estimates for effort, then allocate tasks to resources. Times conflict, so I level resources, and then Project magically tells me how long, in days, it will take. But every time I have to start estimating again, I end up having trouble getting Project to work like that.

    Read the article

  • I have enabled the hidden administrator in Win 7 Home, but programs still don't work.

    - by Angela
    I have Windows 7 Home Premium, and would like to do some maintenance tasks such as running Disk Defragmenter. However, this and other programs and applications that I'm accustomed to using are now blocked. For these programs, there is a shield icon next to their icons and nothing happens when I click on them. I notice that the screen blinks slightly, but I do not get prompted for a password and the program still does not run. It seems these programs may only be accessible through an Administrator account. However, right-clicking and selecting "Run As Administrator" does not work. After some research, I found a way to enable the hidden built-in Administrator account. I booted the computer into safe mode. In the command prompt, I typed net user administrator /active:yes. I gave the account a password. I rebooted the system. There is now an Administrator account on the home screen. However, the locked programs behave no differently for me when I use this account. What could cause this problem? How can I fix it?

    Read the article
