Search Results

Search found 14745 results on 590 pages for 'setting'.


  • Environment variables in bash_profile or bashrc?

    - by Viriato
    I have found this question: "Difference between .bashrc and .bash_profile" very useful, but after reading the most voted answer (very good, by the way) I have further questions. Towards the end of that answer I see the following statement: "Note that you may see here and there recommendations to either put environment variable definitions in ~/.bashrc or always launch login shells in terminals. Both are bad ideas." My questions:
    1. Why is it a bad idea (I am not trying to start a fight, I just want to understand)?
    2. If I want to set an environment variable and add it to the PATH (for example JAVA_HOME), where is the best place to put the export entry: ~/.bash_profile or ~/.bashrc?
    If the answer to question 2 is ~/.bash_profile, then I have two further questions:
    3.1. What would you put under ~/.bashrc? Only aliases?
    3.2. In a non-login shell, I believe ~/.bash_profile is not "picked up". If the export of JAVA_HOME was in ~/.bash_profile, would I be able to execute the javac and java commands? Would the shell find them on the PATH? Is that the reason why some posts and forums suggest setting JAVA_HOME and the like in ~/.bashrc?
    Thanks in advance.
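
    (For reference, a minimal sketch of the conventional split: environment variables in the login-shell file, interactive settings in ~/.bashrc; the JDK path below is an assumption, adjust to your install:)

        # ~/.bash_profile (read by login shells only)
        export JAVA_HOME=/usr/lib/jvm/default-java   # hypothetical JDK location
        export PATH="$JAVA_HOME/bin:$PATH"
        # pull in the interactive settings too, so login shells get both
        [ -f ~/.bashrc ] && . ~/.bashrc

        # ~/.bashrc (read by interactive non-login shells)
        alias ll='ls -l'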

  • How to make 7-Zip faster

    - by Matt
    I normally use WinRAR over 7-Zip simply because it's faster and only a little less efficient at compression. I did a few tests on different file types and sizes, comparing the 7-Zip and WinRAR default settings at their normal and best compression levels, and in a lot of cases WinRAR was 50% faster, and in some it was actually 100% faster. But I do like FOSS more. So here are my questions:
    1. Is there a way to speed 7-Zip up? I'd like it to at least be on par with WinRAR's speed.
    2. Is there a way to make recovery segments in 7-Zip like you can in WinRAR? I didn't see any, but I guess it could be a command-line thing.
    3. I tested WinRAR and 7-Zip using the latest stable version of each (4-dot-something with 7-Zip). Is the 9.x beta release noticeably faster at compression? I'm talking about faster at a comparable setting in WinRAR, not just lowering to bare-minimum compression.
    If it matters, I use a quad-core Intel i7 720 (1.6 GHz stock / 2.8 GHz turbo) with 4 GB DDR3 RAM and the 64-bit version of 7-Zip, and dual-boot Debian x64 5.0.4 and Windows 7 Home.
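
    (For reference, a sketch of the command-line knobs that trade compression ratio for speed; archive and folder names are placeholders:)

        # -mx sets the compression level (1 = fastest, 5 = default, 9 = ultra),
        # -mmt=on enables multithreading on the quad core
        7z a -t7z -mx=3 -mmt=on archive.7z files/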

  • Write access from a Windows client via ZFS SMB to a file created on the host in OpenIndiana

    - by Gerald Kaszuba
    I've got an OpenIndiana server running ZFS that is shared using a nobody user and group. I don't fully understand Solaris ACL permissions, but I do know Linux-style permissions. The client is Windows 8 and the server is OpenIndiana oi_148. I'm failing to work out how to make write permission work correctly for the Windows client. It is able to make new files, but cannot modify files created by the shell in OpenIndiana. When a file ("local file") is created locally as the user nobody in bash, and another file ("smb file") is created remotely via SMB (as nobody also), they are quite different in permissions:
        # ls -V
        -rw-r--r--   1 nobody   nobody   0 Dec  2 12:24 local file
                    owner@:rw-p--aARWcCos:-------:allow
                    group@:r-----a-R-c--s:-------:allow
                 everyone@:r-----a-R-c--s:-------:allow
        -rwx------+  1 nobody   nobody   0 Dec  2 12:24 smb file
               user:nobody:rwxpdDaARWcCos:-------:allow
          group:2147483648:rwxpdDaARWcCos:-------:allow
    In bash I'm able to write to "smb file", but vice versa, the Windows client is not able to write to "local file". This is confusing to me, because it appears that it should allow the SMB client to write to "local file": nobody is the owner and it has a w in the ACL. The sharesmb setting is fairly boring, although I'm hoping there is something I can set here similar to a umask:
        sharesmb name=shared,guestok=true
    How can I make these two work together and have a symmetrical permission system, where both SMB and the local user produce the same permissions? Is there some sort of ACL that can be set at the root of the file system to allow all files to be created in a similar manner?
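
    (For reference, a sketch of the approach usually suggested for this: passthrough ACL inheritance plus an inheritable ACE on the share root, so both creation paths end up with the same ACL; the dataset name tank/shared is an assumption:)

        # keep inherited ACEs intact instead of trimming them to the mode
        zfs set aclinherit=passthrough tank/shared
        # add an ACE inherited by new files (f) and directories (d)
        /usr/bin/chmod A+user:nobody:rwxpdDaARWcCos:fd-----:allow /tank/shared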

  • Windows 7 deployment through WDS

    - by vn
    Hello, I am deploying new systems on my network, and I built my reference computer by installing the OS the manufacturers (Dell and a custom-built system from a local business) shipped with all drivers, then installing all the desired applications. As for the settings part, I'm doing most of it through GPOs. I want to image my reference computer and deploy it with WDS. I found several links on how to sysprep, but they all do it with slight differences and without explaining them. My questions:
    1. How do I manage (in sysprep) the domain join/computer naming part, since (from what I understand) WDS manages that?
    2. How do I know/determine what I need to set up in my sysprep.xml?
    3. Can you sysprep a first time, try it, and if it fails, make some modifications and try again? I am thinking of doing a basic sysprep, checking what info can be automated, and correcting that in the answer file.
    4. What do I miss by skipping "audit" mode? I don't plan on re-doing the reference computer...
    5. I read that sysprep resets settings on the reference computer like the computer name, activation/key and such... what settings does sysprep reset by default that I should be aware of?
    I must admit I am quite lost about Win7, sysprep, RIS, the MDT, WDS... I understand the way of doing it with XP, but it changed so much with Windows 7! The links I am reading are:
    http://far2paranoid.wordpress.com/2007/12/05/prep-for-sysprep/
    http://blog.brianleejackson.com/sysprep-a-windows-7-machine-%E2%80%93-start-to-finish-v2
    http://www.ehow.com/print/how_5392616_sysprep-machine-start-finish-v2.html
    Thank you VERY much for any answers, they are much appreciated.
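
    (For reference, a sketch of the usual generalize pass against an answer file, run on the reference machine before capturing it with WDS; the answer-file path is an assumption:)

        C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\sysprep.xml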

  • DisableCrossAccountCopy not working on some Outlook installs, working on others, both going against Exchange

    - by MikeBaz
    As part of a mail migration project from one Exchange organization to another, we need to be able to prevent users from moving/copying messages between their accounts in each organization. (Yes, users will think this is evil; no, it's not my decision; yes, users will hate us.) Luckily, we thought, Outlook 2010 provides the DisableCrossAccountCopy registry value/policy (cf. http://technet.microsoft.com/en-us/library/ff800883.aspx). (Because you can't have multiple Exchange organizations in a single profile before Outlook 2010, this only matters on Outlook 2010. Yes, I'm ignoring copy/move to/from the filesystem for the sake of this question.)
    In our test lab, in a test forest with a test Exchange organization, with a second Exchange account added to the profile in either of the "real" Exchange organizations, and with the value set to "*", everything works as expected. On a workstation in one of the production domains, however, the setting does not seem to work. We have tried it under HKCU, HKLM, HKCU\Software\Policies, and HKLM\Software\Policies. It simply seems to be ignored. The value was set in the OCT on a test machine, but the OCT (and the ADM/ADMX file) have the wrong type for the value. We have located the value in the registry and removed it everywhere it is found, we think, and put it back in HKCU, but it still isn't taking.
    At the moment, a clean Outlook install is not an option; even if it were, we would need to know what to do to fix the pushed copy (I didn't push the copy out to thousands of machines, I've just been asked to help clean up the current mess). Thoughts?
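
    (For reference, a sketch of setting the value by hand from an elevated prompt, assuming the Office 14.0 policy path and the multi-string type that the stock ADM/ADMX reportedly gets wrong:)

        reg add "HKCU\Software\Policies\Microsoft\Office\14.0\Outlook" /v DisableCrossAccountCopy /t REG_MULTI_SZ /d "*" /f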

  • What should I do to make sure that IIS does not recycle my application?

    - by AngryHacker
    I have a WCF service app hosted in IIS. On startup, it goes and fetches a really expensive (in terms of time and CPU) resource to use as a local cache. Unfortunately, IIS seems to recycle the process on a fairly regular basis. So I am trying to change the settings on the application pool to make sure that IIS does not recycle the application. So far, I've changed the following:
    - Limit Interval under CPU from 5 to 0.
    - Idle Time-out under Process Model from 20 to 0.
    - Regular Time Interval under Recycling from 1740 to 0.
    Will this be enough? And I have specific questions about the items I changed:
    1. What specifically does the Limit Interval setting under CPU mean? Does it mean that if a certain CPU usage is exceeded, the application pool will be recycled?
    2. What exactly does "recycled" mean? Is the application completely torn down and started up again?
    3. What is the difference between "worker process shutdown" and "application pool recycling"? The documentation for the Idle Time-out under Process Model talks about shutting down the worker process, while the docs for Regular Time Interval under Recycling talk about application pool recycling. I don't quite grok the difference between the two. I thought w3wp.exe is the worker process which runs the application pool. Can someone explain what the difference means for the application?
    The reason for having both IIS7 and IIS7.5 tags is that the app will run on both, and I hope the answers are the same between the versions.
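
    (For reference, a sketch of making the same three changes from the command line with appcmd; the pool name "MyAppPool" is a placeholder:)

        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /cpu.limit:0
        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.idleTimeout:00:00:00
        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.time:00:00:00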

  • Postfix aliases and duplicate e-mails, how to fix?

    - by macke
    I have aliases set up in Postfix, such as the following:
        [email protected]: [email protected], [email protected] ...
    When an email is sent to [email protected] and any of the recipients in that alias is cc:ed, which is quite common (i.e. "Reply all"), the e-mail is delivered in duplicate. For instance, if an e-mail is sent to [email protected] and [email protected] is cc:ed, it'll get delivered twice. According to the Postfix FAQ, this is by design, as Postfix sends e-mail in parallel without expanding the groups, which makes it faster than sendmail. Now that's all fine and dandy, but is it possible to configure Postfix to actually remove duplicate recipients before sending the e-mail? I've found a lot of posts from people all over the net that have the same problem, but I have yet to find an answer. If this is not possible to do in Postfix, is it possible to do somewhere along the way? I've tried educating my users, but it's rather futile, I'm afraid... I'm running Postfix on Mac OS X Server 10.6; amavis is set as content_filter and dovecot is set as mailbox_command. I've tried setting up procmail as a content_filter for smtp delivery (as per the suggestion below), but I can't seem to get it right. For various reasons, I can't replace the standard OS X configuration, meaning postfix, amavis and dovecot stay put. I can however add to it if I wish.
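
    (For reference, the classic procmail duplicate-killer recipe, for whatever delivery step procmail ends up wedged into; it drops any message whose Message-ID has already been seen for that user:)

        :0 Wh: msgid.lock
        | formail -D 8192 .msgid.cache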

  • puppet master --compile logs errors to stdout

    - by danny
    I see a bug about this that was accepted and then closed a year ago: http://projects.puppetlabs.com/issues/3670 - but I'm using puppet 2.7.14 and am getting the same issue. I'm trying to use "puppet solo" (i.e. just running puppet apply on each server to be configured), as I only have 2 or 3 servers in this project and adding another server as a puppetmaster would be complete overkill. Unless I'm mistaken, the best way to apply a node manually to a server is to do:
        puppet master --compile=mynode > catalog.json
        puppet apply --catalog catalog.json
    But the puppet master command outputs a couple of warnings and notices to stdout, mixed in with the desired JSON content. And it uses colored output, so I can't just pipe it through egrep -v '^warning:'.
    EDIT: I guess it's not too big a deal to use grep. Since puppet 2.7 pretty-prints the actual content and the warnings don't ever start with spaces, piping the output through egrep '^( |{|})' works.
    So my questions are basically:
    1. Is there a better way than this to apply a puppet node without using a puppetmaster? I can't really find any good references online to using puppet without a puppetmaster, even though that seems like a perfectly reasonable thing to do for a small project.
    2. Is there a setting or flag that I'm missing that will get puppet master to stop being an asshole and send its errors to stderr instead of stdout? Or do I really have to turn off color logging, then grep to exclude warning: and notice: lines?
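
    (For reference, a sketch of the usual masterless pattern: skip the compile step and apply a site manifest directly; the paths are assumptions:)

        # no color, warnings/notices to a log file instead of the terminal
        puppet apply --color=false --logdest=/var/log/puppet-apply.log \
            --modulepath=/etc/puppet/modules /etc/puppet/manifests/site.pp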

  • Plesk 10 - creating and using vhost.conf

    - by MrFidge
    I'm having some issues setting up and using a vhost.conf for one of my domains. So far none of the domains have required any extra configuration, but now I need to use a PEAR module, so I'm looking to include /usr/share/pear in the PHP settings for the domain. The vhost file was created in /var/www/vhosts/domain.com/conf/vhost.conf:
        <Directory /var/www/vhosts/domain.com/httpdocs>
            php_admin_value include_path ".:/usr/share/pear"
        </Directory>
    I then restart Plesk using:
        /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=domain.com
    Or, as Plesk says that command is obsolete in Plesk 10, I've tried using:
        /usr/local/psa/admin/sbin/httpdmng --reconfigure-domain domain.com
    And for good luck I've restarted Apache too each time. Net result: none of the PEAR includes work unless I edit the include_path in /etc/php.ini! Any tips on how to get this MOFO working? I've had a look through the documentation, but to be honest I just don't have time to read 40 pages of Plesk manual for one line of code; this can't be that hard, surely! Thanks for any pointers, H
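
    (For reference, a quick throwaway check of whether Apache is actually reading the vhost.conf, using the docroot from above:)

        echo '<?php var_dump(get_include_path());' > /var/www/vhosts/domain.com/httpdocs/pathtest.php
        # then browse to http://domain.com/pathtest.php and look for /usr/share/pear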

  • How to use WebDAV and userdir at the same time in the same section?

    - by Louis
    Dear community, I would like to mount https://myserver/~user_account through WebDAV, but not https://myserver/. What I am doing now is:
        <IfModule mod_userdir.c>
            UserDir public_html
            UserDir disabled root
            <Directory /banonymous/data/home/*/public_html>
                DAV On
                AllowOverride FileInfo Limit AuthConfig
                Options MultiViews SymLinksIfOwnerMatch IncludesNoExec
                <Limit GET POST OPTIONS>
                    Order allow,deny
                    Allow from all
                </Limit>
                <LimitExcept GET POST OPTIONS>
                    Order deny,allow
                    Deny from all
                </LimitExcept>
            </Directory>
        </IfModule>
    and I am setting the authentication in the .htaccess of each user:
        AuthType Basic
        AuthName "Password Required"
        AuthUserFile /etc/apache2/users/htpasswd
        Require User geeky
    It does not work. Can someone tell me if this is possible, and if so, how to do it? My dream would have been to put the DAV On in the .htaccess.
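
    (For reference, a sketch of creating the password file the .htaccess points at; -c creates the file, so use it only for the first user:)

        htpasswd -c /etc/apache2/users/htpasswd geeky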

  • Workstations cannot see new MS Server 2008 domain, but can access DHCP

    - by Radix
    The XP Pro workstations do not see the new replacement domain upon boot; they only see their cached entry for the old (Server 2003) domain controller. The old server is not connected to the network. I have DHCP working with the same scope as the old server. In my "before-asking" search for a solution I came across the following two articles, and I recall doing things as suggested by them:
    http://www.windowsreference.com/windows-server-2008/how-to-setup-dhcp-server-in-windows-server-2008-step-by-step-guide/
    http://www.windowsreference.com/windows-server-2008/step-by-step-guide-for-windows-server-2008-domain-controller-and-dns-server-setup/
    The only possible issue is this: I was under the impression that the domain NetBIOS name needed to match the DC's NetBIOS name. The DC's NetBIOS name is city01, while the domain's FQDN is city.domain.org (I think this is mistaken and should have been just domain.org). But the second link led me to a post which I believe answers my question. I did as it instructed: I opened Local Area Connection Properties, selected TCP/IPv4, and set the sole preferred DNS server to the local host's static IP (10.10.1.1). Search for "Your problems should clear up" in the post I'm referencing:
    http://forums.techarena.in/active-directory/1032797.htm
    Have I misunderstood the instructions? I am hoping to reach the point where I can define users and user groups. Also, does TechNet have a single theoretical overview document I could read? I really don't like treating computers as magic. I will be watching this closely and will quickly answer any questions. If I've left anything out, it is because I did not know it was needed.
    PS: I am loath to ask obviously basic questions, but I am tired and wish to fix this before tomorrow. Also, this is my first server installation. Thank you for your help.
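
    (For reference, the same DNS change from the command line, using the XP default connection name and the DC's static IP from above:)

        netsh interface ip set dns "Local Area Connection" static 10.10.1.1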

  • Routing WIFI and LAN for specific traffic

    - by jakebird451
    I have two network devices aboard my MacBook Pro:
    - WiFi (en1): used for general traffic. Connects to an IP of 192.168.19.* via DHCP.
    - LAN (en0): used for specific traffic. Connects to an IP of 192.168.2.10 as a static IP. Does not connect to a router, only a switch for a direct connection.
    I have 4 IP addresses I need to access on the LAN: 192.168.2.1, 192.168.2.21, 192.168.2.20, 192.168.2.30. The rest of the traffic needs to go to WiFi. I have tried setting up a routing table for the specific IP addresses, but I only managed to mess up my network. I do not venture out into the world of networking too often, but this was the latest command I tried:
        sudo route add -host 192.168.2.30 -interface en0
    This command killed my ability to use ping. It told me that ping could not allocate memory (is that even possible?). It also killed my WiFi access. Logging out and back in fixed the issue. I really do not mind making this solution permanent, so I am fine with a temporary routing.
    EDIT: What I currently have been trying:
        sudo route flush
        sudo route add default 192.168.19.1
    This gets everything to work for about a minute. But after that minute it "forgets" the routing to WiFi while retaining LAN's (en0) routing. If I unplug and replug my LAN (en0) cable, the process works for another minute.
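
    (For reference, a sketch of pinning just the four LAN hosts to en0 while the default route stays on WiFi; note that route flush removes the default route too, so it has to be re-added:)

        sudo route -n add default 192.168.19.1
        for ip in 192.168.2.1 192.168.2.20 192.168.2.21 192.168.2.30; do
            sudo route -n add -host "$ip" -interface en0
        done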

  • Cannot connect to TFS 2012 server over SSL with an invalid certificate

    - by DaveWut
    I saw this problem on some forums and even here, but not as specific as mine. So here's the thing: I've configured a TFS 2012 server on one of my personal servers at home, and now I'm trying to make it available through the internet with the help of apache2 on a different UNIX-based physical server. The thing is working perfectly; I don't have any problem accessing the address https://tfs.something.com/tfs through my browser. The address can be pinged, and I have access to the TFS control panel through it.
    How does it work? Well, with apache2 you can set up a virtual host with the ProxyPass and ProxyPassReverse settings, so traffic can come in externally over a secure SSL connection through a specified domain or sub-domain, but be redirected locally to a plain HTTP session on a different port. This is my current setup. As I already said, I can access the web interface, but when I try to connect with Visual Studio 2012, it can't be done. Here's the error I receive: http://i.imgur.com/TLQIn.png
    The technical information tells me: "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel." My SSL certificate is invalid and was automatically generated on my UNIX server. Even if I add it to the Trusted Root Certification Authorities, either on my TFS server or on my local workstation, it doesn't work. I still receive the same error. Is there a way to completely ignore certificate validation? If not, what have I done wrong? I mean, I've added the certificate to the trusted root certificates; it should work, as mentioned on some forums... If you need more information, please ask me, I'll be pleased to provide more. Dave
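
    (For reference, a sketch of the kind of vhost described above; the certificate paths and the internal address/port are assumptions:)

        <VirtualHost *:443>
            ServerName tfs.something.com
            SSLEngine on
            SSLCertificateFile    /etc/apache2/ssl/tfs.crt   # the self-signed cert in question
            SSLCertificateKeyFile /etc/apache2/ssl/tfs.key
            ProxyPass        /tfs http://192.168.0.50:8080/tfs
            ProxyPassReverse /tfs http://192.168.0.50:8080/tfs
        </VirtualHost>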

  • 'txn-current-lock': Permission denied [500, #13] - Subversion + Apache Configuration Issue

    - by wfoster
    Current setup: Fedora 13 32-bit, Apache 2.2.16, Subversion repositories under /var/www/svn. I have two different repositories under this directory, so my /etc/httpd/conf.d/subversion.conf is set up this way:
        LoadModule dav_svn_module modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so
        <Location /svn>
            DAV svn
            SVNListParentPath on
            SVNParentPath /var/www/svn
            <LimitExcept GET PROPFIND OPTIONS REPORT>
                AuthType Basic
                AuthName "Subversion Repository"
                AuthUserFile /etc/httpd/.htpasswd
                Require valid-user
            </LimitExcept>
        </Location>
    After copying over my repos and using:
        chmod 755 -R /var/www/svn
        chcon -R -t httpd_sys_content_t /var/www/svn
        chown apache:apache -R /var/www/svn
    I can browse my repos fine through the browser, and I can update all my working copies. However, when I try to check in from anywhere I get the same error:
        Can't open file '/var/www/svn/repo/db/txn-current-lock': Permission denied
    I have been working on this issue for a while now and can't seem to find a solution. It might be of some use to know that the repo existed on a different server before this; it has now been moved to this new server. Everything I have read seems to indicate that the permissions for Apache are incorrect, but Apache is set to run as user apache and group apache. So as far as I can tell my setup is correct; the behavior is not, though. Any ideas?
    Solution: The only way I was able to get this to work was to disable SELinux. It could also be done by setting the proper booleans with SELinux via setsebool and getsebool. Since this is just a home server, I decided to disable SELinux and am reaping the benefits now.
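
    (For reference, the SELinux-friendly alternative to disabling it outright: label the repositories writable for httpd instead of read-only, and make the label survive relabels:)

        semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/svn(/.*)?"
        restorecon -R /var/www/svn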

  • LAN network (switch?)

    - by guywhoneedsahand
    I am working on setting up the network for a small LAN party (fewer than 16 people). Most of them do not have wireless cards in their rigs, so I need to set up some way for everyone to a) play LAN games and b) access the internet. The LAN party will probably take place in my basement, where I have enough space. However, the basement is not wired up to the router, which is actually on the floor above. I made a cantenna a while back that can boost the wireless performance of my computer significantly. How can I use this to provide internet and LAN to guests? My hope was that I could use a switch like this for the LAN: http://www.newegg.com/Product/Product.aspx?Item=N82E16833181166 - but how can I give people access to the internet? Is there such a thing as a network extender / 16-port switch? Obviously, the internet performance doesn't need to be super stellar, because the games will be using the LAN; I am looking to provide some usable internet for web browsing, and very high speed LAN for games. Thanks!

  • Configuring suExec to work with Apache and PHP via FastCGI

    - by RandomPsychology
    I have installed ISPConfig 3 on an Ubuntu VPS and configured it for Apache + PHP via FastCGI and suexec. I am able to upload PHP apps (e.g. WordPress) and run them normally with suexec. However, for some reason the PHP scripts cannot write data to disk. For instance, trying to upgrade a plugin via WordPress' web interface causes it to fail with the error "Could not create directory /path/to/wp-content/upgrade/plugin.tmp." Trying to upload media and other assets also fails via the web. I've checked owner/group on the directory structure and it looks good. The suexec log also seems normal, and I don't see any indicative errors in the web server logs. I can also confirm that changing the owner/group on the directories does result in the expected error in suexec.log. Additionally, I have the directory permissions set to u=rw,g=r,o= and I've also tried setting g=rw. None of this results in my scripts being able to write to the directories. What am I doing wrong?
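
    (For reference, a sketch of the permission layout FastCGI + suexec setups usually need: directories must carry the execute (traversal) bit, which u=rw,g=r,o= does not grant; the paths and web1/client1 names follow the ISPConfig 3 convention and may differ on your box:)

        find /var/www/clients/client1/web1/web -type d -exec chmod 755 {} \;
        find /var/www/clients/client1/web1/web -type f -exec chmod 644 {} \;
        chown -R web1:client1 /var/www/clients/client1/web1/web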

  • Socket options for a tcp server with 3G clients & frequent disconnections

    - by Joel
    I have a TCP server, written in Java, sending and receiving many short messages, from 500 bytes to 100 KB long. It's a chess game and chat server, to keep it simple. The server is running Debian 6. Half of the clients connect from 3G networks, and half over standard DSL. A portion of the 3G clients lose their connection pretty often. The error I get on the server and on the client socket is "Connection reset". I have come across this page in the Oracle documentation: socketOpt. I am wondering what I could tune there to lower the number of disconnections from 3G clients. I don't care about ping or transfer rate, just about the TCP disconnections. I am not skilled enough to understand the impact of each setting, but I sort of understood that the TCP window is important, although I don't know exactly how. So I'm asking if anyone here has an idea? Thanks if you can help.
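
    (For reference, a sketch of the per-socket options reachable from Java; none of this stops a 3G network from dropping the link, but keepalive plus a read timeout lets the server notice dead peers and clean them up:)

        Socket sock = serverSocket.accept();
        sock.setKeepAlive(true);     // kernel-level TCP keepalive probes on idle connections
        sock.setSoTimeout(60000);    // a blocked read fails after 60 s instead of hanging
        sock.setTcpNoDelay(true);    // send the short game/chat messages immediately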

  • Cannot establish a network connection to a VMware Fusion VM

    - by twolfe18
    I am running Snow Leopard 10.6.2 (not the server edition) with VMware Fusion 3.0.0 and am trying to get my Ubuntu 9.10 x86_64 VM working. I am using a bridged connection, and I have an internet connection FROM the Ubuntu VM (I can download updates, ping websites, etc.), but I cannot connect TO the Ubuntu box from any other device on my network. I am trying to get Mongrel up on the Ubuntu VM for some Rails stuff, but it's not working. I know Mongrel/Rails is not the problem, because if I start the server on the Ubuntu VM, background the process, and then wget the index page, it works. I just cannot connect to the site from another IP. I have tried using a static IP and a DHCP IP configuration on the Ubuntu VM; neither works for incoming connections, while both work for outgoing. I have port-scanned the Ubuntu VM, and it appears that all ports are closed. However, the Ubuntu VM does respond to pings. I noticed a similar question here, but with no answer: http://serverfault.com/questions/99757/setting-up-a-bridged-network-with-vmware-fusion Any ideas?
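
    (For reference, a sketch of checking inside the guest whether it is filtering or simply not listening externally, which would match "ping works, all ports closed":)

        sudo ufw status verbose    # Ubuntu's firewall frontend
        sudo iptables -L -n -v     # raw rules; look for DROP/REJECT on INPUT
        sudo netstat -tlnp         # make sure Mongrel listens on 0.0.0.0, not 127.0.0.1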

  • Cannot print certain colours on Ubuntu with HP Laser Printer

    - by ILMV
    We have a load of machines running Ubuntu in our office; they are either on 8.04 or 9.10. We have a server with an HP JetDirect that connects to an HP 3550 Colour Laser printer using CUPS. The problem we are having is that we cannot print red, magenta or yellow at 100%. I've got a picture of the Ubuntu test page to demonstrate the problem. This is obviously a pretty big problem, as we are constantly receiving documents with these colours and cannot print them successfully. We cannot just switch to grayscale; our business depends on being able to print colour (seems trivial, but we handle lots of artwork). We're using the recommended driver, "HP Color LaserJet 3550 foomatic/pxljr (recommended)"; there is another driver in the list labelled "HP Color LaserJet 3550 foomatic/hpijs". These are production printers, so I need to make sure any setting change won't kick us in the nuts. It would appear HPIJS is for HP inkjets, which makes sense, I guess. The problem doesn't occur in Windows.
    RESOLVED: I've managed to solve the problem. I did indeed use the HPIJS driver (apparently for inkjets), but it seems to have worked; we're going to roll with it for now and see how we get on.
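
    (For reference, a sketch of switching an existing queue to the hpijs driver from the CUPS command line; the queue name is a placeholder and the exact PPD string should be taken from lpinfo's output, not from here:)

        lpinfo -m | grep -i 3550                      # list candidate driver entries
        lpadmin -p Prod3550 -m "<hpijs entry from lpinfo>"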

  • Good PC-based video conference control tools?

    - by Jeremy Wadhams
    I'm looking to improve the user experience in our video conference rooms by simplifying the things users do all the time (setting up a call, muting and unmuting, panning and zooming between a few room-appropriate presets) and completely removing the functions that we don't use or that are more likely to ruin the experience (changing the brightness of the TV, using the VNC presenter mode). The rooms each have Tandberg Edge or MXP video units; I'm not looking at PC-based solutions like Skype or iChat. The classic way to do this would be to plunk down huge money for an AMX or Crestron panel. That does have some advantages, and I am considering the approach, but in my initial investigation it looks expensive, inflexible, and proprietary. On the other hand, the Tandberg video units I'm looking to control have a very thorough XML API (PDF), so some of the integration magic that Crestron and AMX consider a value-add, I could reproduce at mashup speed. Is anyone aware of an open source or commercial product that takes advantage of the readily available web APIs and some simple touch-screen PCs, and builds something that is more like skinning an AJAX web app, and ideally more cost-effective than the proprietary panels?

  • Monitor connected with WiDi just shows a black screen

    - by Pops
    I have a Dell XPS 18 and want to use an external monitor with it. The monitor has VGA and DVI-D inputs. The XPS 18 has no video output ports, but it does support Intel WiDi. I have a Netgear P2TV2000 WiDi receiver, but it only has composite and HDMI outputs. I'm connecting the receiver to the monitor with an HDMI cable and an HDMI/DVI-D adapter. So, in short, the video path is: Computer → WiDi → WiDi receiver → HDMI cable → HDMI/DVI-D adapter → Monitor After setting all this up, I can't get anything to display on the monitor. At the moment I power the monitor on, I can see my desktop for a brief fraction of a second, and then everything goes black. All of the relevant drivers have been updated to the latest versions. When I use a different WiDi-enabled computer to connect to the same monitor, everything works fine. When I use the original computer and receiver to connect to a TV, everything works fine. It's only when I connect the original computer to the monitor I want to use that the connection fails. What could be going wrong here, and how can I get the video to work consistently?

  • Windows 7 autostarting apps as administrator

    - by Fujishiro
    Hello everyone. The question is easy to answer, I guess. (I tried to search here but didn't find an answer.) So, the deal: there is a bug in OpenOffice (haha... just one? :)) which breaks the spellcheck. You have to start it with right-click, "Run as administrator" to make it work. I also tried setting this in the shortcut's Properties, but it didn't work. Running 'quickstart.exe' (OpenOffice's quickstarter) would also be enough to make this work. So I'd like to run it at boot, as an admin, like I'd do with right-click. How do I do THAT?
    (Actually there is a different bug for spellcheck on Win7. You have to run oowriter.exe as an admin, and then install the extension BY HAND from Program Files...\share...*.oxt. And THEN it'll work, IF you run the app as an admin. I'll buy SoftMaker's office as soon as the hu spellcheck arrives, but until then I have to make this work. Thanks for the answers in advance.)
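
    (For reference, a sketch of the usual workaround: a logon task that runs with highest privileges, which sidesteps the UAC prompt a "run as administrator" shortcut would trigger; the path is the default OpenOffice.org 3 location and may differ:)

        schtasks /Create /TN "OOo Quickstarter" /SC ONLOGON /RL HIGHEST /TR "\"C:\Program Files\OpenOffice.org 3\program\quickstart.exe\""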

  • Windows XP mapped drives disconnect from Windows 7 on peer-to-peer network

    - by Kathryn Codo
    I have a 5-user peer-to-peer network: 2 XP machines, 3 Windows 7 machines, and a Windows 7 machine acting as the file server. I have NO issues with the Windows 7 machines staying connected via the mapped drives to the "server". However, once a day the 2 XP machines will not connect to the Windows 7 "server" via the mapped drives. I must restart the "server" in order to see it and access the files via the mapped drives or Explorer. I have tried the persistent:yes switch in NET USE on the XP machines, and also set a static IP address on the "server". I've turned the firewall off on the "server", no difference, so I turned the firewall back on. No interruption occurs with the internet when this happens; we just can't see the server. Again, the Windows 7 users are unaffected. I have gone into the advanced network settings on the "server" and added read/write permissions for Everyone as well as for the users of the XP machines. I have double-checked that we are all in the same WORKGROUP. I'm at a loss. It seems to happen around mid-day, and I've not been able to find any activity that happens at this time (like someone plugging in a thumbdrive). Any suggestions would be greatly appreciated, as my client is running old XP software he no longer has the disks for and needs it for his engineering firm.
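
    (For reference, the two classic knobs for mapped drives that drop after idle periods; the first runs on the Windows 7 "server" and disables its idle disconnect, the second is the persistent mapping on the XP clients, with placeholder names:)

        net config server /autodisconnect:-1
        net use Z: \\server\share /persistent:yes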

  • Windows Server 2003 DHCP not handing out IPs

    - by SnOrfus
    I'm trying to set up a home server (to tinker with) as a domain controller. I've set up the domain, and I've installed DHCP and set up a scope without any exclusions (with the default range of 192.168.0.1-254). My client machine is a Windows 7 (RC) machine; it has a connection but can't get an IP address. Even if I set the IP to a static 192.168.0.2, there is still no connectivity. I can ping the client from the server, but pinging the server from the client just times out. The only thing between the server and the client is a 24-port switch (D-Link DES-1024D).
    EDIT: OK, it turned out that the interfaces were set up backwards in the NAT settings (the internal NIC connection was set to public and the external NIC connection was set to private). I changed this and all was OK... sort of. The problem now: if I set a static IP on the client (where I am typing this from), all is fine. BUT when I set it to get an address from DHCP, I get a correct IP from the server (192.168.0.2), but there is no internet on the client; I can still ping the server fine from the client (which makes sense, because I was able to get an IP from it).
    EDIT: I ended up just removing the Routing and DHCP server roles and going with ICS for the time being, until I get my hands on some better learning tools.
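
    (For reference, a sketch of handing out a gateway and DNS server through DHCP scope options, which matches the "got an IP but no internet" symptom; this assumes the server at 192.168.0.1 is also the NAT box:)

        netsh dhcp server scope 192.168.0.0 set optionvalue 003 IPADDRESS 192.168.0.1
        netsh dhcp server scope 192.168.0.0 set optionvalue 006 IPADDRESS 192.168.0.1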

  • snort analysis of wireshark capture

    - by Ben Voigt
    I'm trying to identify trouble users on our network. ntop identifies high-traffic and high-connection users, but malware doesn't always need high bandwidth to really mess things up. So I am trying to do offline analysis with snort (I don't want to burden the router with inline analysis of 20 Mbps traffic). Apparently snort provides a -r option for this purpose, but I can't get the analysis to run. The analysis system is Gentoo, amd64, in case that makes any difference. I've already used oinkmaster to download the latest IDS signatures. But when I try to run snort, I keep getting the following error:
        % snort -V
           ,,_     -*> Snort! <*-
          o"  )~   Version 2.9.0.3 IPv6 GRE (Build 98) x86_64-linux
           ''''    By Martin Roesch & The Snort Team: http://www.snort.org/snort/snort-team
                   Copyright (C) 1998-2010 Sourcefire, Inc., et al.
                   Using libpcap version 1.1.1
                   Using PCRE version: 8.11 2010-12-10
                   Using ZLIB version: 1.2.5

        % snort -v -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/
        (snip)
        273 out of 1024 flowbits in use.
        [ Port Based Pattern Matching Memory ]
        +- [ Aho-Corasick Summary ] -------------------------------------
        | Storage Format    : Full-Q
        | Finite Automaton  : DFA
        | Alphabet Size     : 256 Chars
        | Sizeof State      : Variable (1,2,4 bytes)
        | Instances         : 314
        |     1 byte states : 304
        |     2 byte states : 10
        |     4 byte states : 0
        | Characters        : 69371
        | States            : 58631
        | Transitions       : 3471623
        | State Density     : 23.1%
        | Patterns          : 3020
        | Match States      : 2934
        | Memory (MB)       : 29.66
        |   Patterns        : 0.36
        |   Match Lists     : 0.77
        |   DFA
        |     1 byte states : 1.37
        |     2 byte states : 26.59
        |     4 byte states : 0.00
        +----------------------------------------------------------------
        [ Number of patterns truncated to 20 bytes: 563 ]
        ERROR: Can't find pcap DAQ!
        Fatal Error, Quitting..
    net-libs/daq is installed, but I don't even want to capture traffic, I just want to process the capture file. What configuration options should I be setting/unsetting in order to do offline analysis instead of real-time capture?
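
    (For reference, a sketch of pointing snort 2.9 at its DAQ modules explicitly; the module directory is a guess, check where Gentoo's daq package installed daq_pcap.so:)

        snort --daq-list --daq-dir /usr/lib64/daq    # confirm the pcap module is visible
        snort --daq pcap --daq-dir /usr/lib64/daq \
            -r jan21-for-snort.cap -c /etc/snort/snort.conf -l ~/snortlog/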
