Search Results

Search found 5723 results on 229 pages for 'turing machines'.

  • Configuring subdomains for a machine (Win2k8) in a LAN

    - by RMS
    I am currently setting up a Windows 2008 server to host a website with multiple subdomains, all accessible only within the LAN. Also, there is no Active Directory. What I did:
    1. Set the computer name to 'web'.
    2. In IIS, added a site binding 'site1.web' to the Default Web Site.
    3. Added the DNS role to the server.
    4. Added 'web' as a primary zone in the forward lookup zones (default options).
    5. Added a CNAME record 'site1'.
    From a client machine, in the TCP/IP configuration, I added the IP address of 'web' to the DNS list in addition to the ISP DNS (the client machine's IP comes from DHCP). Browsing to 'http://web' or 'http://site1.web' now works correctly. My question is: is it possible, through additional steps on the server, to have the websites accessible without requiring this DNS configuration on every client machine? Thanks in advance
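
    For reference, steps 4 and 5 can also be scripted; a sketch using dnscmd on the server, with the zone and record names taken from the steps above:

        REM create the primary zone, then point site1.web at web
        dnscmd /ZoneAdd web /Primary /file web.dns
        dnscmd /RecordAdd web site1 CNAME web.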

  • How to use sshd_config - PermitUserEnvironment option

    - by laks
    I have client1 and client2; both are Linux machines. From client1, running ssh root@client2 "env" displays the list of SSH variables from client2. What I did on client2: I want to add a new variable on client2, so I edited sshd_config to set PermitUserEnvironment yes, created a file named environment under .ssh with the entry Hi=Hello, then restarted sshd (/etc/init.d/sshd). Now, trying the same command from client1, ssh root@client2 "env" does not show the new variable "Hi". Refs: http://www.raphink.info/2008/09/forcing-environment-in-ssh.html http://www.netexpertise.eu/en/ssh/environment-variables-and-ssh.html/comment-page-1#comment-1703
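
    A minimal sketch of the intended setup, for comparison; note that sshd reads the environment file from the home directory of the user being logged in as (root here), so it must be /root/.ssh/environment:

        # /etc/ssh/sshd_config on client2
        PermitUserEnvironment yes

        # /root/.ssh/environment on client2
        Hi=Hello

        # restart sshd, then verify from client1
        /etc/init.d/sshd restart
        ssh root@client2 "env | grep Hi"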

  • Windows 2008 Domain Controller - Backup (BDC) to Primary (PDC)

    - by Klaptrap
    I have created a new domain controller in my single-domain forest. I have also made it DHCP- and DNS-ready; all three services have synchronised with the existing W2K8 domain controller. I even migrated the FSMO roles and thought everything was fine. Indeed, all machines on the network appear to obtain DHCP and DNS from the new server, and AD is working on the new server, as my internal website uses it for login authentication. I have just noticed, via BgInfo (Sysinternals), that the new server is showing as "backup" and the old one as "primary"; I thought I had already taken care of this. Have the FSMO roles swapped back? I have yet to remove the old server from AD (dcpromo). Do I need to do anything before I run dcpromo on the old server? Any thoughts appreciated....
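
    Before demoting the old server, the current role holders can be confirmed from an elevated prompt on either DC; a quick sketch:

        REM list which DC holds each of the five FSMO roles
        netdom query fsmo
        REM summarise replication health between the DCs
        repadmin /replsummary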

  • How to manage preventive maintenance planning for external IT support?

    - by code-gijoe
    I am a bit puzzled about how to handle server upgrade planning for software we maintain on remote sites. This is my case: I work for a software company that has many external clients. We are trying to be more agile in our development, so we plan to release small improvements every quarter, and we want to keep our clients informed of maintenance schedules. Instead of having angry clients who believe the ROI of our support plan is low, we want to be more proactive. Let's say we have 100 machines to take care of; is there some tool to assist me in planning the maintenance with clients? Right now I get a call from an unhappy client requesting an upgrade; that is when we go into panic mode and start making calls, when I have to check my calendar, coordinate with the other guys, call a few more times, and change the date again and again until everyone is happy. Can this be done better?

  • Connect to SQLite Database using Eclipse (Java)

    - by bnabilos
    Hello, I'm trying to connect to an SQLite database from Eclipse, but I'm getting errors. This is my Java code, and below it the error I get as output. Please see if you can help me. Thank you in advance.

        package jdb;

        import java.sql.*;

        public class Test {
            public static void main(String[] args) throws Exception {
                Class.forName("org.sqlite.JDBC");
                Connection conn = DriverManager.getConnection(
                        "jdbc:sqlite:/Applications/MAMP/db/sqlite/test.sqlite");
                Statement stat = conn.createStatement();
                stat.executeUpdate("drop table if exists people;");
                stat.executeUpdate("create table people (name, occupation);");
                PreparedStatement prep = conn.prepareStatement(
                        "insert into people values (?, ?);");
                prep.setString(1, "Gandhi");
                prep.setString(2, "politics");
                prep.addBatch();
                prep.setString(1, "Turing");
                prep.setString(2, "computers");
                prep.addBatch();
                prep.setString(1, "Wittgenstein");
                prep.setString(2, "smartypants");
                prep.addBatch();
                conn.setAutoCommit(false);
                prep.executeBatch();
                conn.setAutoCommit(true);
                ResultSet rs = stat.executeQuery("select * from people;");
                while (rs.next()) {
                    System.out.println("name = " + rs.getString("name"));
                    System.out.println("job = " + rs.getString("occupation"));
                }
                rs.close();
                conn.close();
            }
        }

    And this is what I get in Eclipse:

        Exception in thread "main" java.lang.ClassNotFoundException: org.sqlite.JDBC
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:315)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:330)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:250)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:398)
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Class.java:169)
            at jdb.Test.main(Test.java:7)

    Thank you

  • FreeNAS - how to "Exclude from file" in Rsyncd (GUI)

    - by user179181
    I am trying to set up rsync tasks to pull user profiles from 11 Windows machines running DeltaCopy Server, and then configure ZFS periodic snapshot tasks as a backup solution. So far this has been working fine, although I would like to exclude certain file types like .DAT or NTUSER.DAT. My exclusion file resides on the local ZFS dataset (the receiving side) and is as follows:

        Temp
        Temporary Internet Files
        NTUSER.DAT
        NTUSER.DAT.LOG
        *.dat
        *.tmp
        *.DAT.log
        *.ost
        *.pst

    The line I typed under Auxiliary Parameters (Rsyncd Global Conf, under Services) is:

        exclude from = /mnt/Storage/User_Profiles/exclude.txt

    I've tried deleting the .DAT files from the receiving end, and just as I start to get excited, I click refresh and there they are again.
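
    For comparison, if the pull tasks were run as plain rsync client commands rather than through the daemon configuration, the equivalent option would be --exclude-from; a sketch (host and module names are placeholders):

        rsync -av --exclude-from=/mnt/Storage/User_Profiles/exclude.txt \
            rsync://winbox1/Profiles/ /mnt/Storage/User_Profiles/winbox1/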

  • Why is ntpd not updating the time on my server?

    - by John
    I have ntpd running on my server. It's all the default settings, except I commented out its ability to be a server to other machines:

        # restrict -4 default kod notrap nomodify nopeer noquery
        # restrict -6 default kod notrap nomodify nopeer noquery
        restrict default ignore

    If I run ntpdate -q ntp.ubuntu.com, I'm told that my machine's clock is off by 7 seconds. What's going on? How can I diagnose what's happening? Is there a log I can turn on?

    More info #1:

        # ntpq -np
             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
         91.189.94.4     193.79.237.14    2 u   30   64    7  108.518   -0.136   0.361

    More info #2: Here's what this looked like when I asked the question:

        # ntpdate -q ntp.ubuntu.com
        server 91.189.94.4, stratum 2, offset 7.191308, delay 0.13310
        10 Jan 20:38:09 ntpdate[31055]: step time server 91.189.94.4 offset 7.191308 sec

    And here's what it looks like now, after restarting ntpd a couple of times (I'm assuming that's what fixed it):

        # ntpdate -q ntp.ubuntu.com
        server 91.189.94.4, stratum 2, offset 0.000112, delay 0.13164
        10 Jan 20:47:03 ntpdate[31419]: adjust time server 91.189.94.4 offset 0.000112 sec
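
    A diagnostic sketch using stock ntpd features: the reach column above is an octal shift register of the last eight polls (377 means all eight were answered), and ntpd can write its own log file:

        # watch whether the peer is actually answering over time
        ntpq -np

        # in /etc/ntp.conf, enable logging, then restart ntpd
        logfile /var/log/ntp.log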

  • Access points fighting for dominance?

    - by Phillip Oldham
    We have a small office with a large number of wireless devices (a mixture of desktop machines, laptops, and wifi-enabled phones), all working from a single Apple AirPort Extreme that extends our wired network. I've added another AirPort Extreme for resiliency, since we've been seeing a decrease in performance and (as far as I understand) access points can only handle a small number of clients. I set the new AP to extend the current network so that the clients weren't constantly switching between different wireless networks; however, as soon as this AP was configured, all the wireless devices started seeing network trouble, flicking on and off. I'm assuming this is because both APs are reasonably strong and the clients can't decide which to use. What is the best route to resolve this? What I'm aiming for is wireless resiliency: preferably having the two APs share the network load, or, if this isn't an option, having a primary AP with a "fail-over" should the primary go down for any reason.

  • Giving a scanner-printer-combo a zoom function when copying?

    - by ldigas
    You know how, every time you go to a photocopying shop, the photocopiers always have a neat zoom function: they can take whatever you give them and zoom it in or out, so your copy comes out smaller or larger. I have one of those neat 3-in-1 printer machines (an Epson SX115, to be exact). It has a copy button on it, but it also comes with some software. Apart from going into some photo manipulation application, is there some way (software) to give it that feature? In short, I need something that can scan a page, scale it to, say, a quarter of its size, and then print it out. Anyone know of anything like that?

  • Any good method for mounting Hadoop HDFS from another system?

    - by Beel
    I want to mount Cloudera Hadoop as a Linux file system over the LAN. As a setup, I already have the Hadoop cluster running on a set of Ubuntu machines, but now I need to be able to use it as a normal file system from a Fedora system over the LAN. I tried FUSE, but two things:
    1. Cloudera says FUSE loses data (click here for that comment by a Cloudera employee on the official Cloudera support site).
    2. I've had no success making it work the way we want.
    As a point of clarification, I am using Hadoop ONLY for the file system, not for its other capabilities.
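
    For reference, the FUSE route described above is typically mounted along these lines; a sketch (namenode host, port and mount point are placeholders, and this assumes the hadoop-fuse-dfs package from CDH is installed on the Fedora box):

        hadoop-fuse-dfs dfs://namenode:8020 /mnt/hdfs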

  • 'server was unable to allocate from the system nonpaged pool' error

    - by cop1152
    Currently, users at my company are unable to access their home folders (or any folder on our domain controller). I am working remotely and can't physically access the machine until tomorrow morning, but I can reach it with RDC. When I attempt to ping other machines on our network from the machine in question (connected via RDC), I immediately get the error:

        PING: transmit failed, error code 1450.

    The Event Viewer is full of 2019 errors:

        The server was unable to allocate from the system nonpaged pool because the pool was empty

    Does anyone have experience with this specific issue? I have found some info with Google, but would like to hear what the pros at SF have to say. This is a Windows 2000 server.
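
    The usual next step for Event ID 2019 is identifying which driver's pool tag is leaking; a sketch (assuming poolmon.exe from the Windows Support Tools or DDK is available on the server):

        REM sort pool usage by bytes; the top nonpaged-pool tag is the suspect
        poolmon -b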

  • Hyper-V management remotely

    - by Péter
    I'll say in advance that I'm a newbie on this topic. I have a Win8 machine at home with Hyper-V installed, behind a router. The router has a public IP with a domain attached. I have another Win8 machine at work, also with Hyper-V installed. I want to access my home Hyper-V via Hyper-V Manager so I can manage my virtual machines from work. I found this article, but I don't know if it's applicable to me. I thought that simple port forwarding should work: I'd only need to give the work Hyper-V Manager my domain and the port I chose, and if a login form pops up, fill in the user data of my home computer. How can I solve this? My thoughts revolve around:
    - Port forwarding: set domain+port and use my home user account
    - Setting up a VPN and using the local IP address of my home computer (this looks a little cumbersome, and my router only supports PPTP)
    I'm open to any other solution too. Thanks, Péter
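
    One commonly cited prerequisite for managing Hyper-V between non-domain machines is storing the remote credentials on the management side; a sketch (server name and account are placeholders). Note that Hyper-V Manager's WMI/DCOM traffic generally does not traverse NAT well, which is why VPN-based answers are common:

        cmdkey /add:HOMESERVER /user:HOMESERVER\peter /pass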

  • How to get LAN ip to a variable in a Windows batch file

    - by Ville Koskinen
    I'm streaming audio from my Windows 7 laptop to a sound card attached to a router. I have a little batch script to start streaming:

        REM Kill any instances of vlc
        taskkill /im vlc.exe
        "c:\Program Files\VideoLAN\VLC\vlc.exe" <parameters to start http streaming>
        REM Wait for vlc
        TIMEOUT /T 10
        REM start playback on router
        plink -ssh [email protected] -pw password killall -9 madplay
        plink -ssh [email protected] -pw password wget -q -O - http://192.1.159:8080/audio | madplay -Q --no-tty-control - &

    As you can see, the HTTP stream address is hard-coded. It would be nice to get the address dynamically, so the script can be reused on other machines. Any ideas?
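
    A sketch of one common approach: parse the ipconfig output into a variable. This assumes an English-language Windows; if several adapters are up, the last IPv4 address listed wins:

        REM capture this machine's IPv4 address into %IP%
        for /f "tokens=2 delims=:" %%a in ('ipconfig ^| findstr /c:"IPv4 Address"') do set IP=%%a
        REM strip the leading space from ipconfig's "label : value" layout
        set IP=%IP: =%
        echo will stream from http://%IP%:8080/audio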

  • What emulator / VM software can I use to create a Win32-portable Linux Guest?

    - by Jotham
    Hi, I want to create a portable VM setup so that I can boot a Linux install regardless of which Windows XP / Windows 7 host machine I am on. I was looking at QEMU, but it doesn't appear to have a relatively safe win32 build. Other things like VirtualBox require a complete install on the host OS for performance reasons. I'm not so concerned about performance; I just want to run a few curses-based applications. My ideal end goal would be a memory stick of some size with a VM/emulator I can boot on most WinXP/Windows 7 machines to access my own curses-based applications (probably Arch Linux or Debian). Any help would be appreciated. Regards,
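
    If a usable win32 QEMU build turns up, the invocation for this use case could be as small as the following sketch (image name and memory size are placeholders; qemu.exe and the disk image would sit together on the stick):

        qemu -m 256 -hda arch.img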

  • Can't login to SQL Server after moving machine to different office/domain

    - by Dan
    Our company has just been bought, and over the weekend I have brought the last few machines up on the new owners' network (they are under a different Windows domain). The last machine is our Vault system, and its SQL Server was using Windows authentication. I have plugged it into their network and it's working fine, but I cannot connect to SQL Server with Management Studio and, I fear, no backup jobs will be running either. When I try to log in under Windows authentication, it shows the user name "NEWDOMAIN\Administrator" (greyed out) and then presents a "login failed" message with error code 18456. Can anyone help me with this, or will I just have to reinstall SQL Server and Vault and restore the backup I took before the move?
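
    A sketch of one recovery route that avoids a reinstall, for a default instance (instance and login names are taken from the message above): start SQL Server in single-user mode, in which members of the local Administrators group get sysadmin access, and re-create the login for the new domain:

        net stop MSSQLSERVER
        net start MSSQLSERVER /m
        sqlcmd -E -Q "CREATE LOGIN [NEWDOMAIN\Administrator] FROM WINDOWS; EXEC sp_addsrvrolemember 'NEWDOMAIN\Administrator', 'sysadmin'"
        net stop MSSQLSERVER
        net start MSSQLSERVER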

  • How do you apply development practices like version control, testing and continuous integration/deployment to system administration?

    - by arex1337
    Imagine you're going to manage a number of servers with a number of different services that are used by a number of people. Now say you want to reconfigure or replace some software on one of those servers. Obviously you don't want to work on servers that are in production. If this were a code change, as a developer I would make the change on my local development machine, test it locally and commit the change to a version control system. The changes could then be deployed in a staging environment, tested further and finally deployed in a production environment. It would also be easy for me to roll back, if necessary. Generally, or specifically, how do you achieve this in system administration? (The first thing that comes to mind is to use virtual machines and put virtual machine images in version control, but I'm sure there is a lot of literature and many clever solutions I'm not presently aware of.)
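
    One low-tech starting point in this direction is putting configuration itself under version control; a sketch using etckeeper (assuming Debian/Ubuntu servers; configuration-management tools such as Puppet or Cfengine take the idea much further):

        # track /etc in version control; etckeeper auto-commits on package operations
        apt-get install etckeeper
        etckeeper init
        etckeeper commit "baseline before reconfiguration"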

  • How to set up a staging apt repository to securely manage upgrades

    - by andreash
    Hello, I would like to be able to run an automatic apt-get upgrade (once per hour) on our servers (Ubuntu 10.04), so that I don't have to do it manually on all of them (about 15). However, for production machines, that's not a good idea... So here's my plan: set up a local repository for all 'approved' updates to critical packages. I would then push updated packages from upstream to our local repo after testing them, and all servers could automatically upgrade from this repository (apt-cron?). So my question is this: how do I configure apt on the clients so that they use the local repository for all packages that exist in the local repository, and the upstream one for all other packages? Does this actually make sense? Or am I missing something? Anyway, thanks for your insight! Andreas.
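
    This is the classic use case for apt pinning; a sketch, assuming the local repository's Release file sets Origin: local-staging (the origin string is a placeholder and must match what the repo actually publishes):

        # /etc/apt/preferences on each client
        Package: *
        Pin: release o=local-staging
        Pin-Priority: 1001

    A priority above 1000 makes apt prefer the local repo's version even over a higher upstream version number, while any package the local repo does not carry falls through to upstream as usual.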

  • organizing images by resolution with batch files

    - by Anthony
    Doing some digging, I'm trying to figure out a command-line solution for organizing very large archives of images into folders based on their resolution: 1920x1080, 1600x1200, 1600x900, etc. I've come across a few posts on Super User mentioning something called ImageMagick; is that the best method for the madness I'm trying to accomplish? I've never used any command-line functions/applets/tools other than those that come from Microsoft. I'm rather new to command-line usage, but I've been enjoying the hell out of it using PowerShell, xcopy and robocopy. I am slowly trying to push myself further into the Linux world, with Ubuntu running on one of my physical machines as well as in a virtual machine, so that's an option as well.
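
    ImageMagick's identify fits this well; a batch-file sketch (untested assumptions: identify.exe is on the PATH and the images sit in the current folder):

        @echo off
        REM move each image into a folder named after its WIDTHxHEIGHT
        for %%f in (*.jpg *.png) do (
            for /f %%r in ('identify -format "%%wx%%h" "%%f"') do (
                if not exist "%%r" mkdir "%%r"
                move "%%f" "%%r\"
            )
        )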

  • Outlook connected to exchange does not send email

    - by Thomas de Nooij
    I have multiple machines with Outlook 2010 connected to Rackspace hosted Exchange. Everything works fine, but emails sent a while after Outlook starts will not leave the outbox. Clicking Send & Receive displays the progress bar at 100% completed with no errors, but it never really finishes: the Cancel All button stays active. The emails in the outbox are bold and italic, so they are ready to be sent. When I close Outlook and start it again, the mails are sent immediately without problems. I have tried the following:
    - Checked for third-party add-ins: only Microsoft add-ins are installed
    - Checked whether the virus scanner is blocking anything; McAfee is not
    - Checked and repaired the .ost file
    - Increased the server time-out from 30 seconds to 60 seconds
    Nothing helped. Any suggestions?

  • Browser-based Operating System

    - by Ross Peoples
    I have a bunch of touchscreen machines that I want to display a webpage on and have users interact with the webpage via the touchscreen. Right now, this is done with a full-blown OS with a browser set to run at startup. I think maybe the ideal solution is to use a Linux-based OS that boots up, starts X, then starts a web browser (Chrome, Firefox, or whatever) in full screen mode. What kind of options do I have? I really want to avoid using a full-blown OS like I do now. It looks unprofessional and takes a while to boot up. I was thinking maybe Chrome OS or something, but I wouldn't know how to set it up for my purposes, since it's still designed to be used as a desktop OS instead of a kiosk-type OS.
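
    The boot-to-browser flow described here is usually wired up through X's startup file; a minimal sketch (Debian-flavored; the autologin configuration, package names and URL are assumptions):

        #!/bin/sh
        # ~/.xinitrc for an autologin kiosk user: no blanking, just a browser
        xset s off
        xset -dpms
        exec chromium --kiosk http://intranet.example/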

  • TeamViewer: How to disable color dithering for low-bit-depth screen settings

    - by gogowitsch
    I am using Terminal Services and TeamViewer a lot to access other computers, partly over slow networks. The problem described below is not affected by which of the two remote access services I am using. When accessing Windows 7 Professional machines, a great deal of text is hard to read as the background is dithered. Even for exactly the same colors, Windows 2003 does not seem to dither at all, but to choose the closest available color. I strongly prefer the latter, as I don't care for the exact colors, I just want to be able to read easily. I am not sure whether this is operating system-related. The programs on the remote systems do not allow me to change the color choices for the various backgrounds to anything sane. Is there a way to disable this color dithering using some target operating system setting that will do the trick for both Terminal Services and TeamViewer?

  • Router loses connection to internet randomly

    - by tvanover
    I have a Belkin Wireless G Plus MIMO router, and it randomly loses the connection to the internet. It seems to happen for a second or two: not enough to impact day-to-day browsing, but enough to disconnect games or stall videos I am watching. It happens whether I am plugged into the router or connecting via wifi. I am unsure whether it is my router or Comcast causing this issue. If I connect a computer directly to the modem, the problem never occurs, which tells me it is probably the router. But I also had a Netgear WGR614 wireless router, and it had the same problem. I don't know where to start looking for the source of the problem. What logs should I be combing through? All my machines run Windows 7, in varying editions.
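
    In the absence of useful router logs, one way to narrow it down is to run a continuous ping against the router and against an outside host side by side and compare where the gaps appear; a sketch (addresses are placeholders):

        REM window 1: the router's LAN address
        ping -t 192.168.2.1 > ping_router.log
        REM window 2: a host beyond the modem
        ping -t 8.8.8.8 > ping_wan.log

    If only the outside ping drops, the problem is upstream of the router; if both drop together, the router itself is suspect.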

  • mounts aren't case-sensitive

    - by Asi
    I mounted a few drives from the Linux boxes on my network, but those mounts aren't case-sensitive. The mount command I used (per man mount.cifs, case-sensitive should be the default):

        mount //10.0.1.10/remote_folder /local_folder -t cifs -o username=xxxx,password=xxxx

    But the mounts aren't case-sensitive. For example,

        ls -l /local_folder/testfile.txt
        ls -l /local_folder/TESTFILE.TXT

    give the same result instead of 'file not found'. A couple of important points: all the drives are served from Linux machines; my local machine runs Fedora 18 and is case-sensitive for any folder/file except the mounted drives; and all the drives/mounts are case-sensitive when accessed over SSH. If I SSH from my local machine to a remote machine, ls -l /local_folder/TESTFILE.TXT says 'file not found', as it should. So I believe the issue is on my local machine and not in the way I did the mount, but I'm not sure where to look next (I'm new to Linux).
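
    Since the shares are served from Linux boxes (presumably via Samba), case handling is ultimately decided server-side; a sketch of the relevant smb.conf parameter on the servers (the share name is a placeholder, and the default for this setting is auto):

        [remote_folder]
            case sensitive = yes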

  • Link bonding across multiple switches?

    - by Bryan Agee
    I've read up a little on bonding NICs with ifenslave; what I'm having trouble understanding is whether special configuration is needed to split the bonds across two switches. For example, if I have several servers that each have two NICs, and two separate switches, do I just configure the bonds and plug one NIC from each server into switch #1 and the other into switch #2, or is there more to it than that? If the bonds are active-backup, will a NIC failure on a single machine mean that server becomes disconnected, since the rest of the machines are using the primary NIC while it is using the secondary? Or do you link the switches with a cable as well?
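
    For what it's worth, active-backup is the bonding mode usually named for dual-switch setups precisely because it needs no switch-side configuration; a sketch of a classic ifenslave-era setup (interface names and addresses are placeholders):

        # /etc/modprobe.d/bonding.conf
        alias bond0 bonding
        options bond0 mode=active-backup miimon=100

        # bring the bond up and enslave one NIC per switch
        ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
        ifenslave bond0 eth0 eth1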

  • The canonical "blocking BitTorrent" question

    - by Aphex5
    How can one block, or severely slow down, BitTorrent and similar peer-to-peer (P2P) services on one's small home/office network? In searching Server Fault I wasn't able to find a question that served as a rallying point for the best technical ideas on this. The existing questions are all about specific situations, and the dominant answers are social/legal in nature. Those are valid approaches, but a purely technical discussion would be useful to a lot of people, I suspect. Let's assume that you don't have access to the machines on the network. With encryption use increasing in P2P traffic, it seems like stateful packet inspection is becoming a less workable solution. One idea that seems to make sense to me is simply throttling down heavy users by IP, regardless of what they're sending or receiving -- but it doesn't seem many routers support that functionality at the moment. What's your preferred method to throttle P2P/BitTorrent traffic? My apologies if this is a dupe.
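
    On the throttling-by-IP idea: a Linux box routing the LAN can do exactly that with tc, regardless of whether the traffic is encrypted; a sketch capping one heavy host (interface, address and rates are placeholders):

        # default traffic goes to class 1:10; the heavy host is squeezed into 1:20
        tc qdisc add dev eth0 root handle 1: htb default 10
        tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit
        tc class add dev eth0 parent 1: classid 1:20 htb rate 256kbit ceil 512kbit
        tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
            match ip dst 192.168.1.50/32 flowid 1:20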
