Search Results

Search found 20883 results on 836 pages for 'wont say'.

Page 647/836 | < Previous Page | 643 644 645 646 647 648 649 650 651 652 653 654  | Next Page >

  • Buying an old laser printer -- what will need to be replaced?

    - by marienbad
    Hi all -- as you can see I'm new. I do IT and wiring for a small local shop but I never deal with printers. I do a LOT of printing, and I'd like to stop spending as much money on it. On my local CL, there is an HP 8100DN (duplex network) printer for a very good price (and the toner is a quarter-cent per page). It has printed 200,000 pages and I don't yet know anything else about it. The model was released in 1999. So my questions:

    What are the parts that tend to need service on laser printers? On ebay, I see fusers, rollers, DC power boards, and motors.
    What would you expect to replace soon at 200,000 pages?
    Are there any good "tests" to find out if certain parts are near failure?
    Do you have anything to say about the HP 8100 specifically?

    The bottom line for me is that if there's any chance of repairs costing more than $100, it's not worth it for me.

    Read the article

  • OpenVPN + iptables / NAT routing

    - by Mikeage
    I'm trying to set up an OpenVPN VPN, which will carry some (but not all) traffic from the clients to the internet via the OpenVPN server. My OpenVPN server has a public IP on eth0, and is using tap0 to create a local network, 192.168.2.x. I have a client which connects from local IP 192.168.1.101 and gets VPN IP 192.168.2.3. On the server, I ran:

      iptables -A INPUT -i tap+ -j ACCEPT
      iptables -A FORWARD -i tap+ -j ACCEPT
      iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On the client, the default remains to route via 192.168.1.1. In order to point it to 192.168.2.1 for HTTP, I ran:

      ip rule add fwmark 0x50 table 200
      ip route add table 200 default via 192.168.2.1
      iptables -t mangle -A OUTPUT -j MARK -p tcp --dport 80 --set-mark 80

    Now, if I try accessing a website on the client (say, wget google.com), it just hangs there. On the server, I can see:

      $ sudo tcpdump -n -i tap0
      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on tap0, link-type EN10MB (Ethernet), capture size 96 bytes
      05:39:07.928358 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 558838 0,nop,wscale 5>
      05:39:10.751921 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 559588 0,nop,wscale 5>

    Where 74.125.67.100 is the IP it gets for google.com. Why isn't the MASQUERADE working? More precisely, I see that the source is showing up as 192.168.1.101 -- shouldn't there be something to indicate that it came from the VPN?

    Edit: Some routes [from the client]

      $ ip route show table main
      192.168.2.0/24 dev tap0  proto kernel  scope link  src 192.168.2.4
      192.168.1.0/24 dev wlan0  proto kernel  scope link  src 192.168.1.101  metric 2
      169.254.0.0/16 dev wlan0  scope link  metric 1000
      default via 192.168.1.1 dev wlan0  proto static

      $ ip route show table 200
      default via 192.168.2.1 dev tap0
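
    One hedged reading of the capture above: the server's MASQUERADE rule only matches packets whose source is already in 192.168.2.0/24, but the marked packets leave the client still stamped with 192.168.1.101, so the rule never fires. A minimal sketch of a client-side fix, assuming that really is the cause (192.168.2.4 is taken from the client route table above and is only an example):

      # on the client: make table 200 pick the tap0 address as source
      ip route replace table 200 default via 192.168.2.1 dev tap0 src 192.168.2.4
      # or, alternatively, masquerade the marked traffic as it leaves tap0
      iptables -t nat -A POSTROUTING -o tap0 -j MASQUERADE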

    Read the article

  • Disaster Recovery Standby Server

    - by user64300
    Hi, I work for a small business with 25 users and 2 servers. One server is the DC, running Windows Server 2003/Exchange 2003. We want a reliable disaster recovery strategy for this server without having to spend a lot of money. We take regular backups, but I have been advised that only an identical server will allow them to be restored easily. I'm trying to come up with a solution that means we don't have to buy two servers at twice the cost every time we upgrade. I'm toying with the idea of upgrading our DC more frequently (say every 3 years) and then using the old server as the recovery server (temporarily, until we can source a replacement server). However, I won't know whether the backups will restore on the old server until I try it! We're planning to upgrade to Server 2008 R2 in the near future, so I'm hoping the backup tools will give me some success in restoring to different hardware (or perhaps I can use Hyper-V if not). So what I am wondering is whether it is a good idea to use old hardware as a disaster recovery strategy (provided we test it regularly, obviously!).

    Read the article

  • Searching for just files

    - by M Schenkel
    I have a couple of questions about searching for files on Windows 7. I find the XP method much easier than this new Windows 7 search. Note: I am only concerned about finding files whose names match a search term, not ALL files containing the search term.

    Is there a way to search just for files? When I use the search it seems to be searching "within" files and returning instances where the name of the file is used. Example: I have a whole web directory and want to find the JavaScript files. But if I enter "myjavascript.js" in the search, it also returns all the HTML files which reference the JavaScript file. This is both slow and makes it difficult to actually find the file itself rather than the references to it.

    Is there a way to search for an exact match? The search seems to implicitly use wildcards. For instance, say I have a bunch of files in a folder: file1.txt, file11.txt, file12.txt, file13.txt. If I enter "file1.txt" in the search box it returns instances as if I were using a wildcard, file1*.txt.

    I miss XP!!!!

    Read the article

  • Apache Virtualhost entry with Windows hostname

    - by gshauger
    I have a Windows Domain Controller and we use it for DNS for our internal network. I have an Ubuntu box with an IP address of 172.16.34.149. Within the Windows DNS I created the forward and reverse lookup entries for the name Endymion. Naturally, whenever I FTP/SSH/HTTP/etc. to the hostname Endymion it resolves correctly to my Ubuntu box. I wanted to do some web development on this box for an existing site. There were problems when I placed the website in a subfolder of /var/www/. Let's just say it was in folder /var/www/projectx/. The issue involved the incorrect resolution of non-relative URLs. So I figured I could create a new DNS entry for the hostname projectx. Sure enough, when I FTP/SSH/HTTP/etc. to the hostname projectx it takes me to the same Ubuntu box as the hostname Endymion -- this is what I would expect. I now have two hostnames for the same box. I then create a VirtualHost entry in httpd.conf that looks like the following:

      <VirtualHost *:80>
        DocumentRoot /var/www/projectx
        ServerName projectx
        ServerAlias projectx
      </VirtualHost>

    Sure enough, when I go to a browser and type in http://projectx/ it takes me to the correct subfolder. Everything works!!! Not so fast. I then go to http://endymion/ and instead of taking me to /var/www/ it takes me to /var/www/projectx/. Clearly I'm missing something. Help please! ;)
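
    A possible explanation, sketched under the assumption that this is stock name-based virtual hosting: once any <VirtualHost *:80> exists, requests for names that no vhost claims fall through to the first (and here only) vhost defined, which is projectx. Adding a vhost for the old name keeps /var/www/ reachable -- the block below is only an illustration:

      <VirtualHost *:80>
        DocumentRoot /var/www
        ServerName endymion
      </VirtualHost>

    On Apache 2.2 a NameVirtualHost *:80 directive also needs to be present for name-based matching to apply.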

    Read the article

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, Resource Pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary.

    Some examples from a "problem" cluster:
    Resource pool summary - Looks almost 4:1 overcommitted. Note the high amount of ballooned RAM.
    Resource allocation - The Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions.
    The real-time memory utilization graph of the top VM in the listing above - 4 vCPU and 64GB RAM allocated. It averages under 9GB use.
    Summary of the same VM.

    What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to: "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem??"? E.g. do customers need to be educated? What specific metric should be used to meter RAM usage - tracking the peaks of "Active" versus time?
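
    On the last question, one way to look at peak "Active" memory over time is the vCenter performance counters; a hypothetical PowerCLI sketch (the server name, VM name and 7-day window are only examples, and it assumes PowerCLI is installed and connected):

      Connect-VIServer vcenter.example.com
      Get-Stat -Entity (Get-VM "BigVM") -Stat mem.active.average -Start (Get-Date).AddDays(-7) |
          Measure-Object -Property Value -Average -Maximum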

    Read the article

  • Unwanted forced authentication after server restart (Win 2k3)

    - by Felthragar
    We're running a Win 2k3 R2 Standard 64-bit edition server. On this server we're running a file server and the ability to allow remote login to our network through VPN. We do not currently utilize a domain setup; all user accounts are local accounts on the server. Each employee is given a unique account to log in to the server. The password is a randomly generated 16-character string, which makes it hard to remember. What we've done is basically have the password stored on the client machine (standard "Remember Me" functionality). This has worked well. However, last night our server automatically restarted after an automatic update. After that, some of our employees, myself included, had to re-authenticate with the server, submitting our credentials again. Then again, some others did not have to re-authenticate. Do you guys have any idea why this is? Is there a setting to prevent this? I've checked the logs but I couldn't find anything of interest. Then again, I'm not really sure what I'm looking for. Thanks in advance; I'll try to answer any additional questions you may have.

    Edit: When I say "login" or "authenticate" I mean through the standard Windows SMB protocol.

    Edit 2: OK, new day. Tonight the server restarted again, and the same two clients that had to re-authenticate yesterday had to re-authenticate today as well. The rest did not.

    Read the article

  • Network Sniffing and Hubs

    - by Chris_K
    This will likely seem naive to the experts... but it has been on my mind lately. For years I've been using ntop and a cheap 4 port hub to sniff client networks to determine who's doing what -- and how much. Great way to see what's going on when they call and say "Geeze, the network seems really slow today." No need to bring in a managed switch (or access the existing one) and no need to configure spanning or mirroring. I just drop in the hub inline where I want to measure. Lately I noticed it is just about impossible to buy a real honest-to-goodness hub anymore. While looking for a new one, I had someone tell me that I should be sure to get a full-duplex hub or I'd only be seeing half the traffic when I monitor. Really? I've been using a crusty old Netgear DS104 all this time. No clue if it is half or FD. Have I really been understating my measurements? I'm just not bright enough about the physical layer to really know... Side note: Just ordered a Dualcomm Ethernet Switch TAP as a hub replacement. Seems like a nifty gadget. Any notes or tips about it would be welcome in the comments :-)

    Read the article

  • Emails sent from ColdFusion using the same SMTP/Exchange server work from one machine but fail from another

    - by Peter Herdenborg
    First, apologies if this question is too vague or has too little information to really be answerable. I am not normally working with these issues, and I don't have full access to the environment. However, the hosting provider seems to have a hard time tracking down the issue, so I am hoping that someone can at least provide me with some qualified guesses about the most likely problem. Here goes: A client I work for has a hosted IT environment, based on virtual machines running Windows 2008 R2 Standard. Our website, based on ColdFusion 9, was recently migrated from one virtual machine to another, and though ColdFusion is configured in the exact same way, using the same SMTP server -- i.e. the client's Exchange server hosted in the same environment and in the same AD as both web servers -- sending emails to external recipients is no longer working. It is still working fine when testing from the old machine. This is what I've learnt so far (all emails are sent using a valid from-address on the client's domain):

    Emails sent to other recipients on the same domain are delivered without any problem.
    Emails sent to external recipients on other domains are never delivered.
    When sending emails to both internal and external recipients, no emails are delivered.
    When receiving one of these emails at an internal address, the sender is now indicated as "[email protected]", while when sent from the old machine it used to say just "sender". This seems to me to hint that the Exchange machine "recognizes" the old web server while the new one is a stranger to it.
    In ColdFusion's mail log, all messages appear to be successfully delivered to the SMTP server.

    Any ideas on what settings to look at, what log entries to search for, or how to compare the old web server with the new one will be highly appreciated.
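
    One pattern that matches these symptoms is an Exchange receive connector that only allows relaying to external domains from the old web server's IP; internal delivery keeps working because it is not relaying. A hedged sketch of what to check in the Exchange Management Shell (the connector name and addresses are only examples):

      Get-ReceiveConnector | Format-Table Name, Bindings, RemoteIPRanges
      # if a relay connector lists only the old server's IP, include the new one as well:
      Set-ReceiveConnector "Relay from web servers" -RemoteIPRanges 172.16.0.10, 172.16.0.11

    Whether that connector grants ms-Exch-SMTP-Accept-Any-Recipient to anonymous sessions is the other half to verify.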

    Read the article

  • Ways of copying files

    - by Tim
    I have sometimes found that when using simple right-click copy-and-paste, some files/directories are not copied completely, or not at all, for various reasons -- for example, some saved webpage files/directories have strange characters in their names, or their names are too long. For example, in Windows 7, if I save this webpage http://www.howtogeek.com/howto/windows-vista/working-around-windows-vistas-shrink-volume-inadequacy-problems/ completely into a very deep directory whose parent directories may have long names, I cannot copy its top ancestor directory, as Windows complains that the filename for the saved webpage directory is too long. In Ubuntu, I can sometimes save a file with a special character such as a newline under some directory. But when I copy that directory, it complains that the file name has a special character and I have to manually remove the character. Such complaints come up in both Windows and Ubuntu. I was wondering what some better ways are to accomplish the copy job in both Windows and Ubuntu. For example, will archiving everything to be copied into a single archive help? If yes, how do I do that? Thanks and regards!
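
    A short sketch of the archiving idea, with placeholder paths only: on Ubuntu, tar keeps odd filenames intact inside the archive, and on Windows 7 robocopy copes with long paths better than Explorer's copy:

      # Ubuntu: pack, move one file, unpack at the destination
      tar -czf saved-pages.tar.gz ./saved-pages/
      tar -xzf saved-pages.tar.gz -C /path/to/destination/

      :: Windows 7 (cmd): copy the tree directly, including empty subdirectories
      robocopy C:\saved-pages D:\backup\saved-pages /E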

    Read the article

  • Is it possible to trace someone using Google during an online exam?

    - by George
    I happen to be a professor at a reputed college. I want to design an online exam for over 1000 students via around 50 computers right after the vacation ends. Now the problem is that I have heard that many students use Google in a different tab to find answers when no invigilator is around. I want to know if there is a way to backtrace this after the exam via some kind of history or any other possible means. In our university there is a standard system. I am not good with computers, but I will try to explain. Each computer uses Mozilla to connect to a centrally located server via an IP. The students open it and enter a unique ID and password to start the exam. Many questions are jumbled, and different groups of students take the exam in different time slots. Is there any way to trace it? I want to set an example for students so they won't cheat and will take their exams honestly. Additional details: Since the number of computers is less than the number of students, more than 10 students are going to use a single computer on a single day over a period of 10 hours. After this, if I check the history (and let's say someone even forgot to delete the history and I see it), will I be able to figure out who among the 10 did it? Moreover, is it even practical and feasible?

    Read the article

  • Exchange Full Access issue

    - by Benjamin Jones
    I was just hired as a System Admin for a small company. They use Exchange 2010 for their mail server. I've never had a permission issue like this with Exchange, because I worked for a larger firm with less responsibility before. Their old system admin is LONG GONE, so I can't ask him what he did. The issue: right now ANYONE can gain access to a mailbox and view the mail in it. This is disabled by default, you say, and you have to grant them Full Access? You are right, but the old system admin I guess didn't know what he was doing. So right now user A can open user B's mailbox without being granted permission. Here is what I found out: in the EMC, every user's Full Access Permission has the Exchange Servers group granted. Within the Exchange Servers group, Domain Users is a member, and within Domain Users all users are listed as members. So my guess is that because of this all users can access ANY mailbox? Well, GOOD news: the company is small (35 people) and they are not computer savvy, so hopefully no one has figured out they can open anyone's mailbox (from what I can tell, no). The next thing I did was, for my own mailbox in the EMC, delete the Exchange Servers group from the Full Access Permissions and grant access to my domain user. I made sure that my user was a member of the Exchange Servers group. I went to our OWA site and now I don't have permission to my own mailbox. I re-did everything the way it was for my user and now I'm stuck. Any help? I would think granting a single user that is in the Exchange Servers group Full Access to that mailbox would enable them to open that mailbox???? I guess I am wrong.
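
    A hedged starting point for auditing this from the Exchange Management Shell -- the user names are examples, and removing the broad grant should obviously be tried on one test mailbox first:

      # who has non-inherited Full Access on each mailbox?
      Get-Mailbox | Get-MailboxPermission |
          Where-Object { $_.AccessRights -like "*FullAccess*" -and -not $_.IsInherited }

      # remove the blanket grant from a single test mailbox, then re-grant per user as needed
      Remove-MailboxPermission -Identity "test.user" -User "Exchange Servers" -AccessRights FullAccess
      Add-MailboxPermission -Identity "test.user" -User "helpdesk.admin" -AccessRights FullAccess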

    Read the article

  • Updating files with a Perforce trigger before submit [migrated]

    - by phantom-99w
    I understand that this question has, in essence, already been asked, but that question did not have an unequivocal answer, so please bear with me. Background: In my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code and then the documentation to update the client-facing docs with what the new version numbers should be. I would like to streamline this process. My thoughts are as follows: create a Perforce trigger (which runs on the server side) which scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and then replaces it with the value of what the changelist would be when submitted. I already know how to determine this value. What I cannot figure out is how or where to update the files. I have already determined that using the change-content trigger (whether possible or not), which "fire[s] after changelist creation and file transfer, but prior to committing the submit to the database", is the way to go. At this point the files need to exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script so that I can edit them or use sed to replace the placeholder value with the intended value? The online documentation for Perforce which I have found so far has not been very explicit on whether this is possible or how the mechanics of a submission at this stage would work.
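
    For what it's worth, a sketch of the read side only, under the assumption (from the Perforce trigger documentation) that a change-content trigger can see the in-flight revisions via the @=changelist specifier; the depot path, trigger line and placeholder are all hypothetical:

      #!/usr/bin/env python
      # Possible trigger table entry:
      #   stamp-docs change-content //depot/docs/... "python /p4/triggers/stamp.py %changelist%"
      import subprocess
      import sys

      change = sys.argv[1]
      listing = subprocess.check_output(["p4", "files", "//depot/docs/...@=" + change])
      for line in listing.decode().splitlines():
          depot_path = line.split("#")[0]
          content = subprocess.check_output(
              ["p4", "print", "-q", depot_path + "@=" + change]).decode()
          if "#####PERFORCE##CHANGELIST##NUMBER" in content:
              # Reading the in-flight revision works here; rewriting it before the
              # commit is the unresolved part of the question.
              pass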

    Read the article

  • What needs to be considered when setting up for Linux Development? [closed]

    - by user123586
    I want to set up a box for Linux development. I have a working Linux install with the usual toolchain and an IDE. I'm looking for advice on how to approach structuring accounts and folders for development. As the Perl folks say, "There's always more than one way to do it." Left to my own devices, I'll come up with several unproductive ways of doing it before figuring out what an experienced Linux programmer would think obvious. I'm not looking for instructions to follow for a specific set of tools or a specific software package. Instead, I'm looking for insight into what decisions need to be made and how to make them, with an understanding of the advantages and disadvantages of each individual choice. These are some of the questions that come up:

    Where to put sources
    Where to put built object files and libraries
    Where to install
    What to set in environment variables
    What compiler flags matter and how to manage them across several types of builds
    What configuration entries to make in an IDE
    How to manage libraries to support multiple environments
    How to handle different build versions, such as debug vs release, or cross-platform builds

    If you are an experienced Linux developer, the answers to these questions may seem trivial and obvious. I'd like to learn to make decisions about these questions that result in as little manual configuration as possible, given some existing sources, a particular IDE or no IDE at all, a particular set of development libraries, etc. At this point you're probably thinking: can you be more specific? Sure. But remember that I'm trying to learn how to think about this stuff, not just follow a recipe for a specific set of results. Example: set up a project that uses CMake for some of its components, autogen.sh followed by configure for others, and just configure for a few more, covering:

    debug builds without an IDE
    debug builds in NetBeans
    debug builds in Eclipse
    debug builds in Visual Studio
    all of the above with release builds, for Linux, Mac and Windows

    What are your thoughts on an approach that works for all four? Do you have any advice on what to read?
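
    One conventional-but-not-authoritative layout sketch, with out-of-source builds and one build tree per configuration (all paths are only examples):

      ~/src/project                  # checked-out sources, never built in place
      ~/build/project/debug          # one build tree per configuration
      ~/build/project/release
      ~/local                        # private install prefix, added to PATH / LD_LIBRARY_PATH

      # CMake components:
      cd ~/build/project/debug
      cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=$HOME/local ~/src/project
      make && make install

      # autotools components build the same way with a separate build directory:
      cd ~/build/project/release && ~/src/project/configure --prefix=$HOME/local && make install

    IDEs (NetBeans, Eclipse, or Visual Studio via CMake's generators) can then point at the same source tree and their own build directories without stepping on each other.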

    Read the article

  • Why does my ftp(e)s server fail like half of the time

    - by user1092608
    I have this discussion at work regarding our FTP server running via vsftpd. Initially, we opted to serve FTPES instead of SFTP because this seemed the most flexible and straightforward solution for our server to offer secure file transmission. Afterwards, our FTP server seems to be a source of issues for our end users. Half of the time, users complain about FTP connections not working. I must say, I tested our FTP through different infrastructures (= in the field, at random times, at random places) and indeed, sometimes behind some configurations (= no idea how they are configured, because of the 'field' testing), I receive errors. Some of them are:

      Error: Failed to retrieve directory listing (FileZilla)

    Furthermore, behind my basic home configuration, everything seems to be running fine. I (think I) did all the basic configuration checks (passive mode? firewall for all ports? ...) and can't seem to find the source. Being a bunch of techies at our small office, yet knowing nothing about infrastructure, some start suggesting that the FTPS protocol could be the source of the issues ("No, I only knew SFTP so far", "FTPS is not widespread"). I, however, strongly doubt this hypothesis, since reading around on the www and asking questions on Server Fault, everyone seems to deny this. So, as I would like to avoid reconfiguring, since this involves messing around in our SSH service, our virtual user setup and the FTP service, I would need some advice on: 1) what could potentially be the general cause? 2) do you have some general tips? 3) would you mind having a look at my configuration file?

      ----- General Settings -----
      write_enable=YES
      dirmessage_enable=YES
      nopriv_user=ftpsecure
      ftpd_banner="Welcome to XXXX FTP!"
      hide_ids=YES
      hide_file=.*
      max_per_ip=10
      max_clients=10
      local_enable=YES
      local_umask=022
      chroot_local_user=YES
      secure_chroot_dir=/usr/share/empty
      userlist_enable=NO
      userlist_deny=YES
      userlist_file=/etc/vsftp_deny_users
      guest_enable=YES
      guest_username=ftpvirtual
      virtual_use_local_privs=YES
      user_sub_token=$USER
      local_root=/srv/ftp/ftpvirtual/$USER
      anonymous_enable=NO
      syslog_enable=NO
      xferlog_enable=YES
      xferlog_file=/var/log/vsftpd_xfer.log
      connect_from_port_20=YES
      pam_service_name=vsftpd
      listen=YES
      listen_port=21
      pasv_enable=YES
      pasv_min_port=30000
      pasv_max_port=30030
      pasv_address=foo
      ssl_enable=YES
      rsa_cert_file=/etc/vsftpd.pem
      rsa_private_key_file=/etc/vsftpd.pem
      force_local_data_ssl=YES
      force_local_logins_ssl=YES
      ssl_tlsv1=YES
      ssl_sslv2=YES
      ssl_sslv3=YES
      ssl_ciphers=HIGH
      anon_mkdir_write_enable=NO
      anon_root=/srv/ftp
      anon_upload_enable=NO
      idle_session_timeout=900
      log_ftp_protocol=NO
      dsa_cert_file=/etc/vsftpd.pem

    Thanks
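
    One frequent culprit with FTPS specifically, offered here only as a hedged guess: because the control channel is TLS-encrypted, firewalls and NAT devices (on the server side and on whatever networks the clients sit behind) can no longer read the PASV replies, so connection-tracking helpers such as nf_conntrack_ftp cannot open the data ports on demand. The fixed passive range then has to be opened explicitly on the server's firewall, for example:

      iptables -A INPUT -p tcp --dport 21 -j ACCEPT
      iptables -A INPUT -p tcp --dport 30000:30030 -j ACCEPT

    Client-side firewalls remain out of reach, which would also explain why the failures depend on where the test is run from.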

    Read the article

  • How do you enable webcam support in Facebook for Ubuntu 10.04?

    - by Jonathan
    I think I have finally arrived at an unsolvable equation: Chromium v.7 + Ubuntu 10.04 + Sun Java 6 + Webcam + Facebook + Flash 10 = non-functional. All of the items listed above are potential points of failure in this situation, and any help narrowing them down would be fantastic. I am simply trying to enable webcam support directly through Facebook's website. Forum searches and the usual googling turn up few posts related to this specific equation. Two of the major suggestions include: 1) Installing the Sun (I refuse to say Oracle, sob)-provided Java implementation instead of the OpenJDK normally installed in Ubuntu. And yes, after installing it, I did update all my defaults to use the Sun commands over the OpenJDK ones. 2) Somehow enabling Facebook as a permitted site to access my webcam using Flash settings. I have not been able to explore option 2 because I cannot find a way to adjust the Flash settings in Chromium 7. Other factors that do not help include the fact that I am pretty sure Facebook changes its webcam interface every 10 seconds just to keep troubleshooters and support personnel on their toes. If anyone has an OTP that informs us of the next shift in the app, a leak would be greatly appreciated!

    Read the article

  • Big and reaaaally strange problem with a web server (host InMotion Hosting)

    - by altar
    Hi. I have a terrible problem that I have been trying to solve for three days: I browse my own web site and after a while I cannot access the web site. AT ALL! I can only see a 501 error message: "Method Not Implemented. GET to / not supported. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request." Once I get that error the site is totally and permanently inaccessible in that browser!! Reboots, browser restarts, clearing the cache, clearing all history and cookies, etc., do not work. I have reproduced it on 4 different computers. Three computers are in one city, the 4th is in another city. Two different ISPs also. One computer is on Linux, the others are on Windows (XP and 2000). Browsers are FF 3 to FF 3.5 and IE 8. The error is ALMOST reproducible on demand (for me at least). It appears when I browse the forum under certain circumstances. I don't know what these circumstances are, but if I browse it long enough (10 seconds to 5 minutes) it eventually appears. Just to make it clear: once the error appears (while browsing the forum), the whole web site becomes inaccessible, not only the forum! My host is not willing to help because they say they cannot reproduce the error. I sent screenshots but they don't care.

    NEWS: Resetting the browser's settings from 'Tools > Clear private data' didn't work. However! When I cleared the same settings (more exactly, cookies) from the special menu that appears when you right-click the website's icon, it worked. So it was something related to a cookie, BUT it manifests in all browsers (FF, IE, Opera). So it cannot be a browser-related problem.

    Read the article

  • Windows 7 Sharing issue on RAID 5 Array(s)

    - by K.A.I.N
    Greetings all, I'm having a very odd error with a Windows 7 Ultimate x64 system. The network setup is as follows: 2x XP Pro 32-bit machines, 1x Vista Ultimate x64 machine, and 2x Windows 7 x64 Ultimate machines, all chained into a 16-port Netgear ProSafe gigabit switch; the Windows 7 and Vista machines are duplexed. There is also a router (Netgear RangeMax) chained off the switch. I am basically using one of the Windows 7 machines to host storage and stream media to the other machines. To this end I have put 2x 3TB hardware RAID 5 arrays in it, plus assorted other spare disks, and shared the roots of all of them. The unusual problem is that I get "Access denied, please contact administrator for permission" (blah blah blah) when trying to access both of the RAID 5 arrays, but not the standalone drives. I have checked the permission settings, I have added Everyone to the read permission for the root, and I have tried moving things into subdirectories and then sharing them. I have tried various setting combinations in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, always with the same result. I have tried flushing caches all round, disabling and re-enabling shares, and sharing after a restart, as well as several other things, and the result is always the same: no problem on the individual drives, but access denied on both of the RAID arrays from the XP, Vista and Windows 7 machines. One interesting quirk that may lead to an answer: there is no "offline status" information for the folders when you select the RAID 5s from a Windows 7 machine, yet there is on the normal drives, which say they are online. It is as if the RAID is present but turned off or spun down, but as far as I was aware Windows will spin an array back up on a network request, and on the machine itself the drives seem to be online and can be accessed. Have to admit this has me stumped. Any suggestions, anyone? Thanks in advance for any fellow geek assistance. K.A.I.N

    Read the article

  • Web service for checking out / leasing a token

    - by JP Slavinsky
    I run a web site on AWS that has a number of web servers (say 4 of them) running behind a load balancer. For this particular web site, I have one license key of New Relic for doing instrumentation. At any one time, I only want one of the 4 web servers to be using the key. If that server goes offline, I want one of the remaining web servers to be able to begin using the license key. Does anyone know of a service that would let me manage this process? The service would not particularly need to store the key itself but rather just manage the fact that only one web server can lease out the right to use the key at any time. Something where the web servers would have to come back every few minutes and renew their lease, and if they don't it becomes available to someone else. I just realized I could maybe accomplish a hacked version of this using a file on S3, but that doesn't prevent race conditions / etc and is definitely hackish. Any thoughts welcome. FWIW, this site is built on Ruby on Rails. Thanks! JP
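
    For what it's worth, a single-row lease table in the existing Rails database can approximate this without an external service; a rough sketch under the assumption of a Lease model (table "leases") with holder and expires_at columns, all names hypothetical:

      # Class method on Lease: returns true only for the contender whose UPDATE matched the row.
      def self.acquire(holder, ttl = 5.minutes)
        rows = where("id = 1 AND (holder = ? OR expires_at < ?)", holder, Time.now)
                 .update_all(holder: holder, expires_at: Time.now + ttl)
        rows > 0
      end

    Each web server would call this every few minutes; whichever one holds the unexpired row keeps using the New Relic key.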

    Read the article

  • Is there any functional-like unix shell?

    - by Caruccio
    I'm (really) a newbie to functional programming (in fact I've only had contact with it using Python), but it seems to be a good approach for some list-intensive tasks in a shell environment. I'd love to do something like this:

      $ [ git clone $host/$repo for repo in repo1 repo2 repo3 ]

    Is there any Unix shell with this kind of feature? Or maybe some feature to allow easy shell access (commands, env/vars, readline, etc...) from within Python (the idea is to use Python's interactive interpreter as a replacement for bash).

    EDIT: Maybe a comparative example would clarify. Let's say I have a list composed of dir/file:

      $ FILES=( build/project.rpm build/project.src.rpm )

    And I want to do a really simple task: copy all files to dist/ AND install them in the system (it's part of a build process). Using bash:

      $ cp ${FILES[*]} dist/
      $ cd dist && rpm -Uvh $(for f in ${FILES[*]}; do basename $f; done)

    Using a "pythonic shell" approach (caution: this is imaginary code):

      $ cp [ os.path.join('dist', os.path.basename(file)) for file in FILES ] 'dist'

    Can you see the difference? THAT is what I'm talking about. How can a shell with this kind of stuff built in not exist yet? It's a real pain to handle lists in shell, even though it's such a common task: lists of files, lists of PIDs, lists of everything. And a really, really important point: use syntax/tools/features everybody already knows: sh and Python. IPython seems to be headed in a good direction, but it's bloated: if a var name starts with '$', it does this; if '$$', it does that. Its syntax is not "natural", with so many rules and "workarounds" ([ ln.upper() for ln in !ls ] -- syntax error).
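
    For comparison, the plain-bash spelling of the wished-for first example would be a loop, and the copy-and-install step can stay list-based with parameter expansion; just a sketch reusing the same placeholder names:

      for repo in repo1 repo2 repo3; do git clone "$host/$repo"; done

      cp "${FILES[@]}" dist/ && ( cd dist && rpm -Uvh "${FILES[@]##*/}" )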

    Read the article

  • My laptop screen keeps dimming

    - by Rowland
    I have a Cryo laptop with Windows 7 installed, bought in December 2011. Sometimes the screen seems to persistently dim and/or brighten, even as I am doing things. In fact the brightness is varying even as I type this. The battery is always fully charged and the laptop is connected to the mains. I have checked the battery/power-saving settings many times, always leaving them the same way: full brightness and never dimming when on the mains. Yet when the screen starts playing up I can end up with the screen dimming and brightening almost continuously. I once went to the "adjust screen brightness" window when the screen had dimmed. I found the brightness slider on 100% (as I expected) but, as I dragged it to the left to dim the screen, it first brightened and then started dimming, i.e. the screen setting said 100% brightness but it was only at maybe 80%. I have checked with Cryo and they just say to check the power settings. I know what these are and how to work them, and I always set them to never dim/full brightness, yet still my laptop starts this dimming every so often.

    Read the article

  • Is this DVD drive broken? Brand new, I need help convincing

    - by acidzombie24
    I am asking because I know Dell is going to give me a problem. How do I know if the DVD burner on my laptop is broken? I burnt 4 DL discs and they ALL failed. I called, and Dell suggested Roxio. I used it and burnt one disc without error and a second disc with an error. With both apps there were no 'problems' during the burning process; they only failed during the verification process. Some of these bad discs don't work on other PCs, and one locks up Windows when I click a specific file. Does that sound like a broken burner to you guys? When I called Dell, they told me that since it can read discs properly 100% of the time and the software doesn't fail during the burning process, it's not a broken drive. They forwarded me to software support, who demand a fee (I think $100) to help me fix my software. I am annoyed because I don't want to be on the phone for them to watch me burn a DVD, and since I burned one correctly once, I don't want to happen to burn correctly again and have them say they solved my problem (doing nothing) and charge me while refusing to refund.

    Edit: The errors I got were: 1) "The request could not be performed because of an I/O device error"; 2) Windows locking up when opening one specific file; 3) "Cannot copy: Data error (CRC)". NOTE: the files that cause the problems are random on every disc.

    Read the article

  • if I define `mydomain`, postfix does not expand mail aliases

    - by Norky
    I have postfix v2.6.6 running on CentOS 6.3, hostname priest.ocsl.local (private, internal domain), with a number of aliases:

      supportpeople: [email protected], [email protected], [email protected]

      requests: "|/opt/rt4/bin/rt-mailgate --queue 'general' --action correspond --url http://localhost/", supportpeople

      help: "|/opt/rt4/bin/rt-mailgate --queue 'help' --action correspond --url http://localhost/", supportpeople

    If I leave postfix with its default configuration, then the aliases are resolved correctly/as I expect, so that incoming mail to, say, [email protected] will be piped through the rt-mailgate command and also be delivered (via the mail server for ocsl.co.uk, a publicly resolvable domain) to [email protected], user2, etc.

    The problem comes when I define mydomain = ocsl.co.uk in /etc/postfix/main.cf (with the intention that outgoing mail come from, for example, [email protected]). When I do this, postfix continues to run the piped command correctly, however it no longer expands the nested aliases as I expect: instead of trying to deliver to [email protected], user2 etc, it tries to send to [email protected], which does not exist on the upstream mail server and generates NDRs.

    postconf -n for the non-working configuration follows (the working configuration differs only by the "mydomain" line):

      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      command_directory = /usr/sbin
      config_directory = /etc/postfix
      daemon_directory = /usr/libexec/postfix
      data_directory = /var/lib/postfix
      debug_peer_level = 2
      html_directory = no
      inet_interfaces = all
      inet_protocols = all
      mail_owner = postfix
      mailq_path = /usr/bin/mailq.postfix
      manpage_directory = /usr/share/man
      mydestination = $myhostname, localhost.$mydomain, localhost
      mydomain = ocsl.co.uk
      newaliases_path = /usr/bin/newaliases.postfix
      queue_directory = /var/spool/postfix
      readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
      sample_directory = /usr/share/doc/postfix-2.6.6/samples
      sendmail_path = /usr/sbin/sendmail.postfix
      setgid_group = postdrop
      unknown_local_recipient_reject_code = 550

    We did have things working as we expected/wanted previously on an older system running Sendmail.
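
    A couple of hedged checks that show where the expansion stops, using only stock Postfix tools (the alias name is taken from the listing above):

      # confirm what the alias database actually returns for the key
      postalias -q requests hash:/etc/aliases

      # have Postfix verify delivery for the address and mail back a report (-bv does not deliver the message itself)
      sendmail -bv requests@ocsl.co.uk

    If the -bv report already shows the nested supportpeople members being qualified with the wrong domain, the behaviour is coming from how unqualified alias members are qualified (the myorigin/mydestination interplay) rather than from the aliases file itself.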

    Read the article

  • php extensions & apache mods gone/not working after server restart?

    - by user1782359
    I was wondering if anyone has ever come across this before, as I'm pretty stumped to be honest, and my server admin knowledge isn't particularly good, so I'm not sure what could even be wrong, let alone how to fix it. Basically, Thursday last week everything was fine on our server. I come in on Friday and it's a mess: PHP extensions are missing/not working, Apache modules are gone (e.g. oci_* was gone completely, odbc_* not working but still there, and the Apache ntlm_auth module for single sign-on was gone, so the website wasn't even loading in IE). I'm ruling out anything deliberate because it's just incredibly unlikely. The only thing that really happened between Thursday and Friday is that on Thursday evening one of the network guys did a RAM upgrade on the server and restarted it. That's it, nothing else. Now I'm wondering if somehow those extensions and such, which we installed months ago, were somehow only saved in a local memory of sorts, and a restart has wiped them? But we installed them all as root, so I don't see why it should be any different from installing anything else. It makes little/no sense to me. To expand on an example of something that's gone very wrong, the PHP odbc_ extension: it's still on the server, it doesn't return "undefined function" or anything. But it just cannot connect to the data source any more. I've tested it through the command line and it's working perfectly fine with that data source and login details, but all of a sudden having it in the PHP odbc_connect() function just can't connect ( [S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source. ). But unixODBC is set up fine. Like I say, I've tested it all through the terminal and it can connect; we've not changed anything, it's just now all of a sudden not working through the PHP function. Anyone have any ideas whatsoever as to what could be going on? This is on CentOS 5.x, by the way.
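
    A few hedged sanity checks that would show whether the modules are genuinely gone or just no longer being loaded (one common possibility after maintenance is the box coming back up with a different httpd/php package set or configuration):

      php -m | grep -i -e oci -e odbc      # which PHP extensions are actually loaded
      php --ini                            # which php.ini and /etc/php.d/*.ini files are scanned
      httpd -M 2>&1 | grep -i ntlm         # which Apache modules are loaded
      rpm -qa --last | head -20            # which packages changed most recently

    Comparing this output between the CLI environment (where the ODBC connection works) and the Apache environment (where it fails) usually narrows it down, since the two can load different ini files and environment variables.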

    Read the article

  • Can't ssh from CentOS 6.5 to SUSE LINUX 10.1

    - by Pavel Tankov
    We have a quite old installation of SUSE LINUX 10.1 (i586) in the office. The problem shortly: I can successfully ssh to it from machines in the same LAN (192.168.1.0) and not from others (that are in 10.23.0.0). The SuSE has SSH server openssh-4.2p1-18.12. I have ruled out the firewall and hosts.allow and hosts.deny files. When my ssh login attempt fails, here is what the logs say.

    On the client:

      $ ssh -vvv 192.168.1.5
      OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to 192.168.1.5 [192.168.1.5] port 22.
      debug1: Connection established.
      debug1: identity file /home/nbuild/.ssh/identity type -1
      debug1: identity file /home/nbuild/.ssh/identity-cert type -1
      debug1: identity file /home/nbuild/.ssh/id_rsa type -1
      debug1: identity file /home/nbuild/.ssh/id_rsa-cert type -1
      debug1: identity file /home/nbuild/.ssh/id_dsa type -1
      debug1: identity file /home/nbuild/.ssh/id_dsa-cert type -1

    On the server:

      Aug 21 16:34:25 serverhost sshd[20736]: debug3: fd 4 is not O_NONBLOCK
      Aug 21 16:34:25 serverhost sshd[20736]: debug1: Forked child 20739.
      Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: entering fd = 7 config len 403
      Aug 21 16:34:25 serverhost sshd[20736]: debug3: ssh_msg_send: type 0
      Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: done
      Aug 21 16:34:25 serverhost sshd[20739]: debug1: rexec start in 4 out 4 newsock 4 pipe 6 sock 7
      Aug 21 16:34:25 serverhost sshd[20739]: debug1: inetd sockets after dupping: 3, 3
      Aug 21 16:34:25 serverhost sshd[20739]: debug3: Normalising mapped IPv4 in IPv6 address
      Aug 21 16:34:25 serverhost sshd[20739]: Connection from 10.23.1.11 port 44340

    The above log on the server is when I enable DEBUG3 log level. However, with the default log level (INFO), the only thing the server logs is this:

      Aug 21 16:38:32 serverhost sshd[20749]: Did not receive identification string from 10.23.1.11

    Any hints? I feel I've tried everything already.

    Read the article
