Search Results


  • Taking web sites offline for demonstration

    While working in software development in general, and in web development for a couple of customers, it is quite common that you need to provide a test bed where the client can get an image, or better said a feeling, for the visions and ideas you are talking about. Usually here at IOS Indian Ocean Software Ltd. we set up a demo web site on one of our staging servers and provide credentials to the customer to access and review our progress and work ad hoc. This gives us the highest flexibility on both sides, as the test bed is simply online and available 24/7. We can update the structure, the UI and the data at any time, and the client is able to view it whenever it suits her/him best.

    Limited or lack of online connectivity

    But what happens when your client is not able to be online, for whatever reason? Here are some of the more obvious ones:

      - No internet connection (permanently or temporarily)
      - Expensive connection, i.e. mobile data package, stay at a hotel, etc.
      - Presentation devices at an exhibition, i.e. using tablets or iPads
      - Being abroad for a certain time, and only occasionally online
      - No network coverage, especially on mobile
      - Bad infrastructure, as in some Third World countries
      - Providing a catalogue on CD or USB pen drive

    Whichever it is, it doesn't really matter. We should be able to provide a solution that fits the circumstances of our customers.

    Presentation during an exhibition

    Recently, we had the following request from a customer:

      "Is it possible to let us have a desktop version of ResortWork.co.uk that we can use for demo purposes at the forthcoming Ski Shows? It would allow us to let stand visitors browse the sites on an iPad to view jobs and training directory course listings."

    Yes, sure, we can do that. You might wonder why they don't simply use 3G-enabled iPads for that purpose. As stated above, there can be several reasons: low coverage, expensive data packages, etc. Anyway, the question is not how to circumvent the request but how to deliver a solution for it.

    Possible solutions... or not?

    We had built offline web sites earlier, and had even established complete mirrors of one or two web sites on our systems. There are actually several possibilities to handle this kind of request, and it mainly depends on the system or device the offline site should be available on. Here it is clearly stated that we have to address this on an Apple iPad; well, actually, I think they'd like to use multiple devices during their exhibitions. The following is an overview of possible solutions, depending on the technology or device in use, and how each can be done:

      - Replication of source files and database: The web site mentioned above runs on ASP.NET, IIS and SQL Server. If a laptop or slate runs a Windows OS, the easiest way would be to take a snapshot of the source files and database, and transfer them as a local installation to those Windows machines. This approach would be fully operational on the local machine.
      - Saving pages for offline usage: This is actually a quite tedious job, but still practicable for small web sites.
      - Tool-based approach to 'harvest' the web site: There are quite some tools in the wild that can handle this job, namely wget, HTTrack, WebCopier, etc. (a wget sketch follows below).
      - Screenshots bundled as a PDF document: Not really... ;-)
      - Creating a screencast or video: Simply navigate through your web site and record your desktop session. Actually, we use this kind of approach to track down difficult problems, in order to see and understand exactly what the user was doing to cause an error.

    Of course, this list isn't complete, and I'd love to get more of your ideas in the comments section below the article.
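    For reference, a typical wget invocation for the tool-based approach looks roughly like this (the URL is illustrative):

      # Mirror a site for offline browsing, rewriting links to the local copies
      wget --mirror --convert-links --adjust-extension \
           --page-requisites --no-parent https://www.example.com/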
    Preparations for offline browsing

    The original web site is dynamic and data-driven, built on ASP.NET. As we have to put the result onto iPads, we chose the tool-based approach to 'download' the whole web site for offline usage. Again, depending on the complexity of your web site, you might have to check which of the applications produces the best results for you. My usual choice is wget, but in this case we ran into problems related to the rewriting of hyperlinks. As a consequence, we opted for HTTrack. HTTrack comes in different flavours: as a console application, but also as a GUI (WinHTTrack on Windows) or a web client (WebHTTrack on Linux/Unix/BSD). Here's a brief description taken from the original website about HTTrack:

      "HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the 'mirrored' website in your browser, and you can browse the site from link to link, as if you were viewing it online."

    There is extensive documentation for all options and switches online; as a general recommendation, go through the HTTrack Users Guide by Fred Cohen. It covers all the initial steps you need to get up and running. Be aware that it will take quite some time to get all the necessary resources down to your machine. Actually, for our customer we ran the tool directly on their web server to avoid unnecessary traffic and bandwidth. After a couple of runs and some additional fine-tuning - explicit inclusion or exclusion of various externally linked web sites - we finally had a more or less complete offline version available.
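    From a console, such a run looks roughly like this (an editor-style sketch; the output directory and filters are illustrative, with www.247recruit.net being one of the externally linked hosts that shows up in the log further down):

      # Mirror the site into ./resortwork-offline, restricting the crawl to
      # the main domain plus an explicitly included external host
      httrack "http://www.resortwork.co.uk/" -O ./resortwork-offline \
              "+*.resortwork.co.uk/*" "+www.247recruit.net/*" -v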
    A very handsome feature of HTTrack is the error/warning log it writes after completing the download. It contains detailed information about errors that appeared on the pages, and about the links within the pages that have been processed:

      Error: "Bad Request" (400) at link www.resortwork.co.uk/job-details_Ski_hire:tech_or_mgr_or_driver_37854.aspx (from www.resortwork.co.uk/Jobs_A_to_Z.aspx)
      Error: "Not Found" (404) at link www.247recruit.net/images/applynow.png (from www.247recruit.net/css/global.css)
      Error: "Not Found" (404) at link www.247recruit.net/activate.html (from www.247recruit.net/247recruit_tefl_jobs_network.html)

    In our situation, we took the records of HTTP 400/404 errors and passed them to the web development department. Improvements are to be expected soon. ;-)

    Quality assurance on the full-featured desktop

    Unfortunately, the generated output of HTTrack was still incomplete, but luckily only images were missing. Being directly on the web server, we simply copied the missing images from the original source folder into our offline version. After that, we created an archive and transferred the file securely to our local workspace for further review and checks.

    From that point on, it wasn't necessary to get any more files from the original web server, and we could focus completely on browsing and navigating through the offline version to isolate visual differences and functional problems. As said, the original web site runs on ASP.NET Web Forms and uses postback calls for interaction like search, pagination and, partly, navigation. This is the main area for improving the offline experience. Of course, as with standard web development, it is advisable to test with various browsers, and strangely we discovered that the offline version looked pretty good in Firefox, Chrome and Safari, but not in Internet Explorer. A quick look at the HTML source shed some light on this: there are conditional CSS inclusions based on the user agent. HTTrack does not act as Internet Explorer, so we didn't have the necessary overrides for this browser. Not problematic in our case after all, but you might have to pay attention to this and fetch the IE-specific files explicitly. And while looking at the source code, we also found out that HTTrack actually modifies the generated HTML output: on several occasions we discovered that <div> elements had been converted into <table> constructs for no obvious reason, even in nested structures.

    Search 'e'nd destroy - sed (or Notepad++) to the rescue

    During an intensive root-cause analysis of a couple of HTML/CSS problems that needed extra attention, it is very helpful to be familiar with an editor that allows search and replace over multiple files, like sed - the stream editor for filtering and transforming text on Linux - or my personal favourite, Notepad++ on Windows. This allowed us to quickly fix a lot of anchors with onclick attributes and JavaScript code that still pointed to ASP.NET files instead of their generated HTML counterparts, like so:

      grep -rl -e '\.aspx' . | xargs sed -i -e 's/\.aspx/.html?/g'

    The additional question mark after the HTML extension helps to separate the query string from the actual target, and solved all our missing hyperlinks very quickly. The same can be done in Notepad++ on Windows, too: just use the 'Replace in Files' feature and you are settled, especially in combination with regular expressions (regex).

    Landscape of browsers

    Okay, after several runs of HTML/CSS code analysis, and searching and replacing strings in a pool of more than 4,000 files, we finally had a very good match of an offline browsing experience in Firefox and Chrome on Linux. Next, we transferred the modified set of files to a Windows 8 machine for review in Firefox, Chrome and Internet Explorer 7 to 10, and to a Mac mini running Mac OS X 10.7 to check the output in Safari and again in Chrome. Apart from IE, for the reasons already mentioned above, the results were identical. And last but not least, it was time to check the web site on tablets. Please continue with the following articles: Taking web sites offline for demonstration on Galaxy Tablet, and Taking web sites offline for demonstration on iPad.

    Read the article

  • Problems using Evolution contacts with a DavMail LDAP proxy for an Exchange server

    - by WegDamit
    I have a DavMail proxy set up for accessing an Exchange 200x server. Email works fine in Thunderbird and Evolution (IMAP...). The LDAP contacts/address book works in Thunderbird, but not in Evolution. It seems that Evolution does not try the given credentials; the entered LDAP auth is never sent to the DavMail proxy:

      anonymous access to ou=people forbidden
      davmail.ui.tray.DavGatewayTray.displayMessage(DavGatewayTray.java:96)

    It's the same configuration for Thunderbird and for Evolution, so it looks like an issue with Evolution to me. Does it need some different config than Thunderbird for the credentials? Has anybody got this configuration working and can give me some hints? Thanks, WegDamit
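    For what it's worth, one way to rule the proxy in or out is to bind against it with explicit credentials using ldapsearch (1389 is DavMail's usual LDAP port; the bind DN, base and filter here are placeholders):

      # Simple bind with explicit credentials against the DavMail LDAP proxy
      ldapsearch -H ldap://localhost:1389 -x -D "DOMAIN\\someuser" -W \
        -b "ou=people" "(cn=*)"

    If that returns entries, the proxy accepts the credentials and the problem sits on the Evolution side.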

    Read the article

  • Development environment to manage multiple Oracle databases

    - by jkohlhepp
    I am in an enterprise environment where we have applications that need to run against multiple Oracle databases. Developers may need to manage multiple vintages of these databases to support different test data or diagnose bugs against different versions of the code. Right now, we have a limited set of test environments set up on "real" Oracle servers within the data center. We juggle these among development and QA groups, and a lot of conflicts and inefficiencies arise because of it. I am taking a look at Oracle Express Edition, which would allow me to spin up a local Oracle database. This is similar to the workflow I most often see with SQL Server: devs work on their local machine until they are ready to integrate, and then push their DB changes to integration/QA environments. However, from what I read it seems that Oracle XE only supports one database instance at a time. So if I have an application that utilizes two different databases, I can't have both of them running on my local machine. Is that correct? Do the Oracle Standard or Personal editions get around this limitation? If I had one of those installed locally, how difficult would it be to get multiple databases working on the same development machine? How do dev shops handle developing against Oracle when they need several different Oracle instances for their applications?
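    One common interim workaround while evaluating, sketched with made-up names: keep a single local instance and isolate each application as its own schema inside it (run as SYSDBA, e.g. via sqlplus / as sysdba):

      -- One schema per application; names and passwords are illustrative
      CREATE USER app_one IDENTIFIED BY app_one_pw;
      CREATE USER app_two IDENTIFIED BY app_two_pw;
      GRANT CONNECT, RESOURCE TO app_one;
      GRANT CONNECT, RESOURCE TO app_two;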

    Read the article

  • Remote Desktop doesn't recognize username change

    - by Unsigned
    There are two active user accounts on the Windows 7 Professional server, Owner, and Guest. Owner is an Administrator with a password. Guest is the default Guest account with no password, but has been added to Remote Desktop Users. When attempting to connect to the server via a Windows 7 Professional client, Guest accepts RD connections fine, however, Owner throws an error "Unable to connect to Local Security Authority." I created a new Administrator account, named Remote, with the same password as Owner. Remote Desktop worked perfectly. I then deleted Owner, and renamed Remote to Owner. Now, Remote Desktop gives the same error ("Unable to connect to Local Security Authority") when attempting to log into the new Owner. However, attempting to log into Remote (even though it was renamed to Owner), works. Completely at a loss here, what is going on? Why won't Owner work, and why does Remote Desktop still use the old name on the renamed account?

    Read the article

  • Our server was rooted, but the exploit doesn't work?

    - by Salina Odelva
    Hi everyone. My friend's hosting server got rooted and we have traced some of the attacker's commands. We've found some exploits under the /tmp/.idc directory. We've disconnected the server and are now testing some local kernel exploits that the attacker tried on our server. Here is our kernel version:

      2.4.21-4.ELsmp #1 SMP

    We think that he got root access through the modified uselib() local root exploit, but the exploit doesn't work!

      loki@danaria {/tmp}# ./mail -l ./lib
      [+] SLAB cleanup child 1 VMAs 32768

    The exploit just hangs like this. I've waited over 5 minutes but nothing has happened. I've also tried other exploits, but they didn't work. Any ideas, or experiences with this exploit? We need to find the issue and patch our kernel, but we can't understand how he used this exploit to get root... Thanks

    Read the article

  • Virtual DNS recommended setup...

    - by luison
    Hi. We are new to virtualization, which we are setting up with Proxmox VE (OpenVZ + KVM). I am a bit lost about the recommended DNS forwarder configuration, especially in the OpenVZ (Virtuozzo-type) environment. Our intention was to have a small dnsmasq running in one of the VMs, acting as a backup DHCP server and serving our in-office local addresses (and PCs) via an additional resolv.conf file, which dnsmasq supports, but I've read that all VMs should share DNS pointing to the host machine, in which case it would make more sense to have it there. My problem is that I would like to have as few apps as possible on the host, so that a reinstall of the environment (Proxmox VE) and a machine restore can be as quick as possible. Does anyone have a similar setup? Does it make sense to have the first virtual machine running the local DNS forwarder? Also... dnsmasq seems to want root permissions when running in an OpenVZ container... does anyone know of any workarounds for that?
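    For reference, a minimal dnsmasq configuration of the kind described might look like this (all addresses and paths are illustrative):

      # /etc/dnsmasq.conf - small forwarder plus backup DHCP
      domain-needed
      bogus-priv
      resolv-file=/etc/resolv.dnsmasq   # upstream servers, kept separate from the container's own
      addn-hosts=/etc/hosts.office      # in-office names served to LAN clients
      dhcp-range=192.168.1.100,192.168.1.200,12h
      user=dnsmasq                      # account to drop privileges to after startup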

    Read the article

  • Database Replication check script not running

    - by Tarun
    I'm trying to create a database replication checking script, but I'm getting an error while executing it. Here is the script:

      #!/bin/bash
      PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      export PATH

      # Server name
      Server="Test Server"

      # MySQL username and password
      User=root
      Password="a"

      # Maximum slave time delay
      Delay="60"

      # File path to store errors, and to email the same
      Log_File=/tmp/replicationcheck.txt

      # Email settings
      Subject="$Server Replication Error"
      Sender_Name=TestServer
      Recipients="[email protected]"

      # Mail alert function
      mailalert(){
      sendmail -F $Sender_Name -it <<END_MESSAGE
      To: $Recipients
      Subject: $Subject

      $Message_Replication_Error

      `/bin/cat $Log_File`
      END_MESSAGE
      }

      # Show slave status
      Show_Slave_Status=`echo "show slave status \G;" | mysql -u $User -p$Password 2>&1`

      # Getting list of queries in mysql
      $Show_Slave_Status | grep "Last_" > $Log_File

      # Check if slave running
      $Show_Slave_Status | grep "Slave_IO_Running: No"
      if [ "$?" -eq "0" ]; then
        Message_Replication_Error="$Server Replication error please check. The Slave_IO_Running state is No."
        mailalert
        exit 1
      else
        $Show_Slave_Status | grep "Slave_IO_Running: Connecting"
        if [ "$?" -eq "0" ]; then
          Message_Replication_Error="$Server Replication error please check. The Slave_IO_Running state is Connecting."
          mailalert
          exit 1
        fi
      fi

      # Check if replication delayed
      Seconds_Behind_Master=$Show_Slave_Status | grep "Seconds_Behind_Master" | awk -F": " {' print $2 '}
      if [ "$Seconds_Behind_Master" -ge "$Delay" ]; then
        Message_Replication_Error="Replication Delayed by $Seconds_Behind_Master."
        mailalert
      else
        if [ "$Seconds_Behind_Master" = "NULL" ]; then
          Message_Replication_Error="$Server Replication error please check. The Seconds_Behind_Master state is NULL."
          mailalert
        fi
      fi
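    (One suspicious spot, for anyone comparing notes: the Seconds_Behind_Master line mixes a variable assignment with a pipeline, so the grep/awk output is never captured, and the awk quoting is inverted. The usual form would be something like:

      # Filter the captured status with echo; note the command substitution
      # and the quoting of the awk program
      Seconds_Behind_Master=$(echo "$Show_Slave_Status" | grep "Seconds_Behind_Master" | awk -F": " '{ print $2 }')

    The same echo "$Show_Slave_Status" | grep ... pattern applies to the earlier checks, which currently try to execute the variable's contents as a command.)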

    Read the article

  • How to convert laptop drive for use as VMware image?

    - by jnman
    I have a Windows laptop that recently died (dead motherboard). It being a 7-year-old laptop, I decided to give Apple a try this time around, and to use VMware to access my old data if necessary. In order to do this, I need to convert the physical drive to a VMware image. Googling around, it looks like I might be able to use VMware Converter to do this. My original intent was to plug the laptop drive into a Windows desktop via an external USB enclosure and create the image that way. However, upon further investigation, it looks like VMware Converter only supports converting a local machine (the desktop) or a remote machine (via IP), but not a laptop drive plugged into the local machine. So with that in mind, I'm looking for suggestions and help on how to convert this laptop drive into something I can use on my new MacBook Pro.
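    One possible route, sketched on the assumption that the enclosure exposes the disk as a raw block device (the device path and file name are illustrative):

      # Read the raw laptop disk and emit a VMDK that VMware can attach
      qemu-img convert -f raw -O vmdk /dev/sdb old-laptop.vmdk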

    Read the article

  • Trouble joining Windows Server 2008 to Domain

    - by Jim R
    When I try to join my new server to my existing domain I get the following error: "An attempt to resolve the DNS name of a DC in the domain being joined has failed. Please verify this client is configured to reach a DNS server that can resolve DNS names in the target domain." I have tried all of the following already:

      - Successfully pinged the domain controller.
      - Pinged the new server from the domain controller by IP address and by DNS name.
      - Pinged the DC server from the new server by IP address and by DNS name.
      - Changed the network to DHCP (it was originally static). No joy as static or DHCP.
      - Turned off all firewall settings.
      - Added the domain name to the 'hosts' file.
      - Added the server name of the primary domain controller to the 'hosts' file on the new server.

    Any ideas? Thanks in advance for any help! Jim

    Update: With help from J. Brian Kelly (thanks) I have managed to narrow down the problem to a DNS issue. Specifically, UDP/53 packets are being sent (they are seen in Network Monitor), but are not getting to the DNS server. I do not yet know why.

    Update: The requested output from IPCONFIG for the Hyper-V host and the virtual machine.

    IPCONFIG from the Hyper-V server:

      Windows IP Configuration
         Host Name . . . . . . . . . . . . : HYPER
         Primary Dns Suffix  . . . . . . . : sfi-wfc.com
         Node Type . . . . . . . . . . . . : Hybrid
         IP Routing Enabled. . . . . . . . : No
         WINS Proxy Enabled. . . . . . . . : No
         DNS Suffix Search List. . . . . . : sfi-wfc.com

      Ethernet adapter Local Area Connection 4:
         Connection-specific DNS Suffix  . :
         Description . . . . . . . . . . . : Primary Network
         Physical Address. . . . . . . . . : 00-30-48-CA-CC-7A
         DHCP Enabled. . . . . . . . . . . : No
         Autoconfiguration Enabled . . . . : Yes
         Link-local IPv6 Address . . . . . : fe80::cd16:3ac2:3d4f:e275%679(Preferred)
         IPv4 Address. . . . . . . . . . . : 192.168.100.1(Preferred)
         Subnet Mask . . . . . . . . . . . : 255.255.255.0
         Default Gateway . . . . . . . . . : 192.168.100.10
         DHCPv6 IAID . . . . . . . . . . . : -1476382648
         DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-12-10-20-E9-00-30-48-CA-CC-7A
         DNS Servers . . . . . . . . . . . : 192.168.100.5
         NetBIOS over Tcpip. . . . . . . . : Enabled

      Ethernet adapter Local Area Connection 3:
         Media State . . . . . . . . . . . : Media disconnected
         Connection-specific DNS Suffix  . : sfi
         Description . . . . . . . . . . . : Intel(R) 82576 Gigabit Dual Port Network Connection #2
         Physical Address. . . . . . . . . : 00-30-48-CA-CC-7B
         DHCP Enabled. . . . . . . . . . . : Yes
         Autoconfiguration Enabled . . . . : Yes

    IPCONFIG from the virtual machine:

      Windows IP Configuration
         Host Name . . . . . . . . . . . . : DB
         Primary Dns Suffix  . . . . . . . :
         Node Type . . . . . . . . . . . . : Hybrid
         IP Routing Enabled. . . . . . . . : No
         WINS Proxy Enabled. . . . . . . . : No
         DNS Suffix Search List. . . . . . : sfi

      Ethernet adapter Local Area Connection 2:
         Connection-specific DNS Suffix  . : sfi
         Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter
         Physical Address. . . . . . . . . : 00-15-5D-66-03-02
         DHCP Enabled. . . . . . . . . . . : Yes
         Autoconfiguration Enabled . . . . : Yes
         IPv4 Address. . . . . . . . . . . : 192.168.100.128(Preferred)
         Subnet Mask . . . . . . . . . . . : 255.255.255.0
         Lease Obtained. . . . . . . . . . : Saturday, August 29, 2009 10:44:45 AM
         Lease Expires . . . . . . . . . . : Tuesday, September 01, 2009 3:08:33 PM
         Default Gateway . . . . . . . . . : 192.168.100.10
         DHCP Server . . . . . . . . . . . : 192.168.100.5
         DNS Servers . . . . . . . . . . . : 192.168.102.5
         Primary WINS Server . . . . . . . : 192.168.100.5
         NetBIOS over Tcpip. . . . . . . . : Enabled

      Tunnel adapter Local Area Connection* 8:
         Media State . . . . . . . . . . . : Media disconnected
         Connection-specific DNS Suffix  . : sfi
         Description . . . . . . . . . . . : isatap.sfi
         Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
         DHCP Enabled. . . . . . . . . . . : No
         Autoconfiguration Enabled . . . . : Yes

      Tunnel adapter Local Area Connection* 9:
         Media State . . . . . . . . . . . : Media disconnected
         Connection-specific DNS Suffix  . :
         Description . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface
         Physical Address. . . . . . . . . : 02-00-54-55-4E-01
         DHCP Enabled. . . . . . . . . . . : No
         Autoconfiguration Enabled . . . . : Yes
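    Worth noting from the dumps above: the virtual machine's DNS server is 192.168.102.5, while the host, the DHCP server and WINS all sit on 192.168.100.5. If that 102 is a typo in the DHCP scope options, it would explain the UDP/53 queries going unanswered. Two quick checks from the new server (PortQry is a separate Microsoft download):

      nslookup sfi-wfc.com 192.168.100.5
      portqry -n 192.168.100.5 -p udp -e 53

    If nslookup answers against 192.168.100.5 but not against the address the VM actually has configured, correcting the scope's DNS server option should let the join proceed.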

    Read the article

  • MySQL Workbench sends computer name with login not IP

    - by Android Addict
    I am attempting to connect MySQL Workbench to a remote MySQL server. The server has granted access to user@IPAddress. However, when I try to connect, MySQL Workbench sends user@computername instead. How do I configure the connection in MySQL Workbench to use the IP address instead? Reference: the remote server is on the local network, so I need to use the local IP address assigned to my client.

    EDIT - What I have tried so far, from the server:

      mysql -u user@IPAddress -p --host=(ServerIPAddress)

    Returns:

      mysql>

    So that tells me the user account is operational. Furthermore, I confirmed it exists using:

      select user from mysql.user;

    returning a table of all users, in which the user I am using is present. I have also opened port 3306:

      sbin/iptables -A INPUT -i eth0 -s clientIPAddress -p tcp --destination-port 3306 -j ACCEPT

    Still I encounter Access Denied.
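    A sketch of what is usually going on, for comparison: the user@computername in the error is generally the server resolving the client's IP back to a name and matching grants against that, rather than anything Workbench sends. Matching by IP only can be forced on the server:

      # In my.cnf on the server, then restart mysqld:
      [mysqld]
      skip-name-resolve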

    Read the article

  • How to diagnose and resolve: /usr/lib64/libz.so.1: no version information available

    - by matchew
    I had a hell of a time installing lxml for Python 2.7 on CentOS 5.6. For some background, Python 2.7 is an alternative installation of Python on CentOS 5.6, which ships with Python 2.4. It was built from source per its instructions:

      ./configure
      make
      make altinstall

    However, after about 20 hours of trying I managed to find a workable solution and was able to install lxml. Until I noticed the following error at the top of the interpreter:

      python2.7: /usr/lib64/libz.so.1: no version information available (required by python2.7)
      Python 2.7.2 (default, Jun 30 2011, 18:55:26)
      [GCC 4.1.2 20080704 (Red Hat 4.1.2-50)] on linux2
      Type "help", "copyright", "credits" or "license" for more information.
      >>> print 'Sheeeeut!'

    This error is printed out every time I run a script. For example:

      $ ./test.py
      /usr/local/bin/python2.7: /usr/lib64/libz.so.1: no version information available (required by /usr/local/bin/python2.7)

    The script runs flawlessly, but the error is bothersome. After some digging, I have come to believe I have a wrong version of libz installed: either an older version, or one built for a different platform. I'm not quite sure how; as far as I know, I've only installed libz through yum. Although, I can't quite remember every little thing I tried in my twenty hours of trying. You may also be interested in what my lib64 folder looks like; here is some information:

      $ ls -ltrh libz*
      -rwxr-xr-x 1 root root  84K Jan  9  2007 libz.so.1.2.3
      -rwxr-xr-x 1 root root 107K Jan  9  2007 libz.a
      -rwxr-xr-x 1 root root 154K Feb 22 23:30 libzdb.so.7.0.2
      lrwxrwxrwx 1 root root   13 Apr 20 20:46 libz.so.1 -> libz.so.1.2.3
      lrwxrwxrwx 1 root root   15 Jun 30 18:43 libzdb.so.7 -> libzdb.so.7.0.2
      lrwxrwxrwx 1 root root   13 Jul  1 11:35 libz.so -> libz.so.1.2.3
      lrwxrwxrwx 1 root root   15 Jul  1 11:35 libzdb.so -> libzdb.so.7.0.2

    Notice: the items that say Jul 1st or Jun 30th are from me. I had initially moved these files into a backup folder, as they seemed to be 1) duplicates and 2) dated after/during the problems with lxml I alluded to earlier. One inclination is to completely remove Python 2.7 and reinstall it. I think having it install to /usr/local was a poor default choice. However, with no 'make uninstall' target present, that seems a time-consuming task for a solution I am not quite sure would solve my problem.
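    Two checks that might narrow it down: which libz the interpreter actually loads, and whether that library exports versioned symbols at all:

      # Which zlib does the python2.7 binary resolve to?
      ldd /usr/local/bin/python2.7 | grep libz

      # Does that library define version symbols (ZLIB_1.2.x)?
      readelf -V /usr/lib64/libz.so.1.2.3 | grep ZLIB

    If the second command prints nothing, the library was built without symbol versioning, which is exactly what the 'no version information available' warning complains about.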

    Read the article

  • KERPOOOOW!

    - by Matt Christian
    Recently I discovered the colorful world of comic books.  In the past I've read comics a few times but never really got into them.  When I wanted to start a collection I decided either video games or comics yet stayed away from comics because I am less familiar with them. In any case, I stopped by my local comic shop and picked up a few comics and a few trade paperbacks.  After reading them and understanding their basic flow I began to enjoy not only the stories but the art styles hiding behind those little white bubbles of text (well, they're USUALLY white).  My first stop at the comic store I ended up with: - Nemesis #1 (cover A) - Shuddertown #1 (cover A I think) - Daredevil: King of Hell's Kitchen Trade Paperback - Peter Parker: Spiderman - One Small Break Trade Paperback It took me about 3-4 days to read all of that including re-reading the single issues and glancing over the beginning of Daredevil again.  After a week of looking around online I knew a little more about the comics I wanted to pick up and the kind of art style I enjoyed.  While Peter Parker: Spiderman was ok, I really enjoyed the detailed, realistic look of Daredevil and Shuddertown. Now, a few years back I picked up the game The Darkness for PS3.  I knew it was based off a comic but never read the comic.  I decided I'd pick up a few issues of it and ended up with: - The Darkness #80 (cover A) - The Darkness #81 (cover A) - The Darkness #82 (cover A) - The Darkness #83 (cover A) - The Darkness Shadows and Flame #1  (one-shot; cover A) - The Darkness Origins: Volume 1 Trade Paperback (contains The Darkness #1-6) - New Age boards and bags for storing my comics The Darkness is relatively good though jumping from issue #6 to issue #80 I lost a bit on who the enemy in the current series is.  I think out of all of them, issue #83 was my favorite of them. I'm signed up at the local shop to continue getting Nemesis, The Darkness, and Shuddertown, and I'll probably pick up a few different ones this weekend...

    Read the article

  • Add a remote printer over ssh on OSX?

    - by GradGuy
    I have a printer at my office that is connected to a local network, and my Linux box at work can see it on the network. However, it is not visible to the outside world. I was trying to figure out a way to add it on my MacBook Air, and so far I have found two options:

      1) Using an SSH tunnel via the CLI: cat file.pdf | ssh user@linuxbox lpr

      2) With Chrome installed on the Linux box, using the Google Cloud Print service on the remote box and Automator on my MacBook Air, I can add the printer to the Cmnd+P dialog box.

    I like the first method since it does not require Chrome to be installed, and the second one since it allows the use of Cmnd+P inside all applications. I was wondering if there is a way to combine the two by using Automator to run the first command-line script. What about port forwarding? Is it possible to forward the remote CUPS port 631 to a local port and then add the printer normally? What other methods would you recommend?
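    Since the question raises it, here is how the port-forwarding idea might look (a sketch; the local port and the queue name are assumptions):

      # Forward local port 6310 to CUPS (port 631) on the office Linux box
      ssh -f -N -L 6310:localhost:631 user@linuxbox

      # Register the tunnelled queue with the local CUPS
      lpadmin -p OfficeViaSSH -E -v ipp://localhost:6310/printers/OfficePrinter

    After that, the printer should appear in the Cmnd+P dialog like any local queue, as long as the tunnel is up.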

    Read the article

  • A specific user is unable to log in to vsftpd

    - by HackToHell
    I am setting up a new user; let his name be ftpguy. He has access to only one directory, /var/www/xxx. I have already chowned the directory so that he has read and write privileges. The user is also unable to log in via SSH, as I have disabled that by changing his shell to /sbin/nologin. Also, in the vsftpd config, I have enabled chroot_local_user. Now whenever I log in over FTP, I get an auth error:

      Connect socket #1008 to xxxxxxxx, port 21...
      220 Welcome to blah FTP service.
      USER ftpguy
      331 Please specify the password.
      PASS ****
      530 Login incorrect.

    I changed the password to something different several times, using the passwd command; nothing happens, I still get the above error. However, I am able to log in to my FTP server with my SSH credentials without any problems. (I do not use a key.)
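    One common cause worth checking: with a PAM-backed vsftpd, a shell that is missing from /etc/shells produces exactly this "530 Login incorrect", because pam_shells rejects the account before the password is even checked. A quick test:

      # Is /sbin/nologin listed as a valid shell?
      grep -x /sbin/nologin /etc/shells || echo "/sbin/nologin" >> /etc/shells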

    Read the article

  • How to enable pdo_mysql on CentOS 5.8

    - by nacho3d
    I have a VPS with CentOS 5.8. phpinfo() displays:

      './configure' '--disable-fileinfo' '--disable-pdo' '--enable-bcmath' '--enable-calendar' '--enable-ftp' '--enable-libxml' '--enable-magic-quotes' '--enable-sockets' '--prefix=/usr/local' '--with-apxs2=/usr/local/apache/bin/apxs' '--with-curl=/opt/curlssl/' '--with-imap=/opt/php_with_imap_client/' '--with-imap-ssl=/usr' '--with-kerberos' '--with-libdir=lib64' '--with-libxml-dir=/opt/xml2/' '--with-mysql=/usr' '--with-mysql-sock=/var/lib/mysql/mysql.sock' '--with-openssl=/usr' '--with-openssl-dir=/usr' '--with-pcre-regex=/opt/pcre' '--with-pic' '--with-zlib' '--with-zlib-dir=/usr'

    I've tried this: http://www.host1free.com/forum/vps-technical-support/7248-tutoria-centos-apache-webserver-mysql-php-eaccelerator-apc.html and apparently it installed php-pdo:

      # rpm -qa | grep php
      php-5.3.13-1.el5.remi
      php-xml-5.3.13-1.el5.remi
      php-common-5.3.13-1.el5.remi
      php-cli-5.3.13-1.el5.remi
      php-pdo-5.3.13-1.el5.remi
      php-xmlrpc-5.3.13-1.el5.remi
      php-mcrypt-5.3.13-1.el5.remi

    But I've restarted Apache and phpinfo() still says '--disable-pdo'. Should I rebuild PHP? Do I need to do some other step?
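    An observation worth checking: the configure line above (--prefix=/usr/local, --with-apxs2) looks like a cPanel-compiled PHP, while the remi php-pdo RPM installs a second, separate PHP; the Apache module will not pick up the RPM's extension. A quick way to see which binaries are in play:

      # The RPM-installed CLI
      which php && php -v && php -m | grep -i pdo

      # The cPanel-built interpreter, if present
      /usr/local/bin/php -v && /usr/local/bin/php -m | grep -i pdo

    If the Apache PHP is the /usr/local build, PDO typically has to be compiled back in (on cPanel, via EasyApache) rather than installed as an RPM.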

    Read the article

  • SquirrelMail error after WHM installation

    - by Pleerock
    I have just installed WHM/cPanel, created some accounts, created a mail account in one of them, and entered SquirrelMail to check the mail. Unfortunately it gives me an error:

      Warning: fsockopen() [function.fsockopen]: php_network_getaddresses: getaddrinfo failed: Name or service not known in /usr/local/cpanel/base/3rdparty/squirrelmail/plugins/login_auth/functions.php on line 129

      Warning: fsockopen() [function.fsockopen]: unable to connect to localhost:143 (php_network_getaddresses: getaddrinfo failed: Name or service not known) in /usr/local/cpanel/base/3rdparty/squirrelmail/plugins/login_auth/functions.php on line 129

    How do I fix it? I don't know exactly, but from what I've read the problem may be in DNS? I changed ns1.com to ns1.myhost.com in the Basic cPanel & WHM Setup. P.S. I'm sure that it's a server configuration problem, not SquirrelMail; other mail clients are not working either.
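    Worth checking: getaddrinfo failing on 'localhost' means the resolver cannot map even the local name, which points at /etc/hosts rather than at SquirrelMail itself:

      # localhost should resolve without any DNS involvement
      grep '^127\.0\.0\.1' /etc/hosts
      # expected, roughly: 127.0.0.1   localhost localhost.localdomain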

    Read the article

  • Postfix tutorial inconsistency

    - by Desmond Hume
    I'm following this tutorial to set up a Postfix/Dovecot mail server with Postfix Admin as a web front end. As regards the directory structure for virtual mail users, the author of the tutorial writes:

      Virtual mail users are those that do not exist as Unix system users. They thus don't use the standard Unix methods of authentication or mail delivery and don't have home directories. That is how we are managing things here: mail users are defined in the database created by Postfix Admin rather than existing as system users. Mail will be kept in subfolders per domain and account under /var/vmail - e.g. [email protected] will have a mail directory of /var/vmail/example.com/me.

    But when he gives instructions about configuring Postfix Admin, he suggests that Postfix Admin's config.inc.php should contain this:

      // Mailboxes
      // If you want to store the mailboxes per domain set this to 'YES'.
      // Examples:
      //   YES: /usr/local/virtual/domain.tld/[email protected]
      //   NO:  /usr/local/virtual/[email protected]
      $CONF['domain_path'] = 'NO';

    Is there an inconsistency?
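    For comparison: to get the per-domain layout the tutorial describes (/var/vmail/example.com/me), the settings would presumably look more like this (assuming the standard config.inc.php keys):

      // Store mailboxes per domain: /var/vmail/example.com/me
      $CONF['domain_path'] = 'YES';
      // ...without repeating the domain inside the mailbox part
      $CONF['domain_in_mailbox'] = 'NO';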

    Read the article

  • Sharepoint Server 2007 generates event log entry every 5 minutes - "The SSP Timer Job Distribution List Import Job was not run"

    - by Teevus
    I get the following error logged in the Event Log every 5 minutes:

      The SSP Timer Job Distribution List Import Job was not run. Reason: Logon failure: the user has not been granted the requested logon type at this computer

    In addition, OWSTimer.exe periodically gets into a state where it consumes almost all the CPU, and only killing the process or restarting the SharePoint services fixes it (although I'm not sure whether this is a related or a separate issue). I have tried the following (based on various suggestions floating around the web), all to no avail:

      - iisreset (no effect).
      - Added the SharePoint and SharePoint Search service accounts to the 'Log on as a batch job' and 'Log on as a service' policies in the group policies for the domain.
      - Went into the Local Computer Policy on the SharePoint server and verified that those policies had actually been applied.
      - Verified that the SharePoint and SharePoint Search service accounts are both in the WSS_WPG group.
      - Verified in dcomcnfg that the WSS_WPG group (and indeed the SharePoint and SharePoint Search service accounts) has local activation rights for SPSearch.

    Any more suggestions would be valued. Thanks

    Read the article

  • Unity3d: calculate the result of a transform without modifying the transform object itself

    - by Heisenbug
    I'm in the following situation: I need to move an object in some way, basically rotating it around its parent's local position, or translating it in its parent's local space (I know how to do this). The amount of rotation and translation is known only at runtime (it depends on several factors: the speed of the object, environment factors, etc.). The problem is the following: I can perform this transformation only if the resulting position of the transformed object fits some criteria. An example could be this: the distance between the positions before and after the transformation must be less than a given threshold. (Actually the conditions could be several, and more complex.) The problem is that if I use the Transform.Rotate and Transform.Translate methods of my GameObject, I will lose the original Transform values. I think I can't copy the original Transform using Instantiate, for performance reasons. How can I perform such a task? I think I have more or less two possibilities:

      First: Don't modify the GameObject's position through Transform. Calculate what the position would be after the transform. If that position is legal, modify the transform through the Translate and Rotate methods.

      Second: Store the original transform somewhere. Transform the object using Translate and Rotate. If the transformed position is illegal, restore the original one.

    Read the article

  • How to move Mailboxes over from old Exchange 2007 to new EBS 2008 network?

    - by Qwerty
    This question is similar to: http://serverfault.com/questions/39070/how-to-move-exchange-2003-mailbox-or-store-from-2003-to-2007-on-separate-networks. Basically I am trying to move our Exchange mailboxes over to a test domain that is hosting EBS 2008 with Exchange 2007. We plan to move as soon as we can once we have our Exchange data over. I have tried moving a DB with mailboxes over, but cannot get it to mount in the new Exchange in any way possible, including mounting it onto a recovery store. From my understanding, the ONLY prerequisite for moving Exchange DBs across is that they must have the same organizational name (unlike previous versions of Exchange). If anyone has any insight as to why I cannot mount it and simply reattach the mailboxes, please give me an idea as to what could be wrong. It should be as simple as this. Note that the DBs I have are in a clean state. I cannot use ExMerge because I am not running any mailboxes on 2003. I have also tried using a 32-bit Vista machine with the Export-Mailbox cmdlet to extract mailboxes, but anything I do to it results in permission errors. I have tried to troubleshoot these with no success. I am running as full admin with the proper Exchange roles, and yet it still gives me access-denied errors:

      Export-Mailbox : MapiExceptionNetworkError: Unable to make admin interface connection to server. (hr=0x80040115, ec=-2147221227)

    Also, some errors show in the management console:

      get-MailboxDatabase
      Completed
      Warning:
      ERROR: Could not connect to the Microsoft Exchange Information Store service on server TATOOINE.baytech.local. One of the following problems may be occurring:
      1- The Microsoft Exchange Information Store service is not running.
      2- There is no network connectivity to server TATOOINE.baytech.local.
      3- You do not have sufficient permissions to perform this command. The following permissions are required to perform this command: Exchange View-Only Administrator and local administrators group for the target server.
      4- Credentials have been cached for an unprivileged user. Try removing the entry for this server from Stored User Names and Passwords.

    Why I have to use a 32-bit machine to export a simple .pst file is beyond me... So yeah, I am now out of ideas and any help would be great! Thanks in advance.

    Read the article

  • Networking not working in Windows 7 - "The account specified for this service is different from the account specified for other services"

    - by tog22
    I have a homebuilt computer with a GA-Z68MA-D2H-B3 motherboard with a Realtek RTL8111E LAN chip. Ethernet was working fine in Windows 7 until I reinstalled the OS; now it has stopped, while still working in other OSes. I've tried reinstalling drivers from both the Gigabyte and Realtek sites to no avail; I've also plugged in and installed an Asus USB-N13 wifi dongle and, oddly, this doesn't work either. How can I diagnose and fix this issue? The 'Network and Sharing Center' says "The account specified for this service is different from the account specified for other services running in the same process" under the heading 'Unknown'. Following the advice at http://www.sevenforums.com/network-sharing/130159-dependency-service-group-failed-start.html#post1122627 I've gone to Control Panel > Admin Tools > Services and ensured the services listed at that URL are started (one, IIRC 'COM+ Event System', refused to start as user 'Local Service', so I've had to set it to log on with the 'Local System account'... I suspect this may be part of the problem).

    Read the article

  • Running PHP 5.2 FastCGI + Apache on CentOS 5 issue

    - by Goran
    I am trying to set up two versions of PHP on CentOS 5.9 using this tutorial: http://linuxplayer.org/2011/05/intall-multiple-version-of-php-on-one-server. I have installed the default 5.4.19, and I was trying to set up another PHP, 5.2.17, to be run with FastCGI; I followed the second part completely. However, when I try to open http://web2.example.com it returns a 500 error. In the Apache log there are only two lines that repeat:

      [notice] mod_fcgid: call /var/www/web2/index.php with wrapper /usr/local/php52/bin/fcgiwrapper.sh
      [notice] mod_fcgid: process /var/www/web2/index.php(25250) exit(server exited), terminated by calling exit(), return code: 255

    Please note that I had to add .php at the end of the FCGIWrapper directive, because Apache would not start without it:

      FCGIWrapper /usr/local/php52/bin/fcgiwrapper.sh .php

    Also please note that http://web1.example.com with PHP 5.4.19 is working absolutely fine. Please help. Thank you very much in advance.
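    For reference (a sketch, not the tutorial's exact file): a wrapper of this kind typically amounts to the following, assuming the 5.2.17 tree lives under /usr/local/php52:

      #!/bin/bash
      # Minimal FastCGI wrapper for the alternate PHP build
      export PHPRC=/usr/local/php52/lib
      exec /usr/local/php52/bin/php-cgi

    Exit code 255 usually means the wrapped php-cgi binary itself is failing, so running /usr/local/php52/bin/php-cgi -v by hand is a quick test: if it does not print the 5.2.17 banner, the 500 comes from the binary rather than from mod_fcgid.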

    Read the article

  • How can I run this script on startup, restart, and shutdown?

    - by Exeleration-G
    I'm using Ubuntu 11.10. I've written a script that synchronises a directory in ~ with a directory on /dev/sda4, using Unison. Before, I had this script running every five minutes with no problems, using crontab. Right now, I want to execute this script at startup, restart and shutdown only. This is what the script looks like:

      #!/bin/bash
      unison -perms 0 -batch "/mnt/Data/Syncfolder/" "/home/myname/Syncfolder/"

    My crontab configuration was as follows:

      # m h dom mon dow command
      0,5,10,15,20,25,30,35,40,45,50,55 * * * * sh /usr/local/bin/s4lj.bash

    Note that I copied the script from ~ to /usr/local/bin/ first, to avoid root problems. I've read 'How to execute script on shutdown?' and 'How to write an init script that will execute an existing start script?'. After doing that, I've done this:

      - I've made s4lj.bash executable, and then copied it to /etc/init.d/.
      - For startup, I've made a symlink in /etc/rc2.d/ to /etc/init.d/s4lj.bash, and renamed it to S70s4lj.bash.
      - For restart, I've made a symlink in /etc/rc6.d/ to /etc/init.d/s4lj.bash, and renamed it to K70s4lj.bash.
      - For shutdown, I've made a symlink in /etc/rc0.d/ to /etc/init.d/s4lj.bash, and renamed it to K70s4lj.bash.

    Still, the script won't be run in any of these situations. How can I make the script get executed? I'd be happiest with a proper *.conf file in /etc/init. Thanks in advance.
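    For comparison, a sketch of the shape an init.d script usually needs: rc scripts are invoked with an argument (start at boot, stop at halt and reboot), so the script should handle that argument and carry an LSB header:

      #!/bin/sh
      ### BEGIN INIT INFO
      # Provides:          s4lj
      # Required-Start:    $local_fs
      # Required-Stop:     $local_fs
      # Default-Start:     2 3 4 5
      # Default-Stop:      0 6
      # Short-Description: Sync Syncfolder with Unison
      ### END INIT INFO

      case "$1" in
        start|stop|restart)
          unison -perms 0 -batch "/mnt/Data/Syncfolder/" "/home/myname/Syncfolder/"
          ;;
        *)
          echo "Usage: $0 {start|stop|restart}"
          ;;
      esac
      exit 0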

    Read the article

  • Hostname problems in CentOS 5.5

    - by spoon16
    I just set up a CentOS 5.5 machine on my local network and attempted to modify the hostname by editing the /etc/sysconfig/network file. When I'm logged in locally, the change to the hostname is reflected and seems to be working fine. When I open an SSH session via PuTTY from Windows, this is what I see at the prompt:

      [root@? ~]# cat /etc/sysconfig/network
      NETWORKING=yes
      NETWORKING_IPV6=yes
      HOSTNAME=mini.local
      [root@? ~]# sysctl kernel.hostname
      kernel.hostname = ?
      [root@? ~]# hostname
      ?
      [root@? ~]# hostname -f
      hostname: Unknown server error

    A couple of other symptoms that may be helpful in troubleshooting this problem: I can ping the CentOS box from my Windows machine via IP but not by hostname. Also, my Netgear router does not display the hostname when I view the 'Connected Devices', though I do see the MAC address and the proper IP listed. How can I make the hostname propagate properly throughout my network?
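    A sketch of the usual manual steps, since /etc/sysconfig/network is only read at boot (the IP below is illustrative):

      # Set the running hostname without a reboot
      hostname mini.local

      # Let the local resolver map the name, so hostname -f can answer
      echo "192.168.1.50   mini.local mini" >> /etc/hosts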

    Read the article

  • How to send ctrl+alt+del using Remote Desktop

    - by tmarshall
    I need to change the local admin password on a remote PC using a Remote Desktop Connection. I would normally do this by pressing ctrl+alt+del and selecting the change password option. But I can't send ctrl+alt+del using Remote Desktop since this "special" key series is always handled by the local client. How can I send ctrl+alt+del using Remote Desktop? Note: This question is not about how to change the password. I am aware of other ways to change the password. I am specifically asking how to send ctrl+alt+del - not how to change the password without sending those characters.

    Read the article
