Search Results

Search found 25214 results on 1009 pages for 'browser tools'.


  • Login error in phpMyAdmin, problem setting auth_type in config.inc.php

    - by sergiom
    I'm having a problem accessing phpMyAdmin. A few weeks ago I succeeded in configuring it with auth_type = 'cookie', but I was still receiving an error stating that I had to set blowfish_secret. That was strange, because it was set. So I changed auth_type from cookie to http, but it didn't work. I changed it back to cookie, but it doesn't work anymore. This is the error:

        phpMyAdmin - Error
        Cannot start session without errors, please check errors given in your PHP
        and/or webserver log file and configure your PHP installation properly.

    This is my C:\wamp\apps\phpmyadmin3.2.0.1\config.inc.php:

        <?php
        /* Servers configuration */
        $i = 0;

        /* Server: localhost [1] */
        $i++;
        $cfg['Servers'][$i]['verbose'] = 'localhost';
        $cfg['Servers'][$i]['host'] = 'localhost';
        $cfg['Servers'][$i]['port'] = '';
        $cfg['Servers'][$i]['socket'] = '';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['extension'] = 'mysqli';
        $cfg['Servers'][$i]['auth_type'] = 'cookie';
        $cfg['Servers'][$i]['user'] = '';
        $cfg['Servers'][$i]['password'] = '';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;
        $cfg['Servers'][$i]['blowfish_secret'] = 'this is my passphrase';
        /* End of servers configuration */

        $cfg['DefaultLang'] = 'en-utf-8';
        $cfg['ServerDefault'] = 1;
        $cfg['UploadDir'] = '';
        $cfg['SaveDir'] = '';
        ?>

    I changed the blowfish_secret (since I don't remember the old one), deleted the cookies in my browser, and restarted all WAMP services and the browser. After I enter the username and password on the login page, I get the error. I've tried searching the log files, but I'm a newbie and I'm not sure I've searched the right ones. I'm using WampServer 2.0 with:

        Apache 2.2.11
        PHP 5.3.0
        MySQL 5.1.36
        phpMyAdmin 3.2.0.1
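
    This error often points at PHP's own session handling rather than phpMyAdmin's configuration: if PHP cannot write its session files, phpMyAdmin fails exactly like this. A minimal diagnostic sketch in Python, assuming a default WAMP layout (the session directory below is an assumption; check session.save_path in php.ini for the real value):

        # Verify that PHP's session directory exists and is writable.
        import os
        import tempfile

        session_dir = r"C:\wamp\tmp"  # hypothetical; read session.save_path from php.ini

        if not os.path.isdir(session_dir):
            print("session.save_path does not exist: %s" % session_dir)
        else:
            try:
                # Creating and removing a temp file proves write access.
                with tempfile.TemporaryFile(dir=session_dir):
                    pass
                print("session directory is writable")
            except (OSError, IOError) as e:
                print("cannot write session files: %s" % e)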


  • CLI package to replace Plesk

    - by dotancohen
    Myself and another programmer are tasked with maintaining a few webservers. I prefer CLI tools; she prefers Plesk. However, I am adamant about not installing Plesk, for quite a few reasons. I have written a small Python script for adding new domains, and now I am about to add the ability to configure email addresses while abstracting the details of Postfix from her. Before I go that route, I have googled to see if anything already exists, and am surprised that I have come up with nothing! Are there any mature, stable "control panels" or "server admin" tools like Plesk, but which are accessed via the CLI over SSH? I am looking for the following features:

    - Add / remove / configure domains served by Apache.
    - Add / remove / configure email boxes and mail groups.
    - Add / remove MySQL databases and users, and grant users access to databases.
    - Provide basic monitoring of "server health": memory usage, disk usage, CPU usage, bandwidth usage.
    - Possibly set up SFTP accounts so that only specific FTP users could access specific /var/www/someSite/ directories.

    Note that I was unsure whether this question is off-topic for ServerFault. As per the ServerFault about page (there seems to be no more FAQ), it meets two of the "ask about" criteria and none of the "don't ask about" ones, with the possible exception of being opinion-based. Therefore, to keep it on-topic, I would like to know about the available applications, and we should stay objective and less opinionated. Thank you!
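
    For the domain-adding piece, a rough sketch of the kind of CLI helper described above, assuming a Debian/Ubuntu-style Apache layout (the paths, vhost template, and reload commands are assumptions to adapt):

        # add_domain.py - minimal sketch: write a vhost file and enable it.
        import subprocess
        import sys

        VHOST_TEMPLATE = """<VirtualHost *:80>
            ServerName {domain}
            DocumentRoot /var/www/{domain}
        </VirtualHost>
        """

        def add_domain(domain):
            path = "/etc/apache2/sites-available/%s.conf" % domain
            with open(path, "w") as f:
                f.write(VHOST_TEMPLATE.format(domain=domain))
            subprocess.check_call(["a2ensite", "%s.conf" % domain])  # Debian helper
            subprocess.check_call(["service", "apache2", "reload"])

        if __name__ == "__main__":
            add_domain(sys.argv[1])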


  • Blocking HTTPS and P2P Traffic

    - by Genboy
    I have a Debian server running at the gateway level on a LAN. It runs squid for creating block lists of websites - for example, blocking social networking on the LAN - and it also uses iptables. I am able to do a lot of things with squid & iptables, but a few things seem difficult to achieve:

    1) If I block facebook through their http url, people can still access https://www.facebook.com because squid doesn't handle https traffic by default. However, if the users set the gateway IP address as the proxy in their web browser, then https is also blocked. So I can do one thing: using iptables, drop all outgoing port-443 traffic, so that people are forced to set the proxy in their browser in order to browse any HTTPS site. Is there a better solution for this?

    2) As the number of blocked urls grows in squid, I am planning to integrate squidguard. However, the good squidguard lists are not free for commercial use. Does anyone know of a good squidguard list that is free?

    3) Blocking yahoo messenger, gtalk, etc. These Instant Messenger programs work over many ports, so you need to drop lots of outgoing ports in iptables. New ports keep getting added, so you have to keep adding them. And even if your list of ports is current, people can still use the web version of gtalk, etc.

    4) Blocking P2P. I haven't been able to figure out how to do this so far.
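
    For point 3, one way to keep the port list maintainable is to generate the iptables rules from a single list instead of editing rules by hand. A sketch (must run as root; the ports shown are illustrative assumptions, not a complete list):

        # Generate iptables DROP rules for a maintained list of IM ports.
        import subprocess

        IM_PORTS = [5050, 5222, 5223]  # e.g. Yahoo Messenger, XMPP/GTalk; examples only

        for port in IM_PORTS:
            subprocess.check_call([
                "iptables", "-A", "FORWARD", "-p", "tcp",
                "--dport", str(port), "-j", "DROP",
            ])

    This only addresses the port churn, not the web versions of these messengers; those still need URL filtering in squid.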


  • Apache on OS X not serving localhost or vhosts correctly

    - by Marcus
    I've encountered a really odd problem in my development environment, and I really can't make any sense of it. It started with a locally developed PHP site refusing to update any content I edited in a file - no text or anything. So if the document was <h2>Hello!</h2> and I edited it to <h2>What's wrong?!</h2>, it still output <h2>Hello!</h2>. I thought it was some kind of caching problem, but neither "hard reloads" in the browser nor sudo apachectl -k restart sorted it out. Only a restart of my Mac finally fixed it.

    Now, a few days later, even stranger issues are appearing. I have a LAMP stack installed via Homebrew; in httpd-vhosts.conf I've set ~/Dev/ as my localhost, and I set up a <VirtualHost *:80> for each project ("ServerName projectname.dev", for example). However, whatever files or folders I put in ~/Dev/ have stopped showing up on localhost, and new VirtualHost directives don't work. There are three projects plus a "docs" folder in ~/Dev/, but "localhost" only displays the two older projects.

    So, as I've said - I've tried restarting Apache (without errors), clearing browser caches (tried in three browsers: Chrome, Safari and Firefox) and even rebooting the Mac. Nothing. Any ideas? Running OS X 10.8.5 and Apache 2.2.24.


  • Windows 7 notebook turns off by itself: how can I check whether the CPU is too hot?

    - by Jian Lin
    I have a Dell Studio 15 notebook, and it just started turning off by itself yesterday. Could it be that the CPU is too hot? I have had several notebooks before, and I could put every one of them on the bed without any problem. This Dell Studio notebook, however, seems to have the air / fan outlet pointing out of the bottom back of the notebook, so I suspect the airflow is partially blocked when it is on the bed. Are there Windows 7 tools that can monitor the CPU temperature, or will some 3rd-party tool be needed? (I try to stick to official tools nowadays.)

    Also, it is running Windows 7 Ultimate. Is there really no utility or background service, from Windows 7 or from Dell, that detects when the temperature is too hot (or within 95% of the maximum), pops up a message box giving a warning, and says the computer will go into sleep mode in 1 minute - instead of just turning off the computer by brute force (cutting the power) right then and there?

    Update: it turned off right in front of my eyes. It was not doing any Windows update or anything - just normal use, and jooooop, it turned off.
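
    Windows 7 itself ships no user-facing temperature monitor with that kind of warning behaviour, but the ACPI thermal zone can sometimes be read through WMI. A hedged sketch using the third-party Python "wmi" package; many OEM BIOSes do not expose this class, in which case vendor utilities or third-party monitors are the fallback:

        # Read the ACPI thermal zone via WMI (requires the "wmi" package).
        # May raise an error on machines whose BIOS does not expose it.
        import wmi

        w = wmi.WMI(namespace="root\\wmi")
        for zone in w.MSAcpi_ThermalZoneTemperature():
            # CurrentTemperature is reported in tenths of a Kelvin.
            celsius = zone.CurrentTemperature / 10.0 - 273.15
            print("Thermal zone: %.1f degrees C" % celsius)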


  • Adding a custom user folder on Ubuntu

    - by Narcolapser
    Question: How do you add a custom folder to the collection of user folders that come with Ubuntu?

    Info: I just loaded my netbook with Ubuntu Desktop 10.04 LTS (Desktop because it is an Aspire One, and the apocalypse seems to follow whenever I try to install Netbook Remix onto it). It comes with standard folders like Documents, Music, Pictures, Downloads (though this one doesn't appear until you actually download something), Videos, etc. These are handy little folders because they have little symbols on them and are nicely located in my file browser; it is basically like the folder layout that Windows had in Vista. I do a lot of little programming on this computer, so I have a folder named "Code" in my home folder, in which I keep all these single-kB code files. But I would really like it to be listed next to my other user folders. In summary: how do you add a folder to the listing in the file browser? And, if possible, how do you give it an icon? (I understand fully that I will probably have to make said icon.) Those two things are what I'm seeking to do. ~n

    P.s. Please correct me if I'm using the wrong name. I just guessed and called them "user folders" because they are folders the user uses - made sense. But if they have another name like "libraries", please say so. Thanks
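
    For what it's worth, on 10.04 the standard folders are defined by xdg-user-dirs in ~/.config/user-dirs.dirs, and the file browser's sidebar additionally reads plain-text bookmarks from ~/.gtk-bookmarks (one "file://" URI per line, with an optional label). A small sketch that adds a "Code" bookmark; the folder path is an assumption:

        # Append a sidebar bookmark for ~/Code to GNOME's bookmark file.
        import os

        home = os.path.expanduser("~")
        code_dir = os.path.join(home, "Code")  # assumed location of the folder

        with open(os.path.join(home, ".gtk-bookmarks"), "a") as f:
            f.write("file://%s Code\n" % code_dir)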


  • Unable to record using JMeter

    - by krish
    Hi, I am trying to record a http web page using JMeter version 2.3.3. I have set up the JMeter proxy and tried, but it didn't work. I followed the steps below:

    1. Launch JMeter 2.3.3 and add a thread group to the test plan.
    2. Under Workbench > Add > Non-Test Elements, add an HTTP Proxy Server. The proxy server settings are port: 9090, target: use recording controller, grouping: do not group samplers, type: HTTP request, with all the boxes under HTTP sampler settings checked.
    3. Save the settings.
    4. In the browser (IE 7.0 or Firefox 3.0.16), under connection settings, set up the manual proxy as localhost with port 9090 (no auto-detect settings, only the manual proxy), and save.
    5. In JMeter, start the HTTP proxy server.
    6. Open a browser and hit the webpage that needs to be tested.

    The page does not open. In fact, because of the changes made in the browsers, no pages open at all. Whenever I try hitting a page, the requests are recorded in JMeter, but without the page opening, how can I test? My work is blocked on this, so a quick answer would be much appreciated.


  • Replacing DropBox with Amazon S3 + SSL + GPG/TrueCrypt + mounting on OS X?

    - by Matt Rogish
    So, right now we're using DropBox to share various data files between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan of $10/mo seems too expensive. So I am contemplating a home-grown solution to replace DropBox. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox, we're all comfortable with that. There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements:

    1. Transport security via SSL to the bucket.
    2. Encryption of bucket contents.
    3. Bi-directional syncing.

    Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't say, but the protocol looks like plain old http: http://www.nongnu.org/duplicity/duplicity.1.html#sect6). Many scripts use gpg to encrypt files. This seems like it could work, however I have to make sure that each OS X client is able to use the same key to encrypt and decrypt files (key management is left to me to manage). Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store; as we'd be using Amazon S3 as the "repository", they fail #3. Whew. So, I'd love a single tool that does all this, but after an exhaustive search I don't think one exists. I'd be happy just knowing which tools out there can fulfil my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!
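
    For requirements 1 and 2, the building blocks are small; requirement 3 (true bi-directional sync) is the genuinely hard part. A sketch of the first two, assuming the classic "boto" package and a gpg binary (symmetric encryption for simplicity, so shared-key management is still a manual step; the bucket name is a placeholder):

        # Encrypt a file with gpg, then upload it to S3 over SSL.
        import os
        import subprocess
        import boto

        def encrypt_and_upload(path, bucket_name, passphrase):
            encrypted = path + ".gpg"
            # gpg2 may additionally need --pinentry-mode loopback here.
            subprocess.check_call([
                "gpg", "--batch", "--yes", "--symmetric",
                "--passphrase", passphrase, "--output", encrypted, path,
            ])
            conn = boto.connect_s3(is_secure=True)  # HTTPS transport
            bucket = conn.get_bucket(bucket_name)
            key = bucket.new_key(os.path.basename(encrypted))
            key.set_contents_from_filename(encrypted)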


  • Configure IIS site to work with host header & hosts file entry

    - by HarveySaayman
    I'm a bit of an IIS / web noob (I'm a C# backend service / winforms dev), so please bear with me :-) I've set up a site in IIS on my local dev machine. In the bindings section of the site I've added 4 bindings, all 4 for http:

        Host Name                   Port   IP Address
        blog.sourcecube.co.za       26581  *
        www.blog.sourcecube.co.za   26581  *
        blog.sourcecube.co.za       26581  127.0.0.1
        www.blog.sourcecube.co.za   26581  127.0.0.1

    In my hosts file (drivers\etc\hosts), I've added the following entries:

        127.0.0.1 blog.sourcecube.co.za
        127.0.0.1 www.blog.sourcecube.co.za

    When I ping my domain name from the command line, it does in fact resolve to the loopback address, 127.0.0.1. So what I expect to happen when I navigate to blog.sourcecube.co.za in my browser is for it to resolve to 127.0.0.1, and when the request hits IIS, it should know which site to serve because of the host header. But when I navigate to blog.sourcecube.co.za, I get an "Unable to connect, Firefox can't establish a connection to the server at blog.sourcecube.co.za" error. What am I doing wrong?

    --- UPDATE ---

    Navigating to blog.sourcecube.co.za:26581 from my browser works... I'd like to get it working without specifying the port number, though.


  • How do you make Google's interface always be in your chosen language?

    - by Michael Wolf
    Google's interface and search results don't always appear in my preferred language, English. I'm located in Mexico City and, although I generally have no problem with Spanish, I would prefer search results in English most of the time. (The exception is when I'm using search terms in Spanish.) I'd also prefer the interface to be in English, but that's far less important to me than search results. Google looks at your IP to decide where you're coming from and thus what language to present results in. So, when I type www.google.com into the URL bar, it redirects me to www.google.com.mx. Is there a way to force Google to use one language all the time? Here are some things I've done and tried:

    0) I have a Google account, and I've configured it such that it should know that English is my preferred language. I don't often explicitly log out of Google, so generally Google knows I'm me and my preferences when I access its services.

    1) I've configured my browser to ask for pages in English. Very few sites support this feature at all; Google isn't one of them.

    2) From www.google.com.mx, I can click on "Google.com in English". This works until, I think, I close the browser.

    2a) From www.google.com.mx, I can go into account configuration, which is English. From then on, everything's in English.

    3) I can append &hl=en (Human Language = English) to the end of the URLs of result pages.

    2, 2a, and 3 all "work", but they're all mildly annoying. I'd rather avoid them if I could. (At the risk of stating the obvious, English and Spanish are the languages I'm dealing with, but I imagine that, say, a francophone using Google from Korea would run into basically the same issue.)


  • Synchronising a remote folder with a local one.

    - by Workshop Alex
    I am using a network disk (connected to my router by USB) to store several data files. A simple .NET application that I've created is supposed to read and modify these data files. However, some security issues prevent this application from accessing the files directly. (Actually, these have been built into my application on purpose, since it's not going to support NAS disks.) Since this disk is shared with several computers, I just want a simple synchronisation method which will copy the files to a local folder where my application can access them and, once they are modified, send the modified files back to the NAS disk again. I have two options:

    1) Build a second application to do my own synchronisation.
    2) Find some built-in function inside Windows 7 Ultimate which can do this for me.

    Option 2 is preferred. Option 1 is something I can do easily, if need be. I don't need third-party tools. (Still, feel free to add some references to good tools, although I won't accept them as answers.) Basically, is this possible with Windows 7 and, if so, how?
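
    For reference, option 1 can stay very small if conflicts don't need handling. A minimal Python sketch that copies whichever side has the newer modification time, in both directions (paths are placeholders; no deletions or conflict detection):

        # Two-pass sync: pull newer files from the NAS, then push local changes back.
        import os
        import shutil

        def sync_newer(src, dst):
            for name in os.listdir(src):
                s, d = os.path.join(src, name), os.path.join(dst, name)
                if os.path.isfile(s):
                    if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                        shutil.copy2(s, d)  # copy2 preserves timestamps

        local, remote = r"C:\data\local", r"\\nas\share\data"
        sync_newer(remote, local)
        sync_newer(local, remote)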


  • How to block spam email using Microsoft Outlook 2011 (Mac)?

    - by tim8691
    I'm using Microsoft Outlook 2011 for Mac, and I'm getting so much spam that I'm not sure how to control it. In the past, I always applied "Block Sender" and "Mark as Junk" to any spam email messages I received. This doesn't seem to be enough nowadays. I've since started using Tools > Rules to create rules based on subject, but the same spammer keeps changing subject lines, so this isn't working. I've been tracking the IP addresses; they also seem to change with each email. Is there any key information in the email that I can use in a rule to successfully move these spam emails to the junk folder? I'm using a "Low" level of junk email protection; the next higher level, "High", says it may eliminate valid emails, so I prefer not to use it. There are maybe one or two spammers sending me emails, but the volume is very high now. I'm getting a variation of the following facebook email spam:

        Hi, Here's some activity you have missed. No matter how far away you are
        from friends and family, we can help you stay connected. Other people have
        asked to be your friend. Accept this invitation to see your previous friend
        requests

    Some variations on the subject line they've used include:

        Account Info Change
        Account Sender Mail
        Pending ticket notification
        Pending ticket status
        Support Center
        Support med center
        Pending Notification
        Reminder: Pending Notification

    How do people address this? Can it be done within Outlook, or is it better to get third-party commercial software to plug in or otherwise manage it? If so, why would the third party be better than Outlook's internal tools (e.g. what does it look for in the incoming email that Outlook doesn't look at)?


  • Ubuntu networking issue: two specific machines cannot browse the web while connected to the network at the same time

    - by jensendarren
    I have set up a secure wireless network which works very well, except for two laptops running Ubuntu 10.10 that can't access the Internet via a browser at the same time. They can both ping sites, wget sites, and use Skype, but when using a browser the page never loads (in Firefox the status bar just sits there saying "Connecting" until it times out). Here is what we have tried so far (nothing has fixed this issue):

    - OpenDNS
    - Restarting networking services
    - Using a wired connection rather than wireless
    - Removing all other nodes from the network except the two machines that have this issue
    - Swapping out the router
    - Factory-resetting the router
    - Reformatting one of the machines and re-installing Ubuntu 10.10

    Other things that we have checked:

    - The two machines can connect simultaneously without any issues to other wireless networks in different locations (say, in an Internet cafe or another office)
    - The two machines have unique IP addresses
    - The two machines have unique MAC addresses
    - The two machines can communicate on the network using Skype, wget, ping, etc.
    - We are not using a proxy on either machine

    FYI: I have attached output from Wireshark. For the test, we turned both machines on and pointed them both at the same website. The content loaded on one and not the other. Here is the output from Wireshark (speedyshare.com/files/26228631/machine_output_1 && speedyshare.com/files/26228649/machine2). As you can see, the first one worked and the second one didn't. I don't fully understand the output and would appreciate it if someone could shed some light on what might be causing this and how we can fix it! Many thanks! Darren


  • Securing data between a server and a workstation using jumper hosts

    - by apalsson
    Hello. I have a WWW server, and my problem is that the content is sensitive and should not be accessible to people without proper credentials. How can I improve the ease of use while still maintaining security in the following scenario?

    The server is accessed through a "jumper host": the client connects to the jumper using a VPN connection and uses Remote Desktop to access the jumper. From the jumper, he uses Remote Desktop again to access the server. Finally, on the server, the user can access content using a WWW browser. Everything from the VPN client to the WWW browser requires authentication using a SmartCard token.

    This seems quite secure to me. Content only gets mirrored over the Remote Desktop connection between server and jumper, so there are no cached files to worry about. The connection between jumper and client is protected using VPN (SSL), so there is no eavesdropping. But it is quite cumbersome for the clients, with many steps and connections to open. :( So, how can I improve the user experience of accessing my server without compromising security? Thanks.


  • Oracle Error ORA-12560 TNS:Protocol Adapter error?

    - by David Basarab
    I am using Oracle Database 10g. Both servers are Windows 2003. I have an Oracle database set up on one server. Here is the tnsnames.ora from the server with the database:

        # tnsnames.ora Network Configuration File: C:\oracle\product\10.2.0\db_1\network\admin\tnsnames.ora
        # Generated by Oracle configuration tools.
        ORCL.VIRTUALHOLD.COM =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = databaseServer)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )

    The environment variables on the server are:

        ORACLE_HOME = C:\oracle\product\10.2.0\db_1
        ORACLE_SID = orcl

    I am trying to connect to it from another box that has the Oracle client installed. Here is the tnsnames.ora installed on the client server:

        # tnsnames.ora Network Configuration File: C:\oracle\product\10.2.0\client_1\network\admin\tnsnames.ora
        # Generated by Oracle configuration tools.
        ORCL =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = databaseServer)(PORT = 1521))
            )
            (CONNECT_DATA =
              (SERVICE_NAME = orcl)
            )
          )

        ORACLE_HOME = C:\oracle\product\10.2.0\client_1
        ORACLE_SID = orcl

    Locally on the database server I can connect through sqlplus with no issues. On the client machine I keep getting the error:

        ORA-12560: TNS:protocol adapter error

    What am I missing? Does the client tnsnames.ora need to be different?
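
    As a hedged aside: ORA-12560 on a Windows client very often means the connection attempt never used the TNS alias at all (for example, sqlplus invoked without @ORCL while ORACLE_SID is set locally, which triggers a doomed local connection). Testing with an explicit descriptor bypasses tnsnames.ora entirely and isolates the problem. A sketch using the cx_Oracle Python package (credentials are placeholders; older cx_Oracle versions take a SID instead of service_name):

        # Connect with an explicit DSN, bypassing tnsnames.ora.
        import cx_Oracle

        dsn = cx_Oracle.makedsn("databaseServer", 1521, service_name="orcl")
        conn = cx_Oracle.connect("scott", "tiger", dsn)  # placeholder credentials
        print(conn.version)
        conn.close()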


  • Connect over WiFi to SQL Server from another computer

    - by Bronzato
    I am trying to connect over WiFi to SQL Server with SQL Server Management Studio from another computer, but it fails. I have a computer with Windows 7 & SQL Server 2008 (let's say the server computer). Next to it, I have a freshly installed computer with Windows 7 & SQL Server Management Studio (let's say the client computer). What I did on the server computer:

    - Configured the firewall by enabling port 1433.
    - Enabled network protocols (TCP/IP) inside SQL Server Configuration Manager.
    - Checked "Allow remote connections to this server" in the server properties in Management Studio.
    - Started SQL Server Browser.
    - Restarted the services (SQL Server Browser is stopped now, but I think it is not necessary, is it?).

    Next, I successfully tested a ping on port 1433 from my client computer with a tool named tcping (e.g. tcping 192.168.1.4 1433). But I still cannot connect from my client computer to SQL Server on the other computer.

    OK, something new on this problem: I have now successfully connected to my "server computer" with Management Studio. What I did was type the computer name in the server name field of the connection window. My previous (failed) attempt was to type the computer name followed by the SQL Server instance (e.g. COMPUTER_NAME\SQL2008). I don't know why I only have to type the computer name... Never mind. Now my new challenge is to get my VB6 application to connect to this remote database located on the "server computer". I have a connection string for this, but it fails to connect. Here is my connection string:

        "Provider=SQLOLEDB.1;Password=mypassword;User ID=sa;Initial Catalog=TPB;Data Source=THIERRY-HP\SQL2008"

    Any idea what's wrong? Thanks
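
    As a quick way to test the same path outside VB6, here is a Python sketch with pyodbc using the values from the connection string above (it assumes the stock "SQL Server" ODBC driver). One thing worth noting: connecting by instance name (THIERRY-HP\SQL2008) relies on the SQL Server Browser service to resolve the instance to a port, so if the Browser service is stopped, try "SERVER=THIERRY-HP,1433" instead:

        # Minimal connectivity test mirroring the VB6 connection string.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};"
            "SERVER=THIERRY-HP\\SQL2008;"  # or "SERVER=THIERRY-HP,1433"
            "DATABASE=TPB;"
            "UID=sa;PWD=mypassword"
        )
        print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
        conn.close()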


  • Node.js Build failed: -> task failed (error#2)?

    - by Richard Hedges
    I'm trying to install Node.js on my CentOS server. I run ./configure and it runs perfectly fine. I then run the 'make' command and it produces the following:

        [5/38] libv8.a: deps/v8/SConstruct -> out/Release/libv8.a
        /usr/local/bin/python "/root/node/tools/scons/scons.py" -j 1 -C "/root/node/out/Release/" -Y "/root/node/deps/v8" visibility=default mode=release arch=ia32 toolchain=gcc library=static snapshot=on
        scons: Reading SConscript files ...
        ImportError: No module named bz2:
          File "/root/node/deps/v8/SConstruct", line 37:
            import js2c, utils
          File "/root/node/deps/v8/tools/js2c.py", line 36:
            import bz2
        Waf: Leaving directory `/root/node/out'
        Build failed:  -> task failed (err #2): {task: libv8.a SConstruct -> libv8.a}
        make: *** [program] Error 1

    I've done some searching on Google, but I can't seem to find anything to help. Most of what I've found is for Cygwin anyway, and I'm on CentOS 4.9. Like I said, the ./configure went through perfectly fine with no errors, so there's nothing there that I can see.

    EDIT: I've got a little further. Now I just need to upgrade g++ to version 4 (or higher). I tried yum update gcc but no luck, so I tried yum install gcc44, which resulted in no luck either. Has anyone got any ideas as to how I can update g++?
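
    The libv8 failure is a Python problem rather than a Node one: V8's build scripts import the bz2 module, and the /usr/local/bin/python used by scons was evidently built without it. A one-line check (the usual fix on CentOS is installing the bzip2 development headers, e.g. bzip2-devel, and rebuilding that Python before re-running make):

        # If this raises ImportError, the interpreter lacks the bz2 module
        # that V8's js2c.py needs.
        import bz2

        print(bz2.decompress(bz2.compress(b"ok")))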


  • Batch copy files with an error log on missing permissions

    - by sc911
    Hi *, I'm searching for a tool to batch-copy files that supports the following points:

    - copy files from a net share
    - report any errors
    - show errors only, or filter the log on errors
    - don't stop on an error
    - also report if a file or a folder could not be copied due to missing permissions
    - if possible, a queue where new jobs can be added while copying

    I tried the following tools:

    - TeraCopy: takes a lot of time just to calculate the time and size of the job, and does not report errors due to missing permissions (it doesn't even add those files to the copy queue).
    - Karen's Replicator: does not report errors due to missing permissions.
    - xcopy: does a great job when using the right parameters and piping the output to a file (in the German localization, xcopy /k /r /e /i /s /c /h SOURCE TARGET>LOGFILE 2>&1 will do the job; opening the logfile in IE will give you a great monitor). But queuing jobs is not possible (OK, you can join them all in a batch file, but you cannot queue jobs while another one is running (hm, thinking of a batch script that loops through a file with the source-target config...)).

    To be continued. Which tools do you use? Tell me! Thx, sc911
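
    For comparison, the requested behaviour is small enough to script. A minimal Python sketch that keeps copying after errors and logs only the failures, including permission problems (paths are placeholders; no queue handling):

        # Walk a source tree, copy what we can, log every failure, never stop.
        import os
        import shutil

        def copy_tree_logging(src, dst, log_path):
            with open(log_path, "w") as log:
                for root, dirs, files in os.walk(src):
                    target = os.path.join(dst, os.path.relpath(root, src))
                    try:
                        os.makedirs(target)
                    except OSError:
                        pass  # already exists, or failures get logged per file below
                    for name in files:
                        source_file = os.path.join(root, name)
                        try:
                            shutil.copy2(source_file, target)
                        except (OSError, IOError) as e:
                            log.write("FAILED %s: %s\n" % (source_file, e))

        copy_tree_logging(r"\\server\share", r"D:\backup", r"D:\copy-errors.log")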


  • Deploy our own software using Puppet?

    - by Ken
    (Apologies in advance for the stupidity in this question. I'm normally a programmer, not a sysadmin, but I've taken it upon myself to automate some things and clean up some other things which are automated, but not in the prettiest way. :-)

    I've been looking around at various tools for automating software deployment to a bunch of servers, like cfengine, Puppet, and Chef. So far, Puppet looks the most appealing, but I've certainly not committed to anything yet. These tools all look like they can do a great job of keeping a bunch of servers up-to-date with prepackaged software. What I don't get is: how does one use a tool (like Puppet) to manage deployments of our own internal software? I think I'm at a loss because I've seen a thousand tutorials showing how to keep Apache ensure => latest (which is pretty cool), but nothing that quite corresponds to my use-case today, which is something more like this (sketched in code below):

    1. A human being pushes The Button.
    2. Pull branch A from the version-control repository B.
    3. Run command C to compile it.
    4. Copy the binaries D to servers E1 through E10.
    5. On each server, run command F to make all changes take effect.

    Puppet sounds great, and I totally see the advantage of declarative, idempotent configuration over some shell scripts, but I've not seen any tutorials for "you want to update your shell scripts to Puppet (or Chef, or cfengine), so here's what you should do". Is there such a thing? Is it obvious to other people how to take the things provided in the Puppet docs and replicate the behavior I want? Am I just not getting it? What it's sounding like to me, so far, is that the human being (#1) would manually package the software (#2 and #3) external to Puppet, then manually update the Puppet config, which would trigger Puppet to update the servers... maybe? (I'm a little confused here, as I'm sure you can tell.) Thanks!
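
    To make those steps concrete, a hedged imperative sketch of that pipeline in Python - not Puppet, just the kind of script a tool like Puppet would either trigger or replace with a declarative equivalent (the repo URL, build command, install path, and hostnames are all placeholders):

        # "The Button": pull, build, push to E1..E10, restart.
        import subprocess

        REPO = "git@example.com:ourapp.git"  # hypothetical repository B
        SERVERS = ["e%02d.example.com" % i for i in range(1, 11)]  # E1..E10

        subprocess.check_call(["git", "clone", "-b", "branchA", REPO, "build"])
        subprocess.check_call(["make", "-C", "build"])  # command C: compile
        for host in SERVERS:
            # copy binaries D, then run command F on each server
            subprocess.check_call(["scp", "build/ourapp", "%s:/opt/ourapp/" % host])
            subprocess.check_call(["ssh", host, "/etc/init.d/ourapp restart"])

    In Puppet terms, the usual pattern is to have the build step publish a versioned package to an internal repository and let the agents converge on the new version, rather than pushing binaries directly.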


  • Graphite not running

    - by River
    I'm currently trying to install graphite 0.9.9 on a Gentoo box using these instructions from the graphite wiki. Essentially, it fronts graphite with Apache and mod_wsgi. Everything seems to have gone well, except that Apache / the graphite webapp never return a response to the web browser (the browser continuously waits for the page to load). I've turned on the graphite debug info, but the only message in the log files is this, repeated over and over again in info.log (with the pid always changing):

        Thu Feb 23 01:59:38 2012 :: graphite.wsgi - pid 4810 - reloading search index

    These instructions have worked for me before to set up graphite on an Ubuntu machine. I suspect that mod_wsgi is dying, but I have confirmed that mod_wsgi works fine when not serving the graphite webapp. This is what my graphite.conf vhost file looks like:

        WSGISocketPrefix /etc/httpd/wsgi/

        <VirtualHost *:80>
            ServerName # Server name
            DocumentRoot "/opt/graphite/webapp"
            ErrorLog /opt/graphite/storage/log/webapp/error.log
            CustomLog /opt/graphite/storage/log/webapp/access.log common

            # I've found that an equal number of processes & threads tends
            # to show the best performance for Graphite (ymmv).
            WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
            WSGIProcessGroup graphite
            WSGIApplicationGroup %{GLOBAL}
            WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL}

            WSGIScriptAlias / /opt/graphite/conf/graphite.wsgi

            Alias /content/ /opt/graphite/webapp/content/
            <Location "/content/">
                SetHandler None
            </Location>

            # XXX In order for the django admin site media to work you
            # must change @DJANGO_ROOT@ to be the path to your django
            # installation, which is probably something like:
            # /usr/lib/python2.6/site-packages/django
            Alias /media/ "/usr/lib64/python2.6/site-packages/django/contrib/admin/media/"
            <Location "/media/">
                SetHandler None
            </Location>

            # The graphite.wsgi file has to be accessible by apache. It won't
            # be visible to clients because of the DocumentRoot though.
            <Directory /opt/graphite/conf/>
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>


  • How to play individual albums in iTunes?

    - by Herb Caudill
    I know of two ways to play one specific album in iTunes:

    1. Do a search that's specific enough to include just that album and no other tracks, then press a "Play album" button. (Doesn't work in cover flow or list view.)
    2. Go to list view, turn on the column browser, make sure "Albums" is showing in View > Column Browser, and double-click an album name.

    These are fine as far as they go, but:

    - Double-clicking an album in cover flow will play the album and then keep going (in alphabetical order). That's no good.
    - In playlists like "Purchased" or "Recently Added", you can either view and play whole albums, or sort by date added; you can't do both.
    - In general, there's no straightforward way to get from a track in a playlist to the whole album it belongs to.

    What I would really, really like would be to right-click on any song or album cover, anywhere, and choose "Play album". While I'm waiting for Apple to add that, any tips for simple album-centric listening?


  • Why can't I mount an image hosted on a read-only HFS+ partition via Boot Camp?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is, or if there's something else involved.

    I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64 bit). The Boot Camp driver package includes file system drivers that enable Windows to access the Mac OS HFS+ partition. It's read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that leads to all sorts of non-solutions on Google; one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine.

    My question now is: why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well, since I was wondering: does the file system of a network share matter? Does the client system need to understand the file system of the share host, or is that abstracted away in SMB?



  • Patch management on multiple systems

    - by Pierre
    I'm in charge of auditing the security configuration of an important farm of Unix servers. So far, I have come up with a way to assess the basic configuration, but not the installed updates. The problem here is that I just can't trust the package management tools on those machines, since some of them have not synced with the repository for a long time (so I can't do a "yum check-update" on Red Hat, for example). Some of those servers are not even connected to the internet and use a company repository. Another problem is that I have multiple target systems: AIX, Debian, CentOS/Red Hat, etc., so the versions may differ (AIX) and the available tools will differ. And, last but not least, I can't install anything on the target systems. So I need to use a script to retrieve the information and either process it directly or save it to be able to process it later on a server (which may happen to run a different distribution than the one on which the information was retrieved). The best ideas I could come up with are:

    - Retrieve the list of installed packages on the machine (dpkg -l, for example, on Debian) and process it on a dedicated server (directly parsing the "Packages" file of the Debian repositories). Still, the problem remains the same for AIX and Red Hat...
    - Use Nessus' scripts to assess vulnerabilities in the installed packages, but I find this a bit dirty.

    Does anyone know a better or more efficient way of doing this? P.S: I already took time to review some answers to similar problems. Unfortunately Chef, Puppet, etc. don't meet the requirements I have to meet.

    Edit: Long story short, I need the list of missing updates on a Unix system, just like MBSA on Windows. I'm not authorized to install anything on these systems, as they're not mine. All I have are scripting languages. Thanks.
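
    For the first idea, the offline comparison is straightforward to sketch. This assumes a captured "dpkg -l" output file and a repository "Packages" index; note that the comparison below is a naive inequality, whereas real Debian version ordering needs dpkg --compare-versions or python-apt:

        # Compare installed package versions against a Debian "Packages" index.
        def parse_dpkg_list(path):
            installed = {}
            for line in open(path):
                if line.startswith("ii"):
                    parts = line.split()
                    installed[parts[1]] = parts[2]
            return installed

        def parse_packages_index(path):
            available, name = {}, None
            for line in open(path):
                if line.startswith("Package:"):
                    name = line.split(":", 1)[1].strip()
                elif line.startswith("Version:") and name:
                    available[name] = line.split(":", 1)[1].strip()
            return available

        installed = parse_dpkg_list("server01-dpkg.txt")  # placeholder capture file
        available = parse_packages_index("Packages")      # repository index file
        for pkg, version in sorted(installed.items()):
            if pkg in available and available[pkg] != version:
                print("%s: installed %s, repo has %s" % (pkg, version, available[pkg]))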


  • Recommended programming language for Linux server management and web UI integration

    - by Brendan Martens
    I am interested in making an in-house web UI to ease some of the management tasks I face in administrating many servers; think Canonical's Landscape. This means doing things like applying package updates simultaneously across servers, perhaps installing a custom .deb (I use Ubuntu/Debian), reviewing server logs, executing custom scripts, and viewing status information for all my servers. I hope to be able to reuse existing command-line tools instead of rewriting the exact same operations in a different language myself. I really want to develop something that allows me to continue managing at the SSH level but offers the power of a web interface for easily applying the same infrastructure-wide changes; they should not be mutually exclusive. What are some recommended programming languages for doing this kind of development and tying it into a web UI? Why do you recommend the language(s) you do? I am not an experienced programmer, but I view this as an opportunity to scratch some of my own itches as well as become a better programmer. I do not care specifically whether one language is harder than another; I am more interested in picking the best tools for the job from the beginning. Feel free to recommend any existing projects except Landscape (not free), Ebox (not entirely free, and more than I am looking for), and Webmin (I don't like it; it feels clunky and does not integrate well with the "debian way" of maintaining a server, IMO). Thanks for any ideas!
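
    As one hedged illustration of the "reuse existing command-line tools" idea (not a verdict on the language question): in Python, a micro-framework such as Flask can wrap an existing CLI tool in a web endpoint in a few lines. Everything here, including the apt invocation, is an assumption for the sketch:

        # Expose "apt-get -s upgrade" (a dry run) as a status page.
        import subprocess
        from flask import Flask

        app = Flask(__name__)

        @app.route("/updates")
        def pending_updates():
            out = subprocess.check_output(["apt-get", "-s", "upgrade"])
            return "<pre>%s</pre>" % out.decode("utf-8", "replace")

        if __name__ == "__main__":
            app.run(host="127.0.0.1", port=8080)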

