Search Results

Search found 15651 results on 627 pages for 'setup'.


  • SharePoint 2010 and Samba LDAP groups

    - by Jon Rhoades
    The setup: Windows 2008 SP2, SharePoint 2010 Foundation, Samba 3 "domain". I'm trying to use the Samba LDAP users and groups we already have to control access to SharePoint. I can successfully authenticate using the Samba accounts (getting the "Error: Access Denied" message, as the user has no permissions), so SharePoint can clearly see and use the existing accounts/groups. What I can't do is authorise them: in the grant-permissions interface, SharePoint fails to match the account (I get a "No exact match found..." message). Is there a way of getting the SharePoint permissions interface to recognise and use our existing Samba LDAP accounts? And yes, I get it - don't use Samba, use AD. If I had that option I would, but I don't.
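
    One way to confirm the accounts and groups really are reachable over LDAP is to query Samba's directory directly. This is only a sketch; the server name, base DN and uid below are placeholders:

      # Anonymous simple-bind search for a user entry and for the groups
      # (add -D <binddn> -W if the directory requires authentication)
      ldapsearch -x -H ldap://samba-server -b "dc=example,dc=com" "(uid=jsmith)" cn uid
      ldapsearch -x -H ldap://samba-server -b "dc=example,dc=com" "(objectClass=posixGroup)" cn memberUid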

  • Automatic TRIM vs. manual TRIM

    - by Eike Cochu
    I am currently trying to find out how to TRIM my new ThinkPad and was wondering about the difference between manual and online trimming. Here is my setup: a ThinkPad T430s with a Samsung 830 SSD (128 GB) and Xubuntu 12.10. Here is some output to check whether TRIM will work on my system (taken from http://wiki.ubuntuusers.de/SSD/TRIM):

      root@eike-tp:~# sudo hdparm -I /dev/sda | grep -i TRIM
             * Data Set Management TRIM supported (limit 8 blocks)

    First, I tried online trimming (see "How to enable TRIM?"); my fstab with discard inserted:

      UUID=d6c49c17-a4f1-466c-9f7e-896c20db3bba /    ext4  discard,noatime,errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=a0322f5f-c6c1-4896-863f-668f0638d8cf none swap  sw 0 0
      tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0

    I tried to test whether it works (but I don't get any zeroes when I try it on /dev/sda), and found out that this test method only applies to type-2 SSDs, while I seem to have a type-3 one, so I don't know whether it works or not. The Ubuntu wiki (first link) recommends manual trimming, so I set up a daily cron job instead of discard:

      #!/bin/sh
      LOG=/var/log/batched_discard.log
      echo "*** $(date -R) ***" >> $LOG
      fstrim -v / >> $LOG

    The wiki article suggests running it weekly or daily. Now to my questions: how often does the automated TRIM run, and how often is recommended? Online vs. manual trimming? Thank you for your help.
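
    One rough way to sanity-check that TRIM requests actually reach the drive is to run fstrim twice in a row: on kernels where ext4 tracks already-trimmed block groups, the second run should report far fewer (often zero) bytes trimmed. A sketch, assuming the root filesystem is the one on the SSD:

      # First run discards everything currently free...
      sudo fstrim -v /
      # ...and a second run immediately afterwards should report little or
      # nothing left to trim if the first pass was accepted by the device
      sudo fstrim -v /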

  • Logrotate, is this a proper config for what I want to do?

    - by Felthragar
    I started using logrotate a few days ago on a new server setup (actually three of them). My config is as follows:

      /var/www/mywebsite.com/logs/*.log {
          rotate 14
          daily
          dateext
          compress
          delaycompress
          sharedscripts
          postrotate
              /usr/sbin/apache2ctl graceful > /dev/null
          endscript
      }

    The problem is that this is putting several days of logs into the same file. For example, I currently have a file called access.log-20121005 which contains logs for Oct 3rd, Oct 4th and Oct 5th. Is that proper behaviour? What I want it to do is create one log file for each day and keep 14 days of logs. Any help appreciated, thanks.
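
    When rotation merges days like this, logrotate's own dry-run output usually says why (for instance, if the daily cron job never ran on the missing days). A sketch, assuming the snippet lives in /etc/logrotate.d/mywebsite:

      # Dry run: print what logrotate would do, and why, without touching any files
      sudo logrotate -d /etc/logrotate.d/mywebsite
      # Force a rotation now to confirm the dateext naming behaves as expected
      sudo logrotate -f /etc/logrotate.d/mywebsite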

  • Connecting FreeNAS 8 to Mac OS X Lion LDAP Server

    - by Absolution
    I currently have Mac OS X Lion Server running on a Mac mini and want to use it purely as an LDAP server for FreeNAS 8 authentication. I have FreeNAS set up and running on a VM, with all features working correctly and as expected; however, I cannot connect to my LDAP server (the Mac mini). Error message:

      nss_ldap: could not search LDAP server - server is unavailable

    For the LDAP service settings in FreeNAS, I know my Hostname and Base DN are correct (exact copies of what I set originally, and of what is shown in Server's Open Directory overview), but I am unsure what to enter for the Root bind DN, password and suffixes. I have researched where I can find these out, and other than following the FreeNAS examples there appears to be a way to view them in Server's Workgroup Manager specific to my settings - however, this function is unavailable to me and cannot be 'ticked' for some strange reason. Some forums explain that the Root bind DN should be uid=admin,dc=... and others cn=admin,dc=... - I'm rather confused and would appreciate your help or advice with this.
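
    The directory's naming context (and hence a plausible shape for the bind DN) can usually be read straight from the server with an anonymous rootDSE query. This is just a sketch; the hostname and DNs below are placeholders:

      # Ask the Lion server which base DNs it serves
      ldapsearch -x -h macmini.example.com -b "" -s base namingContexts
      # Then test a bind DN guess (Open Directory admin accounts typically
      # live under cn=users); -W prompts for the password
      ldapsearch -x -h macmini.example.com -D "uid=diradmin,cn=users,dc=example,dc=com" -W -b "dc=example,dc=com" "(uid=*)" dn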

  • Slow performance over network (Ubuntu)

    - by Filipe Santos
    I set up this Node.js TCP server and tested it with a message flooder, just to see what the performance of the server is like. While the message throughput is great if I run the server and the message flooder on the same computer (Ubuntu), the throughput decreases dramatically if I start the server on computer1 (Ubuntu) and the message flooder on computer2 (also Ubuntu). Both PCs are on the same network; in fact, they are directly connected to each other. I started searching the internet for reasons, and I suppose I need to tune TCP on both Ubuntu PCs, but so far I haven't been successful at all. Has anyone experienced such problems, or could someone help me out? Thanks. Here is the flooding code:

      var net = require('net')
      var client = net.createConnection(5000, "10.0.0.2")
      client.addListener("connect", function() {
          for (var i = 0; i < 1000; i++) {
              client.write("message ");
          }
      })
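
    Before tuning TCP it is worth measuring what the link itself can do, so the Node process can be ruled in or out; note also that a flood of small write() calls behaves differently over a real NIC than over loopback (Nagle batching affects per-message latency, and loopback is far faster than any physical link). A quick raw-throughput baseline with iperf, assuming it is installed on both machines:

      # On computer1 (the server end):
      iperf -s
      # On computer2 (the flooder end): a 10-second TCP throughput test
      iperf -c 10.0.0.2 -t 10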

  • Nginx Forward SSL for single site

    - by Will.brown
    I have an nginx server set up and it works fine for HTTP; however, I would like to bypass the proxy for HTTPS connections. I want it so that when someone goes to https://ip1 (the nginx server), nginx is bypassed and all the traffic is forwarded to https://ip2 (the web server). I do not need nginx to do this for every SSL website, just one particular website. So: client to https://ip1, to https://ip2, back to https://ip1, back to the client PC. I just want nginx to not intercept the connection, to forward it on, and on the return trip to forward the connection back to the client. I'm guessing I do this with NAT masquerading, but I'm not exactly sure how to do it, or whether I will need to tell nginx to ignore SSL as well. Can someone help me, please? This has me stuck.
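
    If the HTTPS packets never reach nginx, nginx needs no SSL configuration at all. The NAT approach hinted at above could be sketched with iptables like this (ip2 stands in for the web server's address; this is an untested outline):

      # Forwarding must be enabled for the box to relay packets
      echo 1 > /proc/sys/net/ipv4/ip_forward
      # Send incoming HTTPS to the web server instead of the local nginx...
      iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination ip2
      # ...and masquerade so the replies come back through this machine
      iptables -t nat -A POSTROUTING -p tcp -d ip2 --dport 443 -j MASQUERADE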

  • Ubuntu Wireless card not found

    - by user32121
    Hello all, I'm trying to set up Ubuntu on an old PC. I ran lspci in order to get info on the wireless card installed:

      00:0a.0 Ethernet controller: Marvell Technology Group Ltd. 88w8335 [Libertas] 802.11b/g Wireless (rev 03)

    Does anyone know where I can get drivers for this card/chip? And also, how do I install them (I'm rather new to Linux)? I'm sure the card is working, because I just formatted over XP and everything was working well there. Thanks in advance!
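
    The 88w8335 has historically lacked a solid native Linux driver, so one common route is ndiswrapper with the card's Windows XP driver. A sketch of that approach (the package name may differ by release, and the .inf file name is a placeholder from the Windows driver package):

      sudo apt-get install ndiswrapper-utils-1.9   # the ndiswrapper userland tools
      sudo ndiswrapper -i mrv8335.inf              # install the Windows driver
      ndiswrapper -l                               # should report: driver installed, hardware present
      sudo modprobe ndiswrapper                    # load the kernel module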

  • Set up SSL/HTTPS in a Zend application via .htaccess

    - by davykiash
    I have been battling with .htaccess rules to get my SSL setup working right for the past few days. I get a "requested URL not found" error whenever I try to access any request that does not go through the index controller. For example, this URL works fine if I enter it manually: https://www.example.com/index.php/auth/register. However, my application has been built in such a way that the URL should be https://www.example.com/auth/register, and that gives the "requested URL not found" error. My other URLs, such as https://www.example.com/index/faq, https://www.example.com/index/blog and https://www.example.com/index/terms, work just fine. What rule do I need to write in my .htaccess to get the URL https://www.example.com/auth/register working? My .htaccess file looks like this:

      RewriteEngine On
      RewriteCond %{HTTPS} off
      RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L]
      RewriteCond %{REQUEST_FILENAME} -s [OR]
      RewriteCond %{REQUEST_FILENAME} -l [OR]
      RewriteCond %{REQUEST_FILENAME} -d
      RewriteRule ^.*$ - [NC,L]
      RewriteRule ^.*$ index.php [NC,L]

    I posted an almost similar question on Stack Overflow.

  • Monitoring bandwidth/latency/jitter between 2 sites?

    - by TheCleaner
    I have 2 sites connected via an MPLS network and I'd like to do the following: set up a host on each end that can "talk" back and forth to the other and somehow report/log what kind of throughput, jitter, latency, etc. they are experiencing between each other, in 5-minute intervals - something similar to Qcheck, but that can be automated. The bottom line is that I'm trying to determine whether the WAN is "stable" throughout the day or whether something is wrong. We have video conferences between these sites, and even on 1024 kbps calls we are experiencing delays and jitter. I'm hoping to exonerate the network with some testing.
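
    Absent a dedicated tool, a rough version of the 5-minute probe can be scripted around ping (latency/jitter/loss) and iperf (throughput) from cron. A sketch; the far-end host, paths and cron entry are placeholders, and an iperf -s listener is assumed at the far end:

      #!/bin/sh
      # cron entry: */5 * * * * root /usr/local/bin/wanprobe
      LOG=/var/log/wanprobe.log
      echo "*** $(date -R) ***" >> $LOG
      # 20 quiet pings: the summary line includes min/avg/max/mdev (mdev ~ jitter)
      ping -c 20 -q 10.1.1.1 >> $LOG 2>&1
      # 10-second TCP throughput test against the far-end iperf listener
      iperf -c 10.1.1.1 -t 10 >> $LOG 2>&1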

  • ColdFusion multiserver instance hangs

    - by David Sedeño
    I have a ColdFusion 8 multiserver setup with IIS on Windows 2008 Standard SP2, and when one instance "hangs" (I can't connect to that instance from FusionReactor) the web server throws a "503 Service Unavailable". The remaining instance seems to work OK in FusionReactor, but the website only serves the 503. I have to restart the JVM processes and IIS to get the website working again. The JVM processes have the option -Xmx2048m, and the instances have 2.5 GB allocated. Maybe a JVM process reaches the 2 GB limit and stops working? Could it be a problem between IIS and the CF instances? I'm new to CF debugging - how can I find out why the instance hangs? Thanks.

  • Hyperic HQ says the server is down, but it is not!

    - by Diego Jancic
    Hi, I've been using HQ for a couple of months now, and everything worked fine. But since yesterday, all resources go down for a couple of hours, then everything returns to normal, and then they go down again without my doing anything. The server, of course, is working; the HQ server and agent are both running, and the IPs were not modified. I've tried re-running the setup in the HQ agent, and it did not change anything. The agent is on Windows 2008 and the server is on Windows 2003. I'm using HQ version 4.1.2 (build #1053 - May 06, 2009 - release build). Any hint? Thanks! Update: I guess (although I'm not sure) it stopped working when the disk on the server filled up, with 0 bytes of free space. Of course, I've since freed more than 15 GB and restarted the HQ server/database.

  • Org-mode lags in highlighting source

    - by quanticle
    I'm using org-mode to maintain my programming notes. This means I have lots of source code blocks, as follows:

      #+begin_src <language name>
      <code>
      #+end_src

    One thing I've noticed is that when I write the #+end_src, emacs doesn't color the source code as such. Yet if I quit emacs and reopen the notes file (or force a refresh with the Org -> Refresh/Reload -> Refresh setup current buffer menu entry), the source is colored grey if I'm using the GUI, or green if I'm using emacs in the terminal. Is this an inherent limitation of emacs, or am I doing something wrong in setting up my code blocks that's preventing emacs from going back and recoloring the source code that I've entered?

  • Can't access Samba shares on Ubuntu Server from other computers

    - by larand
    I installed Ubuntu Server 12.04 and configured /etc/samba/smb.conf as:

      #======================= Global Settings =======================
      [global]
         workgroup = HEMMA
         server string = %h server (Samba, Ubuntu)
         security = user
         wins support = yes
         dns proxy = no
         log file = /var/log/samba/log.%m
         max log size = 1000
         syslog = 0
         panic action = /usr/share/samba/panic-action %d
         encrypt passwords = no
         passdb backend = tdbsam
         obey pam restrictions = yes
         unix password sync = yes
         passwd program = /usr/bin/passwd %u
         passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
         pam password change = yes
         map to guest = bad user

      ############ Misc ############
         usershare allow guests = yes

      #======================= Share Definitions =======================
      [printers]
         comment = All Printers
         browseable = no
         path = /var/spool/samba
         printable = yes
         guest ok = no
         read only = yes
         create mask = 0700

      # Windows clients look for this share name as a source of downloadable
      # printer drivers
      [print$]
         comment = Printer Drivers
         path = /var/lib/samba/printers
         browseable = yes
         read only = yes
         guest ok = no

      [Bilder original]
         comment = Original bilder
         path = /mnt/bilder/org
         browseable = yes
         read only = no
         guest ok = no
         create mask = 0755

      [Bilder publika]
         comment = Bilder för allmän visning
         path = /mnt/bilder/public
         browseable = yes
         read only = yes
         guest ok = yes

      [Musik]
         comment = Musik
         path = /mnt/music/public
         browseable = yes
         read only = yes
         guest ok = yes

    I have a network set up around a Huawei B593 4G router, where some computers are connected by WiFi and others by LAN; the server is connected by LAN. From one computer running Windows XP I can see the shares but am not allowed to access them. From another computer on the WiFi net running Windows 7, I cannot see the server at all, yet I can ping it, and I can see SMB-protocol traffic when sniffing with Wireshark. I don't primarily want to use passwords; computers on the LAN and WiFi should be able to connect without any login procedure. I'm sure my config is not sufficient, but I find it hard to understand what I should do - there are lots of descriptions on the net, but most are old and none have been of any help. I'm also confused by the fact that I cannot see the server from my Win7 machine even though it communicates with the Samba server. I would be very happy if anyone could shed some light on this mess.
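
    Two standard checks narrow this kind of problem down quickly: testparm validates the config file, and smbclient shows what an unauthenticated client is actually offered. A sketch; the server hostname is a placeholder:

      # Check smb.conf for syntax/logic problems and print the effective settings
      testparm -s
      # From another machine: list the shares as an unauthenticated guest would see them
      smbclient -L //ubuntu-server -N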

  • Faster (Squid + Apache httpd + Apache Tomcat)

    - by letronje
    We have a production setup with Squid in front (caching images, JS, CSS, etc.), Apache httpd in the middle (prefork + mod_rewrite + mod_jk/AJP + mod_deflate + mod_php for a few PHP pages), and Apache Tomcat 5.5 at the end, serving all the dynamic stuff. What would be the best way to reduce the overhead of having 3 servers in the request path? I'm wondering whether replacing httpd with a faster web server like nginx or lighttpd would help. httpd right now does the job of URL rewriting (for clean URLs), talking to Tomcat (via mod_jk), compressing output (mod_deflate) and serving some low-traffic PHP pages. What would be the ideal replacement for httpd, given that we need these features? Is there a way to replace (Squid + Apache) with a single entity that caches static content well (like Squid), rewrites URLs, compresses responses and forwards dynamic requests directly to Tomcat? I've heard about Varnish Cache and wonder if it can help.
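
    Before swapping components, it helps to measure where the time actually goes, for example by benchmarking each tier directly with ab. A sketch; the URL and the alternate ports for hitting httpd and Tomcat directly are assumptions:

      # Through the full stack (Squid in front)
      ab -n 1000 -c 50 http://www.example.com/somepage
      # Straight at httpd, bypassing Squid (assuming it listens on :8080)
      ab -n 1000 -c 50 http://www.example.com:8080/somepage
      # Straight at Tomcat's HTTP connector (assuming one is enabled on :8081)
      ab -n 1000 -c 50 http://www.example.com:8081/somepage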

  • Setting up DNS in WHM/cPanel

    - by Jon Furmanski
    I don't understand what I'm doing wrong, but I'm sure this is a simple fix. I set up WHM/cPanel for the first time on my VPS and understand how DNS works for the most part (or so I thought). Under the main domain name I created 2 nameservers (ns1.maindomain.com and ns2.maindomain.com). I have 2 IP addresses for my server, so each one points to a unique IP:

      ns1.maindomain.com => 198.x.x.204
      ns2.maindomain.com => 198.x.x.205

    I also set up reverse DNS with my hosting provider. When I put my two nameservers in under another (secondary) domain, GoDaddy states that the nameservers are invalid. Any ideas on why this is, or what configuration in cPanel needs to be made?
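
    A registrar's "invalid nameserver" complaint usually means the nameserver host records (glue) don't resolve yet, which dig shows quickly. A sketch, using the names and elided IPs from the question:

      # Do the nameserver host records resolve to the expected IPs?
      dig +short ns1.maindomain.com
      dig +short ns2.maindomain.com
      # Is the VPS itself answering authoritatively for the secondary domain?
      dig @198.x.x.204 secondarydomain.com SOA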

  • nVidia Driver - Laptop is forced as one of my monitors

    - by vaccano
    I am trying to get my nVidia driver to correctly configure my multi-monitor setup. I have my laptop in a docking station with two monitors hooked up to it. I had an old driver with which this worked correctly; however, that driver was causing a lot of "deferred procedure calls", so I upgraded to a newer driver. But now I am forced to use my laptop's screen as one of my monitors. Here is the image from the nVidia Control Panel: as you can see, both monitors are recognized, but the only options available are to use one of them together with the laptop display. Any ideas? I am running Windows XP (latest updates) and have an nVidia Quadro 1500M. I have tried several different driver versions, and all the new ones have this issue.

  • What data to send when tracking clicks with Google Analytics events (and how)?

    - by user359650
    When tracking clicks on links, there are 3 items I'm interested in:

      - link location in the page, grabbed from the id of the closest parent: to see the influence of location on click-through
      - link text: to see the influence of text on click-through
      - link href attribute value: to see where people go when leaving my website

    The problem when using Google Analytics to track those clicks is that events only have 3 available text fields, one of which is the category; if you use it to store one of the above items, you will create a mess in your event reporting, because you will have as many categories as item values. Therefore, if you assign a predefined value to the category (e.g. clicks), you're left with only 2 event fields (action, label) to store 3 items (location, text, href). That in itself isn't the end of the world, because you can concatenate 2 items into 1 event field and then use the reporting or the API to filter things out. Accordingly, what I plan on doing is this:

      category: clicks
      action:   {location_on_page} ¦ {text}
      label:    {href}

    where {__} are variable values related to the clicked links. With this I can easily create some reports directly via the GUI:

      - downloads: include only events where the label ends with .pdf
      - click-outs to particular domains: include only events where the label contains the domain

    For more complex tasks I need to export the data (or use the API) - e.g. for the influence of location on clicks: for each location in the design, count the number of events that have that location in the action, then corroborate with pageviews of the corresponding pages. Whilst this looks good, I'm wondering if there is a better approach, hence the following questions:

      Q1: Can you foresee any particular issues with this particular setup (e.g. things I won't be able to report on)?
      Q2: Can you think of other data that would be interesting to include in the event?

  • Nginx & PHP-FPM, random 502s

    - by pestaa
      2010/09/19 14:52:07 [error] 1419#0: *10220 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: [...], server: [...], request: "POST /[...] HTTP/1.1", upstream: "fastcgi://unix:/server/php-fpm.sock:", host: "[...]", referrer: "[...]"

    This is the error I'm receiving randomly. 95% of the time my setup works perfectly, but once in a while I get a 502 for 3-4 subsequent requests. I'm using a Unix socket between the server and the PHP process, as you can see, and have set up the FastCGI params (SCRIPT_FILENAME) etc. correctly. What can I do to strengthen the connection between these services? Thank you very much in advance.

  • I am trying to set up an Ubuntu Server 12.04 on my machine

    - by Jseb
    I am trying to set up a server on my home network which will eventually host Rails. I am not great with Linux servers, and I try to follow the prompts. I did successfully get to a black screen which prompts me for a username and then a password, to then do anything (I assume). I roughly followed this tutorial: http://www.ubuntugeek.com/step-by-step-ubuntu-11-04-natty-lamp-server-setup.html (my commands were not 100% like his, and not in the same order, but the same idea). Then I wanted to install Ubuntu Server with a GUI. Here are the commands I tried:

      sudo apt-get upgrade
      sudo apt-get install ubuntu-desktop

    The first, however, gives errors of the form "Err http... InRelease" and "W: Failed to fetch ht..." and so is ignored. If I try the desktop one, I get:

      E: Unable to locate package ubuntu
      E: Unable to locate package desktop

    So I am assuming I am not connected to the internet, and I try the following command:

      sudo vi /etc/network/interfaces

    Here is what it gives me (and I know the gateway on my laptop is 192.168.1.1):

      address: 192.168.1.148
      netmask: 255.255.255.0
      network: 192.168.1.0
      broadcasts: 192.168.1.255
      gateway: 192.168.1.1

    By the way, I do not know the command to get out of vi and save the file. The errors in full:

      Err http://us.archive.ubuntu.com precise InRelease
      Err http://us.archive.ubuntu.com precise-updates InRelease
      Err http://us.archive.ubuntu.com precise-backports InRelease
      Reading package lists... Done
      W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise/InRelease
      W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-updates/InRelease
      W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/precise-backports/InRelease
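
    (For the record, in vi, typing :wq writes the file and quits.) A minimal static configuration of the kind being attempted might look like the sketch below; the interface name eth0 and the use of the router as DNS are assumptions, and note that the keywords in /etc/network/interfaces take no colons:

      # Write a minimal static network config (run from the shell)
      sudo tee /etc/network/interfaces <<'EOF'
      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet static
          address 192.168.1.148
          netmask 255.255.255.0
          gateway 192.168.1.1
          dns-nameservers 192.168.1.1
      EOF
      # Bounce the interface with the new settings, then test connectivity
      sudo ifdown eth0 ; sudo ifup eth0
      ping -c 3 192.168.1.1
      ping -c 3 us.archive.ubuntu.com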

  • Need help with exploring a USB drive

    - by Bob Getsla
    When I plug in a USB drive, I see it at the left-hand edge of the Unity desktop (11.10, 64-bit), but when I try to explore it, VLC starts and tries to play whatever it can find on the drive. This behavior began when I updated from 11.04 to 11.10. I literally cannot look at the contents of any of the USB drives I have, because I cannot stop VLC, nor can I do anything when I click Open other than watch VLC start up. This is very frustrating, because it makes my USB sticks essentially useless. Help! I'm sure there is something a wizard could do about this, but I am not a wizard, and I am at my wits' end. Getting to the System Settings menu works, and I can see the setup for "Removable" devices; they are all set to "Ask", but that is clearly not what is happening. So it looks like I must reach for the command line - but where do I go to find the settings for what the desktop does when I plug in a USB drive and wish to explore the file structure on it, and possibly copy a file to or from the drive? Right now, VLC media player always gets in my way. :-(

  • Browser with its own hosts file?

    - by Mystere Man
    I have a number of staging and test servers that I constantly need to modify my hosts file to access (they depend on the domain name, so I have to change the hosts file to get them to work). I find this annoying. I'd like to set up a portable browser of some kind for each kind of site I want to work with. Is there any version of any graphical web browser (including browsers based on the rendering engines of other browsers) that will do this - use its own hosts file? That way I could simply launch the instance that's already configured to work with staging whenever I want to test staging. Any ideas?

  • CentOS - dual boot from new partition

    - by Dima
    I need to install two copies of CentOS 5.5 (bank A and bank B) on different partitions of the same hard disk, and install the grub boot loader on another partition (visible from both banks). The boot loader should redirect the boot menu to bank A or bank B (according to the configuration). The new partition is mounted at /common_partition, and grub is installed using the following command:

      grub-install /dev/hda

    On the new partition I created the following menu.lst file:

      title BOOTCONTROL REDIRECT : PLEASE WAIT
      root (hd0,1)
      configfile /boot/menu.lst
      boot

    In my setup both partitions (bank A and bank B) are primary, and grub is installed in the MBR. The problem is that the new boot loader (on common_partition) did not load. What is wrong with my configuration?
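
    With grub legacy (as on CentOS 5.5), the stage1 written to the MBR loads its stage2 and menu.lst from whichever /boot/grub directory was used at install time, so the install command has to be pointed at the shared partition explicitly. A sketch, assuming the shared partition is mounted at /common_partition and carries a boot/grub directory:

      # Embed stage1 in the MBR of /dev/hda, but have it read stage2 and
      # menu.lst from /common_partition/boot/grub rather than a bank's /boot
      grub-install --root-directory=/common_partition /dev/hda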

  • Routing a url to fetch content from another site

    - by Abhishek
    Environment: IIS 7. I have a default site, www.domain.com, whose folder is C:\Inetpub\wwwroot\domain. There is a subdomain, www.subdomain.domain.com, whose folder is C:\Inetpub\wwwroot\domain\subdomain. Now I have set up a new website on an external server. I cannot put its content on the above server, for various reasons. I need the URL www.subdomain.domain.com/blog to fetch content from this external server while the URL remains the same. How can this be achieved in IIS 7?

  • Swapping a 3ware RAID Card - Best Procedure

    - by Brian Lee Jackson
    We have a server running an old firmware version and driver (I just took over this position) on its 3ware 9650SE RAID controller. We have been having issues with the server and seem to have narrowed them down to the RAID card. I will be replacing it with the same model, a 3ware 9650SE; however, the card ordered will most likely have newer firmware on it. I have managed to back up all the data to a very large drive. My plan is to update the firmware/driver on the current setup (which still boots) and verify that everything works, then swap in the new RAID card, check its firmware version (not letting it POST), and update it to the newest firmware if needed via the Java management utility for the card. Is this the best route? Thanks!
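
    3ware's command-line utility can read the firmware level without a reboot, which makes comparing the old and new cards easy. A sketch, assuming tw_cli is installed and the card is controller 0:

      # Show controller details; the output includes the firmware (FW) version
      tw_cli /c0 show all
      # List units and drives to confirm the array is seen after the swap
      tw_cli /c0 show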

  • Strange performance from RAID5 using WD RE4 disks

    - by Howard
    I've noticed a performance issue with some WD RE4 drives I'm using under AMD's hardware RAID solution. First, a bit of background. Environment:

      OS: Windows 7 Home Premium x64
      HDDs: 3x 1 TB WD RAID Edition 4, in a RAID 5 setup with a 128 KB stripe (2 TB usable space)
      Testing tool: HD Tune, process set to "High Priority"
      Processor: AMD Phenom II X6 1100T
      RAM: 16 GB DDR3/1600 MHz
      Motherboard: MSI 970A-G45

    The image below pretty much depicts the issue I'm having: every test shows the same thing, a period of similar length where performance drops to a few megabytes a second. This can't be a TLER issue, as the whole purpose of the RE4 series is to work around that. Any help would be greatly appreciated.
