Search Results

Search found 924 results on 37 pages for 'patrick olurotimi ige'.

Page 11/37

  • Is having a [high-end] video card important on a server?

    - by Patrick
    My application is quite interactive, with lots of colors and drag-and-drop functionality, but no fancy 3D effects, animations or video, so I only used plain GDI (no GDI+, no DirectX). In the past my applications ran on desktops or laptops, and I advised my customers to invest in a decent video card with: a minimum resolution of 1280x1024, a minimum color depth of 24 bits, and X megabytes of memory on the video card. Now my users are switching more and more to terminal servers, hence my questions: What is the importance of a video card on a terminal server? Is a video card needed on the terminal server at all? If it is, is the resolution of the remote desktop client limited to the resolutions supported by the video card on the server? Can the choice of video card in the server influence the performance of the applications running on the terminal server (but shown on a desktop PC)? If I start to make use of graphical libraries (like Qt) or things like DirectX, will this influence the choice of video card on the terminal server? Are calculations in that case 'offloaded' to the video card, even on the terminal server? Thanks.

    Read the article

  • Automate the backup of my databases and files with cron

    - by Patrick
    Hi, I want to automate the backup of my databases and files with cron. Should I add the following lines to crontab?

        mysqldump -u root -pPASSWORD database_name | gzip > /home/backup/database_`date +\%m-\%d-\%Y`.sql.gz
        svn commit -m "Committing the working copy containing the database dump"

    First of all, is this a good approach? It is also not clear how to specify the repository and the working copy for svn. And how can I run svn only once the mysqldump has finished, and not before, so that conflicts are avoided?
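
    A minimal sketch of one way to sequence the two steps, assuming the dump is written into an svn working copy (the script path, working-copy path, database name and schedule below are placeholders, not taken from the question): put both commands in a small script so svn only commits after the dump has been produced, then call the script from crontab. Keeping the date call inside the script also avoids the crontab requirement of escaping % signs.

        #!/bin/bash
        # Hypothetical /home/backup/db-backup.sh: dump first, commit only if the dump worked.
        set -euo pipefail
        WC=/home/backup/workingcopy                    # svn working copy (placeholder path)
        DUMP="$WC/database_$(date +%m-%d-%Y).sql.gz"

        mysqldump -u root -pPASSWORD database_name | gzip > "$DUMP"
        svn add --force "$DUMP"                        # no-op if the file is already versioned
        svn commit -m "Automated database dump $(date +%F)" "$WC"

        # crontab entry (crontab -e), e.g. every night at 02:30:
        # 30 2 * * * /home/backup/db-backup.sh >> /var/log/db-backup.log 2>&1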

    Read the article

  • Setting up phpMyAdmin with nginx on Ubuntu 11.04

    - by Patrick
    I have nginx and php5-fpm running on Ubuntu 11.04. I have installed phpMyAdmin but I'm having trouble accessing it. I would like to reach it via http://localhost/phpmyadmin. I've used all the default locations for the nginx, php5 and phpMyAdmin installs. The blog guide I'm following directs me to use the block below, but I'm not sure what to change to make it point where I want it to.

        server {
            listen 80;
            server_name php.example.com;  # <- I know I need to edit this, but not sure to what.
            access_log /var/log/nginx/localhost.access.log;
            root /usr/share/phpmyadmin;
            index index.php;

            location / {
                try_files $uri $uri/ @phpmyadmin;
            }

            location @phpmyadmin {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin/index.php;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_NAME /index.php;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
                include fastcgi_params;
            }
        }
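
    A minimal sketch of one way to get http://localhost/phpmyadmin working under the existing default site instead of a separate server_name, assuming phpMyAdmin lives in /usr/share/phpmyadmin and php5-fpm listens on 127.0.0.1:9000 as in the question. Add this inside the default server block, then reload nginx (nginx -s reload or service nginx reload).

        location /phpmyadmin {
            root /usr/share/;            # so /phpmyadmin maps to /usr/share/phpmyadmin
            index index.php index.html;

            location ~ ^/phpmyadmin/(.+\.php)$ {
                root /usr/share/;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            location ~* ^/phpmyadmin/(.+\.(css|js|png|jpg|jpeg|gif|ico|html))$ {
                root /usr/share/;
            }
        }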

    Read the article

  • Prevent users on Router 2 from seeing Router 1 computers

    - by Patrick Robert Shea O'Connor
    I've got two Netgear N300 (WNR2000v3) routers. Here's my setup: Modem -> Router 1 (private users) / Router 2 (public wireless users on the "Guest" network). I want to prevent users connected to Router 2's "Guest" network from accessing anything connected to Router 1. When setting up the "Guest" network there is an option called "Allow guest to access My Local Network", which I thought, if unchecked, would do exactly that; however, I can still access files and such on computers connected to Router 1. Router 1 assigns 192.0.0.x IP addresses and Router 2 assigns 10.0.0.x addresses, so how can they even see each other? Do I need to change the subnet or something else?

    Read the article

  • mount dev, proc, sys in a chroot environment?

    - by Patrick
    I'm trying to create a Linux image with hand-picked packages. I followed the guide at http://www.olpcnews.com/forum/index.php?topic=4766.0; however, when I tried to install some packages, configuration failed because the proc, sys and dev directories were missing. I learned elsewhere that I need to "mount" the host's proc (and related) directories into my chroot environment, but I have seen two syntaxes and am not sure which one to use. On the host machine: mount --bind /proc <chroot dir>/proc, and another syntax (inside the chroot environment): mount -t proc none /proc. Which one should I use, and what is the difference? Edit: what I'm trying to do is hand-craft the packages I'm going to use on an XO laptop, because compiling packages takes a really long time on the real XO hardware; if I can build all the packages I need and just flash the image to the XO, I save time and space.
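
    For reference, a minimal sketch of the host-side approach ($CHROOT is a placeholder for the image directory). Bind-mounting reuses the host's live /dev, /proc and /sys inside the chroot, whereas running mount -t proc inside the chroot creates a fresh procfs instance; either way, package scripts get the /proc they expect.

        # on the host, before chrooting:
        CHROOT=/path/to/chroot            # placeholder
        mount --bind /dev     "$CHROOT/dev"
        mount --bind /dev/pts "$CHROOT/dev/pts"
        mount -t proc  proc   "$CHROOT/proc"
        mount -t sysfs sysfs  "$CHROOT/sys"
        chroot "$CHROOT" /bin/bash

        # when done, unmount again:
        umount "$CHROOT/dev/pts" "$CHROOT/proc" "$CHROOT/sys" "$CHROOT/dev"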

    Read the article

  • Exclude minify from CSF/LFD

    - by Patrick Lanfranco
    I have installed minify on one of my websites, but I am now getting hammered with emails from CSF/LFD. Example:

        Time:   Fri Aug 10 13:10:03 2012 +0700
        File:   /tmp/minify_builder,index.php_f516d1c7cae9c3881406fd9a0ce69c38
        Reason: Script, file extension
        Owner:  -:- (504:501)
        Action: No action taken

    What is the best way to have these ignored inside CSF? Some advice would be highly appreciated. Thank you very much.
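
    A sketch of the usual approach, assuming these alerts come from lfd's directory watching of /tmp: add a pattern covering minify's temp files to /etc/csf/csf.fignore and restart lfd. The exact regex below is an assumption based on the file name in the alert above.

        # /etc/csf/csf.fignore - files that lfd directory watching should ignore
        /tmp/minify_builder.*

        # then restart lfd so the change takes effect:
        service lfd restart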

    Read the article

  • Is Winpcap able to capture all packets going through a Gigabit NIC without missing any packets?

    - by Patrick L
    I want to use Winpcap to capture all network packets going through a Gigabit NIC on a server. Assuming I am able to utilize the network link at 100%, the maximum network speed is 1000 Mbps; if we exclude the TCP/IP headers, the maximum TCP data rate should be roughly 940 Mbps. Let's say I send a 1 GB file through the NIC at 940 Mbps using TCP destination port 6000, use Winpcap to capture all network packets going through the NIC, and dump them to a pcap file. If I then use Wireshark to analyze the pcap file and sum the packet sizes of all packets sent to TCP port 6000, will I get exactly 1 GB from the pcap file? Thanks.

    Read the article

  • MacBook Air Keeps dropping Wi-Fi

    - by Robert Patrick
    So my MacBook Air keeps dropping Wi-Fi for some reason. It happens ONLY on my home network, and ONLY to my computer. I'm using a Linksys WRT54G router. I'm the only Mac on the network. Every other Wi-Fi network is perfectly fine, and every other computer on this network is fine. Many things can happen. It could say it's connected, but not be able to access the internet (whether it tells me that there's no internet access or not). It may just drop Wi-Fi altogether, and refuse to connect. Generally, if I unplug the router and plug it back in, it's all good. It also works if I restart my computer. This happens multiple times a day. Yesterday I did everything I know to get it to connect (restart router many times, restart my MacBook), and nothing worked. Eventually it just magically worked. How can I stop this from happening? We got a notice from Comcast a while ago saying that a bot called DNS Changer was detected on one or more machines on the network. I'm assuming that this can't be me, right?

    Read the article

  • Exchange 2010 issuing NDRs to Hotmail/Live & a few other domains on receipt of messages

    - by John Patrick Dandison
    I'm working through a beast of an issue at the moment:

        Exchange 2010, single server, on prem
        Hybrid deployment to Office 365
        ESMTP filtering turned off on the ASA
        Certain domains (most consistently, Hotmail/Live) cannot send us mail.

    At one point we couldn't send out either, but I created a new Send Connector that forces HELO instead of EHLO. I turned on SMTP logging; an example of the failed inbound message connection is below. I've read that it could be that reverse DNS is the problem, i.e. the Exchange banner SMTP address needs to reverse-DNS back to the same IP. Since it's the default Exchange connector, its banner is the server's name, but the DNS name of the MX record is different. I'm waiting for the PTR records to update to reflect the internal name as well. Is that the right direction? Is this all DNS or something different? SMTP session log (single failed session for illustration):

        SMTPSubmit SMTPAcceptAnySender SMTPAcceptAuthoritativeDomainSender AcceptRoutingHeaders
        220 ExchangeServerName.internalSubDomain.example.com Microsoft ESMTP MAIL Service ready at Mon, 15 Oct 2012 09:57:24 -0400
        EHLO col0-omc3-s4.col0.hotmail.com
        250-ExchangeServerName.internalSubDomain.example.com Hello [65.55.34.142]
        250-SIZE
        250-PIPELINING
        250-DSN
        250-ENHANCEDSTATUSCODES
        250-STARTTLS
        250-X-ANONYMOUSTLS
        250-AUTH NTLM LOGIN
        250-X-EXPS GSSAPI NTLM
        250-8BITMIME
        250-BINARYMIME
        250-CHUNKING
        250-XEXCH50
        250-XRDST
        250 XSHADOW
        MAIL FROM:<[email protected]> 08CF5268DABBD9AA;2012-10-15T13:57:24.564Z;1
        250 2.1.0 Sender OK
        RCPT TO:<[email protected]>
        250 2.1.5 Recipient OK
        XXXX 1282 LAST
        Tarpit for '0.00:00:05'
        500 5.3.3 Unrecognized command
        XXXXXXXXX from COL002-W38 ([65.55.34.135]) by col0-omc3-s4.col0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4675);
        Tarpit for '0.00:00:05'
        500 5.3.3 Unrecognized command
        " XXXX 15 Oct 2012 06:57:24 -0700"
        Tarpit for '0.00:00:05'
        500 5.3.3 Unrecognized command
        XXXXXXXXXXX <[email protected]>
        Tarpit for '0.00:00:05'
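
    A few quick checks that usually settle the DNS theory, run from any outside host (the IP and mail host name below are placeholders, substitute the server's public address and MX name). The goal is that the MX name, its A record, the PTR record and the name in the 220 banner all line up; if the banner itself has to change, the receive connector's Fqdn property controls it, but review the implications for the Default connector before touching it.

        nslookup -type=MX example.com     # which host name is supposed to receive your mail
        nslookup mail.example.com         # A record for that name
        nslookup 203.0.113.25             # PTR: should come back to the same name (placeholder IP)
        telnet mail.example.com 25        # read the 220 banner Exchange actually presents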

    Read the article

  • PowerShell - how to run multiple actions on a Get-ADUser "dataset"

    - by Patrick Pellegrino
    I'm trying to run a script that resets the password for multiple AD user accounts, enables the accounts and forces a password change at next logon. I use this code, but it doesn't work:

        Get-ADUser -Filter * -SearchScope Subtree -SearchBase "OU=myou,OU=otherou,DC=mydc,DC=local" |
            Set-ADAccountPassword -Reset -NewPassword (ConvertTo-SecureString -AsPlainText "NewPassord" -Force) |
            Enable-ADAccount |
            Set-ADUser -ChangePasswordAtLogon $true

    If I run the Get-ADUser line with only ONE of the other commands, it runs fine, e.g.:

        Get-ADUser -Filter * -SearchScope Subtree -SearchBase "OU=myou,OU=otherou,DC=mydc,DC=local" | Enable-ADAccount

    Where am I going wrong? I'm new to PowerShell, so I'm probably misunderstanding something.
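
    A sketch of one likely fix, on the assumption that the pipeline itself is the problem: Set-ADAccountPassword and Enable-ADAccount emit nothing by default, so the later stages never receive a user object. Looping over the results (or adding -PassThru to each cmdlet) keeps every account flowing through all three steps; the OU path and password are the ones from the question.

        $newPassword = ConvertTo-SecureString -AsPlainText "NewPassord" -Force

        Get-ADUser -Filter * -SearchScope Subtree -SearchBase "OU=myou,OU=otherou,DC=mydc,DC=local" |
            ForEach-Object {
                Set-ADAccountPassword -Identity $_ -Reset -NewPassword $newPassword
                Enable-ADAccount      -Identity $_
                Set-ADUser            -Identity $_ -ChangePasswordAtLogon $true
            }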

    Read the article

  • How to create an init.d script for openssh-server that was compiled and installed from source using configure + make + make install?

    - by Patrick L
    I installed openssh-server on my Ubuntu PC using apt-get install openssh-server; the version is 5.9. Now I would like to compile and install openssh-server version 6.2 from source. I have successfully downloaded the source code and run the following commands: ./configure, make, make install. The new version was installed into /usr/local/sbin/, while the old version is in /usr/sbin/, and the service script in /etc/init.d/ssh still points to /usr/sbin/, so the old openssh-server (v5.9) is still running. How can I replace the old openssh-server with the new one I have just compiled and installed? How can I create an init.d script to start and stop the new openssh-server that I compiled from source? How do I start the new openssh-server on boot? When I install openssh-server using apt-get, the config files go into /etc/ssh/; if I compile and install from source, where is the config file? And if I compile openssh-server from source but install the openssh-client package with apt-get, will there be any config file conflicts? Thanks.
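
    A minimal sketch of an init.d wrapper for the source build, assuming the default ./configure prefix of /usr/local, which also puts the config at /usr/local/etc/sshd_config unless --sysconfdir was given (that answers the config-file question); the script name ssh-local and the PID file path are hypothetical. Save it as /etc/init.d/ssh-local, make it executable, and register it with update-rc.d ssh-local defaults so it starts on boot.

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          ssh-local
        # Required-Start:    $network $remote_fs
        # Required-Stop:     $network $remote_fs
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: OpenSSH server built from source under /usr/local
        ### END INIT INFO

        SSHD=/usr/local/sbin/sshd
        CONF=/usr/local/etc/sshd_config
        PIDFILE=/var/run/sshd-local.pid

        case "$1" in
          start)   "$SSHD" -f "$CONF" -o "PidFile=$PIDFILE" ;;
          stop)    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" ;;
          restart) "$0" stop; sleep 1; "$0" start ;;
          *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
        esac

    Both daemons cannot listen on port 22 at once, so either stop and disable the packaged service first (service ssh stop, then disable it with update-rc.d or an upstart override, depending on the Ubuntu release) or run the new daemon on a different port while testing. The packaged openssh-client only reads /etc/ssh/ssh_config, so it does not conflict with a server config kept under /usr/local/etc.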

    Read the article

  • TCP Window Size vs Socket Buffer Size on Windows

    - by Patrick L
    I am new to Windows networking. When people talk about TCP tuning on the Windows platform, they always mention the TCP window size. I am wondering whether Windows also uses the concept of a "socket buffer size". On Windows XP the TCP window size is fixed, and we can set it using the TcpWindowSize registry value. What about the socket buffer size? How can we set the socket buffer size on Windows, and can we set it to a value different from the TCP window size?
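
    A sketch answering the "how do we set a socket buffer" part: per-socket send and receive buffers are set with setsockopt(SO_SNDBUF / SO_RCVBUF) in Winsock, independently of the global TcpWindowSize registry value; the 256 KB figure below is an arbitrary example, not a recommendation.

        #include <winsock2.h>
        #include <stdio.h>
        #pragma comment(lib, "ws2_32.lib")

        int main(void) {
            WSADATA wsa;
            if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

            SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            if (s == INVALID_SOCKET) { WSACleanup(); return 1; }

            /* Per-socket buffer sizes; distinct from the TcpWindowSize registry value. */
            int bufsize = 256 * 1024;
            setsockopt(s, SOL_SOCKET, SO_RCVBUF, (const char *)&bufsize, sizeof(bufsize));
            setsockopt(s, SOL_SOCKET, SO_SNDBUF, (const char *)&bufsize, sizeof(bufsize));

            /* read one back to confirm what the stack actually applied */
            int rcv = 0, len = sizeof(rcv);
            getsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&rcv, &len);
            printf("SO_RCVBUF is now %d bytes\n", rcv);

            closesocket(s);
            WSACleanup();
            return 0;
        }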

    Read the article

  • Need Varnish configuration advice

    - by Patrick
    Hello fellows, I need some advice here for default.vcl. Here are the rules: 1) only cache pages whose URLs contain '/c/', everything else should pass; 2) set the cache expiry to 3 hours; 3) only cache, and serve from cache, if cookie 'abc' and cookie 'xyz' are empty. Thank you!
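
    A minimal sketch of a default.vcl covering those three rules, written against Varnish 3.x syntax (adjust for other versions); the cookie regex assumes "empty" means the cookie is absent or carries no value.

        sub vcl_recv {
            # rule 1: anything without /c/ in the URL goes straight to the backend
            if (req.url !~ "/c/") {
                return (pass);
            }
            # rule 3: bypass the cache when cookie abc or xyz carries a value
            if (req.http.Cookie ~ "(^|;\s*)(abc|xyz)=[^;]+") {
                return (pass);
            }
            return (lookup);
        }

        sub vcl_fetch {
            # rule 2: cache /c/ pages for three hours
            if (req.url ~ "/c/") {
                set beresp.ttl = 3h;
            }
            return (deliver);
        }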

    Read the article

  • Permissions issues with mounting remote server into a specific folder

    - by Patrick
    I'm doing the following to mount a remote server onto a specific path on my server: sshfs [email protected]:/backup/folder/ /home/myuser/server-backups/ However, when I mount the server the folder permissions change (they become 700), and when I test my rsnapshot.conf file I get the following error: snapshot_root /home/myuser/server-backups/ - snapshot_root exists but is not readable. What am I doing wrong? Should I mount the remote server as another user?
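
    A sketch of mount options that usually resolve this, assuming the error comes from rsnapshot (or root) not being able to read a mount owned by the remote UID: map the remote owner to the local user and, if another local account needs to traverse the mount, add allow_other (which requires user_allow_other to be enabled in /etc/fuse.conf). The host and user below are placeholders for the redacted ones in the question.

        sshfs -o idmap=user,uid=$(id -u),gid=$(id -g),allow_other,default_permissions \
            remoteuser@remotehost:/backup/folder/ /home/myuser/server-backups/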

    Read the article

  • Enable multiple audio output on Windows 7

    - by patrick
    For Windows 7, 64 bit: I have a digital SPDIF output to my stereo, which controls speakers in other rooms. I also have a set of speakers connected to the regular audio jack at the computer. This allows me to send music to the kitchen while my child plays games on the computer. Works great. Except when I'm playing games and still want to listen to music. ;-D I know I can manually switch WMP to play through the speakers instead of SPDIF, but I was wondering if there's any way to enable simultaneous audio out in Windows 7? Virtual Audio Card is a non-starter because I'm running 64 bits and the VAC driver isn't signed.

    Read the article

  • Apache: tmp is not writable

    - by Patrick
    Hi, I've installed Drupal on a new webserver and I get the following errors:

        warning: is_writable() [function.is-writable]: open_basedir restriction in effect. File(/tmp) is not within the allowed path(s): (/customers/rollergirl.ch/rollergirl.ch:/var/www/diagnostics:/usr/share/php) in /customers/rollergirl.ch/rollergirl.ch/httpd.www/drupal/sites/all/modules/imagecache/imagecache.install on line 37.
        ImageCache Temp Directory /tmp is not writeable by the webserver.

    I guess this happens because the server is not configured with a writable tmp folder. I don't have access to the Apache configuration file (I only know for sure that it is Apache). Could you suggest what to do? Or is contacting the web hosting service my only option? Thanks
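
    One workaround that needs no Apache access, sketched on the assumption that /tmp is simply outside the allowed open_basedir paths: create a temp directory inside the site's own tree and point Drupal's temporary-directory setting at it (in Drupal 6, which the imagecache module suggests, that is Administer > Site configuration > File system, backed by the file_directory_temp variable). The path below is assembled from the error message; adjust it to wherever the files directory actually lives.

        # create a writable temp dir inside the allowed open_basedir path
        mkdir -p /customers/rollergirl.ch/rollergirl.ch/httpd.www/drupal/sites/default/files/tmp
        chmod 770 /customers/rollergirl.ch/rollergirl.ch/httpd.www/drupal/sites/default/files/tmp
        # then enter this path as the "Temporary directory" on Drupal's File system settings page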

    Read the article

  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello and thanks for looking. Background: I am wrapping up a development contract and the client would like me to push a build of the application to their IIS 7-based server, on which they want to run multiple MVC apps. One issue right off the bat is that this server is already a subdomain on their larger network: if I enter SERVERNAME in my browser, it automatically resolves to SERVERNAME.COMPANYNAME.COM. That is fine if I place my application in the default website/root; in that scenario, clicking a link that requests admin.html goes to SERVERNAME.COMPANYNAME.COM/admin.html as usual. BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same machine, so I assume I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters, my app and the future ones they wish to install are all MVC-based, which intercepts and rewrites URLs; I assume that takes care of itself once I get the app into a subdomain in the first place. What I have tried: creating a new site on the server in its own app pool, then setting the binding for that site to MYAPP.SERVERNAME.COMPANYNAME.COM, MYAPP, MYAPP.SERVERNAME, MYAPP.SERVERNAME.COM and MYAPP.COMPANYNAME.COM. Nothing is working. Am I missing something simple here? Thanks, Matt
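
    A sketch of the two pieces that usually have to exist together, using the names from the question. First, a DNS record (or, for a quick test, a hosts-file entry) must resolve MYAPP.SERVERNAME.COMPANYNAME.COM to the server's IP, because an IIS host-header binding never matches if the name does not resolve in the first place. Second, the site needs a binding on that host name; appcmd is shown below, though the IIS Manager UI does the same thing. The IP and physical path are hypothetical.

        rem 1. quick test only: hosts-file entry on the client machine
        rem    (the real fix is an A or CNAME record created by the DNS admins)
        rem    203.0.113.10   myapp.servername.companyname.com

        rem 2. create the site with a host-header binding on port 80
        %windir%\system32\inetsrv\appcmd add site /name:"MyApp" ^
            /bindings:http/*:80:myapp.servername.companyname.com ^
            /physicalPath:"C:\inetpub\MyApp"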

    Read the article

  • How to set up NTFS ACLs with Access Based Enumeration

    - by Patrick Pellegrino
    We're in the process of migrating from Novell NetWare to a Windows 2K8 R2 infrastructure (AD, file server, print server, etc.), and my question is about ACLs. Since NetWare and Windows are totally different, I want to be sure my thinking is right before screwing everything up! Here's the scenario:

        F:
        |
        +-- DATA            <= shared as DATA with Access Based Enumeration
            |
            +-- Folder 1
                +-- Team 1's Folder
                +-- Team 2's Folder
                ...

    In that case, by default, rights are inherited from F: down to the deepest folders. What we want: the Administrators group has full control top-down, and under DATA, ABE lists only the folders a user has access to (e.g. if I'm in group Team 2, I only see Team 2's Folder). From what I understand, on DATA I should remove the inherited NTFS ACEs (e.g. the Users group), while keeping the Administrators group and the SYSTEM account, and after that grant Full Control (or whatever right is needed) on each folder to the groups or users that need access. Am I wrong? Is there anything I should take care of? Any help with my understanding will be very much appreciated. Regards.
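
    For illustration only, a sketch of what that looks like with icacls; the group names DOMAIN\Team1 and DOMAIN\Team2 are hypothetical. Note that each team still needs at least Read/List permission on DATA and Folder 1 to browse down to its own folder; ABE then hides the sibling folders they cannot list.

        rem stop inheriting on the share root, keeping copies of the current ACEs
        icacls "F:\DATA" /inheritance:d

        rem drop the broad ACE copied from F:\ (the exact account name may differ)
        icacls "F:\DATA" /remove:g "BUILTIN\Users"

        rem give each team Modify on its own folder only
        icacls "F:\DATA\Folder 1\Team 1's Folder" /grant "DOMAIN\Team1":(OI)(CI)M
        icacls "F:\DATA\Folder 1\Team 2's Folder" /grant "DOMAIN\Team2":(OI)(CI)M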

    Read the article

  • Cookieless domain redirect in WHM/cPanel

    - by Patrick Lanfranco
    I am currently trying to get my head around setting up a "cookieless" domain using WHM/cPanel, unfortunately without any success so far. I have a Magento store and I would like to use cookieless domains for my media, skin (template) and js files; Magento has a nice feature to define the URLs for those folders. My current setup is as follows: www.mydomain.com <- main store; media.mydomain.com <- subdomain for the media folder (mydomain.com/media/); skin.mydomain.com <- subdomain for the skin folder (mydomain.com/skin/); js.mydomain.com <- subdomain for the js folder (mydomain.com/js/). I think it's pointless to use these as cookieless domains, since my Magento installation uses .mydomain.com as its cookie domain, so what I would like to do is register an additional domain and have it point, via WHM/cPanel, to those specific locations. I have tried changing the A and CNAME records, but without any success: they simply redirect from one page to the other in the browser (newdomain.com jumps to old.com). What kind of records do I have to set up to make this work properly? Some advice would be highly appreciated.

    Read the article
