  • Proper way to do texture mapping in modern OpenGL?

    - by RubyKing
    I'm trying to do texture mapping using OpenGL 3.3 and GLSL 150. The problem is that the texture shows but has a weird flicker (I can show a video here). My texture coordinates are in a vertex array, my fragment colour is set from the sampled texel values, and my vertex shader passes the texture coordinates through to the fragment shader. My ins and outs are set up, and I still don't know what I'm missing that could be causing that flicker. Here is my code.

    Fragment shader:

        #version 150
        uniform sampler2D texture;
        in vec2 texture_coord;
        varying vec3 texture_coordinate;
        void main(void) {
            gl_FragColor = texture(texture, texture_coord);
        }

    Vertex shader:

        #version 150
        in vec4 position;
        out vec2 texture_coordinate;
        out vec2 texture_coord;
        uniform vec3 translations;
        void main() {
            texture_coord = (texture_coordinate);
            gl_Position = vec4(position.xyz + translations.xyz, 1.0);
        }

    Last bit, here is my vertex array with texture coordinates:

        GLfloat vVerts[] = { 0.5f, 0.5f, 0.0f,  0.0f, 1.0f,   // x, y, z, tex x, tex y
                             0.0f, 0.5f, 0.0f,  1.0f, 1.0f,
                             0.0f, 0.0f, 0.0f,  0.0f, 0.0f,
                             0.5f, 0.0f, 0.0f,  1.0f, 0.0f };

    If you need to see all the code, here is a link to every file. Thank you for your help.
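
    For reference, a minimal sketch of how the two shaders could be restructured so the coordinates actually flow from the vertex array to the sampler: the vertex shader takes the coordinate as an input attribute rather than an output, the deprecated varying/gl_FragColor forms are dropped, and the sampler is renamed because "texture" shadows the texture() builtin in GLSL 150. The attribute and uniform names are carried over from the question as assumptions, not the asker's final code.

        // vertex shader (sketch)
        #version 150
        in vec4 position;
        in vec2 texture_coordinate;    // input attribute fed from the vertex array
        out vec2 texture_coord;
        uniform vec3 translations;
        void main() {
            texture_coord = texture_coordinate;
            gl_Position = vec4(position.xyz + translations, 1.0);
        }

        // fragment shader (sketch)
        #version 150
        uniform sampler2D tex;         // renamed: "texture" shadows the builtin
        in vec2 texture_coord;
        out vec4 frag_color;           // replaces deprecated gl_FragColor
        void main() {
            frag_color = texture(tex, texture_coord);
        }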

  • Distributed filesystem across a slow link

    - by Jeff Ferland
    I have a picture in my head of a link that is too slow for real-time transfer of files, but fast enough to catch up every day. What I'd like to see is a master <-> master setup where, when I write a file to Server A, the metadata transfers to Server B immediately, and the file itself transfers when the link is idle, or immediately if Server B's client tries to read the file before Server A has sent it. It seems that many filesystems perform well over fast links, but I don't know of any that do well with a big bottleneck and a few hours of latency.

  • RDP - Sharing shortcuts and/or toolbars

    - by Joe
    I often have to work across several virtual machines through RDP. I used to work with Terminals, and recently changed to mRemoteNG. As of now, I have a checklist that I run through on each new VM I create, in order to populate the desktop with the shortcuts and apps that I use regularly. Then I create a checkpoint and use that when I need to revert to a "clean" machine. However, it's not always practical, and the VMs I have to use are not always created by me, so that checkpoint is not always available. I know that I could use a template when creating the VM, but that doesn't solve the problem when I have to use VMs that I do not own. Does anyone know of a way to set up one set of shortcuts/apps and be able to launch them on a remote desktop connection easily? Something like a toolbar that is present wherever I'm logged on...

  • X11 forwarding from one server to another

    - by n3oblit7
    I have a setup where I need to forward X11 from my local machine (a laptop) to a virtual machine. The server hosting this VM cannot be reached directly from my laptop: I first need to log in to a gateway, and from the gateway I can access the VM. How can I forward X11 from my laptop to this VM? I have tried the following, but neither works:

        [laptop #]  ssh -X [gateway]
        [gateway #] ssh -X [VM]

        [laptop #]  ssh -tX [gateway] ssh -X [VM]

    I could forward X11 only as far as the gateway (the DISPLAY variable gets set on the gateway).
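
    A sketch of how this usually works with OpenSSH's jump-host support (ProxyJump, available since OpenSSH 7.3); user and host names are placeholders:

        # one-liner: hop through the gateway with X11 forwarded end to end
        ssh -X -J user@gateway user@vm

        # or persist it in ~/.ssh/config:
        Host vm
            ProxyJump  user@gateway
            ForwardX11 yes

    Both sshd instances need X11Forwarding yes in /etc/ssh/sshd_config, and the VM needs xauth installed; once that is in place, the chained -tX form from the question should also work on older OpenSSH without -J.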

  • Forward e-mail to multiple addresses with conditions

    - by Valera Leontyev
    I need to forward e-mails to different mail accounts under different conditions. The aim is to create a mail notification scheme for my company, and I'd like to set up a server on a dedicated mail domain for it. Is there any software for Linux that can do this? Examples:

    1) Forward all e-mail sent to [email protected] to x@x, y@y and z@z (no conditions).
    2) Forward e-mail sent to [email protected] whose subject contains '[finance]' to a@b and b@b.
    3) Forward e-mail sent to [email protected] whose subject contains '[fault]' to s@s and s2@s.

    The receivers' domains are different. P.S. We currently use Gmail filters to get this functionality, but it's unstable and hard to maintain.
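
    For illustration, rule 2 sketched as a Sieve filter (as supported by e.g. Dovecot); the "to" address is a placeholder, since the real ones are redacted above:

        require ["copy"];
        # finance mail with a tagged subject goes to two extra recipients
        if allof (address :is "to" "finance@notify.example.com",
                  header :contains "subject" "[finance]") {
            redirect :copy "a@b";
            redirect :copy "b@b";
        }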

  • Remote Desktop connection breaks the wireless connection

    - by Michal Talaga
    When I connect to my Windows 8 Professional machine via Remote Desktop (RDP), my WiFi connection almost always breaks. The setup:

    - a Lenovo T61p laptop running Windows 8, at home
    - a wireless router with NAT forwarding connections to that machine
    - a Windows 7 laptop at work connecting to the home laptop

    When I connect, I very often get as far as the login before the connection is suddenly lost, and I cannot reconnect. When I get home I find my WiFi still associated with the access point but not functioning; I can't even ping the router. What is strange is that disabling wireless with the hardware switch and enabling it again doesn't help. The only ways to make it work again are:

    - reboot, or
    - disable WiFi with the hardware switch AND disable the network card in Device Manager, then enable both

    I did not have this problem on the very same laptop when it was running Windows 7. Any hint as to how I can find where the problem is?

  • With Google DFP (Small Business) is it possible to disable AdSense in an Ad Slot on a per-request basis?

    - by Daniel Pehrson
    Setup: I run a network of websites that target different hobby niches, each with a section dedicated to community classifieds. I serve advertising on these sites through Google DFP for Small Business, with AdSense enabled on the slots.

    Problem: One of the next sites in my network will target the firearms/shooting industry, and as such its classifieds section will not comply with AdSense's prohibited-content guidelines regarding the sale (or coordination of sale) of weapons. I work very hard to comply with the guidelines of my partners even when I don't understand or agree with them, and after talking with many people I have decided that the best option is to disable AdSense serving on that section of that website, while leaving it on for the rest of the network.

    Solution: Right now my only idea is to duplicate all my ad slots and tack "_sensitive" onto the end of each name (e.g. header and header_sensitive), registering slots conditionally based on whether or not I am in the sensitive section of the sensitive site. My hope, however, is that there is a way to accomplish this without duplicating all my ad slots, possibly via some option to the GA_googleFillSlot() call that lets me say "load ads from this slot but do not serve AdSense no matter what."

  • BIND9 DNS Records not Propagating

    - by natediggs
    I'm kind of new to managing DNS via BIND. We have a setup with a master server and a slave server. I've updated the zone file on the primary name server for our domain, but the changes aren't propagating to the secondary server. The funny thing is that a change I'm making in the zone file for a different domain on the same server IS being propagated to the secondary. Is there any way I can force the change through? Also, there is a third nameserver that used to be operational but has been offline for a few months. I removed it from the zone files of the two domains that listed it as a name server, and over 24 hours later it still shows up from time to time when I run a record check. Any help on this would be greatly appreciated. Nate
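
    For reference, a common cause of exactly this one-zone symptom is an SOA serial that wasn't incremented for that zone, so the slave sees nothing new to transfer. A quick way to check and force a transfer (zone and server names are placeholders):

        # compare the serial the master and the slave are each serving
        dig +short SOA example.com @master-ns
        dig +short SOA example.com @slave-ns

        # after bumping the serial in the zone file on the master:
        rndc reload example.com

        # on the slave, force an immediate zone transfer:
        rndc retransfer example.com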

  • how to congest a link using iperf

    - by navaz
    I have a setup like the one below:

        Switch1 ------------------ Switch2
         |    |                     |    |
        PC1  PC2                   PC3  PC4

    Video traffic flows between PC1 and PC4. I have configured PC2 as an iperf server (iperf -s) and PC3 as a client (iperf -c 10.10.10.2 -P 20 -t 10000, where 10.10.10.2 is PC2's IP). Now most of the traffic on the Switch1-Switch2 link is iperf (TCP); I have observed from the logs that only 1 out of 300 packets is UDP. Still, I see no difference in the quality of the video stream on PC4: it looks the same as with no iperf running. I am checking QoS, and I have tried many options with iperf without success. I want to degrade the quality of the video stream on PC4. Could you please tell me what options I can use with iperf to do this? The bandwidth between Switch1 and Switch2 is 1 Gbit/s. Thanks in advance.
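
    One way to genuinely saturate the inter-switch link is to switch iperf to UDP at a fixed rate near line speed, since TCP deliberately backs off under congestion. A sketch, reusing the IP from the question (in iperf2 the -b rate is per stream, so four streams at 240 Mbit/s target roughly 960 Mbit/s total):

        # on PC2 (server), listen for UDP:
        iperf -s -u

        # on PC3 (client), push ~960 Mbit/s of UDP across the link:
        iperf -c 10.10.10.2 -u -b 240M -P 4 -t 10000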

  • New Servers: Active Directory and Exchange

    - by user3164638
    I have three Dell PowerEdge servers, each with two quad-core processors. I am going to bring this office out of its stone-age network (peer-to-peer, files shared on a flash drive, e-mail through Google, etc.) and set up Active Directory and Exchange 2013. Our needs are not that great at the moment: our staff consists of approximately 40 people, and our network may eventually be managed by an external company. We need only one domain for our e-mail (though we may serve e-mail for a few partner domains as well). I was thinking of setting up something like this:

    Server 1: primary DC, with Active Directory and Exchange on separate virtual machines.
    Server 2: redundant counterpart of Server 1.
    Server 3: shared resources, storage, backups, etc.

    How would you utilize three servers for an Active Directory / Exchange setup for a small-to-medium office? We do have plans to grow, so the solution must be scalable. I'm not sure I want to split permissions, though I'd consider it if that is something that could be changed down the road.

  • Setting up proxy to handle subdomain requests

    - by PeeHaa
    I have set up a proxy for a site, which works with the following nginx config:

        server {
            listen 80;
            server_name proxy.example.com;
            access_log /dev/null;
            error_log /dev/null;

            location / {
                proxy_pass http://thepiratebay.se;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    However, the site also loads styles from a subdomain (static.thepiratebay.se), and those requests don't go through my proxy because the pages link to the original domain. Is there a way to route those requests through my proxy as well? Do I have to rewrite the contents of the pages when serving them? If so: how? :) Or is there another (perhaps better) way?
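
    A sketch of one approach, assuming nginx was built with ngx_http_sub_module: rewrite the static hostname inside the proxied HTML and add a second server block that proxies the static subdomain (the static.proxy.example.com name is an assumption and needs its own DNS entry):

        server {
            listen 80;
            server_name proxy.example.com;

            location / {
                proxy_pass http://thepiratebay.se;
                proxy_set_header X-Real-IP $remote_addr;
                # request uncompressed pages so sub_filter can see the HTML
                proxy_set_header Accept-Encoding "";
                # point links at our proxy instead of the original subdomain
                sub_filter 'static.thepiratebay.se' 'static.proxy.example.com';
                sub_filter_once off;
            }
        }

        server {
            listen 80;
            server_name static.proxy.example.com;

            location / {
                proxy_pass http://static.thepiratebay.se;
            }
        }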

  • Store and encrypt data over the internet

    - by sotsec
    I am trying to build a system where I can access my files remotely. I want to set up an external hard drive or a NAS that I will access over the internet, and I want every file stored on that system to be encrypted. Could you please suggest the best way of doing this? In other words: what is the best way to access your files remotely with maximum safety, while the space where the files live is protected against theft (encryption), etc.? Thank you.
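
    One possible shape for this, sketched under the assumption of a Linux-based NAS: keep the data on a LUKS-encrypted volume (protects against physical theft) and reach it over SSH (protects it in transit). Device names, mount points and hostnames are placeholders:

        # on the NAS, one-time setup of an encrypted volume:
        cryptsetup luksFormat /dev/sdb1
        cryptsetup open /dev/sdb1 vault
        mkfs.ext4 /dev/mapper/vault
        mount /dev/mapper/vault /srv/vault

        # on a remote client, mount it over SSH:
        sshfs user@nas.example.com:/srv/vault ~/vault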

  • Nagios service active only when another service is failing

    - by Laimoncijus
    Is it possible to define a service to be active only while another service is failing? Consider the following example. Two hosts are available: HostA (primary) and HostB (backup). A Nagios service monitors the number of active connections to each host: it returns OK when the number of connections is greater than 0, and FAILURE when it is 0. If I set up this service to monitor both HostA and HostB, it will give me OK for HostA (since it is the primary and all connections normally go to it) and FAIL for HostB (since it is the backup and receives no connections while HostA is alive). Can I make the Nagios service for HostB somehow depend on the service on HostA, and give no failures (or perhaps be inactive) until the service on HostA starts failing?
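
    Nagios service dependencies can express roughly this. A sketch, assuming both hosts carry a service named "Connections" (all names here are placeholders):

        define servicedependency {
            host_name                       HostA
            service_description             Connections
            dependent_host_name             HostB
            dependent_service_description   Connections
            # suppress notifications for HostB's service while HostA's is OK
            notification_failure_criteria   o
            # keep actively checking HostB regardless
            execution_failure_criteria      n
        }

    The dependent service still runs and still shows CRITICAL in the UI; the dependency only silences the alerts while the primary is healthy.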

  • Squid randomly stops serving requests. How can I resolve this issue?

    - by Vijay
    The Squid (2.7) proxy I have running on Ubuntu 8.10 stops accepting new requests after being online for a while, for reasons I can't discover; however, doing a squid -k reload resolves the problem immediately. For now I run this command manually: I monitor the log, and if I don't see any activity for 5 minutes I reload the config. On my quest for a solution I had several ideas:

    1) diagnose the root cause and eliminate it;
    2) set up a script to automatically reload Squid if there are no new entries in access.log for the past 3 minutes;
    3) painstakingly upgrade the server to a newer Ubuntu version, taking the network offline or working during off-hours to minimize downtime.

    So I thought I would turn to you for solutions to option 2, as I do not understand Squid well enough for 1, and I'm avoiding 3 as long as I can. Any ideas?
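
    A minimal sketch of option 2: a watchdog run from cron that reloads Squid when access.log has been quiet for more than 3 minutes (paths are the usual defaults and may differ on a given install):

        #!/bin/sh
        # reload Squid if access.log hasn't been modified in 180 seconds
        LOG=/var/log/squid/access.log
        now=$(date +%s)
        last=$(stat -c %Y "$LOG")          # mtime, seconds since epoch
        if [ $((now - last)) -gt 180 ]; then
            /usr/sbin/squid -k reload
        fi

    Saved as /usr/local/sbin/squid-watchdog and run once a minute via an /etc/cron.d entry:

        * * * * * root /usr/local/sbin/squid-watchdog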

  • Resizing 2 partitions (NTFS and ReiserFS3)

    - by steven
    When creating a Win7 and Gentoo dual-boot setup I misallocated the space between Windows and Linux. I have a 320 GB drive and created a 40 GB partition for Win7, using the rest of the space for Linux. Now I need about 70 GB on the NTFS partition. Are there any tools that will shrink the ReiserFS (v3) partition (it is using about 80 GB and has the rest free) while growing the NTFS partition? If I have to clone, does the tool copy free space inside the image? I would prefer that it not, as I'm short on backup space. [I can handle 100-150 GB of images, but I can't copy the entire HD.]
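
    For reference, a GParted live session can usually handle this in one pass, since it drives ntfsresize and resize_reiserfs underneath and can also move a partition's start, which a bare filesystem resize cannot. A hedged outline, assuming the NTFS partition sits before the ReiserFS one, device names are placeholders, and a verified backup comes first:

        # roughly what GParted does, shown with the low-level tools:
        resize_reiserfs -s -40G /dev/sda3   # 1. shrink the ReiserFS filesystem
        # 2. in the partition editor, shrink and MOVE /dev/sda3 rightwards so
        #    the freed space sits directly after NTFS, then grow /dev/sda2
        ntfsresize /dev/sda2                # 3. expand NTFS to fill its partition

    The move step rewrites data and is the slow, risky part. On the cloning question: tools such as partclone copy only used blocks, so images stay near the ~80 GB of used space rather than the full disk.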

  • Should we regularly schedule mysqlcheck (or database optimization)?

    - by scatteredbomb
    We run a forum with some 2 million posts, and I've noticed that if left untouched, the overhead in MySQL (as listed in phpMyAdmin) can get quite large (hundreds of megabytes). I'm wondering whether scheduling a regular mysqlcheck to optimize the tables is good practice. Any reason not to do it, say, once a week at an off-peak hour? There was a time over the summer when our site was constantly crashing because MySQL was using up all resources. That's when I noticed the huge overhead; I optimized the database and haven't had any stability problems since. I figured that if that helped alleviate the issues, I should just set up a cron job to do it automatically.
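
    A sketch of such a job (credentials are placeholders; worth noting that OPTIMIZE TABLE locks each table while it runs, hence the off-peak slot):

        # /etc/cron.d/mysql-optimize -- every Sunday at 04:30
        30 4 * * 0 root mysqlcheck --optimize --all-databases --user=maint --password=secret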

  • DFS Replication, Users HOME folder - seems not to catch all files... any hints?

    - by TomTom
    I am moving data off a file server using DFS. The folders are already in a DFS tree, so I can set up replication temporarily and then drop the old folder. This works nicely, EXCEPT for the folder containing the users' home drives, which, incidentally, is also the one in which I cannot see all files due to my permissions. It's a small setup: the original has 159 MB in the user directories, 1280 files, 133 folders; the copy has only 157 MB, 1269 files, 133 folders. Does anyone know of a way to find out which files are missing? Is this even a problem (it could be cache files that get regenerated)? Users are all offline (weekend) ;) This is pretty much the last share; all the others had exactly ZERO issues.
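
    Two quick checks, sketched with placeholder server and folder names, ideally run as an account with backup rights so permissions don't hide files:

        :: list what exists on A but not on B, without copying anything
        robocopy \\SERVERA\Users \\SERVERB\Users /E /L /NJH /NJS /NDL /NP /LOG:C:\dfs-diff.txt

        :: see whether DFSR still holds a backlog for this replicated folder
        dfsrdiag backlog /rgname:"HomeDrives" /rfname:"Users" /smem:SERVERA /rmem:SERVERB

    Also worth knowing: DFSR deliberately skips files carrying the temporary attribute, which often accounts for small file-count differences like this one.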

  • A record not resolving

    - by user1561108
    I have a hosted domain at SiteGround, and on it a subdomain with a WordPress install. I wish to move this blog to another host (HostGator) while keeping the domain with SiteGround. To do this I created a hosting account at HostGator, got its IP address, and set the A record in SiteGround's cPanel accordingly:

        subdomain.example.com 14400 A (IP of the HostGator account)

    Going by an online traceroute tool, the record appears to have been updated (over 4 hours ago now): it resolves to a theplanet.com server location, which HostGator uses. Yet the subdomain is still not resolving from a web browser. The account at HostGator has been set up and is navigable via ip-address/~accountname. What's going wrong here? I should add that the relevant DNS records on the HostGator side look like this:

        subdomain.example.com 86400 IN SOA ns483.websitewelcome.com.
        subdomain.example.com 86400 IN NS ns1.siteground145.com.
        subdomain.example.com 86400 IN NS ns2.siteground145.com.
        subdomain.example.com 14400 IN A 74.54.176.3

    I'm not sure whether the HostGator record should be classed as the SOA record, but I don't know enough about this to be sure. Is this the source of the problem?
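
    A few dig queries can pin down which server is answering what, bypassing any local resolver cache by querying the name servers directly (names as in the question):

        # what each side's name server claims:
        dig +short subdomain.example.com A @ns1.siteground145.com
        dig +short subdomain.example.com A @ns483.websitewelcome.com

        # what the public resolvers see, and the delegation path taken:
        dig +short subdomain.example.com A @8.8.8.8
        dig +trace subdomain.example.com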

  • Network Performance issue

    - by qubemarker
    We have three Ubuntu 10.04 servers: one storage server and two configured as clients. The storage server has a good amount of capacity and is integrated with a Windows Active Directory server for authentication. I am uploading video files from both clients to the server. When I upload from either client alone I get about a 26 MB/s transfer rate; when I upload from both clients simultaneously I get only about 8 MB/s from each. All the servers have gigabit Ethernet cards and are connected through an L2 managed gigabit switch. I don't know why the transfer rate drops so much under simultaneous reads and writes. I have tried all of the TCP-stack-related settings suggested here. Can anyone help me get better read/write performance out of this setup? Any help is appreciated.

  • Connecting via URL to a server on the same network as me

    - by Axehead
    Good day. I'm having problems with the FTP server I've just set up. I've already managed to configure my modem and wireless router to open port 21 and to set up FTP on my server. But it seems that when I'm on the same network as the server and try to connect via the URL (ftp://mydomain.com), I get redirected to the modem's web interface. When I connect from outside the network, using a different internet connection, it succeeds. It also succeeds when I'm on the same network and go to ftp://192.168.., the server's local IP. Am I supposed to adjust something on the modem or router, or is this a different problem altogether? BTW, I'm using Windows Server 2008 R2 as the server's OS and IIS for FTP.
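
    These are the classic symptoms of missing NAT loopback (hairpin NAT): the router will not route LAN traffic back in via its own public IP, and answers with its admin interface instead. Some routers have a NAT-loopback or NAT-reflection toggle; where there is none, a hosts-file override on each LAN client works around it (the address below is a placeholder for the server's real LAN IP):

        # C:\Windows\System32\drivers\etc\hosts on each LAN client
        192.168.1.10    mydomain.com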

  • Mirroring of Apps across servers

    - by user1038814
    We wish to host multiple apps across multiple servers, ideally with an existing solution that just works. For failover we would normally follow a route like this:

    1) The app is installed on one server along with its MySQL database.
    2) The app is also installed on a second server, with rsync mirroring the files over to keep them consistent.
    3) MySQL is installed in a master-slave setup.
    4) A service such as DNS Made Easy provides DNS failover: if one server goes down, traffic is automatically routed to the backup server.

    We have done the above a few times and generally it's fine. The issue is that this is all for a single app. What I would like to know is how we can manage this for multiple apps, and whether there is a layer (such as VMware) with complete mirroring built in at the OS level. For example, how do web hosts do it when they ensure that more than one machine runs a set of hosted websites? If you ran hosting with 200 clients on a server, you would want the same clients across two or more servers with everything mirrored. Any advice would be much appreciated.
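
    At minimum, the per-app rsync step generalizes to a single job covering every app tree; a sketch with placeholder paths and hostnames:

        # /etc/cron.d entry on the primary: push all app trees to the standby
        */5 * * * * root rsync -az --delete /var/www/apps/ standby.example.com:/var/www/apps/

    This keeps the file side at one job per server rather than one per app; the database and DNS-failover legs still need their own per-app or per-instance configuration.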

  • Customising Windows 8 Start Screen Tiles

    - by Joe Taylor
    We are looking for an effective way to manage the Start screen in Windows 8. So far, using WSIM, we can add certain Start tiles via the OOBE System - Shell Setup - SquareTiles and WideTiles properties; however, this only seems to work for square tiles and not wide tiles. If anyone has any insight on this, it would be appreciated. The main question, though, is whether anyone has managed to modify this screen using a GPO. We can add application shortcuts to the Start menu list on the All Apps page using a policy that creates shortcuts in the all-users Start menu. However, as we occasionally deploy apps throughout the year in line with course requirements, we want to be able to put a shortcut on the home screen. Is this possible?

  • How to use OpenVPN through a restrictive firewall?

    - by R.L. Stine
    I'm currently attempting to set up OpenVPN on a personal VPS, for connection primarily through an overly restrictive firewall. All of the setups mentioned below work when used through a reasonably firewalled connection. I have tried:

    - OpenVPN running on the standard port;
    - OpenVPN running on port 443 (starting OpenVPN manually from the command line on the VPS, I see the server report the connection being closed almost immediately; I assume this is a result of DPI on the firewall);
    - stunnel running on port 443 in front of OpenVPN to evade DPI. This is the most successful: it allows a connection and internet access through the VPN for roughly 10-20 seconds before the connection is forcibly closed.

    Is there anything else I can attempt?
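
    For reference, a minimal sketch of the stunnel-fronted variant with TCP transport and keepalives, which is the usual baseline before reaching for heavier obfuscation; ports and addresses are assumptions:

        ; /etc/stunnel/stunnel.conf on the VPS
        [openvpn]
        accept  = 443
        connect = 127.0.0.1:1194

        # matching directives in the OpenVPN server config
        proto tcp-server
        port 1194
        local 127.0.0.1
        keepalive 10 60

    The client runs its own stunnel to the VPS on 443 and points OpenVPN (proto tcp-client) at the local stunnel endpoint. If the cutoff persists even inside TLS, the firewall may be fingerprinting traffic volume or timing rather than protocol, which no port change will hide.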

  • two shops network

    - by edward
    Okay so, I just opened two shops in my hometown. The two stores are about six blocks apart; connecting them by wire is not really feasible cost-wise. What kind of network topology should I use for my small shops? There will be five computers: one is the sales computer, and the other four, as mentioned, are the guest computers. I want the sales and guest computer networks to be separated. Both shops have the same computers. The guest computers serve up a simple website with my shop catalog on it; I'm thinking of using a web server. So, how am I supposed to set up these networks? I'm planning to add more computers in the future. Do I need to station a single server at one shop, with all the computers connected to it, or are there more effective methods? I'm no networking expert and would love to hear some advice.

  • Can't resolve Mac's machine name on VPN

    - by Raghuveer
    My Mac's machine name is something like hostname.company.com, but whenever I connect to the VPN it becomes something like vpn-xxxx.company.com, where xxxx is some string of digits. Because of this, some of my scripts that depend on the hostname get blocked. We use the standard Mac VPN setup that comes with OS X Lion (under Network Preferences). How can I keep the correct machine name even when I am on the VPN? That is, even when connected to the VPN, my machine name should resolve to hostname.company.com and NOT to vpn-xxxx.company.com. Any suggestions would be really appreciated.
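
    One common fix is to pin the name explicitly with scutil, which stops OS X from re-deriving it from the VPN connection's reverse DNS (the names below are the placeholders from the question):

        sudo scutil --set HostName      hostname.company.com
        sudo scutil --set LocalHostName hostname
        sudo scutil --set ComputerName  hostname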
