Search Results

Search found 30252 results on 1211 pages for 'network programming'.

Page 977/1211

  • How to do proper Alpha in XNA?

    - by Soshimo
    Okay, I've read several articles, tutorials, and questions regarding this. Most point to the same technique, which doesn't solve my problem. I need the ability to create semi-transparent sprites (Texture2Ds, really) and have them overlay another sprite. I can achieve that somewhat with the code samples I've found, but I'm not satisfied with the results and I know there is a way to do this. In mobile programming (BREW) we did it old school and actually checked each pixel for transparency before rendering. In this case it seems to render the sprite below it blended with the alpha above it. This may be an artifact of how I'm rendering the texture but, as I said before, all examples point to this one technique. Before I go any further I'll go ahead and paste my example code.

        public void Draw(SpriteBatch batch, Camera camera, float alpha)
        {
            int tileMapWidth = Width;
            int tileMapHeight = Height;
            batch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend, SamplerState.PointWrap,
                        DepthStencilState.Default, RasterizerState.CullNone, null, camera.TransformMatrix);
            for (int x = 0; x < tileMapWidth; x++)
            {
                for (int y = 0; y < tileMapHeight; y++)
                {
                    int tileIndex = _map[y, x];
                    if (tileIndex != -1)
                    {
                        Texture2D texture = _tileTextures[tileIndex];
                        batch.Draw(
                            texture,
                            new Rectangle(
                                x * Engine.TileWidth,
                                y * Engine.TileHeight,
                                Engine.TileWidth,
                                Engine.TileHeight),
                            new Color(new Vector4(1f, 1f, 1f, alpha)));
                    }
                }
            }
            batch.End();
        }

    As you can see, in this code I'm using the overloaded SpriteBatch.Begin method which takes, among other things, a blend state. I'm almost positive that's my problem. I don't want to BLEND the sprites, I want them to be transparent when alpha is 0. In this example I can set alpha to 0 but it still renders both tiles, with the lower z-ordered sprite showing through, discolored because of the blending. This is not a desired effect; I want the higher z-ordered sprite to fade out and not affect the color beneath it in such a manner. I might be way off here as I'm fairly new to XNA development, so feel free to steer me in the correct direction in the event I'm going down the wrong rabbit hole. TIA
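
    (For context, a minimal sketch of the usual XNA 4.0 approach, assuming the same Width, Height, _map, _tileTextures and Engine members as the code above: multiply a white Color by the alpha value so all four channels are premultiplied, or switch to BlendState.NonPremultiplied, and skip drawing entirely when the layer is fully transparent.)

        // Hedged sketch: with XNA 4.0's premultiplied-alpha content pipeline,
        // Color.White * alpha scales every channel, so the tile fades out instead
        // of discoloring the sprite underneath it.
        public void Draw(SpriteBatch batch, Camera camera, float alpha)
        {
            if (alpha <= 0f)
                return; // fully transparent: draw nothing, the lower layer is untouched

            batch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend, SamplerState.PointWrap,
                        DepthStencilState.Default, RasterizerState.CullNone, null, camera.TransformMatrix);
            for (int x = 0; x < Width; x++)
            {
                for (int y = 0; y < Height; y++)
                {
                    int tileIndex = _map[y, x];
                    if (tileIndex == -1)
                        continue;
                    batch.Draw(
                        _tileTextures[tileIndex],
                        new Rectangle(x * Engine.TileWidth, y * Engine.TileHeight,
                                      Engine.TileWidth, Engine.TileHeight),
                        Color.White * alpha); // use BlendState.NonPremultiplied if the textures are not premultiplied
                }
            }
            batch.End();
        }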

    Read the article

  • How do I get the machine name from an IP via Multicast DNS?

    - by Adam
    I have a list of IP addresses on a network, and most of them support multicast DNS. I'd like to be able to resolve the server name instead of just having the IP address.

        ping computer.local
        64 bytes from 192.168.0.52: icmp_seq=1 ttl=64 time=5.510 ms
        64 bytes from 192.168.0.52: icmp_seq=2 ttl=64 time=5.396 ms
        64 bytes from 192.168.0.52: icmp_seq=3 ttl=64 time=5.273 ms

    Works, but I'd like to be able to determine that name from the IP. Also the devices don't necessarily broadcast any services, but definitely do support mDNS broadcast. So looking through services won't work.
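
    (A couple of command-line options for the reverse direction, assuming the standard tools are available: Avahi's resolver on Linux, or a reverse PTR query sent to the mDNS multicast address with dig.)

        # Linux, with the avahi-utils package installed
        avahi-resolve --address 192.168.0.52

        # Mac OS X (or anywhere dig is available): reverse PTR query over mDNS
        dig +short -x 192.168.0.52 @224.0.0.251 -p 5353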

    Read the article

  • Configure browser and VPN traffic

    - by Zachzor
    Hello everyone. I've been having a few issues with my company's VPN server. The VPN is running on a Mac server (10.6.x) and I'm also using a MacBook (10.6.5). I've been building specific programs to gather information from IPs, and to work on this while I'm at home I need to go through our VPN to access the network. Unless I send all traffic over the VPN, I'm not able to hit those specific IPs. However, I'm unable to access the internet through my web browser when I send all my traffic over the VPN. I was wondering if there is a way (besides setting up a split tunnel) to set up a web browser to go through my current wireless connection, as opposed to going through the VPN like the rest of my applications. Whether the browser is Chrome, Firefox, or Safari doesn't matter to me. Has anyone else run into this issue and found a clever way to solve it? Thank you!

    Read the article

  • Setting up a VPN tunnel between a Linux box and a Cisco FW

    - by Meni
    Hi. I have a Linux box (Ubuntu) and a service provider that will only allow an IPsec tunnel connection between his network and my Linux box. I have these details from the service provider:

        Service Provider:
          Peer IP -
          LAN on service provider's side - 10.10.10.10/24
        Linux box details:
          Peer IP -
          LAN -
        Connection details:
          Phase 1: SHA, AES-128, DH group x, pre-shared key, lifetime 24h
          Phase 2: SHA, AES-128, lifetime 1h

    I am not sure which app I need to install on the Linux box that will support this type of connection. Any ideas? Thanks!
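
    (As a starting point, a hedged sketch of what the tunnel might look like in ipsec.conf for Openswan or strongSwan, the packages usually used for IPsec tunnels to a Cisco peer; every address below is a placeholder, and the matching pre-shared key would go in /etc/ipsec.secrets.)

        conn provider
            authby=secret                 # pre-shared key
            ike=aes128-sha1               # Phase 1: AES-128 / SHA-1 (append the agreed DH group, e.g. aes128-sha1-modp1024)
            ikelifetime=24h               # Phase 1 lifetime
            esp=aes128-sha1               # Phase 2: AES-128 / SHA-1
            keylife=1h                    # Phase 2 lifetime
            left=%defaultroute            # this Linux box
            leftsubnet=192.168.1.0/24     # placeholder: the LAN behind the Linux box
            right=203.0.113.1             # placeholder: the provider's peer IP
            rightsubnet=10.10.10.0/24     # the provider's LAN from the details above
            auto=start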

    Read the article

  • How to download a big file with Chrome on Mac OS X?

    - by Eye of Hell
    If I try to download a big file over an unstable connection/server (Xcode 4), Google Chrome simply "stops" downloading on the first network error, so I have the first 1-3 gigabytes of the file and Chrome thinks the download is finished. Unfortunately, I need to download the entire file, so I need a more advanced download tool like wget. But here comes a problem: most URLs on the web these days are not direct URLs but a chain of "redirect" pages that use complex JavaScript to generate the next URL and redirect the browser to it. Chrome handles such things OK, but if I try to supply such a URL to wget, it will download some "intermediate" page as a file - not the file itself but an HTML page with complex redirect JavaScript. Is there any way to get the direct URL from Chrome, or to somehow discover it, so I can use it with wget? Maybe there's some advanced download manager integrated with Chrome that I just need to install? I use Mac OS X 10.6.6 and the latest Google Chrome.
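
    (One hedged workaround, assuming the final resolved URL is visible on Chrome's downloads page once the redirect chain has run in the browser: copy that URL and hand it to a resumable downloader. The URL below is a placeholder.)

        # resume-capable download with wget (-c continues a partial file)
        wget -c "http://example.com/xcode_4.dmg"

        # or with curl: follow redirects (-L) and resume from where it stopped (-C -)
        curl -L -C - -O "http://example.com/xcode_4.dmg"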

    Read the article

  • Remote Desktop Problem on Windows Server 2008 R2

    - by lukiffer
    Revised this question to be more concise, consolidating several revisions.

    Symptoms:

    From a domain-member Windows 7 client:
      Domain credentials to a domain controller = success
      Domain credentials to a member server (by hostname or FQDN) = success
      Domain credentials to a member server (by IP) = fail
      Local credentials to a member server (by either) = success

    From a non-domain-member Windows 7 client:
      Domain credentials to a domain controller = success
      Domain credentials to a member server = fail
      Local credentials to a member server = success
      (Identical behavior from a Mac RDC 2.1 client)

    Server configuration details: Windows 2008 R2 Datacenter w/ SP1. The domain in question is a subdomain of a Windows 2008 domain (forest root). The root has DCs in both Site A and Site B; the subdomain only has DCs in Site B. RDP is operating normally on all root member servers and DCs. No Remote Desktop settings are defined by GPOs. Network Level Authentication is enabled; all clients are compatible and the certificate exchange/SSL handshake completes successfully. Not catching any errors in the netlogon log.

    Read the article

  • LAN DNS not working after reinstall of Ubuntu 13.10

    - by DrorCohen
    I upgraded my Ubuntu desktop to 13.10. When I say "upgraded" I mean installed on a new partition from scratch (the old partition is still available if needed). To the problem: I'm trying to ping a host (a Drobo-FS server) by its hostname and I get "Unknown Host". However, pinging from another computer on the same LAN (a laptop with 12.04 LTS) works fine. For that matter, every ping from the 13.10 machine to the local LAN by hostname fails, while pinging by IP works. I don't have a local DNS server, but somehow all the other computers on the network find each other by hostname - only this new one fails... help appreciated...
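
    (One thing worth checking, offered as an assumption about the usual cause: LAN-only name resolution on Ubuntu goes through mDNS for .local names (Avahi) or NetBIOS/WINS (winbind), not DNS, so the hosts line in /etc/nsswitch.conf on the fresh 13.10 install decides whether those lookups happen at all.)

        # /etc/nsswitch.conf - the stock Ubuntu hosts line looks roughly like this;
        # "mdns4_minimal" handles .local names, and appending "wins" (after
        # installing libnss-winbind) resolves plain NetBIOS hostnames.
        hosts: files mdns4_minimal [NOTFOUND=return] dns wins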

    Read the article

  • Windows Internet Connection sharing with Mobile Broadband

    - by PaoloFCantoni
    Due to circumstances, I have only got mobile broadband where I am living. I have a small network with an ADSL router (which isn't connected to the Internet). I want to use ICS to allow one machine (with the MBB modem) to act as the Internet interface and allow the other machines connected to the ADSL router (including a new Android tablet, via WiFi) to use the single mobile broadband connection. I've a feeling that my configuration is not valid as it stands, but I'm not sure. Can some kind soul lead me "by the nose" to getting this working? FWIW, the machines are all running Windows 7. TIA, Paolo

    Read the article

  • Automatically make or update a copy real-time on another hard drive volume whenever files are saved to a particular folder

    - by mrblint
    Whenever I save or update a file in a particular designated folder on my C: drive, I would like to make or update a copy on my network-attached storage device, ideally saving the copy to the NAS as a version rather than overwriting the copy there, if possible. I have Windows 7 x64 Ultimate. Is there any built-in feature that can accomplish this? It has to be a real copy, not merely a pointer. I'm trying to achieve some redundancy for especially critical documents (in a variety of formats) that change frequently throughout the day. P.S. I am looking for folder-level granularity; I wouldn't want this to happen for every file on the C: volume.
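
    (A hedged sketch of one near-built-in option: robocopy, which ships with Windows 7, can monitor a folder and re-copy on change, though it mirrors rather than keeps versions; the paths below are placeholders.)

        rem Re-run the copy whenever at least 1 change is seen, checking every 1 minute
        robocopy "C:\CriticalDocs" "\\NAS\Backups\CriticalDocs" /E /MON:1 /MOT:1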

    Read the article

  • Prevent Nautilus from displaying thumbnails on a specific mount

    - by Zakhar
    I have written a filesystem over FUSE to access a remote pseudo-NAS (the French "Freebox V6"; I'll publish it as GPL3 soon... when it's a little bit more polished!). The NAS is connected to a home ADSL line, so data comes down at the upload speed of ADSL, which is at best 1 Mbps. My mount works fine (read-only at the moment), but Nautilus sees the mountpoint (and all sub-directories) as a "local" filesystem and tries to make thumbnails. As I have a directory full of images, this is quite horrible, because Nautilus then opens ALL the images to try to display their thumbnails. I could switch the Nautilus preference to "Never" for thumbnails, but then I'd lose thumbnails on my "real" local filesystems. So the question is: with the preference set to "Only for local filesystems", how can I tell Nautilus that my mountpoint is in fact NOT a local mount, so that it stops trying to draw thumbnails on that specific mount but continues "thumbnailing" on mounts that really are local? Edit note: the same thing happens if you use "standard worldwide" mounts such as sshfs or davfs, as long as you mount over a relatively slow network (ADSL) and have images/movies in the mounted tree.

    Read the article

  • Reasonably Secure Alternative to Poptop PPTP Server for Ubuntu server and Windows clients?

    - by wag2639
    I have a Poptop (PPTP) server running on an old Fedora server, but I'm upgrading to an Ubuntu 10.04 server. I was wondering if there are any good, reasonably secure alternatives to Poptop that I can install on our new Ubuntu server as a way to get VPN access from Windows clients (XP and 7) into our intranet. We only use the VPN to access files located inside the network; we do not need to use it as a proxy/gateway. I've looked into OpenVPN but it seemed way too complicated, and I would prefer something built into Windows. A Windows 7-only solution is OK.

    Read the article

  • Does OO, TDD, and Refactoring to Smaller Functions affect Speed of Code?

    - by Dennis
    In the Computer Science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is:

      write smaller, more testable code
      refactor existing code into smaller and smaller chunks of code until most of your methods/functions are just a few lines long
      write functions that only do one thing (which makes them smaller again)

    This is a change compared to the "old" or "bad" code practices where you have methods spanning 2500 lines, and big classes doing everything. My question is this: when it all comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code with a variety of small-to-tiny functions generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are handled in ASM in the end, I do have some idea. I assume that each extra function call, object call, or include call (in some languages) generates an extra set of instructions, thereby increasing the code's volume and adding various overhead, without adding actual "useful" code. I also imagine that good optimizations can be done to ASM before it is actually run on the hardware, but that optimization can only do so much too. Hence, my question - how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce compared to having "one big method that contains everything", due to this overhead?

    UPDATE for clarity: I am assuming that adding more and more functions and more and more objects and classes to a code base will result in more and more parameter passing between smaller code pieces. It was said somewhere (quote TBD) that up to 70% of all code is made up of ASM's MOV instruction - loading CPU registers with the proper variables, not doing the actual computation. In my case, you load up the CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more of this "linkage" overhead is required. I am concerned that this linkage adds to software bloat and slow-down, and I am wondering if I should be concerned about this, and how much, if at all, because current and future generations of programmers who are building software for the next century will have to live with and consume software built using these practices.

    UPDATE: Multiple files. I am writing new code now that is slowly replacing old code. In particular, I've noted that one of the old classes was a ~3000-line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including the PHP framework I am using to bind some things together. More files are coming as well. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded - they are loaded as needed, and disk caching and memory caching options exist - and yet I still believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.

    Read the article

  • Oracle Fusion Middleware 11gR1 FAQ

    - by Hiroyuki Yoshino
    Frequently asked questions about Oracle Fusion Middleware 11gR1 downloads on the Oracle Technology Network (OTN):

    Q. What do the "Generic", "x86", and "x86-64" installer labels mean, and which one should I download?
    A. The Generic installers are platform-independent and run on any supported 32-bit or 64-bit JDK/JVM; use the JDK/JVM provided for your platform. The x86 installers are for the 32-bit platforms listed in the Supported System Configurations, and the x86-64 installers are for the 64-bit platforms listed there. Some components ship platform-specific installers ("SPARC" and others, per the Supported System Configurations), such as WebCenter for AIX and Portal, Forms, Reports and Discoverer for HP-UX PA-RISC.

    Q. Where can I get SOA 11gR1 (11.1.1.1.0) now that a newer 11gR1 release is posted?
    A. Only the latest 11gR1 release is available for download; earlier releases such as 11.1.1.1.0 can be obtained through Oracle Support.

    Q. Which platforms does Oracle Fusion Middleware 11gR1 support?
    A. See the Supported System Configurations page for the certified Oracle Fusion Middleware 11gR1 platforms.

    Q. Is Oracle Fusion Middleware 11gR1 supported in virtualized environments?
    A. Oracle Fusion Middleware is supported on Oracle Virtual Machine, Oracle's own virtualization product; for other virtualization environments, consult Oracle's certification and support information.

    Read the article

  • SQL server could not connect: Lacked Sufficient Buffer Space...

    - by chumad
    I recently moved my app to a new server - the app is written in C# against the 3.5 framework. The hardware is faster but the OS is the same (Win Server 2003). No new software is running. On the prior hardware the app would run for months with no problems. Now, in this new install, I get the following error after about 3 days, and the only way to fix it is to reboot:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.)

    I have yet to find a service I can even shut down to make it work. Anyone had this before and know a solution?
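
    (One common cause to rule out, offered as an assumption rather than a diagnosis: this error often means the machine has run out of ephemeral ports or non-paged pool because sockets are opened faster than they are released, and on the application side that usually traces back to SqlConnection objects that are not disposed promptly.)

        // Hedged sketch: wrap each connection in a using block so it is returned to
        // the ADO.NET pool (and its socket eventually reused) even when an exception
        // is thrown. Requires System.Data.SqlClient and System.Configuration; the
        // connection string name and query are placeholders.
        using (var conn = new SqlConnection(ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();
            int orderCount = (int)cmd.ExecuteScalar();
        }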

    Read the article

  • Sharing Internet over Wireless Router

    - by Alandt
    I have a very strange question today: how do I share my dial-up internet? (Yeah, I know you are gonna say that is slow, but broadband internet isn't available in my area, so dial-up and a 3G connection are all I've got.) I also have a Vodafone USB 3G modem that picks up the 3G network. I am planning to use my Vodafone 3G modem during the day, since I have free dial-up internet from 7:00 pm at night until 7:00 am the next morning. Some additional details:

      My PC is running Windows XP Professional SP3
      I have a Sitecom Wireless Router 150N X1 WLR-1000

    I would appreciate it if anyone can provide me with a step-by-step guide! Thanks

    Read the article

  • Google Music Player doesn't work

    - by EricoPF
    I'm trying to log in to the Google Music Player application but it doesn't work. I get the message below:

        Login Failed
        Could not identify your computer. Learn More

    Google Help says that it doesn't run on virtual machines, which is not my case, though I do have VirtualBox installed, and it says some people get it to work if they disable their bridge network. The thing is, I don't have any bridge interface; even if I remove all the VirtualBox modules I still get this message. This is my ifconfig output:

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:15374 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:15374 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1455889 (1.4 MB)  TX bytes:1455889 (1.4 MB)

        wlan0     Link encap:Ethernet  HWaddr 94:db:c9:b2:1b:d7
                  inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::96db:c9ff:feb2:1bd7/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:828467 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:568040 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1086663025 (1.0 GB)  TX bytes:72984931 (72.9 MB)

    Any ideas? Thanks guys! Cheers.

    Read the article

  • Disable IPv6 on Debian VPS

    - by chris_l
    I have a Debian Lenny VPS that's running virtualized by Parallels/Virtuozzo. Currently, the network interface doesn't have an IPv6 address - and that's good, because I don't have an ip6tables configuration. But I assume that I could wake up one day and ifconfig would show me an IPv6 address for the interface - because I have no control over the kernel or its modules; they're under the control of the hosting company. That would leave the server completely vulnerable to attacks from IPv6 addresses. What would be the best way to disable IPv6 (for the interface, or maybe for the entire host)? Usually I would simply disable the kernel module, but that's not possible in this case.
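
    (One approach that doesn't require any control over kernel modules, assuming the host-provided kernel is recent enough - roughly 2.6.29 or later - to expose the per-interface disable_ipv6 sysctl:)

        # /etc/sysctl.conf - turn IPv6 off for every interface, current and future
        net.ipv6.conf.all.disable_ipv6 = 1
        net.ipv6.conf.default.disable_ipv6 = 1

        # apply without a reboot
        sysctl -p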

    Read the article

  • How to make NFS mounts available while offline?

    - by lpanebr
    Problem: I work on a notebook, and while at work I have access to many NFS-mounted drives. When I get home they are obviously not available.

    Windows 7 solution: My business partner uses Windows 7 and maps the folders via Samba. Windows 7 has a very nice feature that lets him make these folders available offline, so when he connects to the work network the changes get synchronized!

    Question: Is there a way to mimic that in Ubuntu?

    What I have now - server-to-local sync: I have added rsync entries to my crontab to copy server folders => local folders every five minutes. When at work I use the NFS-mapped folders, and while outside work I use the local copies. When I get to work I manually run a script that syncs local folders => server folders.

    Problems with my setup:
      slow startup when not at work (I guess due to the fstab trying to mount the server folders)
      no conflict checking/managing
      I have to remember to sync manually and be careful because of the different file locations
      recent files do not work between work and home
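
    (Two small pieces that address parts of this, offered as assumptions about a typical setup: the fstab options below stop a missing NFS server from stalling boot, and unison - a bidirectional alternative to the one-way rsync jobs - detects conflicts instead of silently overwriting; the server name and paths are placeholders.)

        # /etc/fstab - don't block boot on the work server; mount in the background and allow interrupts
        server.example.com:/export/projects  /mnt/projects  nfs  noauto,bg,soft,intr  0  0

        # two-way sync with conflict detection (package "unison")
        unison ~/projects /mnt/projects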

    Read the article

  • Working with Git on multiple machines

    - by Tesserex
    This may sound a bit strange, but I'm wondering about a good way to work in Git from multiple machines networked together in some way. It looks to me like I have two options, and I can see benefits on both sides:

      Use git itself for sharing: each machine has its own repo and you have to fetch between them. You can work on either machine even if the other is offline. This by itself is pretty big, I think.
      Use one repo that is shared over the network between machines. No need to do git pulls every time you switch machines, since your code is always up to date. Never worry that you forgot to push code from your other non-hosting machine, which is now out of reach, since you were working off a fileshare on this machine.

    My intuition says that everyone generally goes with the first option. But the downside I see is that you might not always be able to access code from your other machines, and I certainly don't want to push all my WIP branches to GitHub at the end of every day. I also don't want to have to leave my computers on all the time so I can fetch from them directly. Lastly, a minor point is that all the git commands to keep multiple branches up to date can get tedious. Is there a third handle on this situation? Maybe some third-party tools are available that help make this process easier? If you deal with this situation regularly, what do you suggest?
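
    (For reference, a minimal sketch of a third option often used here: a bare repository on a small always-on box or NAS that both machines push to and fetch from, so neither workstation has to stay up; the host name and paths are placeholders.)

        # one-time setup on the always-on box
        git init --bare /srv/git/project.git

        # on each workstation
        git remote add sync ssh://nas.local/srv/git/project.git
        git push sync --all          # park work-in-progress branches
        git fetch sync               # pick them up from the other machine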

    Read the article

  • X crashes and GNOME loses all its configuration

    - by Oli
    About every 3 days on my desktop (which is always on), X crashes, gdm restarts, and it dumps me at a login screen. When I log in, GNOME appears to have lost a lot of its settings: it plays sounds in weird places, UI elements look like they're from the 90s (GTK+ defaults), and it's generally pretty hideous. Note that everything still works fine. It's not like my profile doesn't exist, because I can browse the internet fine (Firefox knows my bookmarks, history, passwords, etc.) and my desktop is unscathed (apart from the icon theme). Manually restarting gdm doesn't fix this; I have to do a full reboot. Now, I'm almost certain that this is an nvidia issue causing X to baulk (I've seen similarish threads on nvnews) and I'm happy with that (my fault for running their latest drivers all the time). What I'm concerned about is why GNOME looks so fugly. Is there anything I can do to force it to reload its settings without restarting the whole computer? Restarting is an issue for me as I run several daemons that other computers on the network depend upon. This is what I mean by ugly/fugly... Look at that scroll bar!
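
    (A guess at a reboot-free fix, based on the symptom: the GTK-defaults look usually means gnome-settings-daemon died along with X and never came back, so restarting just that daemon inside the broken session may restore the theme and sound settings.)

        # from a terminal inside the ugly session
        # (on some releases the binary lives under /usr/lib/gnome-settings-daemon/ rather than on PATH)
        killall gnome-settings-daemon
        gnome-settings-daemon &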

    Read the article

  • MSSQL 2008 is claiming the firewall is blocking ports even from local machine

    - by Mercurybullet
    I was just hoping to step through a couple of queries to see how the temp tables are interacting, and I'm getting this message:

        The windows firewall on this machine is currently blocking remote debugging. Remote debugging requires that the debugging be allowed to receive information from the network. Remote debugging also requires DCOM (TCP port 135) and IPSEC (UDP 4500/UDP 500) be unblocked.

    Even when I walked over to the actual machine and tried running the debugger, I'm still getting the same message. Am I missing something, or does the debugger try to run remotely even from the local machine? Since this was meant to be just a quick check, I don't need instructions on how to open up the firewall - I'm just hoping there is a way to run the debugger locally instead.

    Read the article

  • In VirtualBox, how can I access host localhost from guest (Visual Studio Dev Server from IE7 testing VM)?

    - by Seth
    Host OS is Win7, running MyApp in the Visual Studio Development Server, bound to localhost:51227; the VM is VirtualBox configured with NAT. Guest OS is Win XP with IE7 installed. My goal is to debug MyApp (running on the host) from within IE7 (running on the guest). The Visual Studio Development Server only binds to the loopback network device (i.e. localhost); it does not bind to the external IP address of my host. I've tried accessing 10.0.2.2:51227 from IE7 on the guest (and confirmed that 10.0.2.2 is the gateway address using ipconfig), but it appears that 10.0.2.2 maps to the external IP of the host, NOT the loopback IP (localhost), so this does not work. Any suggestions?
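
    (One workaround, assuming the development server must stay bound to localhost: add a portproxy rule on the Windows 7 host so connections arriving on its LAN address are forwarded to the loopback listener from a local source; the guest can then browse to http://192.168.1.10:51227, where 192.168.1.10 stands in for the host's actual LAN IP.)

        rem run in an elevated command prompt on the Win7 host; 192.168.1.10 is a placeholder for the host's LAN IP
        netsh interface portproxy add v4tov4 listenaddress=192.168.1.10 listenport=51227 connectaddress=127.0.0.1 connectport=51227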

    Read the article

  • Ubuntu 12.04 Transparent proxy gateway

    - by user146536
    I have an Ubuntu server which I want to use as a transparent proxy (I have no issue setting up Squid, just the iptables). The server has only one network interface. The server sits on the same subnet as the router, which is the current gateway to the internet for clients. I want to simply set the gateway on the clients to point at the transparent proxy, which in turn forwards the requests to the router and off to the internet. See my diagram; can anybody help with the iptables configuration to achieve this scenario?

        subnet mask /22

        Router (10.4.12.1)         Transparent Proxy (eth0, 10.4.12.2)
             |                                |
             +----+--------+---------+--------+
                  |        |         |        |
          Comp1          Comp2     Comp3    Comp4
          (10.4.12.6)  (10.4.12.5) (10.4.12.4) (10.4.12.3)

    Thanks
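
    (A hedged sketch of the usual single-NIC setup, assuming Squid is built with interception support and listening with "http_port 3128 intercept" (or "transparent" on older Squid versions); non-HTTP traffic is simply NATed through towards the real router.)

        # let the box forward packets at all
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # divert port 80 from the LAN into the local Squid
        iptables -t nat -A PREROUTING -i eth0 -s 10.4.12.0/22 -p tcp --dport 80 -j REDIRECT --to-ports 3128

        # masquerade everything else out towards the router (10.4.12.1)
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE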

    Read the article

  • Parallels: How to see a Mac-hosted website from Windows?

    - by Jim Miller
    I'm traveling at the moment, and have moved one of the websites I'm working on to my MBP so I can work on it without a network connection. I've made an addition to the Mac's /etc/hosts file pointing the domain name to 127.0.0.1, and all's well. I now want to get into Parallels and check the site from Windows browsers. How do I get things so that the Windows browser will understand the domain name and access the site? The Windows image obviously doesn't recognize / can't find the Mac's /etc/hosts file, and references to 127.0.0.1 in the Windows hosts file just as obviously point to Windows, not the Mac. Any advice out there? Thanks!
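
    (A sketch of the usual Parallels answer, assuming the VM uses Shared Networking: from Windows, the Mac host is reachable at the Parallels gateway address - commonly 10.211.55.2, visible as the Default Gateway in ipconfig - so an entry in the Windows hosts file pointing the domain there does the trick. The IP and domain below are placeholders, and this assumes the web server on the Mac listens on all interfaces, not just 127.0.0.1.)

        # C:\Windows\System32\drivers\etc\hosts  (edit as Administrator)
        10.211.55.2    mysite.dev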

    Read the article

  • Security considerations in providing VPN access to non-company issued computers [migrated]

    - by DKNUCKLES
    There have been a few people at my office who have requested the installation of Dropbox on their computers to synchronize files so they can work on them at home. I have always been wary of cloud computing, mainly because we are a Canadian company and enjoy the privacy of being outside the reach of the Patriot Act. The policy before I started was that employees with company-issued notebooks could be issued a VPN account, and everyone else had to use a remote desktop connection. The theory behind this logic (as I understand it) was that we had the ability to lock down the notebooks, whereas the employees' home computers were outside of our grasp. We had no way to ensure they weren't running as administrator all the time or were running AV, so they were at higher risk of being infected with malware and could compromise network security. With the increase in people wanting Dropbox, I'm curious as to whether or not this policy is too restrictive and overly paranoid. Is it generally safe to provide VPN access to an employee without knowing what their computing environment looks like?

    Read the article
