Search Results

Search found 30252 results on 1211 pages for 'network programming'.


  • New whitepaper: Evolution from the Traditional Data Center to Exalogic: An Operational Perspective

    - by Javier Puerta
    IT organizations are struggling with the need to balance the day-to-day concerns of data center management against the business-level requirements to deliver long-term value. This balancing act has proven difficult and inefficient: systems and application management tools are resource-intensive, and traditional infrastructure management architectures have developed over time on a project-by-project basis. These traditional management systems consist of multiple tools that force administrators to waste time performing too many steps to handle routine administrative tasks. Operational efficiency and agility in your enterprise are directly linked to the capabilities provided by the management layer across the entire stack: application, middleware, operating system, compute, network, and storage. Only when this end-to-end capability is provided will we experience the full benefit of a scalable, efficient, responsive, and secure data center. Managing Exalogic is substantially less complex and error-prone than managing traditional systems built from individually sourced, multi-vendor components, because Exalogic is designed to be administered and maintained as a single, integrated system (Figure 1). It is at the forefront of the industry-wide shift away from costly and inferior one-off platforms toward private clouds and Engineered Systems. Read the full whitepaper "Evolution from the Traditional Data Center to Exalogic: An Operational Perspective". The full document is available for download at the Exadata Partner Community Collaborative Workspace (for community members only - if you get an error message, please register for the Community first).

    Read the article

  • Does OO, TDD, and Refactoring to Smaller Functions affect Speed of Code?

    - by Dennis
    In the computer science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is:

    - write smaller, more testable code
    - refactor existing code into smaller and smaller chunks until most of your methods/functions are just a few lines long
    - write functions that only do one thing (which makes them smaller again)

    This is a change compared to the "old" or "bad" code practices, where you had methods spanning 2500 lines and big classes doing everything. My question is this: when it all comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code, with its variety of small-to-tiny functions, generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are handled in ASM in the end, I do have some idea. I assume that each extra function call, object call, or include call (in some languages) generates an extra set of instructions, thereby increasing the code's volume and adding various overhead, without adding actual "useful" code. I also imagine that good optimizations can be done to the ASM before it is actually run on the hardware, but that optimization can only do so much. Hence my question: how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce, compared to having "one big method that contains everything"?

    UPDATE for clarity: I am assuming that adding more and more functions, objects, and classes to the code will result in more and more parameter passing between smaller code pieces. It was said somewhere (quote TBD) that up to 70% of all code is made up of ASM's MOV instruction - loading CPU registers with the proper variables, not doing the actual computation. In my case, you load up the CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more "linkage" overhead is required. I am concerned that this linkage adds to software bloat and slow-down, and I am wondering whether I should be concerned about this, and how much, if at all, because current and future generations of programmers who are building software for the next century will have to live with and consume software built using these practices.

    UPDATE: Multiple files. I am writing new code now that is slowly replacing old code. In particular, I've noted that one of the old classes was a ~3000-line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including the PHP framework I am using to bind some things together. More files are coming as well. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded - they are loaded as needed, and disk caching and memory caching options exist - and yet I still believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.
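    On the PUSH/POP worry specifically: optimizing compilers and JITs inline small functions at their call sites, so a few-line method that "does one thing" typically compiles to the same instructions as the hand-merged version, with no CALL/PUSH/POP linkage left. A minimal sketch (C# is used here purely for illustration; the AggressiveInlining hint exists in .NET 4.5+):

        using System;
        using System.Runtime.CompilerServices;

        class CallOverheadSketch
        {
            // A tiny "does one thing" method; small bodies like this are
            // prime inlining candidates for the JIT.
            static int Square(int x) { return x * x; }

            // An explicit hint for hot paths where the JIT's heuristics
            // might otherwise decline to inline.
            [MethodImpl(MethodImplOptions.AggressiveInlining)]
            static int Cube(int x) { return x * x * x; }

            static void Main()
            {
                int sum = 0;
                for (int i = 0; i < 1000; i++)
                    sum += Square(i) + Cube(i);  // after inlining: straight-line math
                Console.WriteLine(sum);
            }
        }

    The residual cost of well-factored code is therefore mostly paid where inlining cannot happen (virtual or indirect calls, calls across dynamically loaded modules), not at every small function.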

    Read the article

  • Using DNS entries to determine location

    - by Raphink
    I'm trying to think of a clean way to determine the location of machines (mainly, which datacenter they belong to) based on their network settings. I would like it to be dynamic, and I'm thinking of using special DNS records that would be specific to the DNS server in each datacenter. For example, you could have:

        root@machine1# dig TXT mysite
        ...
        mysite  3600  IN  TXT  "DC1"
        ...

        root@machine2# dig TXT mysite
        ...
        mysite  3600  IN  TXT  "DC2"
        ...

    etc. I know that DNS has a special LOC record for location, but it takes coordinates, so it doesn't help in my case. Is there a standard way of addressing this issue, another special type of record for it, or some standard entries in TXT records?
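    One way to realize the scheme described above is to give each datacenter's resolver its own copy of the internal zone, differing only in this one record. A hypothetical BIND-style zone fragment (names are placeholders):

        ; In the zone file served by DC1's name server:
        mysite    3600    IN    TXT    "DC1"

        ; In DC2's copy of the same zone:
        mysite    3600    IN    TXT    "DC2"

    A machine can then discover its location with a one-liner such as "dig +short TXT mysite" against its configured resolver.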

    Read the article

  • Win 2008 Server configuration

    - by user123790
    Let me preface my question by saying I'm a novice in regards to server configuration; it's been 12+ years since I've attempted this. What we (our small office) are trying to achieve is to set up a Win 2008 server (located in a home) on a home network (a basic wireless router with DHCP) that we can VPN to from our office. I have installed the software, installed DHCP, removed DHCP from the router, and set the scope for 100 IPs, and am now looking for information as to where I go from here. I believe I need to configure DNS, and possibly set up static routes on the router for the home devices that need internet? The current issue I'd like to tackle is that the wireless clients are not receiving IPs. Also, would it be feasible to use the router's DHCP to assign IPs rather than having the server do it? If so, what would be the most direct way to accomplish this? I appreciate any help in this matter. Thanks

    Read the article

  • HP Virtual Connect and VLAN Tagging

    - by JaapL
    We have a c7000 chassis with the ability to have 8 uplinks per ESX host; only 6 are currently active. I have a virtual switch with multiple VLAN port groups, and all the VMs are working fine. Recently we've been asked to set up network load balancing for one of our VMs, so we had our Virtual Connect engineer activate the last two uplinks. We then created a new vSwitch, added the two new uplinks to it, and moved the VM to this new vSwitch, but we get no connectivity, even though we also added the appropriate VLAN ID. What could be the issue? The Virtual Connect engineer says everything is configured correctly, and the networking team says the appropriate trunking is set up, so we are at a loss...

    Read the article

  • In VirtualBox, how can I access host localhost from guest (Visual Studio Dev Server from IE7 testing VM)?

    - by Seth
    Host OS is Win7, running MyApp in the Visual Studio Development Server, bound to localhost:51227. The VM is VirtualBox configured with NAT; the guest OS is Win XP with IE7 installed. My goal is to debug MyApp (running on the host) from within IE7 (running on the guest). The Visual Studio Development Server only binds to the loopback network device (i.e. localhost); it does not bind to the external IP address of my host. I've tried accessing 10.0.2.2:51227 from IE7 on the guest (and confirmed that 10.0.2.2 is the gateway address using ipconfig), but it appears that 10.0.2.2 maps to the external IP of the host, NOT the loopback IP (localhost), so this does not work. Any suggestions?
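    One workaround sometimes used here is to relay an externally reachable port on the host to the loopback-bound development server, so the guest's 10.0.2.2 traffic lands on 127.0.0.1. A hedged sketch using Windows' built-in port proxy (run in an elevated prompt on the Win7 host; requires the IP Helper service; port numbers taken from the question):

        netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=51227 connectaddress=127.0.0.1 connectport=51227

    Because the relayed connection originates from the host itself, the development server still sees a local client; IE7 in the guest can then browse to http://10.0.2.2:51227/.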

    Read the article

  • Remote Desktop Problem on Windows Server 2008 R2

    - by lukiffer
    Revised this question to be more concise, consolidating several revisions.

    Symptoms, from a domain-member Windows 7 client:

    - Domain credentials to a domain controller = success
    - Domain credentials to a member server (by hostname or FQDN) = success
    - Domain credentials to a member server (by IP) = fail
    - Local credentials to a member server (by either) = success

    From a non-domain-member Windows 7 client:

    - Domain credentials to a domain controller = success
    - Domain credentials to a member server = fail
    - Local credentials to a member server = success

    (Identical behavior from a Mac RDC 2.1 client.)

    Server configuration details:

    - Windows 2008 R2 Datacenter w/ SP1
    - The domain in question is a subdomain of a Windows 2008 domain (forest root). Root has DCs in both Site A and Site B; the subdomain only has DCs in Site B.
    - RDP is operating normally on all root member servers and DCs.
    - No remote desktop settings are defined by GPOs.
    - Network Level Authentication is enabled; all clients are compatible and the certificate exchange/SSL handshake completes successfully.
    - Not catching any errors in the netlogon log.

    Read the article

  • How to do proper Alpha in XNA?

    - by Soshimo
    Okay, I've read several articles, tutorials, and questions regarding this. Most point to the same technique, which doesn't solve my problem. I need the ability to create semi-transparent sprites (Texture2Ds, really) and have them overlay another sprite. I can achieve that somewhat with the code samples I've found, but I'm not satisfied with the results and I know there is a way to do this. In mobile programming (BREW) we did it old school and actually checked each pixel for transparency before rendering. In this case it seems to render the sprite below it blended with the alpha above it. This may be an artifact of how I'm rendering the texture but, as I said before, all examples point to this one technique. Before I go any further I'll go ahead and paste my example code:

        public void Draw(SpriteBatch batch, Camera camera, float alpha)
        {
            int tileMapWidth = Width;
            int tileMapHeight = Height;

            batch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend,
                        SamplerState.PointWrap, DepthStencilState.Default,
                        RasterizerState.CullNone, null, camera.TransformMatrix);

            for (int x = 0; x < tileMapWidth; x++)
            {
                for (int y = 0; y < tileMapHeight; y++)
                {
                    int tileIndex = _map[y, x];
                    if (tileIndex != -1)
                    {
                        Texture2D texture = _tileTextures[tileIndex];
                        batch.Draw(
                            texture,
                            new Rectangle(
                                (x * Engine.TileWidth),
                                (y * Engine.TileHeight),
                                Engine.TileWidth,
                                Engine.TileHeight),
                            new Color(new Vector4(1f, 1f, 1f, alpha)));
                    }
                }
            }

            batch.End();
        }

    As you can see, in this code I'm using the overloaded SpriteBatch.Begin method which takes, among other things, a blend state. I'm almost positive that's my problem. I don't want to BLEND the sprites, I want them to be transparent when alpha is 0. In this example I can set alpha to 0, but it still renders both tiles, with the lower z-ordered sprite showing through, discolored because of the blending. This is not the desired effect; I want the higher z-ordered sprite to fade out and not affect the color beneath it in such a manner. I might be way off here, as I'm fairly new to XNA development, so feel free to steer me in the correct direction in the event I'm going down the wrong rabbit hole. TIA
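    One thing worth checking, assuming XNA 4.0 and its default content processor (which premultiplies texture alpha): BlendState.AlphaBlend expects premultiplied colors, so the RGB components must be scaled by alpha along with the alpha channel itself. Multiplying a Color by a float does exactly that, as in this hedged variant of the Draw call above:

        // Sketch of a premultiplied variant of the Draw call:
        // Color.White * alpha scales R, G, B, and A together, so at alpha == 0
        // the sprite contributes nothing instead of bleeding full-strength color.
        batch.Draw(
            texture,
            new Rectangle(
                (x * Engine.TileWidth),
                (y * Engine.TileHeight),
                Engine.TileWidth,
                Engine.TileHeight),
            Color.White * alpha);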

    Read the article

  • Adding Internal DNS server in Host file

    - by Param
    I have added a global DNS server IP address to one of my desktops (please see the network configuration screenshot). After that, I added both of my domain controllers' IP addresses to the hosts file, and it is working fine (please see the screenshot below for reference). Can you please advise what problems I could face if I keep my configuration this way? I am wondering whether this setup can create a problem, given that the computer will be able to reach corp.abc.com easily with the help of the hosts file.

    Read the article

  • How to make NFS mounts available while offline?

    - by lpanebr
    Problem: I work on a notebook, and while at work I have access to many NFS-mounted drives. When I get home they are obviously not available.

    Windows 7 solution: My business partner uses Windows 7 and maps the folders via Samba. Windows 7 has a very nice feature that lets him make these folders available offline, so when he connects to the work network the changes get synchronized!

    Question: Is there a way to mimic that in Ubuntu?

    What I have now: server-to-local sync. I have added rsync entries to my crontab to copy server folders => local folders every five minutes. When at work I use the NFS-mapped folders, and while outside work I use the local copies. When I get to work I manually run a script that syncs local folders => server folders.

    Problems with my setup:

    - slow startup when not at work (I guess due to fstab trying to map the server folders)
    - no conflict checking/managing
    - I have to remember to sync manually, and be careful because of the different file locations
    - recent files do not work between work and home
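    For reference, the crontab side of such a setup can be as small as one line (paths are placeholders; -u tells rsync to skip files that are newer on the receiving side, which softens - though does not solve - the conflict problem):

        # m h dom mon dow   command
        */5 * * * *   rsync -au /mnt/nfs/projects/ $HOME/offline/projects/

    For true two-way synchronization with conflict detection, a dedicated tool such as unison is the usual answer.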

    Read the article

  • Putty freezes at random when logging into a remote machine in another continent

    - by artknish
    I have to SSH to a remote machine in Europe from Asia every day for my work, but PuTTY freezes at totally random times, and I have no choice but to close it and open a new SSH session. It's frustrating, especially when I'm editing something or executing a long-running program. I know the question doesn't have much detail ('cause nothing seems to be wrong with the network at all). Has anyone experienced this sort of issue with PuTTY and resolved it? Thanks for your time!

    Read the article

  • Good speedtest results, but web pages don't load

    - by dmt0
    I have strange connection problems. Ping and download times are good - speedtest.net showed a ping of 65ms and a download of 2.17Mbps. Torrents are working well, giving me up to 300MBps. Web pages are loading very poorly, though: they time out every time, and I have to refresh 4-5 times to get even a simple page to load. This has been happening consistently for the last few days, with different browsers on different machines (same network), Windows and Linux. There is no proxy set in the browser. Is there any setting in Windows or in a browser that I can change to help this? Some background: I live on an island in Thailand, where the internet connection goes over a radio link to another island and then to the mainland - it's very weather dependent, but generally OK. As I mentioned, ping is good. Any input is very appreciated.
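    Given the radio link, one low-level cause worth ruling out is an MTU/fragmentation problem: small packets (pings, torrent traffic) get through while full-size HTTP packets are silently dropped. A hedged check from Windows, probing the standard 1500-byte MTU (1472 bytes of payload plus 28 bytes of ICMP/IP headers):

        ping -f -l 1472 www.google.com

    If this fails with "Packet needs to be fragmented but DF set" while smaller payloads succeed, lowering the MTU on the router or network adapter may help.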

    Read the article

  • Can't connect to IIS7 on one of my machines

    I have 2 computers, both running Win7 Professional x64. Computer "A" (192.168.0.10) is my work machine; it contains my tools, etc. Computer "B" (192.168.0.15) is supposed to be my build server / web server for my projects. I've installed IIS on "B" and installed the IIS7 remote manager on "A". I'm trying to connect from A's IIS7 Manager to B's IIS, but I fail. I have little knowledge of IIS, and I feel that's the main reason. I can ping B from A and get positive results, so the machines do see each other. A and B are in the same workgroup but not in a domain (if that matters) - they're on my home network. What do I need to do to "see" B's IIS?
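    For IIS7, remote administration is off by default: the Management Service (WMSVC) role service has to be installed on the target machine, remote connections enabled, and the service started. A hedged sketch of the usual recipe, run on "B" from an elevated prompt:

        rem Allow remote connections to the Web Management Service:
        reg add HKLM\SOFTWARE\Microsoft\WebManagement\Server /v EnableRemoteManagement /t REG_DWORD /d 1 /f

        rem Start WMSVC now and on every boot (it listens on TCP 8172 by default):
        sc config wmsvc start= auto
        net start wmsvc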

    Read the article

  • IOS not saving evaluate rule in access-list

    - by DeeJay1
    Hi. I have a basic firewall set up on a pretty old IOS, in the form of:

        IPv6 access list exterior-in6
            evaluate exterior-reflect sequence 1
            permit ipv6 any host [my external address] sequence 10
            permit tcp any host [my internal address] eq 22 sequence 11
            permit icmp any any sequence 800
            permit udp any any range 6881 6889 sequence 900
            permit tcp any any range 6881 6889 sequence 901
            deny ipv6 any any sequence 1000

        IPv6 access list exterior-out6
            permit ipv6 [my internal subnet] any reflect exterior-reflect sequence 10

    Unfortunately, the "evaluate exterior-reflect sequence 1" line seems to get lost after each reboot, leaving my internal network without access. Any ideas?
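    One quick check worth making before suspecting an IOS bug is whether the evaluate line ever reaches NVRAM, since ACL entries are only as persistent as the saved startup-config. A hedged sketch using standard IOS commands:

        router# show running-config | include evaluate
        router# copy running-config startup-config
        router# show startup-config | include evaluate

    If the line is present in the startup-config yet still missing after a reload, that points at the IOS image itself rather than the save step.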

    Read the article

  • How to configure postfix to dynamically choose different relayhosts?

    - by user24315
    I use my laptop at work on wireless and wired networks, at home on a wireless network, and at various other places (conferences, friends' houses, etc.). When at work, I'd like Postfix to use the corporate mail server to route email. When at home, I'd like it to use my personal mail server. When elsewhere, I'd like the laptop to attempt to deliver email in the normal SMTP fashion. Is this possible using just Postfix? Or do I need something else (such as Lamson, http://lamsonproject.org/, or scripts that dynamically patch my Postfix configuration) to do routing that depends on my current location?
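    For the script-based approach mentioned above, a small hook run on every network change is usually enough, since relayhost is an ordinary Postfix parameter that postconf can rewrite. A hedged sketch (domains are made-up placeholders, and the dispatch test - the DNS domain reported by hostname -d - is just one possible signal):

        #!/bin/sh
        # Hypothetical hook, e.g. dropped into /etc/network/if-up.d/ on Debian:
        # pick a relayhost based on which network we just joined.
        case "$(hostname -d)" in
          corp.example.com)  RELAY="[mail.corp.example.com]" ;;
          home.example.net)  RELAY="[mail.home.example.net]" ;;
          *)                 RELAY="" ;;   # empty relayhost = deliver directly
        esac
        postconf -e "relayhost = $RELAY"
        postfix reload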

    Read the article

  • Remote desktop connection over internet without port forwarding?

    - by hellbell.myopenid.com
    Hello, let's say that we have this situation: I want to make a Remote Desktop connection to my friend over the Internet, but I don't have permission to set up port forwarding on my router, and my friend also can't configure his router. So the question is how to connect to a computer without port forwarding. I know there are programs out there like TeamViewer that solve this task, but what I'm looking for is some free site that can make a "bridge" between our two computers, or some program I could install that simulates a virtual router, something like this: http://www.youtube.com/watch?v=SIof7kFTgJE .... I need this because I have my own simple Remote Desktop connection program, but I can't connect to computers outside my network, since I don't have permission to configure the router :( Any comment, link, advice, or tutorial will be very helpful :)
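    To illustrate what such a "bridge" actually does: if either side can reach any mutually accessible SSH server, a reverse tunnel stands in for port forwarding. A hedged sketch with a hypothetical relay host (relay.example.com) - this is the mechanism, not a turnkey service:

        # On your friend's machine (the one to be reached): publish its RDP
        # port on the relay. The tunnel lasts as long as this command runs.
        ssh -N -R 13389:localhost:3389 user@relay.example.com

        # On your machine: pull that relay port down to a local one...
        ssh -N -L 3390:localhost:13389 user@relay.example.com

        # ...and point the Remote Desktop client at it:
        mstsc /v:localhost:3390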

    Read the article

  • Prevent Nautilus from displaying thumbnails on a specific mount

    - by Zakhar
    I have written a filesystem over FUSE to access a remote pseudo-NAS (the French "Freebox V6"; I'll publish it as GPL3 soon... when it's a little bit more polished!). The NAS is connected to a home ADSL line, so data comes down at the ADSL upload speed, which is at best 1Mbps. My mount works fine (read-only at the moment), but Nautilus sees the mountpoint (and all subdirectories) as a "local" filesystem and tries to make thumbnails. As I have a directory full of images, this is quite horrible, because Nautilus then opens ALL the images to try to display their thumbnails. I could switch the Nautilus thumbnail preference to "Never", but then I'd lose thumbnails on my "real" local filesystem. So the question is: with the preference set to "Only for local filesystem", how can I tell Nautilus that my mountpoint is in fact NOT a local mount, so that it stops trying to draw thumbnails on that specific mount but continues "thumbnailing" on mounts that really are local? Edit note: the same thing happens with "standard worldwide" mounts such as sshfs, davfs, etc., as long as you mount over a relatively slow network (ADSL) and have images/movies in the mounted tree.

    Read the article

  • Danger in running a proxy server? [closed]

    - by NessDan
    I currently have a home server that I'm using to learn more and more about servers, with the added advantage of being able to run things like a Minecraft server (yeah!). I recently installed and set up a proxy service known as Squid. The main reason was so that no matter where I was, I would be able to access sites without dealing with any network content filter (like at schools). I wanted to make this public, but I had second thoughts. It occurred to me last night that if people were using my proxy, couldn't they access illegal material with it? What if someone used my proxy to download copyrighted material, or launched an attack on another site via my proxy? What if someone actually looked up child pornography through the proxy? My question is: am I liable for what people use my proxy for? If someone commits an illegal act and it leads back to my proxy server, could I be held accountable for their actions?

    Read the article

  • Conflicts with MS Office temporary files when using Offline Folders on Vista

    - by Tambet
    We are using the Offline Folders feature of Windows Vista to make files on network shares available when out of the office. Mostly it is working, but every time I do a sync I get a lot of errors like this:

        D500E7B8.tmp - A file was deleted on this computer and changed on the
        server while this computer was offline.

    There are hundreds of them. I always select all of them and choose the resolution "Delete from both locations". But what is causing this, and how can I avoid it? I suspect the reason is that we are using Debian and Samba (3.4.7) on our file server. I've been looking for Samba options that would cure this, but with no success. I have learned that the cause is probably that both Word and Excel use a specific pattern to change files: they never change the original file, but instead always write a new temporary file and rename it to the original file name when you click Save. This is documented at http://support.microsoft.com/kb/211632/?FR=1.
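    The write-temp-then-rename save pattern described above is easy to sketch; the snippet below is an illustration of the pattern (not Office's actual code), showing why a sync client sees the original file as "deleted" while a stray .tmp file appears:

        using System.IO;

        class AtomicSaveSketch
        {
            // Illustrative: write the new contents to a brand-new temp file,
            // then swap it into place. The original file is replaced outright,
            // never modified in place.
            static void Save(string path, string newContents)
            {
                string tmp = Path.Combine(Path.GetDirectoryName(path), "D500E7B8.tmp");
                File.WriteAllText(tmp, newContents);       // full rewrite to temp
                File.Replace(tmp, path, path + ".bak");    // atomic swap + backup
            }

            static void Main()
            {
                File.WriteAllText("report.doc", "v1");
                Save("report.doc", "v2");
            }
        }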

    Read the article

  • Google Music Player doesn't work

    - by EricoPF
    I'm trying to log in to the Google Music Player application, but it doesn't work. I get the message below:

        Login Failed
        Could not identify your computer. Learn More

    Google Help says it doesn't run on virtual machines, which is not my case - though I do have VirtualBox installed - and that some people got it to work by disabling their bridge network. The thing is, I don't have any bridge interface; even if I remove all the VirtualBox modules I still get this message. This is my ifconfig output:

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:15374 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:15374 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1455889 (1.4 MB)  TX bytes:1455889 (1.4 MB)

        wlan0     Link encap:Ethernet  HWaddr 94:db:c9:b2:1b:d7
                  inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::96db:c9ff:feb2:1bd7/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:828467 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:568040 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1086663025 (1.0 GB)  TX bytes:72984931 (72.9 MB)

    Any ideas? Thanks guys! Cheers.

    Read the article

  • DNS record reappears after having been deleted

    - by palmbardier
    I've got a Microsoft Windows Server 2003 R2 acting as a domain controller for a small network. It provides DHCP and DNS, among a few other services. It's only got a single NIC, but it's configured with two IP addresses, and I want its name to resolve to one of the two. I've unchecked "Register this connection's addresses in DNS" under the "Advanced TCP/IP Settings". Currently we've got two distinct DNS Host (A) records for this domain controller:

        dc-001 - 10.0.0.1
        dc-001 - 10.0.100.1

    I've deleted the first entry, but it keeps reappearing in my dnsmgmt snap-in. Unfortunately I'm not a Microsoft systems administrator by trade. Does anyone familiar with Microsoft server environments know why a deleted Host (A) record would reappear? Is there another check-box I need to toggle? Thanks in advance to all of you Microsoft experts out there.

    Read the article

  • Ping reply not getting to LAN machines but getting in Linux router Gateway

    - by Kevin Parker
    I have configured Ubuntu 12.04 as a gateway machine. It has two interfaces: eth0 with IP 192.168.122.39 (static), and eth1 connected to the modem with IP address 192.168.2.3 (through DHCP). IP forwarding is enabled on the router box. The client machine is configured with IP address 192.168.122.5 and gateway 192.168.122.39. Client machines can ping the router box (192.168.122.39), but when they ping 8.8.8.8 the reply never reaches them. In the tcpdump output on the gateway I can see the echo request for 8.8.8.8 but never the echo reply. Is this because 192.168.122.5's traffic is not being forwarded on to the 192.168.2.0 network? Can you please help me fix this?
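    Symptoms like this - requests visible on the gateway, replies never returning to the LAN - often mean forwarding is on but NAT is missing, so 8.8.8.8 answers to a private 192.168.122.x source address that cannot be routed back. A hedged sketch of the usual fix, assuming eth1 is the Internet-facing interface:

        # Confirm forwarding really is active (should print: net.ipv4.ip_forward = 1):
        sysctl net.ipv4.ip_forward

        # Masquerade LAN traffic leaving through eth1, so replies come back to
        # the gateway's public-side address and get de-NATed to the client:
        sudo iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -o eth1 -j MASQUERADE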

    Read the article

  • MSSQL 2008 is claiming the firewall is blocking ports even from local machine

    - by Mercurybullet
    I was just hoping to step through a couple of queries to see how the temp tables interact, and I'm getting this message: "The Windows firewall on this machine is currently blocking remote debugging. Remote debugging requires that the debugger be allowed to receive information from the network. Remote debugging also requires DCOM (TCP port 135) and IPSEC (UDP 4500/UDP 500) to be unblocked." Even when I walked over to the actual machine and tried running the debugger, I still got the same message. Am I missing something, or does the debugger try to run remotely even from the local machine? Since this was meant to be just a quick check, I don't need instructions on how to open up the firewall; I'm just hoping there is a way to run the debugger locally instead.

    Read the article

  • Syncing Large Directories/Filesystems using USB Drive [closed]

    - by Alan Lue
    Does anyone have a solution for syncing large directories/filesystems using just a USB flash drive (specifically, without using a network connection)? The objective is simply to sync a user directory between two computers. The contents of the user directory could amount to a large quantity of data - say, a quantity larger than could be stored on any single USB drive - but the aggregate size of the changes that must be propagated by a single sync could easily fit on a USB drive. As an example, suppose a user directory is already synchronized between a desktop and a laptop computer. Here's a use case:

    1. Some changes are made in the user directory on the desktop.
    2. We mount a USB drive onto the desktop and copy whatever changes need to be applied to the laptop user directory in order to synchronize the desktop and laptop user directories.
    3. We now mount the USB drive onto the laptop and apply the changes.
    4. The desktop and laptop user directories are now synchronized.

    Any ideas? Alan
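    rsync's batch mode matches this use case closely: one side records the delta against a locally kept mirror of the other side's last-synced state, and the batch file - which only has to hold the changes - travels on the USB drive. A hedged sketch with placeholder paths (the --write-batch/--read-batch options are documented in rsync's man page):

        # On the desktop: update a local mirror that tracks the laptop's
        # last-synced state, recording every operation into a batch file
        # on the USB drive:
        rsync -a --write-batch=/media/usb/home-batch /home/user/ /srv/laptop-mirror/

        # On the laptop: replay just those recorded changes:
        rsync -a --read-batch=/media/usb/home-batch /home/user/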

    Read the article
