Search Results

Search found 8367 results on 335 pages for 'temporal difference'.


  • How to connect through a proxy using Remote Desktop?

    - by scottmarlowe
    So I've got a home server running Windows Server 2003. I use a dual network card setup and Routing and Remote Access to link the internal, private network to the external connection. The external connection hooks directly to my cable modem (so no routers or other devices sitting between). The problem I'm having is that I can't connect remotely from a location outside the house (so connecting to the server's external connection) to the server using either Remote Desktop or VNC. I have enabled both ports in Routing and Remote Access's firewall to allow access, and I have enabled Remote Desktop in Windows Server 2003. The odd thing is that I can access my home server's SVN repository and I can even ping the server's IP. I am using the IP to attempt to connect, though I use a dyndns.com provided name to connect to my SVN repository, so it shouldn't make a difference (I know the IP is getting resolved correctly). Any ideas on where to start diagnosing this one? I haven't seen anything in my server's event log. If any other info is needed, let me know. Thanks. UPDATE: One last piece of information: We use a proxy server at work, which I'm nearly 100% sure is the culprit. I have a workaround--if I connect to our VPN (even though I'm already inside the building) I am able to connect to my home server. This is with VNC. However, is there a way to connect through a proxy using Remote Desktop? ONE MORE UPDATE: Indeed, it was the http proxy I'm sitting behind at work that was causing the issue. An acceptable workaround is to use my VPN connection to bypass the proxy, and I'm in!
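
    A hedged sketch of the proxy-tunnelling idea, assuming the work proxy permits HTTP CONNECT to port 3389 (many proxies only allow CONNECT to 443, which is likely exactly why the direct attempt failed). The proxy address and home hostname below are placeholders; the script listens locally and relays Remote Desktop traffic through the proxy:

        # Minimal HTTP CONNECT relay: point mstsc at 127.0.0.1:13389 and the
        # traffic is tunnelled through the workplace proxy to the home server.
        # Assumes the proxy allows CONNECT to 3389; hostnames are placeholders.
        import socket, threading

        PROXY = ("proxy.example.com", 8080)       # hypothetical work proxy
        TARGET = ("myhome.dyndns.org", 3389)      # hypothetical DynDNS name

        def pipe(src, dst):
            try:
                while True:
                    data = src.recv(65536)
                    if not data:
                        break
                    dst.sendall(data)
            finally:
                src.close(); dst.close()

        def handle(client):
            upstream = socket.create_connection(PROXY)
            upstream.sendall(
                (f"CONNECT {TARGET[0]}:{TARGET[1]} HTTP/1.1\r\n"
                 f"Host: {TARGET[0]}:{TARGET[1]}\r\n\r\n").encode())
            reply = b""
            while b"\r\n\r\n" not in reply:        # read the proxy's response headers
                chunk = upstream.recv(4096)
                if not chunk:
                    break
                reply += chunk
            if b" 200 " not in reply.split(b"\r\n", 1)[0]:
                client.close(); upstream.close()   # proxy refused the tunnel
                return
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            pipe(upstream, client)

        listener = socket.socket()
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("127.0.0.1", 13389))
        listener.listen(5)
        while True:
            conn, _ = listener.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

    You would then point Remote Desktop at 127.0.0.1:13389. Whether this works at all depends entirely on the proxy's CONNECT policy, which is the same restriction the VPN workaround bypasses.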

    Read the article

  • Issues with sustained traffic with PFSense

    - by Farseeker
    Last week we had to replace our PFSense firewall because it had a catastrophic hardware failure. All but one of the NICs were taken out of the old server and put into the new one. The one NIC that was not moved was the LAN NIC, as this is on-board. The other NICs are all WAN connections and they must all be present (i.e. I can't disable one just for the sake of testing). After re-installing PFSense and restoring our backup of the configuration, everything came back online just fine; however, on the new hardware any download that takes longer than about 10 seconds just times out in the middle. Example 1: Downloading from Microsoft.com goes at about 900k/sec and times out after about 10 seconds (thus, just under 10Mb of content). Example 2: Downloading from cnet.com goes at about 300k/sec and times out after about 10 seconds (thus, about 3Mb of content). By times out, I mean that the download just stops, and you have to pause/resume to get the next part done, rinse and repeat until the download is complete. However it's not consistent: sometimes it's 10 seconds, sometimes it's 4 seconds, and sometimes you can't even load a heavy HTML page because the page never finishes. I assume this is most likely because PFSense does not like the onboard NIC, as this is the primary difference between the two servers. It's recognised as NFE0, and there's no room in the server for any more NICs, and I don't have any dual-port NICs handy to experiment with a different LAN connection. I've never had to troubleshoot this sort of issue before. Can anyone give me some pointers about where to start? Linux is not my forte so please be kind!
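
    To put numbers on the stalls while testing theories about the onboard NIC, here is a rough sketch of a throughput probe (the URL is a placeholder); it prints a KB/s sample roughly once per second, so a stall shows up as a long gap between samples followed by a timeout error:

        # Rough throughput probe: run from a LAN client behind the firewall.
        # The URL is a placeholder for any large file on the internet.
        import time
        import urllib.request

        URL = "http://download.example.com/largefile.bin"   # hypothetical large file

        resp = urllib.request.urlopen(URL, timeout=30)
        window_bytes, window_start = 0, time.time()
        while True:
            chunk = resp.read(65536)
            if not chunk:
                break
            window_bytes += len(chunk)
            now = time.time()
            if now - window_start >= 1.0:
                print(f"{time.strftime('%H:%M:%S')}  {window_bytes / 1024:8.1f} KB/s")
                window_bytes, window_start = 0, now
        print("download completed")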

    Read the article

  • Nginx's speed, and how to replicate it [migrated]

    - by Mediocre Gopher
    I'm interested in this more from an academic standpoint than a practical one; I don't plan on creating a production webserver to compete with nginx. What I'm wondering is how exactly nginx is so fast. The top Google response for this is this thread, but it merely links to a cryptic slideshow and a general overview of different IO strategies. All other results seem to simply describe how fast nginx is, rather than the reason. I tried building a simple erlang server to try to compete with nginx, but to no avail; nginx won out. All my server does is spawn a new process for each request, use that process to read the file to a socket, then close the file and kill the process. It's not complicated, but given erlang's lightweight processes and underlying aio structure I thought it would compete, but nginx still wins out by a consistent 300 ms average under a heavy stress test. What is nginx doing that my simple server isn't? My first thought would be keeping files in main memory instead of tossing them between requests, but the filesystem cache does this already, so I didn't think it would make that great of a difference. Am I wrong? Or is there something else that I'm missing?
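
    The usual explanation is an event loop (epoll/kqueue) multiplexing thousands of sockets across a handful of worker processes, plus sendfile() and buffer reuse, rather than anything per-request. A minimal sketch of that shape using Python's asyncio (not a claim about nginx internals, just the single-process event-loop pattern, with the file read once at startup; the filename is a placeholder):

        # Single-process event-loop server: one preloaded file, no per-request
        # process or thread, no per-request disk read. Illustrative only.
        import asyncio

        DATA = open("index.html", "rb").read()          # hypothetical test file, read once
        HEADER = (b"HTTP/1.1 200 OK\r\n"
                  b"Content-Length: " + str(len(DATA)).encode() +
                  b"\r\nConnection: close\r\n\r\n")

        async def handle(reader, writer):
            try:
                await reader.readuntil(b"\r\n\r\n")     # swallow the request headers
            except asyncio.IncompleteReadError:
                pass
            writer.write(HEADER + DATA)
            await writer.drain()
            writer.close()

        async def main():
            server = await asyncio.start_server(handle, "0.0.0.0", 8080)
            async with server:
                await server.serve_forever()

        asyncio.run(main())

    The gap with a process-per-request server is often exactly the per-request spawn, the per-request file open/close, and the lack of sendfile/writev batching, not raw language speed.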

    Read the article

  • PostgreSQL lots of large Arrays and Writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster process via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), by using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with each row. This array approach minimized the size of the indexes and the tables. The function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called by Python will update a row of these kinds of tables (with large arrays). Now I currently have configured PostgreSQL to use most of the memory for cache (effective_cache_size of 57 GB I think) and only a small amount of it for shared memory (1 GB I think). First, I was wondering what the difference between cache and shared memory is in regards to PostgreSQL (and my application). What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (SELECT unnest(array) etc). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks
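
    For reference, a stripped-down sketch of the connection-per-thread pattern described above (the DSN and the SQL function name are placeholders, not the poster's actual code); each thread holds its own psycopg2 connection, so each is served by its own PostgreSQL backend process:

        # One psycopg2 connection (and hence one PostgreSQL backend) per thread.
        # DSN and function name are placeholders for the setup described above.
        import threading
        import psycopg2

        DSN = "dbname=mydb user=me host=localhost"      # hypothetical DSN

        def worker(chunk_id):
            conn = psycopg2.connect(DSN)
            conn.autocommit = True
            with conn.cursor() as cur:
                # my_array_function stands in for the SQL functions that
                # unnest the FLOAT8[] arrays and insert the result rows.
                cur.execute("SELECT my_array_function(%s)", (chunk_id,))
            conn.close()

        threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
        for t in threads: t.start()
        for t in threads: t.join()

    On the terminology question: shared_buffers is memory PostgreSQL actually allocates for its own buffer cache, while effective_cache_size is only a planner hint about how much OS file cache is likely available; it does not allocate anything.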

    Read the article

  • Mac OS X Client With Static DHCP Assignment Requests Wrong IP via Option 50

    - by Starchy
    I have a number of Mac (and a few Linux) laptops getting DHCP from a Force10 layer 3 switch, the only DHCP server on the subnet. There's a global dynamic pool, and for each full-time employee's laptop I have a single IP static pool set by MAC address. One and only one of the clients, running OS X 10.7.5, consistently fails to get a static assignment. The MAC address in the static pool definition has been carefully re-checked. Running tcpdump on a mirrored port when the laptop connects, I see that it is specifically requesting 10.100.0.252 (a dynamic address):

        11:32:10.108280 IP (tos 0x0, ttl 255, id 28293, offset 0, flags [none], proto UDP (17), length 328)
            0.0.0.0.bootpc > broadcasthost.bootps: [udp sum ok] BOOTP/DHCP, Request from 3c:07:54:xx:xx:xx (oui Unknown), length 300, xid 0x1399da89, Flags [none] (0x0000)
              Client-Ethernet-Address 3c:07:54:xx:xx:xx (oui Unknown)
              Vendor-rfc1048 Extensions
                Magic Cookie 0x63825363
                DHCP-Message Option 53, length 1: Request
                Parameter-Request Option 55, length 9:
                  Subnet-Mask, Default-Gateway, Domain-Name-Server, Domain-Name
                  Option 119, LDAP, Option 252, Netbios-Name-Server
                  Netbios-Node
                MSZ Option 57, length 2: 1500
                Client-ID Option 61, length 7: ether 3c:07:54:xx:xx:xx
                Requested-IP Option 50, length 4: 10.100.0.252
                Lease-Time Option 51, length 4: 7776000
                Hostname Option 12, length 10: "host-name"
                END Option 255, length 0
                PAD Option 0, length 0, occurs 8

    I haven't been able to find any extra system prefs or unusual software on the laptop. Disabling the interface and rebooting or temporarily setting the IP manually both fail to make any difference. Any suggestions appreciated.
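
    One thing worth ruling out on the client (an assumption, not something shown in the capture): OS X caches its last lease under /var/db/dhcpclient/leases and will keep asking for that address via option 50 until the cached lease is gone. A hedged sketch that removes the en0 lease file and forces a fresh DHCP exchange; the lease directory and interface name are assumptions that may differ on this machine, and it needs root:

        # Remove the cached DHCP lease for en0 and re-run DHCP on OS X.
        # The lease directory and interface name are assumptions; run with sudo.
        import glob
        import os
        import subprocess

        LEASE_DIR = "/var/db/dhcpclient/leases"     # assumed location on OS X 10.7
        for lease in glob.glob(os.path.join(LEASE_DIR, "en0*")):
            print("removing", lease)
            os.remove(lease)

        subprocess.run(["ipconfig", "set", "en0", "DHCP"], check=True)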

    Read the article

  • Intel Rapid Storage Technology (pre-OS) driver installation

    - by Nero theZero
    My desktop machine is built on a Gigabyte GA-Z87-UD3H, and Gigabyte provides the latest driver for Intel Rapid Storage Technology (IRST), which I installed after installing the OS. The same goes for my Lenovo ThinkPad T420. On both machines, checking the controller device under the IDE ATA/ATAPI Controllers section in Device Manager, I see the driver has been updated to the latest version. I set the SATA controller to AHCI in the BIOS. On the desktop machine I have one WD 2TB Black and one WD 3TB Green. I don't use RAID, and have no plans to in the near future, but according to Intel, IRST improves performance in a single-disk scenario too. Now I have the following questions:
        1. What is the actual purpose of the pre-OS-install IRST driver that isn't served by the post-OS driver I installed? There must be some difference, otherwise there wouldn't be a pre-OS version of the driver. Right?
        2. If I use the pre-OS procedure (loading the driver at OS-installation time), do I still need the post-OS driver after the OS installation completes? After installing the post-OS driver I got a quick launch icon that runs the IRST configuration application; where do I get that after installing only the pre-OS driver?
        3. As it is "pre-OS", when I load it at OS-installation time, does it update anything at the BIOS level or anywhere other than the HDD? I ask because I'm going to dual-boot Windows 7 with Windows 8.1, and after installing Windows 7, when I install Windows 8.1 and load the IRST driver for it, is there any chance of "overwriting" or OS incompatibility? In short, is there anything specific to follow while installing the second OS?

    Read the article

  • Hard wrapping in vim without joining

    - by Miles
    Vim newbie here. How can I hard wrap plain text in vim (inserting actual linebreaks), respecting word boundaries, without joining existing lines? For example, given this:

        Lorem ipsum dolor sit amet, consectetur adipiscing elit.
        - Nulla cursus accumsan faucibus.
        - Donec dapibus dignissim ullamcorper. Integer nec malesuada diam.

    I'd like to get (with textwidth=30):

        Lorem ipsum dolor sit amet,
        consectetur adipiscing elit.
        - Nulla cursus accumsan
        faucibus.
        - Donec dapibus dignissim
        ullamcorper. Integer nec
        malesuada diam.

    instead of this (which I can get with gggqG):

        Lorem ipsum dolor sit amet,
        consectetur adipiscing elit. -
        Nulla cursus accumsan
        faucibus. - Donec dapibus
        dignissim ullamcorper. Integer
        nec malesuada diam.

    Also, for bonus points: when I create a brand new buffer, I get different wrapping behavior (lines beginning with - aren't wrapped specially) than when I open a file ending in .txt. What controls this? I don't notice any difference in the output of :set filetype? or :filetype.

    Read the article

  • Does any economically-feasible publicly available software compare audio files to determine if they are dupes?

    - by drachenstern
    In the vein of this question http://unix.stackexchange.com/questions/3037/is-there-an-easy-way-to-replace-duplicate-files-with-hardlinks is there any software that will automatically parse my library of songs and find the ones that really are duplicates, so that the extras can be eliminated? Here's an example: My brother used to be a huge fan of remixing CDs. He would take all of his favorite tracks and put them on one. Then he would use my computer to read them in. So now I have something like 6 copies of Californication on my HDD, and they only differ by a few bytes overall. I have hundreds of songs in my library like this. I want to trim them down to uniques. They don't all have correct ID3 tags, so figuring out that Untitled(74).mp3 is the same as californication.mp3 is the same as whowrotethis.mp3 is tricky. I do NOT want a concert album and a studio album rip to be considered the same (if I just did artist/title matching I would end up with this scenario, which doesn't work for me). I use Windows (pick your platform) and will be getting an OS X box later in the year. I'll run Linux if that's what it takes to get it organized. I have unprotected AAC and MP3 files. Bonus points for handling WAV or MIDI, and bonus points for converting from those into MP3 (I can always use Audacity and LAME to convert later if I know they match, or to convert ahead of time if that will make things easier). Are there any suggestions, or do I need to go to Programmers or SO, build a list of requirements for comparing these things, and write the software myself?
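
    Not a ready-made product recommendation, but acoustic fingerprinting is the standard technique for this and is scriptable. A hedged sketch using the pyacoustid package, which shells out to Chromaprint's fpcalc tool (both must be installed); the 0.9 similarity cutoff is a guess to tune on your own library:

        # Compare audio files by Chromaprint acoustic fingerprint rather than
        # by bytes or ID3 tags. Requires pyacoustid + fpcalc; threshold is a guess.
        import itertools
        import sys

        import acoustid
        import chromaprint

        def raw_fingerprint(path):
            duration, fp = acoustid.fingerprint_file(path)      # runs fpcalc
            ints, _algo = chromaprint.decode_fingerprint(fp)
            return ints

        def similarity(a, b):
            # Fraction of matching bits over the overlapping portion.
            n = min(len(a), len(b))
            if n == 0:
                return 0.0
            diff_bits = sum(bin(x ^ y).count("1") for x, y in zip(a[:n], b[:n]))
            return 1.0 - diff_bits / (32.0 * n)

        files = sys.argv[1:]
        prints = {f: raw_fingerprint(f) for f in files}
        for f1, f2 in itertools.combinations(files, 2):
            score = similarity(prints[f1], prints[f2])
            if score > 0.9:                      # hypothetical "same recording" cutoff
                print(f"{score:.2f}  {f1}  <->  {f2}")

    Because the fingerprint is computed from the audio itself, a live recording and a studio recording should not match, which is the behaviour asked for above.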

    Read the article

  • Why does my Internet slow to a crawl unless I reboot my router every few days?

    - by Lord Torgamus
    A few weeks ago, I noticed that my Internet connection had slowed down to a crawl. I waited a few days hoping it would go away on its own, but it didn't get better. So I asked this question about how to make it faster. The problem went away after I updated to the latest firmware, so I didn't follow up too carefully. But every few days since then, my Internet has slowed down again. Unlike before, all I have to do to fix it is open the router administration page and press the "Reboot" button. Nothing else seems to work, though I'm sure there are options I haven't tried. If it makes a difference, my girlfriend and I both transfer large amounts of data fairly routinely for school (videoconferencing, downloading entire recorded lectures). The router is a Cisco/Linksys 160N V3 that's about a year old. Most of the time, it deals with just two standard Windows 7 laptops. The only thing I came across while searching for answers/dupes was this question, which seems similar superficially, but probably doesn't have the same root issue. Anyways, it's not resolved. What could be causing these slowdowns, and how can I get rid of them?
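
    One way to gather evidence before the next slowdown: log how long a small HTTP fetch takes every minute, so you can see whether the degradation is gradual or sudden and whether it lines up with the heavy transfers. A rough sketch; the test URL is a placeholder:

        # Append a timestamped latency sample to a CSV once per minute.
        # URL is a placeholder; leave it running and compare against slowdowns.
        import time
        import urllib.request

        URL = "http://www.example.com/"          # hypothetical small test page

        with open("latency_log.csv", "a") as log:
            while True:
                start = time.time()
                try:
                    urllib.request.urlopen(URL, timeout=20).read()
                    elapsed = f"{time.time() - start:.2f}"
                except Exception as exc:
                    elapsed = f"error:{exc.__class__.__name__}"
                log.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')},{elapsed}\n")
                log.flush()
                time.sleep(60)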

    Read the article

  • SQL Server Instance login issue

    - by reallyJim
    I've just brought up a new installation of SQL Server 2008. I installed the default instance as well as one named instance. I'm having a problem connecting to the named instance from anywhere besides the server itself with any user besides 'sa'. I am running in mixed mode. I have a login/user that has a known username. Using that user/login, I can properly connect when directly on the server. When I attempt to log in from anywhere else, I receive a "Login failed for user ''" error, with Error 18456. In the log file on the server, I see a reason that doesn't seem to help: "Reason: Could not find a login matching the name provided." However, that user/login DOES exist, as I can use it locally. There are no further details about the error. Where can I start to find something to help me with this? I've tried deleting and recreating the user, as well as just creating a new one from scratch--same result, locally fine, remotely an error. EDIT: Partially resolved. I'm now past the base issue--the clients were trying to connect via the default instance. I don't know why. So, once the proper ports were opened in the firewall and a static port was assigned to the named instance, I can now connect--BUT ONLY if I specify the connection as Server,Port. SQL Browser is apparently not helping/working in this case. I've verified it IS running, and done a stop/restart after my config changes, but no difference yet.
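
    For anyone hitting the same thing: the Server,Port form can be scripted from a client the same way, and the host\instance form only works remotely when the SQL Browser service is reachable over UDP 1434, which is what translates the instance name into its port. A hedged sketch with pyodbc; the driver name, host, port, and credentials are placeholders:

        # Connect to a named instance by host,port (no SQL Browser needed).
        # Driver name, server, port and credentials below are placeholders.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=dbserver.example.com,51433;"   # host,port of the named instance
            "DATABASE=master;"
            "UID=appuser;PWD=secret;"
        )
        print(conn.cursor().execute("SELECT @@SERVERNAME, @@SERVICENAME").fetchone())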

    Read the article

  • MOSS2007 tries to use ActiveDirectory when I have configured an alternative membership provider

    - by glenatron
    I've got a MOSS site that I am trying to configure using Forms authentication and absolutely any kind of membership provider whatsoever. Thus far ActiveDirectory has proved obstructively difficult, so I've just whipped up a simple stub membership provider and put it in the GAC. It's a very basic and simple provider but it works fine with an ASP.Net site; I just can't make it work with Sharepoint. On Sharepoint I get the following error when I look for StubProvider:Bob (or anything else for that matter) from the "Policy For Web Application" people picker:

        Error in searching user 'StubProvider:bob' : System.ComponentModel.Win32Exception: Unable to contact the global catalog server
            at Microsoft.SharePoint.Utilities.SPActiveDirectoryDomain.GetDirectorySearcher()
            at Microsoft.SharePoint.WebControls.PeopleEditor.SearchFromGC(SPActiveDirectoryDomain domain, String strFilter, String[] rgstrProp, Int32 nTimeout, Int32 nSizeLimit, SPUserCollection spUsers, ArrayList& rgResults)
            at Microsoft.SharePoint.Utilities.SPUserUtility.SearchAgainstAD(String input, SPActiveDirectoryDomain domainController, SPPrincipalType scopes, SPUserCollection usersContainer, Int32 maxCount, String customQuery, TimeSpan searchTimeout, Boolean& reachMaxCount)
            at Microsoft.SharePoint.Utilities.SPActiveDirectoryPrincipalResolver.SearchPrincipals(String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount)
            at Microsoft.SharePoint.Utilities.SPUtility.SearchPrincipalFromResolvers(List`1 resolvers, String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount, Dictionary`2 usersDict)

    The Provider is named as Authentication Provider for the Site Collection in question. As far as I can tell this is because Sharepoint is still trying to access ActiveDirectory rather than talking to the provider I'm asking it to use. My Sharepoint Central Administration section includes this:

        <membership>
          <providers>
            <add name="StubProvider" type="StubMembershipProvider.Provider, StubMembershipProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=5bd7e2498c3e1a03" />
          </providers>
        </membership>

    And also:

        <PeoplePickerWildcards>
          <clear />
          <add key="StubProvider" value="%" />
        </PeoplePickerWildcards>

    Is there a clear reason why this would not be accessible from the PeoplePicker, or why it is still trying to use ActiveDirectory? I've made sure I reset IIS and even restarted the server to see if either of those helped, but they made no difference.

    Read the article

  • Windows 2008 RemoteAPP client disconnects within a matter of minutes.

    - by Jeroen Wilke
    I'm having an odd problem with Windows 2008 TS, and remote applications specifically. The situation is as follows: TS idle timeout is disabled via GPO; TS terminates disconnected sessions after 1 hr (via GPO). My users can log on to the Terminal Server and get a full desktop, OR use RDP files that give access to a few remote applications. When a user connects to a full desktop, everything is fine and dandy: they will remain logged on indefinitely, and when they disconnect the session is terminated after an hour. However, when a user connects using a remote application link, the client seems to disconnect after only a few minutes of inactivity; when you click the window, the session reconnects. Event IDs on the TS server: 4779: "This event is generated when a user disconnects from an existing Terminal Services session, or when a user switches away from an existing desktop using Fast User Switching." 4778: "This event is generated when a user reconnects to an existing Terminal Services session, or when a user switches to an existing desktop using Fast User Switching." Users are connecting directly to 3389, not using a TS gateway at the moment. This behavior is consistent across the different clients that we have: full desktop is fine, RemoteApp constantly disconnects. The .rdp file used doesn't list any interesting parameters, aside from what application to launch and where to find it. Can someone explain to me how there can be a difference in behaviour between full desktop and RemoteApp, since essentially they use the exact same client? Regards, Jeroen

    Read the article

  • Prevent Network Printers Automatically being added to 'Devices and Printers' in Windows 7

    - by Ben
    I think my question is a duplicate of this one, but the original question was never properly answered (the steps described are for Windows XP). I am aware of the option to "Turn off Network Discovery" (under Control Panel > All Control Panel Items > Network and Sharing Center > Advanced sharing settings); I set this option (for both Home/Work and Private) but it doesn't seem to stop the printers getting added, and it has the side effect of preventing me from browsing the list of machines on the network (which I need). I've tried the Windows XP registry option, but it doesn't seem to make any difference:

        HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\NoNetCrawling

    I find it annoying having the printer list cluttered with printers from all over the office which I am never going to use (especially since lots of them no longer physically exist; users just haven't deleted them from their machines). This must be a real problem for people in massive offices with large numbers of printers, but I can't seem to find a lot of people complaining about it, which makes me think I'm missing something obvious. I don't really want to hack the firewall or turn off sharing completely; I still want to select and use network printers and file shares. Any ideas?

    Read the article

  • Configuring port forwarding for SSH - no response outside LAN [migrated]

    - by WinnieNicklaus
    I recently moved, and at the same time purchased a new router (Linksys E1200). Prior to the move, I had my old router set up to forward a port for SSH to servers on my LAN, and I was using DynDNS to manage the external IP address. Everything worked great. I moved and set up the new router (unfortunately, the old one is busted so I can't try things out with it), updated the DynDNS address, and attempted to restore my port forwarding settings. No joy. SSH connections time out, and pings go unanswered. But here's the weird part (i.e., key to the whole thing?): I can ping and SSH just fine from within this LAN. I'm not talking about the local 192.168.1.* addresses. I can actually SSH from a computer on my LAN to the DynDNS external address. It's only when the client is outside the LAN that connections are dropped. This surely suggests a particular point of failure, but I don't know enough to figure out what it is. I can't figure out why it would make a difference where the connections originate, unless there's a filter for "trusted" IP addresses, which is perhaps just restricted to my own. No settings have been touched on the servers, and I can't find any settings suggesting this on the router admin interface. I disabled the router's SPI firewall and "Filter anonymous traffic" setting to no avail. Has anyone heard of this behavior, and what can I do to get past it?
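
    Since the only failing case is a client outside the LAN, it's worth confirming from a genuinely external vantage point whether the port is reachable at all; an ISP blocking inbound SSH, for example, would produce exactly this split while NAT loopback from inside still works. A tiny reachability check to run from an outside host (hostname and port are placeholders):

        # Quick TCP reachability check; run from a host outside the LAN.
        # Hostname and port are placeholders.
        import socket
        import sys

        host, port = sys.argv[1], int(sys.argv[2])   # e.g. myhome.dyndns.org 22
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(5)
        try:
            sock.connect((host, port))
            banner = sock.recv(64)                   # sshd sends its banner first
            print("reachable:", banner.decode(errors="replace").strip())
        except (OSError, socket.timeout) as exc:
            print("not reachable:", exc)
        finally:
            sock.close()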

    Read the article

  • Virtual (ESXi4) Win 2k8 R2 server hangs when adding role(s)

    - by Holocryptic
    I'm trying to provision a 2k8r2 Enterprise server in ESXi4. The OS installation goes fine: VMware Tools, joining the domain, updates, all the basic stuff before you start adding Roles and Features. I've had this happen on two attempts already, and I'm not sure where the problem might be. I don't think it's hardware, because I have another 2k8r2 Standard server that's running fine. The only real difference is the install media. The server that's working was installed using a trial ISO and license. The one I'm having problems with is a full MAK installation. When I go to add a Role (the last case was Application Server) it gets all the way to "collecting installation results" before it hangs. CPU utilization in the vSphere client shows little spikes of activity with flatlines in between, but the whole console is locked up. The only way to release it is to power off and bring it back up. When you go to look at the added roles after bringing it back up, it shows that it is installed, but I don't trust that something didn't get wedged in all of that. The first install I did was with Thin Disk provisioning. The second attempt was with regular disk provisioning. In both cases 4GB of RAM, 2 vCPUs. The VMware host is an HP ProLiant DL380 G6, RAID-1 OS, RAID-5 data volume, 12 GB RAM. Has anyone else had this problem, or know where I should start poking around?

    Read the article

  • Read access to Active Directory property (uSNCreated)

    - by Tom Ligda
    I have an issue with read access to the uSNCreated property when doing LDAP searches. If I do an LDAP search with a user that is a member of the Domain Admins group (UserA), I can see the uSNCreated property for every user. The problem is that if I do an LDAP search with a user (UserB) that is not a member of the Domain Admins group, I can see the uSNCreated property for some users (UserGroupA) and not for others (UserGroupB). When I look at the users in UserGroupA and compare them to the users in UserGroupB, I see a crucial difference on the "Security" tab. The users in UserGroupA have "Include inheritable permissions from this object's parent" unchecked. The users in UserGroupB have that option checked. I also noticed that the users in UserGroupA are users that were created earlier. The users in UserGroupB are users created recently. It's difficult to quantify, but I estimate the boundary in creation time between the users in UserGroupA and UserGroupB is about 6 months ago. What can cause user creation to default to having that security property checked as opposed to unchecked? A while back (maybe around 6 months ago?) I changed the domain functional level from Windows Server 2003 to Windows Server 2008 R2. Would that have had this effect? (I can't exactly downgrade the domain functional level to test it out.) Is this security property actually the cause of the issue with read access to the uSNCreated property on LDAP searches? It seems correlated, but I'm not sure about causation. What I want in the end is for all authenticated users to have read access to the uSNCreated property for all users when doing an LDAP search. I would also be OK if I could grant read access for that property to an AD group. Then I can control access by adding members to the group.
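
    A hedged way to see the effective difference between the two groups is to bind as UserB with a standalone LDAP client and compare which entries actually return the attribute. A sketch using the ldap3 package; the server, base DN, and credentials are placeholders:

        # Bind as the non-admin user and see for which accounts uSNCreated is
        # actually returned. Server, base DN and credentials are placeholders.
        from ldap3 import ALL, SUBTREE, Connection, Server

        server = Server("dc01.example.com", get_info=ALL)
        conn = Connection(server, user="EXAMPLE\\userb", password="secret", auto_bind=True)

        conn.search("dc=example,dc=com",
                    "(&(objectClass=user)(objectCategory=person))",
                    search_scope=SUBTREE,
                    attributes=["sAMAccountName", "uSNCreated"])

        for entry in conn.entries:
            # Attributes the bound user cannot read are simply not returned.
            returned = "uSNCreated" in entry.entry_attributes
            print(entry.sAMAccountName, entry.uSNCreated if returned else "<not returned>")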

    Read the article

  • Win7 Aero disabled due to mirror drivers following TightVNC use, cannot re-enable

    - by me_and
    I recently installed TightVNC, and used it to access my desktop from my phone. When I did so, my desktop reported that Aero had been disabled due to an application being run that didn't support it. I now cannot re-enable Aero, despite having closed the TightVNC server (and indeed having rebooted a number of times since). Windows 7 troubleshooting reports the following: Close programs using mirror drivers To allow Aero effects to be displayed, close any programs that use mirror drivers (a type of display driver), such as Windows Remote Assistance and Windows Live Mesh. The troubleshooting system appears to attempt to automatically fix that, but fails. I can see nothing in the task manager that obviously uses mirror drivers; in particular, I can see nothing that looks like TightVNC in there. Starting and exiting TightVNC makes no difference. Google has turned up nothing that looks useful so far, either, although my Google-fu isn't great and I may just be searching for the wrong things. There's another question that reports the same problem, caused by LogMeIn rather than TightVNC, but the solution listed there doesn't work—TightVNC doesn't install LogMeIn mirror drivers, for obvious reasons.

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times, and I'm just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Big difference in time between i7 and Xeon? Etc., etc. Just need advice from people who've put together kick-a build clusters. We've got a majority of the normal things to speed up builds in place (pre-compiled headers, ccache, local gigabit connections between them, tons of RAM, etc.), so please just give advice on the best processor to use. And money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay EDIT: Although any advice IS welcome, please refrain from "Do this first" posts, as we're not planning on skimping on things like SSDs or maxed-out RAM. My personal system is an iMac quad-core i5 with 8GB of RAM. When I build our project locally, my processor floats around 99-100% the majority of the time, which makes me assume it is the bottleneck, even if you made everything else faster. My RAM, on the other hand, doesn't even get close to maxing out. It's also worth noting that I did research this, however every discussion I could find was primarily about gaming machines, which is obviously a different beast in usage. These machines won't even have monitors or anything but integrated graphics, since they have one purpose: build freakin' fast. (Hopefully.)

    Read the article

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our host of choice has a 6-drive configuration available with an LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256GB) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10? This would be the first time the provider has filled all six slots of this rackmount with SSDs, so they're a bit hesitant. I'm wondering if somebody else with this card has done something similar in a production environment. We do about 43GB of writes per day and currently use about 300GB of storage. The server acts as webserver, database, and image store for approx 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan and performance. The fallback option is 2x480GB SSDs in RAID 1 and another 2x1TB HDDs in RAID 1. The motivation behind this is that the server rental cost difference between 2xSSDs and 6xSSDs is minimal (compared to the overall cost of the rental). We do not have any special high-IOPS requirements. However, if the configuration is known to work, I don't see a good reason not to use it and not have to worry about having separate 'fast and small' and 'slow and large' disks.

    Read the article

  • How can I find the USB wireless adapter in the dmesg log file?

    - by AndreaNobili
    I am pretty new to Linux (Raspbian on a Raspberry Pi, but I don't think that makes a difference) and I have to install a USB wireless network adapter (the product is the TP-Link TL-WN725N, this one: http://www.tp-link.it/products/details/?model=TL-WN725N ). Now, I think this is not automatically recognized by my system, because if I execute the ifconfig command I obtain the following output:

        pi@raspberrypi ~ $ ifconfig
        eth0      Link encap:Ethernet  HWaddr b8:27:eb:2a:9f:b0
                  inet addr:192.168.1.8  Bcast:192.168.1.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:475 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:424 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:34195 (33.3 KiB)  TX bytes:89578 (87.4 KiB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    So it sees only my Ethernet network interface and not the wireless one. I was thinking of looking in dmesg, but I don't know what to look for or how to filter for it in the dmesg output. For example, with the following command I can see the lines of the dmesg log file related to my Ethernet port:

        pi@raspberrypi ~ $ cat /var/log/dmesg | grep -i eth
        [    3.177620] smsc95xx 1-1.1:1.0 eth0: register 'smsc95xx' at usb-bcm2708_usb-1.1, smsc95xx USB 2.0 Ethernet, b8:27:eb:2a:9f:b0
        [   18.030389] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup
        [   19.642167] smsc95xx 1-1.1:1.0 eth0: link up, 100Mbps, full-duplex, lpa 0x45E1

    But what should I search for to find the USB wireless adapter? Thanks

    Read the article

  • Different versions of iperf for windows give totally different results

    - by Albert Mata
    Measuring TCP throughput from a Windows client to a Solaris server:

        WXP SP3 with iperf 1.7.0 -- returns an average around 90 Mbit
        Same client, same server, but iperf 2.0.5 for Windows -- returns an average of 8.5 Mbit

    Similar discrepancies have been observed connecting to other servers (W2008, W2003). It's difficult to come to any conclusions when different versions of the same tool provide vastly different results. Example below:

        C:\temp>iperf -v        (from iperf.fr)
        iperf version 2.0.5 (08 Jul 2010) pthreads

        C:\temp>iperf -c solaris10
        Client connecting to solaris10, TCP port 5001
        TCP window size: 64.0 KByte (default)
        [  3] local 10.172.181.159 port 2124 connected with 10.172.180.209 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.2 sec  10.6 MBytes  8.74 Mbits/sec

    Abysmal performance. But now I test from the same host (Windows XP SP3 32-bit and 100Mbit) to the same server (Solaris 10/sparc 64-bit and 1Gbit, running iperf 2.0.5 with a default window of 48k) with the old iperf:

        C:\temp1>iperf -v
        iperf version 1.7.0 (13 Mar 2003) win32 threads

        C:\temp1>iperf.exe -c solaris10 -w64k
        Client connecting to solaris10, TCP port 5001
        TCP window size: 64.0 KByte
        [1208] local 10.172.181.159 port 2128 connected with 10.172.180.209 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [1208]  0.0-10.0 sec   112 MBytes  94.0 Mbits/sec

    So one iperf with a 64k window says 8.74 Mbit and the old iperf with the same window size says 94.0 Mbit. These results are constant through repeated tests. From my testing, launching iperf (old) with window size "x" and iperf (new) with window size "x", instead of producing the same or very close results, produces totally different results. The only difference I see is the old one compiled as win32 threads vs. pthreads, but parallelism (-P 10) appears to work in both. Does anyone have a clue, or can anyone recommend a tool that gives results I can trust? EDIT: Looking at traces from the (old) iperf, it sets the TCP Window Scale flag to 3 in the SYN packet; when I run the (new) iperf this is set to 0 in the initial packet. A quick analysis of the window size through the exchange shows the (old) iperf moving back and forth but mostly at 32k, while the (new) iperf mostly keeps at 64k. Maybe it will help somebody to connect the dots.

    Read the article

  • Why can't my hard disk boot from the BIOS?

    - by Mario
    I installed a new SATA DVD burner. When I turned on the machine (Windows 7) it didn't boot. It can boot from a Lubuntu CD. There is an option in Lubuntu to boot from the first hard disk; if I select it, the machine boots normally into Windows 7. So from the CD I can boot, but not from the BIOS. I checked all the options more than once: boot from HD, not boot from removable, boot from USB, boot from optical. The order of the boot sequence is HD then DVD. I tried booting only with the HD; I disconnected both DVDs. I even tried recovery of the MBR: bootsect, bootrec, fixmbr, buildbcd, nt60, etc. So the question is: is there a reason for this? What's the difference between booting from the BIOS (as I think of it) and booting via the DVD? The BIOS is Intel; it shows BIOS codes in the bottom right corner, and it stays at 5A for a while. 5A is "Resetting PATA/SATA bus and all devices".

    Read the article
