
  • Can't change pivot table's Access data source - bug in Excel 2000 SP3?

    - by Ron West
    I have a set of Excel 2000 SP3 worksheets with Pivot Tables that get their data from an Access 2000 SP3 database created by a contractor who has since left our company. Unfortunately, he did all his work in his private area on the company (Novell) network, and now that he has left us, that drive spec has been deleted and is invalid. Our IT Service Desk people were able to restore the database files to our network area, but we now have to re-link everything to point to our group area instead of the now-nonexistent private area.

    If I follow the advice given elsewhere on this site (open the wizard, click 'Back' to get to 'Step 2 of 3', click 'Get Data...'), I get a message that the old filespec is an invalid path and that I should check that the path name is valid and that I am connected to the server on which the file resides. I then click OK and get a Login dialog with a 'Database...' button on the right. I click this and get a 'Select Database' dialog which lets me choose the appropriate database in its correct new location. I then click OK, which takes me back to the 'Login' screen. I can confirm that it has accepted the new location by clicking 'Database...' again: the NEW location is still shown.

    So far so good - but if I then click OK I get two unhelpful messages. First, one saying that Excel "Could not use '|'; file already in use." - although no other files are in use. Clicking OK takes me back to the 'Login' dialog. Clicking OK again gives me the same message as before, telling me that the OLD filespec is invalid (as if I hadn't changed anything) - but clicking the 'Database...' button shows that the correct (NEW) database location is still selected.

    Can anyone tell me a way of using VBA to change the link information without having to spend hours fighting the PivotTable Wizard - preferably something like the way you update an Access TableDef:

        db.TableDefs(strLinkName).Connect = strNewLink
        db.TableDefs(strLinkName).RefreshLink

    Thanks!
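
    In case it helps, a minimal, untested VBA sketch of that idea: loop over the workbook's PivotCache objects and rewrite each connection string. The connection string below is a placeholder - inspect a real pc.Connection in the Immediate window first and change only the DBQ path portion:

        Sub RelinkPivotCaches()
            Dim pc As PivotCache
            For Each pc In ThisWorkbook.PivotCaches
                ' Hypothetical connection string - keep your real driver
                ' settings and swap in the new UNC path to the .mdb:
                pc.Connection = "ODBC;DSN=MS Access Database;" & _
                    "DBQ=\\server\groupshare\restored.mdb;DriverId=25;FIL=MS Access;"
                pc.Refresh
            Next pc
        End Sub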

  • Computer POSTs and draws the BIOS screens very slowly - Motherboard issue?

    - by Ssvarc
    I have a desktop that refuses to boot into Windows. I ran Hitachi DFT and the HD came back OK. I then ran Memtest86+ and it took hours for the test to run - after 8+ hours it was only up to test number 6. I aborted and ran Memtest86; it ran at basically the same speed. I aborted and went to look at the BIOS settings. The computer runs slowly at POST: it takes a long time for the keyboard to be recognized, etc., and the BIOS settings screen takes time to be (slowly) drawn. What could be causing such behavior? EDIT: I gave the computer back a while ago without ever discovering the cause, so I'm closing the question.

  • Connect two subnets without a router

    - by Shcheklein
    I have two Comcast routers with a different subnet on each; every subnet contains 5 static IPs. Two questions: 1) Are there any problems if both routers and the machines from both subnets are connected to one switch? Security isn't a concern here; I need to know if there are performance or other problems. 2) Is it possible to make machines from different subnets see each other if they are all connected to one switch - some static routing, ARP entries, or something else? I just want to avoid configuring a second Ethernet adapter on each machine, a third router, or the like. And I need to connect these subnets via the high-speed local network.
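
    One hedged sketch of the static-routing idea, with made-up addresses (say our NIC lives in 96.120.4.0/29 and the other static block is 96.120.8.0/29): tell each machine that the other subnet is directly reachable on the local interface, so traffic stops hairpinning through the Comcast gateways. It needs to be done on the machines on both sides:

        # Placeholder subnet/interface - substitute the real ones.
        # "dev eth0" marks 96.120.8.0/29 as on-link, so the kernel ARPs
        # for those hosts directly instead of using the default gateway:
        sudo ip route add 96.120.8.0/29 dev eth0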

  • Scripts on UNC paths take very long to run

    - by Álvaro G. Vicario
    I have several scripts on UNC paths (from Windows batch files to PHP scripts). No matter how I run them (double-click in Explorer, my editor's run command menu, or the Windows command prompt) they take really long to start running (like 14 seconds). Once they get started they run normally. This doesn't happen if I run them from mapped drives. I'm using Windows XP Professional SP3 inside an Active Directory domain, and the files are hosted on a Windows Server box (not sure about the version; it's an HP dedicated file server with a bundled OS). Why does this happen? Is there a way to speed things up while using UNC paths?

  • Which is better for running Ubuntu and other Linux OSes, a Chromebook or a Windows laptop, and why? [on hold]

    - by Serge
    I'm learning programming and would like to switch to a Linux OS, perhaps Ubuntu, to continue doing so. My current machine is getting pretty old and slow, Windows is my least favorite option for production work, and I can manage to get something new right around the price range of the nicest Chromebook on the market right now. However, I have compared the specs of the HP Chromebook 14 with those of regular PC laptops that cost roughly the same, and the latter consistently have approximately the same and sometimes higher specs (processor speed, for example). Yet using Chromebooks for this purpose is pretty widespread nowadays. Is this because they were initially built for a Linux OS - and is it really THAT crucial - or are there other major factors at play here?

  • mod_cache serving the wrong content

    - by J. Pablo Fernández
    I'm trying to use mod_disk_cache to speed up a web site running on WordPress. Whenever I enable it with "CacheEnable disk /" (the rest being the stock Ubuntu configuration) I start to get the wrong results. The main page is fine, but when I go to a specific post I get an RSS feed instead - as if the cache were returning the wrong content. I've disabled my RewriteRules, as it seems mod_cache doesn't work with them. I'm not even sure where to start debugging such a thing. Any ideas?
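
    For what it's worth, a hedged debugging sketch for the stock Apache 2.2 setup (paths are the Ubuntu defaults; none of this fixes anything by itself, it just makes the cache's decisions visible in the error log so you can see which cache key each response is served from):

        CacheRoot /var/cache/apache2/mod_disk_cache
        CacheEnable disk /
        CacheDirLevels 5
        CacheDirLength 3
        # Crank up logging while diagnosing; in Apache 2.2 mod_cache
        # logs its hit/miss/store decisions at debug level:
        LogLevel debug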

  • "tar -cfz" versus "tar cf - | gzip": are they different? (or how to improve a backup)

    - by I'm Dario
    I want to speed up my backup, which is done with "tar -cfz", the common way to do it. But my backed-up files grow day by day, so it keeps getting slower. I was thinking of taking advantage of the several cores available in my server, and I was wondering whether there is any difference between doing the backup with "tar -cfz" and piping tar into gzip ("tar cf - | gzip"). I guess there isn't any difference, because the first spawns two processes (tar and gzip), much like the pipe does. If there is no difference, do you know any good alternative for doing this, without going incremental? I'm looking at pigz too and it looks fine.
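
    For reference, a sketch of the pigz route (the thread count and paths are placeholders; both forms are worth double-checking against your tar/pigz versions):

        # Pipe tar through pigz, using 8 threads for the compression:
        tar cf - /data | pigz -p 8 > backup.tar.gz

        # Or let GNU tar invoke the compressor itself:
        tar --use-compress-program=pigz -cf backup.tar.gz /data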

  • Unable to connect to UNC share with WindowsIdentity.Impersonate, but works fine using LogonUser

    - by Rob
    Hopefully I'm not missing something obvious here, but I have a class that needs to create some directories on a UNC share and then move files to the new directory. When we connect using LogonUser things work fine with no errors, but when we try to use the user indicated by Integrated Windows authentication we run into problems. Here's some working and non-working code to give you an idea what is going on. The following works and logs the requested information:

        [DllImport("advapi32.dll", SetLastError = true)]
        private static extern bool LogonUser(string lpszUsername, string lpszDomain,
            string lpszPassword, int dwLogonType, int dwLogonProvider, out IntPtr phToken);

        [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
        private static extern bool CloseHandle(IntPtr handle);

        IntPtr token;
        WindowsIdentity wi;
        if (LogonUser("user", "network", "password",
                      8,  // LOGON32_LOGON_NETWORK_CLEARTEXT
                      0,  // LOGON32_PROVIDER_DEFAULT
                      out token))
        {
            wi = new WindowsIdentity(token);
            WindowsImpersonationContext wic = wi.Impersonate();
            Logging.LogMessage(System.Security.Principal.WindowsIdentity.GetCurrent().Name);
            Logging.LogMessage(path);
            DirectoryInfo info = new DirectoryInfo(path);
            Logging.LogMessage(info.Exists.ToString());
            Logging.LogMessage(info.Name);
            Logging.LogMessage("LastAccessTime:" + info.LastAccessTime.ToString());
            Logging.LogMessage("LastWriteTime:" + info.LastWriteTime.ToString());
            wic.Undo();
            CloseHandle(token);
        }

    The following fails with an error message indicating the network name is not available, although the correct user name is reported by GetCurrent().Name:

        WindowsIdentity identity = (WindowsIdentity)HttpContext.Current.User.Identity;
        using (identity.Impersonate())
        {
            Logging.LogMessage(System.Security.Principal.WindowsIdentity.GetCurrent().Name);
            Logging.LogMessage(path);
            DirectoryInfo info = new DirectoryInfo(path);
            Logging.LogMessage(info.Exists.ToString());
            Logging.LogMessage(info.Name);
            Logging.LogMessage("LastAccessTime:" + info.LastAccessTime.ToString());
            Logging.LogMessage("LastWriteTime:" + info.LastWriteTime.ToString());
        }

  • Dedicated GPU in Dell PowerEdge C1100

    - by Eli Gundry
    We recently purchased a Dell PowerEdge C1100 off lease with the intention of using it for graphics processing. We installed an AMD HD 7000 series GPU in it that runs off of board power, and it sends video to the display. That said, the video is very choppy, leading us to believe that the onboard video is doing all the processing and sending it to the card. Is there any way to either disable the onboard VGA on this server or tell the OS to use only the dedicated card? More info: The server is running RHEL 6.4. The graphics card is running the proprietary AMD drivers. The video card only works in the OS and does not show the BIOS on boot (we know it's impossible to change this). Any ideas, guys? Update: We are now thinking that the GPU is doing the graphics processing, but not working at the full speed of the PCI bus. Which is odd, because it is an x16 slot - but it is probably optimized for a RAID card (if that makes any sense). Is there any way to remedy the choppy graphics on this server?

  • Do connection string DNS lookups get cached?

    - by joshcomley
    Suppose the following: I have a database set up on database.mywebsite.com, which resolves to IP 111.111.1.1, running from a local DNS server on our network. I have countless ASP, ASP.NET and WinForms applications that use a connection string utilising database.mywebsite.com as the server name, all running from the internal network. Then the box running the database dies, and I switch over to a new box with an IP of 222.222.2.2. So, I update the DNS for database.mywebsite.com to point to 222.222.2.2. Will all the applications and computers running them have cached the old resolved IP address? I'm assuming they will have. Any suggestions along the lines of "don't have your IP change each time you switch box" are not too welcome as I cannot control this aspect of the situation, unfortunately. We are currently using the machine name of the box, which changes every time it dies and all apps etc. have to be updated with the new machine name. It hurts.

  • Idle hard disk makes noise

    - by ULTRA_POROV
    It sounds like a fan or something. I checked: I stopped all the fans (CPU, video, PSU) and the noise was still there. I read online that it might be the motor or something. I have put a great deal of effort into making my PC quiet: installed a quiet PSU and CPU fan, reduced the fan speed of my video card, bought an SSD... But my data drive makes this noise. I would never have expected that. Do all hard disks make this kind of noise? I guess most people don't notice it because of the other fans in their systems; I, however, can hear it quite clearly because all my other fans are almost silent. So should I get a new one, or should I just live with it, considering that I might end up with a drive that also makes this noise?

  • Reimage PC: Myth or Fact for speeding up a slow PC?

    - by sunpech
    I have a 4-5 year old PC running Windows XP for software development at work. It struggles to run all the development tools I need at the same time. Management feels I need to reimage my computer to "speed it up". The last time it was imaged was about 3 years ago. What resources (books, websites, blogs, articles, etc.) are out there that support or debunk the well-known belief that reimaging an old PC running Windows XP will make it faster once again? A resource I remember reading is from Lifehacker.com: Windows Maintenance Tips: The Good, Bad And Useless.

  • Run one virtual machine on a Linux server + standard Linux functions

    - by fistameeny
    Hi, I am looking for a way to set up a Linux server (running Ubuntu Server) that uses Samba for file sharing while also hosting a Windows virtual machine (in this case Windows Small Business Server 2003, which in turn hosts SQL Server Express; Exchange won't be used on it). I would like the Linux server to serve the files over Samba and host the virtual machine. This obviously rules out ESXi, as it couldn't do Samba at the same time. What would be the next best solution that gives reasonable speed? VMware Server 2.0, VirtualBox, Xen? There will be 10-15 users accessing the Samba shares and the SQL Express virtual machine. Matt
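
    Of the three, a hedged sketch of what the VirtualBox route might look like on the Samba host (the VM name, memory, disk size and bridged NIC are all placeholders; VBoxManage/VBoxHeadless ship with the virtualbox packages):

        # Create and register the VM, bridge it onto the LAN:
        VBoxManage createvm --name sbs2003 --ostype Windows2003 --register
        VBoxManage modifyvm sbs2003 --memory 2048 --nic1 bridged --bridgeadapter1 eth0

        # Give it a disk (size in MB) and boot it headless - no X needed:
        VBoxManage createhd --filename ~/vms/sbs2003.vdi --size 80000
        VBoxManage storagectl sbs2003 --name IDE --add ide
        VBoxManage storageattach sbs2003 --storagectl IDE --port 0 --device 0 \
            --type hdd --medium ~/vms/sbs2003.vdi
        VBoxHeadless --startvm sbs2003 &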

  • Ubuntu: Take actions when system temperature gets too high

    - by Josh
    One of the CPU fans on my Compaq Presario laptop running Ubuntu 9.10 seems to have bitten the dust. The fan is deep within the case, and since I intend to replace the laptop in the next 6 months it's not worth replacing. I have the laptop on a cooling pad and most of the time the system is fine, with CPU temps around 90°-110°F. Occasionally, however, I'm seeing random lockups which I believe are due to the system overheating. How can I configure the system to: 1) lower the CPU speed when the temperature reaches a certain level (e.g. 110°F)? 2) shut down the system when the temperature reaches a critical level? (And what would that be? 130°F?)
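
    A hedged sketch of both actions as a cron-able script; the sysfs path, the thresholds (43000/54000 millidegrees Celsius, roughly 110°F/130°F) and the cpufrequtils tooling are assumptions to check against your hardware first:

        #!/bin/bash
        # Read the current CPU temperature in millidegrees Celsius:
        TEMP=$(cat /sys/class/thermal/thermal_zone0/temp)

        if [ "$TEMP" -ge 54000 ]; then          # ~130 F: critical
            shutdown -h now "CPU overheating"
        elif [ "$TEMP" -ge 43000 ]; then        # ~110 F: throttle
            # Pin the CPU to its lowest speed until things cool off:
            cpufreq-set -g powersave
        fi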

  • CSS/JavaScript/hacking: Detect :visited styling on a link *without* checking it directly OR do it faster

    - by Sai Emrys
    This is for research purposes on http://cssfingerprint.com. Consider the following code:

        <style>
        div.csshistory a { display: none; color: #00ff00; }
        div.csshistory a:visited { display: inline; color: #ff0000; }
        </style>
        <div id="batch" class="csshistory">
          <a id="1" href="http://foo.com">anything you want here</a>
          <a id="2" href="http://bar.com">anything you want here</a>
          [etc * ~2000]
        </div>

    1. My goal is to detect whether foo has been rendered using the :visited styling. I want to detect whether foo.com is visited without directly looking at $('1').getComputedStyle (or in Internet Explorer, currentStyle), or any other direct method on that element. The purpose of this is to get around a potential browser restriction that would prevent direct inspection of the style of visited links. For instance, maybe you can put a sub-element in the <a> tag, or check the styling of the text directly; etc. Any method that does not directly or indirectly rely on $('1').anything is acceptable. Doing something clever with the child or parent is probably necessary. Note that for the purposes of this point only, the scenario is that the browser will lie to JavaScript about all properties of the <a> element (but not others), and that it will only render color: in :visited. Therefore, methods that rely on e.g. text size or background-image will not meet this requirement.

    2. I want to improve the speed of my current scraping methods. The majority of time (at least with the jQuery method in Firefox) is spent on document.body.appendChild(batch), so finding a way to improve that call would probably be most effective. See http://cssfingerprint.com/about and http://cssfingerprint.com/results for current speed test results. The methods I am currently using can be seen at http://github.com/saizai/cssfingerprint/blob/master/public/javascripts/history_scrape.js. To summarize for tl;dr, they are: set color or display on :visited per above, and check each one directly with getComputedStyle; or put the ID of the link (plus a space) inside the <a> tag and, using jQuery's :visible selector, extract only the visible text (= the visited link IDs).

    FWIW, I'm a white hat, and I'm doing this in consultation with the EFF and some other fairly well known security researchers. If you contribute a new method or speedup, you'll get thanked at http://cssfingerprint.com/about (if you want to be :-P), and potentially in a future published paper.

    ETA: The bounty will be awarded only for suggestions that can, on Firefox, avoid the hypothetical restriction described in point 1 above, or perform at least 10% faster, on any browser for which I have sufficient current data, than my best performing methods listed in the graph at http://cssfingerprint.com/about. In case more than one suggestion fits either criterion, the one that does best wins.
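
    To make point 1 concrete, an untested sketch of the sub-element idea: read the computed color of a child <span> that inherits from the link, never touching the <a> itself. Whether any browser actually leaks :visited through an inheriting child is exactly the open question; the rgb value below corresponds to the #ff0000 in the CSS above:

        function visitedViaChild(link) {
            // Probe child that inherits the link's rendered color:
            var probe = document.createElement('span');
            probe.style.color = 'inherit';
            probe.appendChild(document.createTextNode('x'));
            link.appendChild(probe);
            var color = window.getComputedStyle(probe, null)
                              .getPropertyValue('color');
            link.removeChild(probe);
            return color === 'rgb(255, 0, 0)'; // the :visited color above
        }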

  • Cisco ASA 5505 locks up / unresponsive

    - by Chris
    We have a Cisco ASA 5505 that's new (in service for about 2 months), running 7.2(4) software. Every day around 10 a.m. it locks up for approximately 10 minutes. We're monitoring it via SNMP with STG, and SNMP doesn't respond during that time. There's no output from 'show crash'. Internet connectivity is also dropped. I'm wondering if anyone else has seen this and what the fix might be. Currently we're looking at upgrading the software, but we'd need a memory upgrade for that. We've forced the speed and duplex on the internal and external interfaces, but the problem still occurs. On the internal LAN it's connected to a Netgear 724 GigE switch.

  • OpenVPN bandwidth restrictions and CPU power needed

    - by user197664
    In OpenVPN, is there a way to set a maximum data and speed limit per user? Say user "reptar" is abusing the VPN and I wanted to limit his/her speed and/or data; how would one go about doing this? Would I need to know the IP address of the abuser? Also, I have seen articles around the internet about turning a Raspberry Pi into a VPN server. If I did such a thing, how many users would the device be able to handle at a given time? I believe it has 512 MB of RAM and is clocked at around 700 MHz.
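
    One hedged approach for the speed part (the tunnel IP, interface and rates are placeholders): pin "reptar" to a known tunnel address with a ccd file (ifconfig-push), then rate-limit that address on the server with tc. Note this shapes server-to-client traffic only; the reverse direction would need a matching ingress rule:

        # Give the abuser's tunnel IP its own HTB class capped at 1 Mbit:
        tc qdisc add dev tun0 root handle 1: htb default 10
        tc class add dev tun0 parent 1: classid 1:10 htb rate 100mbit
        tc class add dev tun0 parent 1: classid 1:20 htb rate 1mbit ceil 1mbit
        tc filter add dev tun0 parent 1: protocol ip prio 1 u32 \
            match ip dst 10.8.0.50/32 flowid 1:20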

  • Java Playing Cards Game Framework

    - by isme
    My friends and I at uni love playing Shithead into the wee hours. But soon we graduate and will leave town, so we probably won't get together for a game for a while. I want to develop a Java app we can use to play Shithead and our other favorites over a network. An app like this already exists, but it is ugly, buggy, and does not support our house rules. The source is available, but is such a mess that I would really rather start from scratch than try to refactor it! Building my game on some standard playing-card API or framework, if such a thing exists, would be better than starting from scratch. The answer to a similar SO question was to use the JPC-API, which allegedly provides basic playing-card services and rendering. But this SourceForge project currently makes no files or source code available! Is there an alternative, or somewhere else to find this framework? Soon I will need help with the following as well: lobby services (finding other players); a gaming network protocol (to communicate moves between players); game theory (to write the computer opponent); winning-condition detection; game UI development.
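
    If no framework turns up, the card model itself is small enough to write by hand; a minimal, hedged Java starting point (plain Java SE, no framework assumed) that the Shithead rules and networking could then build on:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        enum Suit { CLUBS, DIAMONDS, HEARTS, SPADES }

        enum Rank { TWO, THREE, FOUR, FIVE, SIX, SEVEN, EIGHT,
                    NINE, TEN, JACK, QUEEN, KING, ACE }

        final class Card {
            final Rank rank;
            final Suit suit;
            Card(Rank rank, Suit suit) { this.rank = rank; this.suit = suit; }
            public String toString() { return rank + " of " + suit; }
        }

        final class Deck {
            private final List<Card> cards = new ArrayList<Card>();

            Deck() {
                // One of each card, then shuffle:
                for (Suit s : Suit.values())
                    for (Rank r : Rank.values())
                        cards.add(new Card(r, s));
                Collections.shuffle(cards);
            }

            Card deal() { return cards.remove(cards.size() - 1); }
            boolean isEmpty() { return cards.isEmpty(); }
        }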

  • Optimized Publish/Subscribe JMS Broker Cluster and Conflicting Posts on Stack Overflow for the Answer

    - by Gene
    Hi, I am looking to build a publish/subscribe distributed messaging framework that can manage huge volumes of message traffic with some intelligence at the broker level. I don't know if there's a topology name that describes this, but this is the model I'm going after.

    EXAMPLE MODEL A: There are two running message brokers (ideally all on localhost if possible, for easier demoing): Broker-A and Broker-B. Each broker has 2 listeners and 1 publisher:

        [subscriber A1, subscriber A2, publisher A1] <-- BrokerA <-- BrokerB <-- [publisher B1, subscriber B1, subscriber B2]

    IF a message-X is published to Broker-A and there are no subscribers for it among the listeners on Broker-B (via criteria in message selectors or broker routing rules), then that message-X will never be published to Broker-B. ELSE, Broker-A will publish the message to Broker-B, where one of the Broker-B listeners/subscribers/services is expecting that message based on the subscription criteria.

    Is clustering the correct approach? At first, I concluded that the "broker clustering" concept is what I needed to support this. However, as I have come to understand it, the typical use of clustering entails either message redundancy across all brokers, or the Competing Consumers pattern - and neither of these satisfies the requirement in EXAMPLE MODEL A.

    What is the correct approach? My question is: does anyone know of a JMS implementation that supports the model I described? I scanned through all the Stack Overflow post titles for the search "JMS and Cluster" and found these two informative, but seemingly conflicting, posts. One says EXAMPLE MODEL A is/should be implicitly supported: http://stackoverflow.com/questions/2255816/jms-consumer-with-activemq-network-of-brokers - "this means you pick a broker, connect to it, and let the broker network sort it out amongst themselves. In theory." The other says EXAMPLE MODEL A is NOT supported: http://stackoverflow.com/questions/2017520/how-does-a-jms-topic-subscriber-in-a-clustered-application-server-recieve-message - "All the instances of PropertiesSubscriber running on different app servers WILL get that message."

    Any suggestions would be greatly appreciated. Thanks very much for reading my post, Gene
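
    For what it's worth, ActiveMQ's "network of brokers" is demand-forwarding: a broker passes a message to a peer only while the peer holds a matching active subscription, which is what EXAMPLE MODEL A asks for. A hedged configuration sketch (broker names, host and ports are placeholders):

        <broker brokerName="brokerA" xmlns="http://activemq.apache.org/schema/core">
          <networkConnectors>
            <!-- Forward to brokerB only on demand; duplex covers B->A too: -->
            <networkConnector name="a-to-b"
                              uri="static:(tcp://brokerB-host:61616)"
                              duplex="true"/>
          </networkConnectors>
          <transportConnectors>
            <transportConnector uri="tcp://0.0.0.0:61616"/>
          </transportConnectors>
        </broker>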

  • Unable to mount portable hard drive on Ubuntu

    - by VoY
    My portable hard drive (a WD My Passport), which used to work correctly, no longer automounts on my Ubuntu system. It does work on a Windows machine, and even when plugged into a WD HD TV, which is a Linux-based device. There's one NTFS partition spanning the whole drive. When I plug the disk in, I see the following in dmesg:

        [269259.504631] usb 1-2.2: new high speed USB device using ehci_hcd and address 20
        [269259.604674] usb 1-2.2: configuration #1 chosen from 1 choice

    However, it does not mount in GNOME and I don't see it when I type "sudo fdisk -l". Any suggestions as to why this might be? I repaired the partition using chkdsk on Windows, so the issue is probably not filesystem-related.
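
    A hedged diagnostic sketch: the dmesg lines above show USB enumeration but no SCSI-disk attach, so the first thing worth checking is whether usb-storage ever binds to the drive (device names such as sdb are guesses):

        lsusb                        # is the Passport enumerated at all?
        sudo modprobe usb-storage    # make sure the mass-storage driver is loaded
        dmesg | tail -20             # look for "Attached SCSI disk" / "sdb" lines
        ls /dev/sd*                  # the partition should appear as e.g. /dev/sdb1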

  • SATA Backward compatibility? [closed]

    - by Fladur
    Possible Duplicate: Can I connect a SATA-II hard drive to a SATA-I connection?

    Hello everyone. One of my systems is really old, but I still use it. Right now the HDD is an old IDE drive that is starting to fail (it has too many damaged sectors) and I'm planning to replace it. The motherboard supports Serial ATA 1 and I have a really nice offer on a SATA 2 drive. Can I use that drive on a SATA 1 interface (obviously with reduced speed), just like a USB 2.0 device on a 1.0 port? Thanks in advance.

  • Choosing between cloud (Cloudfoundry) and virtual servers - for developers

    - by Mike Z
    I just came across some articles on how to set up your own cloud using Cloudfoundry and Ubuntu, and it got me thinking about choosing our infrastructure: if we want to use our own servers, what's the advantage of a cloud on top of virtual servers versus just using virtual servers and a VPN? If we develop for the cloud now, we can later move to a cloud provider quickly if we need to; but other than that, what are the advantages and disadvantages of a private cloud in these areas: speed of development, testing, and deployment; server management; security? Will the extra layer (the cloud) take a hit on server performance, and how big a hit? Any other advantages/disadvantages?

  • Drive configuration for 5 large databases

    - by Mr. Flibble
    I've got 5 databases, each 300 GB, currently on a RAID 5 array consisting of 5 drives. All the databases are used heavily and at the same time, so drive speed is an issue. Would I see better performance if I got rid of the RAID 5 configuration and just put each database on a separate drive? The redundancy provided by RAID 5 is not necessary, due to mirroring elsewhere. Will the server then be able to perform reads/writes to the different database drives in parallel - more so, at least, than when they're in RAID? This is all on Windows 2003 / SQL 2008.

  • .NET Framework 4.0 installation is very slow

    - by Dimitri C.
    On my Windows Vista machine it takes a full 12 minutes to install the .NET Framework 4.0. a) Is this normal? b) If not, can something be done about it? The reason I'm concerned about the speed is that it slows down the testing of our product installer considerably. Testing an installer is time-consuming already, but this new .NET Framework installer makes it almost unworkable. Detail: I did the test on a clean Vista install inside a VirtualBox virtual machine. This setup does not show any performance issues in other situations. I tried both dotNetFx40_Full_x86_x64.exe and dotNetFx40_Client_x86_x64.exe. They both take approximately the same time to install.

  • Netgear FVX538v2 slow when connected to Canoga Perkins N525 ETSU

    - by Doomloard
    First of all, thank you in advance for helping me. My issue: the old network admin found that when he connected the firewall and the ETSU together, the throughput went down to less than 1 megabit per second. His fix was to add a D-Link router between the firewall and the ETSU, which sped things up to 5 megabits per second. Now my boss wants a cleaner, more proper solution if possible. I have checked all the settings on the Netgear and it does not seem to be a settings issue. If anyone can help, that would be great.
