Search Results

Search found 61830 results on 2474 pages for 'efficient time use'.


  • At what point does asynchronous reading of disk I/O become more efficient than synchronous?

    - by blesh
    Assuming there is some bit of code that reads files for multiple consumers, and the files are of any arbitrary size: at what size does it become more efficient to read the file asynchronously? Or, to put it another way, how small must a file be for it to be faster just to read it synchronously? I've noticed (and perhaps I'm incorrect) that when reading very small files, it takes longer to read them asynchronously than synchronously (in particular with .NET). I'm assuming this has to do with setup time for things like I/O Completion Ports, threads, etc. Is there any rule of thumb to help out here? Or is it dependent on the system and the environment?
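    There is no universal cutoff; the break-even point depends on the OS, the runtime's async machinery, and the storage, but it is easy to measure. The question is .NET-specific, but the fixed-overhead effect can be sketched in a few lines of Python as an illustrative analogue (assuming the third-party aiofiles package; the file path and iteration count are placeholders):

        import asyncio
        import time
        import aiofiles  # third-party: pip install aiofiles

        PATH = "test.bin"  # placeholder: point this at files of varying sizes
        N = 1000           # reads per timing run

        def sync_reads():
            for _ in range(N):
                with open(PATH, "rb") as f:
                    f.read()

        async def async_reads():
            for _ in range(N):
                async with aiofiles.open(PATH, "rb") as f:
                    await f.read()

        t0 = time.perf_counter()
        sync_reads()
        t1 = time.perf_counter()
        asyncio.run(async_reads())
        t2 = time.perf_counter()
        print("sync:  %8.1f us/read" % ((t1 - t0) / N * 1e6))
        print("async: %8.1f us/read" % ((t2 - t1) / N * 1e6))

    Run against files of increasing size, the per-read gap shrinks; the size where the two curves cross is the break-even point for that particular system.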

    Read the article

  • How do I break down an NSTimeInterval into years, months, days, hours, minutes and seconds on iPhone?

    - by willc2
    I have a time interval that spans years and I want all the time components from years down to seconds. My first thought is to integer-divide the time interval by the seconds in a year, subtract that from a running total of seconds, divide that by the seconds in a month, subtract that from the running total, and so on. That just seems convoluted, and I've read that whenever you are doing something that looks convoluted, there is probably a built-in method. Is there? I integrated Alex's 2nd method into my code. It's in a method called by a UIDatePicker in my interface:

        NSDate *now = [NSDate date];
        NSDate *then = self.datePicker.date;
        NSTimeInterval howLong = [now timeIntervalSinceDate:then];
        NSDate *date = [NSDate dateWithTimeIntervalSince1970:howLong];
        NSString *dateStr = [date description];
        const char *dateStrPtr = [dateStr UTF8String];
        int year, month, day, hour, minute, sec;
        sscanf(dateStrPtr, "%d-%d-%d %d:%d:%d", &year, &month, &day, &hour, &minute, &sec);
        year -= 1970;
        NSLog(@"%d years\n%d months\n%d days\n%d hours\n%d minutes\n%d seconds", year, month, day, hour, minute, sec);

    When I set the date picker to a date 1 year and 1 day in the past, I get:

        1 years
        1 months
        1 days
        16 hours
        0 minutes
        20 seconds

    which is 1 month and 16 hours off. If I set the date picker to 1 day in the past, I am off by the same amount. Update: I have an app that calculates your age in years, given your birthday (set from a UIDatePicker), yet it was often off. This proves there was an inaccuracy, but I can't figure out where it comes from. Can you?
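    The symptoms described are characteristic of treating a duration as a calendar date: dateWithTimeIntervalSince1970: re-anchors the interval at 1970-01-01, so the 1-based month and day fields of the date string (and the timezone used by -description) leak into the result. The calendar-aware route on iPhone is NSCalendar's components:fromDate:toDate:options:. The same idea, sketched in Python with the third-party dateutil package purely for illustration (the dates are made-up stand-ins for the picker values):

        from datetime import datetime
        from dateutil.relativedelta import relativedelta  # pip install python-dateutil

        then = datetime(2009, 4, 1, 12, 0, 0)   # hypothetical picker date
        now = datetime(2010, 4, 2, 12, 0, 20)   # 1 year, 1 day, 20 seconds later

        # relativedelta diffs against the real calendar, so month lengths,
        # leap years and 1-based date fields are all handled correctly.
        d = relativedelta(now, then)
        print("%d years %d months %d days %d hours %d minutes %d seconds"
              % (d.years, d.months, d.days, d.hours, d.minutes, d.seconds))
        # -> 1 years 0 months 1 days 0 hours 0 minutes 20 seconds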

    Read the article

  • Timeout for a function that waits indefinitely (like listen())

    - by Fantastic Fourier
    Hello, I'm not quite sure if it's possible to do what I'm about to ask, so I thought I'd ask. I have a multi-threaded program where threads share a memory block to communicate necessary information. One piece of that information is the termination of threads: threads constantly check this value, and when it changes, they know it's time for pthread_exit(). One of the threads contains the listen() function and seems to wait indefinitely. This can be problematic if nobody wants to make a connection and the thread needs to exit, but it can't check whether it should terminate since it's stuck on listen() and can't move beyond it.

        while(1) {
            listen();
            ...
            if(value == 1)
                pthread_exit(NULL);
        }

    My logic is something like that, if it helps illustrate my point better. What I thought would solve the problem is to allow listen() to wait for a duration of time and, if nothing happens, move on to the next statement. Unfortunately, neither of the two args of listen() involves a time limit. I'm not even sure if I'm going about multi-threaded programming the right way; I'm not much experienced at all. So is this a good approach? Perhaps there is a better way to go about it? Thanks for any insightful comments.
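    One correction to the premise: listen() only marks the socket as passive and returns immediately; it is the subsequent accept() that blocks indefinitely. The standard fixes are to select()/poll() on the listening descriptor with a timeout before calling accept(), or to put the socket in a mode where accept() itself times out, re-checking the shared termination value on every wakeup. A minimal sketch of that pattern in Python (the port and the one-second timeout are arbitrary choices):

        import socket
        import threading

        shutdown = threading.Event()  # another thread sets this when it is time to exit

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("0.0.0.0", 9000))   # hypothetical port
        srv.listen(5)                 # returns immediately; the socket is now passive
        srv.settimeout(1.0)           # accept() will now wait at most one second

        while not shutdown.is_set():
            try:
                conn, addr = srv.accept()
            except socket.timeout:
                continue              # nobody connected; loop around and re-check the flag
            conn.close()              # a real server would hand the connection off here

    In C the equivalent is a select() call on the listening socket with a struct timeval timeout, checking the termination value each time select() returns zero.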

    Read the article

  • Slightly different execution times between python2 and python3

    - by user557634
    Hi. Lately I wrote a simple generator of permutations in python (an implementation of the "plain changes" algorithm described by Knuth in "The Art... 4"). I was curious about the differences in execution time between python2 and python3. Here is my function:

        def perms(s):
            s = tuple(s)
            N = len(s)
            if N <= 1:
                yield s[:]
                raise StopIteration()
            for x in perms(s[1:]):
                for i in range(0, N):
                    yield x[:i] + (s[0],) + x[i:]

    I tested both using the timeit module. My tests:

        $ echo "python2.6:" && ./testing.py && echo "python3:" && ./testing3.py
        python2.6:
        args    time[ms]
        1       0.003811
        2       0.008268
        3       0.015907
        4       0.042646
        5       0.166755
        6       0.908796
        7       6.117996
        8       48.346996
        9       433.928967
        10      4379.904032
        python3:
        args    time[ms]
        1       0.00246778964996
        2       0.00656183719635
        3       0.01419159912
        4       0.0406293644678
        5       0.165960511097
        6       0.923101452814
        7       6.24257639835
        8       53.0099868774
        9       454.540967941
        10      4585.83498001

    As you can see, for numbers of arguments less than 6, python3 is faster, but then the roles are reversed and python2.6 does better. As I am a novice in python programming, I wonder why that is so? Or maybe my script is more optimized for python2? Thank you in advance for a kind answer :)
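    The testing scripts themselves are not shown; a harness along these lines (a guess at the original, using timeit as described and the perms() function above) reproduces the shape of the measurement. Note that under Python 3.7 and later, the raise StopIteration() in perms() must be replaced with a plain return (PEP 479):

        import timeit

        # assumes perms() from above is defined in this same module
        for n in range(1, 11):
            t = timeit.timeit("list(perms(range(%d)))" % n,
                              setup="from __main__ import perms",
                              number=10)
            print("%2d %14.6f" % (n, t / 10 * 1000))  # average time per run, in ms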

    Read the article

  • Running mod_php and suPHP at the same time

    - by BHare
    I recently moved from Debian Lenny with PHP 5.2.x, where I was able to use mod_php for any php files that were not located in /home/ and suPHP for all the php files that were located in /home/. I did this because I needed a default php.ini (giving me all features of php) for my websites in /var/www/, and I didn't want to have to change the owner of all the .php files from root. I also had a default php.ini for all the /home/ php files without dangerous features. This was the setup I had:

        <IfModule mod_suphp.c>
            <Directory /home/>
                AddType application/x-httpd-php .php .php3 .php4 .php5
                suPHP_AddHandler application/x-httpd-php
                suPHP_Engine on
                suPHP_ConfigPath /home/shared/
            </Directory>
        </IfModule>

    This was working perfectly, but recently I upgraded PHP to 5.3.5 from dotdeb (Lenny has no official php 5.3). This had weird issues on Lenny, such as not displaying errors correctly and other little tidbits, so I decided to upgrade from Lenny to Squeeze. I uninstalled php (along with it came suphp) and reinstalled with the new source. I now have 5.3.3-7 with Debian Squeeze, but I cannot get mod_php and suPHP to run at the same time anymore. mod_php will always work, and there are no errors in the apache2 or suphp logs. If I disable mod_php then suPHP will work. Is there something I am doing wrong?

    Read the article

  • Can't connect two PCs to a Network Switch at the same time (Windows 7)

    - by puk
    I have two computers connected to a network switch, and every once in a while one of the computers will lose its internet connection. It's almost always the same computer every time. However, if I play around with the control panel, I can switch it so that now the other computer is not connected. Restarting either of the computers does not help either. In Windows, the world's-greatest-troubleshooter tells me that a network cable is unplugged and that I should try plugging it in... Disabling and re-enabling my NIC does not fix this problem, and neither does swapping cables around. When rebooting, the BIOS complains about the Ethernet cable not being plugged in. If it's in any way important, my setup at the office is like so: Modem - Router - Network Switch 1 - Network Switch 2. I have tried turning off the energy saving option for my NIC, and I tried manually setting the link speed to 100Mbps Full Duplex without any luck. Also, I have a Realtek PCIe GBE Family controller on both computers. Does anyone have any idea why this is happening every 5-10 days? EDIT: I have also tried using a completely different network switch and the problem still persists as before.

    Read the article

  • using pf for packet filtering and ipfw's dummynet for bandwidth limiting at the same time

    - by krdx
    I would like to ask if it's fine to use pf for all packet filtering (including using altq for traffic shaping) and ipfw's dummynet for bandwidth limiting certain IPs or subnets at the same time. I am using FreeBSD 10 and I couldn't find a definitive answer to this. Googling returns such results as:

    - It works
    - It doesn't work
    - Might work, but it's not stable and not recommended
    - It can work as long as you load the kernel modules in the right order
    - It used to work but with recent FreeBSD versions it doesn't
    - You can make it work provided you use a patch from pfsense

    Then there's a mention that this patch might have been merged back to FreeBSD, but I can't find it. One certain thing is that pfsense uses both firewalls simultaneously, so the question is: is it possible with stock FreeBSD 10 (and where to obtain the patch if it's still necessary)? For reference, here's a sample of what I have for now and how I load things.

    /etc/rc.conf:

        ifconfig_vtnet0="inet 80.224.45.100 netmask 255.255.255.0 -rxcsum -txcsum"
        ifconfig_vtnet1="inet 10.20.20.1 netmask 255.255.255.0 -rxcsum -txcsum"
        defaultrouter="80.224.45.1"
        gateway_enable="YES"
        firewall_enable="YES"
        firewall_script="/etc/ipfw.rules"
        pf_enable="YES"
        pf_rules="/etc/pf.conf"

    /etc/pf.conf:

        WAN1="vtnet0"
        LAN1="vtnet1"
        set skip on lo0
        set block-policy return
        scrub on $WAN1 all fragment reassemble
        scrub on $LAN1 all fragment reassemble
        altq on $WAN1 hfsc bandwidth 30Mb queue { q_ssh, q_default }
        queue q_ssh bandwidth 10% priority 2 hfsc (upperlimit 99%)
        queue q_default bandwidth 90% priority 1 hfsc (default upperlimit 99%)
        nat on $WAN1 from $LAN1:network to any -> ($WAN1)
        block in all
        block out all
        antispoof quick for $WAN1
        antispoof quick for $LAN1
        pass in on $WAN1 inet proto icmp from any to $WAN1 keep state
        pass in on $WAN1 proto tcp from any to $WAN1 port www
        pass in on $WAN1 proto tcp from any to $WAN1 port ssh
        pass out quick on $WAN1 proto tcp from $WAN1 to any port ssh queue q_ssh keep state
        pass out on $WAN1 keep state
        pass in on $LAN1 from $LAN1:network to any keep state

    /etc/ipfw.rules:

        ipfw -q -f flush
        ipfw -q add 65534 allow all from any to any
        ipfw -q pipe 1 config bw 2048KBit/s
        ipfw -q pipe 2 config bw 2048KBit/s
        ipfw -q add pipe 1 ip from any to 10.20.20.4 via vtnet1 out
        ipfw -q add pipe 2 ip from 10.20.20.4 to any via vtnet1 in

    Read the article

  • Ubuntu 12.04 glusterfs volume failed to mount at boot time

    - by user183394
    I have just set up 7 KVM guests, all running Ubuntu 12.04 LTS 64bit Minimal server, to test out glusterfs 3.2.5 from the Ubuntu official repo. Two of them form a mirrored pair (i.e. replica 2), and five of them are clients. I am still new to this file system and would like to gain some "hands-on" experience. The setup was mostly uneventful, until I put the following into each glusterfs client's /etc/fstab:

        192.168.122.120:/testvol /var/local/testvol glusterfs defaults,_netdev 0 0

    where 192.168.122.120 is the IP address of the first "glusterfs server". If I issue either a manual mountall or a mount.glusterfs 192.168.122.120:/testvol /var/local/testvol on the CLI, mount shows that the volume is successfully imported. But once a client is rebooted, after it comes back up, the volume is not mounted! I searched the Internet and found this article, but since I am not running both client and server on the same node, IMHO it's not strictly applicable. So, as a kludgy get-around, I put

        sleep 3 && mount.glusterfs 192.168.122.120:/testvol /var/local/testvol

    into each client node's /etc/rc.local. It seems to be able to get the volume mounted on each node, as far as I can tell. But this is quite ugly, and I would appreciate a hint as to how to resolve this glusterfs-non-boot-time-mounting issue correctly. Note that I used the IP address of the first "glusterfs server" although the /etc/hosts of all nodes have been populated with their hostnames. I figured that the use of an IP address is more robust. --Zack
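    The fixed sleep 3 kludge can at least be made less fragile by retrying the mount until the network is actually up. A minimal sketch of an rc.local replacement (the 30-attempt limit is an arbitrary choice):

        #!/usr/bin/env python3
        import subprocess
        import time

        VOL = "192.168.122.120:/testvol"
        MNT = "/var/local/testvol"

        for attempt in range(30):                  # give up after roughly 30 seconds
            rc = subprocess.call(["mount.glusterfs", VOL, MNT])
            if rc == 0:
                break                              # volume is mounted
            time.sleep(1)                          # network not ready yet; retry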

    Read the article

  • How to auto-cc a system email account any time a user creates an appointment

    - by Ferdy
    I will not bother explaining my full architecture or reasons for wanting this, in order to keep this question short: Is it possible to auto-cc a certain email account any time an Exchange user creates an appointment or meeting in his own calendar? Is it possible using rules? Our Exchange 2007 server is outsourced; I cannot change the configuration or install plugins server-side. Preferably, it should still work server-side, because users may use the Outlook client but also Outlook Web Access. Is there any other way, perhaps using group policies? My conclusion so far is that the only viable way to accomplish this is to build an Outlook add-on. The problem there is that it will need to be managed for thousands of desktop users and that the add-on will not work when using another client (OWA, mobile). An alternative architecture could be to pull the information from each user's calendar on a scheduled basis. Given that we are talking about a lot of users, scalability is a major issue; this has also been confirmed by Microsoft. Can you confirm that my thinking is correct, or do you have any other solutions?

    Read the article

  • Task Manager Does Not Start Every Time

    - by diek
    I have a problem that started some time ago, 6 months maybe. I should have noted the first instance, but I didn't. I am using Windows 7 Pro, 32bit. Under normal circumstances I can open up the Task Manager via the task bar or Ctrl+Alt+Del. When I get a program stuck, causing a freeze or a non-responsive system, I try to open the task manager. It will not work. I have had plenty of similar problems in the past and I had no trouble getting it open. I have searched the internet, but the only results I can find are for when the task manager will not start under any situation. I am running ESET NOD32 as the anti-virus. The latest example happened when I opened a new tab in Google and tried to copy an image. Google accounts for at least 50% of the examples. I ran the System File Checker tool (sfc /scannow) as recommended on another post. No errors returned. Any guidance would be appreciated.

    Read the article

  • Can't do more than one activity at a time after switching modems

    - by vallorn
    I had to replace the Motorola 2210 DSL modem that I got when I signed up for AT&T DSL Direct a few years ago. The modem kept randomly restarting and eventually gave out on me. I am assuming overheating was the cause, because it was almost too hot to touch. In any case, I replaced it with a Netgear DM111PSP. It works fine, but I can't do more than one activity at a time with it. If my wife is watching Netflix, there is a noticeable delay/latency when trying to view websites. It's even worse if I try to play an online game while she's streaming; the game is basically unplayable. The odd thing is, the only other activity I can do while she's streaming is stream another Netflix show myself. There is no delay when doing that, and no buffering either. I'm not a networking guy, so maybe there is an explanation for it, but I find that kind of odd. I've tried using QoS through my Buffalo N600 wireless router and it doesn't seem to help. With the old Motorola modem, she could be watching Netflix while I play a game and everything worked just fine. Is there anything I can check or reconfigure, possibly on the modem, that would account for this? Should I just ditch the Netgear and get another modem instead? I have the Netgear modem connected to the Buffalo router in bridged mode. It's the same exact setup as I had with the Motorola, and as far as I can tell, it's not the router that is the cause.

    Read the article

  • MySQL taking a long time to start

    - by Dscoduc
    I'm running Windows Server 2008 with MySQL installed, and every time I reboot the server the MySQL service doesn't start right away. A look into the Windows event log shows that the MySQL service was hung at startup. The Services.msc console shows the service state as Starting... Eventually, after something like 10 minutes, the MySQL service actually finishes the startup process and the database becomes available for my Wordpress server. I looked at the MySQL .err files and didn't find anything that would indicate a delay in the startup process. Can anyone suggest a way to determine what is causing the delay and, more importantly, how to prevent the delay in the MySQL startup? UPDATE: Here are the .err log contents from the shutdown to the startup completing. Notice that the startup begins at 10:30:00 but MySQL isn't ready for connections until 10:47:14, a full 17 minutes later:

        100322 10:27:06 [Note] C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld: Normal shutdown
        100322 10:27:06 [Note] Event Scheduler: Purging the queue. 0 events
        100322 10:27:06 InnoDB: Starting shutdown...
        100322 10:27:08 InnoDB: Shutdown completed; log sequence number 4 3854351346
        100322 10:27:08 [Warning] Forcing shutdown of 1 plugins
        100322 10:27:08 [Note] C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld: Shutdown complete
        100322 10:30:00 [Note] Plugin 'FEDERATED' is disabled.
        100322 10:30:01 InnoDB: Started; log sequence number 4 3854351346
        100322 10:47:14 [Note] Event Scheduler: Loaded 0 events
        100322 10:47:14 [Note] C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld: ready for connections.

    UPDATE 2: MySQL is configured as a service (part of the install process, nothing I did) and executes the following syntax (as it appears in the registry):

        "C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld" --defaults-file="C:\Program Files\MySQL\MySQL Server 5.1\my.ini" MySQL

    Read the article

  • SSD cache to minimize HDD spin-up time?

    - by sirprize
    Short version first: I'm looking for Linux-compatible software which is able to transparently cache HDD writes using an SSD. However, I only want to spin up the HDD once or twice a day (to write the cached data to the HDD). The rest of the time, the HDD should not be spinning, due to noise concerns. Now the longer version: I have built a completely silent computer running Xubuntu. It has an A10-6700T APU, a huge fanless cooler, a fanless PSU, and an SSD. The problem is: it also has (and needs) a noisy HDD, and I want to forbid spinning it up during the night. All writes should be cached on the SSD; reads are not needed in the night. Throughout every day, this computer will automatically download about 5 GB of data which will be retained for about a year, giving a total needed disk capacity of slightly less than 2 TB. This data is currently stored on a 3 TB noisy hard disk drive which is spinning day and night. Sometimes I'll need to access some data from several months ago. However, most times I'll only need data from the last 14 days, which would fit on the SSD. Ideally, I'd like a transparent solution (all data on one filesystem) which caches all writes to the SSD, writing to the HDD only once a day. Reads would be served by the cache if they were still on the SSD; otherwise the HDD would have to spin up. I have tried bcache without much success (using cache_mode=writeback, writeback_running=0, writeback_delay=86400, sequential_cutoff=0, congested_write_threshold_us=0 - anything missing?) and I read about ZFS ZIL/L2ARC, but I'm not sure I can achieve my goal with ZFS. Any pointers? If all else fails, I will simply use some scripts to automatically copy files over to the big drive while deleting the oldest files from the SSD, as in the sketch below.
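    For that script fallback, a minimal sketch of the nightly job (the paths and the 14-day retention are assumptions; run it once a day from cron so the HDD only spins up for that window):

        #!/usr/bin/env python3
        import os
        import shutil
        import time

        SSD_DIR = "/mnt/ssd/data"   # hypothetical mount points
        HDD_DIR = "/mnt/hdd/data"
        KEEP = 14 * 86400           # retain two weeks of files on the SSD

        now = time.time()
        for name in os.listdir(SSD_DIR):
            src = os.path.join(SSD_DIR, name)
            dst = os.path.join(HDD_DIR, name)
            if not os.path.isfile(src):
                continue
            if not os.path.exists(dst):
                shutil.copy2(src, dst)                 # mirror new files to the HDD
            if now - os.path.getmtime(src) > KEEP:
                os.remove(src)                         # evict old files from the SSD

    This loses the single-filesystem transparency, though: reads of evicted files then have to go to the HDD path explicitly.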

    Read the article

  • First time setting up SSL, running into a strange problem; tutorials haven't been too helpful

    - by pedalpete
    This is my first time trying to set up SSL for a site, and I'm running it on a server that has 3 other sites already hosted. I'm running apache2.?? and the install came with an ssl.conf page. The ssl.conf has the following settings:

        LoadModule ssl_module modules/mod_ssl.so
        Listen 443
        AddType application/x-x509-ca-cert .crt
        AddType application/x-pkcs7-crl .crl
        <VirtualHost *:443>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html/securesite
            ServerName securesite.com
            ErrorLog logs/securesite-error_log
            CustomLog logs/securesite-access_log common
            SSLEngine on
            SSLCertificateFile /etc/httpd/ssl.crt/securesite.com.crt
            SSLCertificateKeyFile /etc/httpd/ssl.key/server.key
            SSLCertificateChainFile /etc/httpd/ssl.crt/gd_bundle.crt
        </VirtualHost>

    When I run 'apachectl configtest', I don't get any errors, but running 'apachectl -k restart', I get 'httpd not running, trying to start'. I have two questions: 1) Is there an error in the way I'm defining my virtualhost for 443? The rest of my entries point to <VirtualHost *:80>. When I comment out the above entry, apache runs fine. 2) Do I need to set up a redirect from port 80 for the secure site? Because most users are going to go to http: or www., and I need to send them to https:. Does apache do this automatically, or do I need to create an entry with a redirect?

    Read the article

  • Very long (>300s) request processing time on Apache Server serving static content from particular IP

    - by Ron Bieber
    We are running an Apache 2.2 server for a very large web site. Over the past few months we have had some users reporting slow response times, while others (including our resources, both on the internal network and our home networks) do not see any degradation in performance. After a ton of investigation, we finally found a "Deny from none" statement in our configuration that was causing reverse DNS lookups (which were timing out); removing it solved the bulk of our issues. But we still have some customers that we are seeing in the Apache logs (using %D in the log format) with request processing times of 300s for images, css, javascript and other static content. We've checked all Deny / Allow statements for a recurrence of "none", as well as all other things we know of that would cause reverse DNS lookups (such as using "REMOTE_HOST" in rewrite rules, or using %a instead of %h in our log format configuration), and verified that HostnameLookups is set to "Off". As an aside, we've also validated that reverse DNS lookups for folks having this problem do not time out - so I'm fairly certain DNS is not an issue in this case. I've run out of ideas. Are there any Apache configuration scenarios that someone can point me to that I might be missing, which would cause request times for static content to take so long only for certain users? Thank you in advance.

    Read the article

  • PHP include() through HTTP makes Apache time out

    - by Adam Interact
    I have a problem with ExpressionEngine 2 after moving from an old server to WHM/cPanel running on CentOS 6.4. Simple test code to reproduce the issue:

        <?php
        $protocol = strpos(strtolower($_SERVER['SERVER_PROTOCOL']), 'https') === FALSE ? 'http' : 'https';
        $host = $_SERVER['HTTP_HOST'];
        include($protocol . '://' . $host . '/header.html');
        ?>
        <p> Main text...</p>
        <?php include($protocol . '://' . $host . '/footer.html'); ?>

    where header.html looks like:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Untitled Document</title>
        </head>
        <body>

    and footer.html looks like:

        </body>
        </html>

    This produces an Apache timeout:

        Warning: include(http://www.domain.com/header.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 5
        Warning: include() [function.include]: Failed opening 'http://www.domain.com/header.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 5
        Main text...
        Warning: include(http://www.domain.com/footer.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 12
        Warning: include() [function.include]: Failed opening 'http://www.domain.com/footer.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 12

    Any clue what could be wrong with the Apache or PHP configuration? Thanks

    Read the article

  • Processing-time billing in Amazon EC2

    - by Rafael Almeida
    Hi all! I think my question is fairly basic, but I would like a clarification: in the pricing part of AWS we can see that Amazon charges around $.10 per 'instance computing hour'. I've seen in a blog post somewhere (can't remember where exactly, and even if I did I think it was in Portuguese anyway) that this way your minimum monthly payment would be $72 (= $.10/hour x 24 hours x 30 days). Is this correct? (I don't think it is!) My understanding is that this 'virtual computing time' is only used when your machine is actually doing something (serving pages, serving the admin via ssh, whatever), so real billable usage would be less than 720 hours/month in most webserver scenarios. Is my view correct? If it is, then it leads me to another question: is it economically interesting to buy access to one of these instances for testing? I mean, would I have the 'freedom' to 'forget' about it for a month and receive a very-close-to-zero (as in, a few cents) bill? Do you do it/know of anybody who does? Any thoughts on the matter (as in, "yes, it's a good idea", or "yes, but there's this 'gotcha': ...", or "no, nobody does it because of...")? PS: sorry for the long question text. I highlighted the main questions for easy viewing. Also, I'm not sure if this question is actually more than one and if it's desirable for the community, so sorry if it is too! Thanks in advance!
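    The arithmetic for both readings is easy to lay out as a quick sketch (real AWS pricing varies by instance type and region, and the 8 h/day figure is an arbitrary example):

        hourly = 0.10                # $/instance-hour, the figure quoted above
        print(hourly * 24 * 30)      # 72.0 -- instance left running all month
        print(hourly * 8 * 22)       # 17.6 -- instance up only 8 h/day, 22 days/month

    For what it's worth, EC2's meter is tied to the instance being in the running state rather than to CPU activity, so the second figure only applies if the instance is actually stopped when idle.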

    Read the article

  • winbind failing after a semi-random amount of time

    - by The Digital Ninja
    I have winbind set up to authenticate to our AD for samba shares. This is the third such server, and the only one having any issues. It seems that after a random amount of time, samba shares will just stop working. The winbind processes seem to be running, but restarting them fixes the issue for a while. Looking at the logs has been kind of hit and miss, and I don't know exactly when it fails. One interesting thing is that it seems to be pulling from another domain controller that it shouldn't. I censored out the domain name in this example. But isn't there some way to block authentication to a domain? I'm not sure if this is a symptom or a cause of the issue.

        [2010/10/18 08:02:10, 0] winbindd/winbindd_cache.c:initialize_winbindd_cache(2577)
          initialize_winbindd_cache: clearing cache and re-creating with version number 1
        [2010/10/18 09:15:54, 1] libsmb/clikrb5.c:ads_krb5_mk_req(686)
          ads_krb5_mk_req: krb5_get_credentials failed for [email protected] (Cannot find KDC for requested realm)
        [2010/10/18 09:15:54, 1] libsmb/cliconnect.c:cli_session_setup_kerberos(624)
          cli_session_setup_kerberos: spnego_gen_negTokenTarg failed: Cannot find KDC for requested realm
        [2010/10/18 09:15:54, 0] lib/util_sock.c:write_data(1139)
          write_data: write failure. Error = Connection reset by peer
        [2010/10/18 09:15:54, 0] libsmb/clientgen.c:write_socket(242)
          write_socket: Error writing 108 bytes to socket 18: ERRNO = Connection reset by peer
        [2010/10/18 09:15:54, 0] libsmb/clientgen.c:cli_send_smb(290)
          Error writing 108 bytes to client. -1 (Connection reset by peer)

    Read the article

  • Browser says "Waiting for www.xyz.com" for a very long time

    - by Phil
    When I load my website (hosted with Ipage), the browser often takes an incredibly long time saying "Waiting for www.xyz.com ..." before any elements of the site actually appear. After this "Waiting for" stage, the text, images and everything else actually load quite fast. I contacted my host with my tracert result, and they said they optimized my website database and increased the memory available to PHP on my account to 64 MB. They also said they checked the issue by accessing my website and found it loading fine without any slowness: "It seems to be a temporary issue. Please try to access your website with a different browser and network." I tried different browsers and networks, but this "Waiting for" stage always takes too long. My website is http://www.surreyextra.com/ . It's Wordpress and BuddyPress. I'm in the UK while the Ipage host is located in the USA; can this potentially be the problem? I have tried a number of optimizations, like minifying my CSS and JS files and using caching, but the problem hasn't improved. So is it my host's fault? Should I contact them again?

    Read the article

  • Time Machine vs Source Control?

    - by Blub
    I finally got convinced to start using some kind of version control for my code instead of zipping down a copy of the project at the end of each day. I downloaded TortoiseSVN and used it to create a repository locally on my hdd. I've been using it for 2 days now, but I have to say that using it is actually more hassle than just copying the project manually in Explorer. Sure, you only store incremental changes, but with the cheap disks of today I can't really say that's an argument when you only have small projects. I haven't really found a quick way to browse the older versions of my files either. What I want is an infinite undo that is completely transparent while I code: if I save the file, I want a backup. I don't want to check out and check in, and don't even get me started on moving files. I haven't tried Time Machine for OS X, but it looks like exactly what I'm looking for. Does such a program exist for Windows? Preferably free, and with some kind of tagging system so I can tag a timestamp when the project is working, etc. Maybe I should add that I mostly work alone on a single computer. Update: Some of you asked why I want backup. Since I work alone, it's mostly to allow me to quickly hack up a solution without worrying that something will screw up.
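    The "every save is a backup" behavior described here can also be approximated with a small watcher script, shown below purely as an illustration of the mechanism (it uses the third-party watchdog package; the paths are placeholders):

        import shutil
        import time
        from pathlib import Path

        from watchdog.events import FileSystemEventHandler
        from watchdog.observers import Observer  # pip install watchdog

        SRC = Path("C:/projects/myproject")   # placeholder paths
        DST = Path("C:/backups/myproject")

        class Snapshot(FileSystemEventHandler):
            def on_modified(self, event):
                if event.is_directory:
                    return
                # keep one timestamped copy per save, mirroring the tree layout
                rel = Path(event.src_path).relative_to(SRC)
                stamp = time.strftime("%Y%m%d-%H%M%S")
                target = DST / rel.parent / ("%s.%s" % (rel.name, stamp))
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(event.src_path, target)

        observer = Observer()
        observer.schedule(Snapshot(), str(SRC), recursive=True)
        observer.start()
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            observer.stop()
        observer.join()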

    Read the article

  • Firefox takes a really long time to load some sites on Ubuntu

    - by Dave
    Hello guys, I have an issue here. Some sites - just a few - take a really long time to load on Firefox. One example is A List Apart (http://www.alistapart.com/), which takes more than 30 minutes (yes, minutes, not seconds). On Opera, or even through a telnet session, the problematic sites run without problem, fast as expected. I am using Ubuntu 8.04, running Firefox 3.6.3 downloaded from the mozilla site, with a 10M ADSL connection. I tried many tweaks I found while googling, like disabling IPv6 and changing the http pipelining settings in Firefox's about:config. None worked. I also used Firebug to find which phase of the negotiation is the bottleneck. Findings are in the screenshot. Well guys, any idea what the issue is? And how to solve it? I repeat, this only happens with Firefox (3.6.3 and prior versions), for a few sites only (even sites with many more requests, images, javascripts and stylesheets work fine), and the http pipelining and IPv6 tweaks in about:config didn't work. Thanks

    Read the article

  • Please help to find a solution for two-way, real-time synchronization on CentOS 5.5 64bit

    - by Vipul Limbachiya
    I am in need of real-time, two-way synchronization software for CentOS 5.5 / 64bit. Here's a little explanation. It needs to perform two-way synchronization, and it must be realtime. By realtime I mean it can be almost realtime; a delay of 1 second, for example, is fine. And the folders are on the same server. I am currently using GlusterFS across two webservers. However, it has extremely poor small-file read performance and it's slowing down my website. There's nothing more that can be done to improve this; I have already tested many configurations. As a solution, I was going to mount a RAM drive (tmpfs) that mirrors the GlusterFS web files, and get the webserver to use the RAM drive. The issue is that I need two-way realtime mirroring or replication between GlusterFS and the RAM drive. I need this as Apache writes files as well. As I said, realtime two-way synchronization across two folders, which are in fact 2 different mount points: the RAM (tmpfs) mount point and the GlusterFS mount point. I already know about: Rsync - which is one-way; Unison - which is not realtime. Please suggest any solution, free or paid. Thanks in advance

    Read the article

  • VPS slows down with more than 20 users online at the same time

    - by hachiari
    I have a 512 MB VPS (burstable to 1GB). Somehow, the site goes slow when there are about 10 users, and becomes impossible to load with 20 users online at the same time. I wonder what could be the problem. The bandwidth connection of the VPS is 1Gbps. Here are some settings on my VPS:

        KeepAlive Off
        <IfModule prefork.c>
            StartServers 7
            MinSpareServers 7
            MaxSpareServers 10
            ServerLimit 64
            MaxClients 64
            MaxRequestsPerChild 0
        </IfModule>

    my.cnf settings - calculated Max Memory 300MB. Output from UNIXBENCH index values:

        TEST                                      BASELINE      RESULT   INDEX
        Dhrystone 2 using register variables      376783.7  13429727.4   356.4
        Double-Precision Whetstone                    83.1      1137.5   136.9
        Execl Throughput                             188.3      1637.4    87.0
        File Copy 1024 bufsize 2000 maxblocks       2672.0    148868.0   557.1
        File Copy 256 bufsize 500 maxblocks         1077.0     79430.0   737.5
        File Read 4096 bufsize 8000 maxblocks      15382.0   1410009.0   916.7
        Pipe Throughput                           111814.6   4419722.0   395.3
        Pipe-based Context Switching               15448.6    561505.1   363.5
        Process Creation                             569.3     10272.7   180.4
        Shell Scripts (8 concurrent)                  44.8       514.3   114.8
        System Call Overhead                      114433.5   3537373.8   309.1
        FINAL SCORE                                                      295.0

    I am afraid that the VPS company limits the number of connections to the VPS... is that possible? The server is in Japan, but the site has global traffic (some of the traffic is from countries with low-speed connections). Could this be the problem? This is a serious problem :( my site just can't grow if this keeps on happening... please tell me if you have any idea. Thank You, Bryant

    Read the article

  • Ubuntu: Network connection seems to fail after some time

    - by chrischu
    I just bought a Shuttle XS-35 barebone mini-PC, put a 1 TB WD hard drive and 2 gigs of RAM into it, and installed Ubuntu onto it. The machine will act as a media server (streaming videos to my PS3) and as a webserver for some small private projects. Now I wanted to copy my videos from my Windows 7 machine to the Ubuntu machine, and therefore created a Samba share on the Ubuntu machine. I tried copying the files with the standard Windows copy function and with SyncToy, but after some time (sometimes 5 copied files, sometimes 120 copied files) the Samba share just disappears. When that happens, I can't reach the internet from the Ubuntu machine, although the network connection still seems to be fine (IP still there, etc.). Between the machines lies a LinkSys router. When I try to ping my router (after the connection doesn't work anymore) from the Ubuntu machine, only a very small subset of the packets actually get there (something around 20%). When I restart the Ubuntu machine, everything seems to work normally again. I have no idea where the problem lies here. Does anybody have a clue? Thanks in advance!

    Read the article
