Search Results

Search found 59196 results on 2368 pages for 'time wastrel'.


  • Parallel computing for integrals

    - by Iman
    I want to reduce the calculation time for a time-consuming integral by splitting the integration range. I'm using C++, Windows, and a quad-core Intel i7 CPU. How can I split it into 4 parallel computations?
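
    A minimal sketch of the range-splitting idea, shown in Java for illustration (the question targets C++, where std::async over the same four sub-ranges gives an equivalent structure); the integrand, limits and step count are placeholder assumptions. Each worker integrates one quarter of [a, b] and the partial results are summed.

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.function.DoubleUnaryOperator;

    public class ParallelIntegral {

        // Trapezoidal rule over [lo, hi] with n steps.
        static double integrate(DoubleUnaryOperator f, double lo, double hi, int n) {
            double h = (hi - lo) / n;
            double sum = 0.5 * (f.applyAsDouble(lo) + f.applyAsDouble(hi));
            for (int i = 1; i < n; i++) {
                sum += f.applyAsDouble(lo + i * h);
            }
            return sum * h;
        }

        public static void main(String[] args) throws Exception {
            DoubleUnaryOperator f = x -> Math.exp(-x * x); // placeholder integrand
            double a = 0.0, b = 4.0;                       // placeholder limits
            int parts = 4;                                 // one sub-range per core
            int stepsPerPart = 1_000_000;

            ExecutorService pool = Executors.newFixedThreadPool(parts);
            List<Future<Double>> partials = new ArrayList<>();
            double width = (b - a) / parts;
            for (int i = 0; i < parts; i++) {
                final double lo = a + i * width;
                final double hi = lo + width;
                Callable<Double> task = () -> integrate(f, lo, hi, stepsPerPart);
                partials.add(pool.submit(task));
            }

            double total = 0.0;
            for (Future<Double> p : partials) {
                total += p.get(); // blocks until that sub-range is finished
            }
            pool.shutdown();
            System.out.println("Integral ≈ " + total);
        }
    }
    ```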

    Read the article

  • How do I create a solution-wide connection string?

    - by Renier
    Hi. Does anyone know if it is possible to create a single connection string that is accessible to all the projects in a solution (we have about six)? I can put this information in a text file, but we also need design-time support, and it is not practical to maintain a connection string in every App.config and Web.config file in the solution. We basically want a single connection string that is easy to change should the location of the database change, and that the IDE will also use for design-time support. Regards, Renier

    Read the article

  • VMWare Workstation Linux Host performance tuning

    - by Hoghweed
    I need to improve my Linux-hosted VMware Workstation setup for running multiple virtual machines at the same time. I lost a great blog post link I found last month (and I'm not able to find it again), so I'm asking here in case anyone can help. My host is a laptop with 16 GB DDR3 RAM, a 750 GB 7200 rpm hybrid HDD (8 GB SSD cache), Mint 15 x64 with kernel 3.9.7, and swappiness set to 10. Those are the important things about the host. My need is the ability to run 2 or 3 VMs at the same time, and the performance problem is the disk. Following that lost blog post I had set up /tmp to be mounted as a memory-backed partition, which worked well on my previous installation, but now I'm not able to find a good way to tweak things. I think that with 16 GB of RAM there should be no problem running multiple VMs, but when they start to swap or use /tmp things go bad (the guest cursor racing ahead after a freeze, guest freezes, and so on). Can anyone help me find a good set of host tweaks and configuration to get better performance? Thanks in advance.

    Read the article

  • Login control not working after some time in ASP.NET

    - by manish sharma
    Hello, I am Manish. My problem is that I have created a website with a Login control in ASP.NET. It works properly the first few times, but after some time it starts generating the error "Your login attempt was not successful. Please try again." This is the message I get when I try to log in after a while. Can anyone solve this problem? I am using SQL Server 2008.

    Read the article

  • How Do You Databind Avalon DateTimePicker Start Value?

    - by discwiz
    Trying to databind the start value of the Avalon DateTimePicker, but all I get is the current time. Has anyone had any success with this control? FYI, I am stuck on .NET 3.0. <wf:DateTimePicker x:Name="DatePickerStartTime" DateTimeSelected="{Binding Path=StartTime,Mode=TwoWay}" > </wf:DateTimePicker> Thanks, Dave

    Read the article

  • Performance associated with storing millions of files on NTFS

    - by Tim Brigham
    Does anyone have a method or formula, hopefully based on both current and projected numbers of files, that I could use to project the 'right' length of the split and the number of nested folders? Please note that although similar, this isn't quite the same as Storing a million images in the filesystem; I'm looking for a way to make the theories outlined there more generic. Assumptions: I have 'some' initial number of files. This number is arbitrary but large, say 500k to 10m+. I have considered the underlying physical hardware disk I/O requirements that would be necessary to support such an endeavor. Put another way: as time progresses this store will grow, and I want the best balance between current performance and performance as my needs increase, say if I double or triple my storage. I need to address both current needs and projected future growth, planning ahead without sacrificing too much current performance. What I've come up with: I'm already thinking about using a hash split every so many characters to spread things across multiple directories and keep the trees even, very similar to what is outlined in the comments on the question above. It also avoids duplicate files, which would be critical over time. I'm sure the initial folder structure would differ based on what I've outlined and on the initial scale. As far as I can tell there isn't a one-size-fits-all solution here, and it would be horrendously time-intensive to work something out experimentally.
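
    A minimal sketch of the hash-split idea in Java (the digest choice, a fan-out of two levels of two hex characters, and the names below are assumptions to be tuned against the projected file count): hash the file name, use the leading hex characters as nested directory names, and store the file at the resulting path.

    ```java
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class HashSplitStore {
        private final Path root;
        private final int levels;        // number of nested directory levels (assumed: 2)
        private final int charsPerLevel; // hex chars per directory name (assumed: 2 -> 256-way fan-out)

        public HashSplitStore(Path root, int levels, int charsPerLevel) {
            this.root = root;
            this.levels = levels;
            this.charsPerLevel = charsPerLevel;
        }

        /** Maps a file name to root/ab/cd/fileName using the leading hex of its SHA-1. */
        public Path pathFor(String fileName) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-1")
                    .digest(fileName.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            Path dir = root;
            for (int i = 0; i < levels; i++) {
                dir = dir.resolve(hex.substring(i * charsPerLevel, (i + 1) * charsPerLevel));
            }
            Files.createDirectories(dir); // ensure the nested folders exist
            return dir.resolve(fileName);
        }

        public static void main(String[] args) throws Exception {
            HashSplitStore store = new HashSplitStore(Paths.get("store"), 2, 2);
            // e.g. store/3f/9a/invoice-2013-04-17.pdf (actual prefix depends on the hash)
            System.out.println(store.pathFor("invoice-2013-04-17.pdf"));
        }
    }
    ```

    With two levels of two hex characters each, the tree fans out into 65,536 leaf directories, so even ten million files average only around 150 per folder; adding a level or widening the prefix trades directory count against per-folder file count as the store grows.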

    Read the article

  • Page cache - initiate the first page request on the server

    - by Tiago Teixeira
    Hi, I'm implementing OutputCache in my application and it works fine, but the first request always takes a long time to load and only the following requests are fast. I would like to know if there is a way to initiate the page caching on the server side and serve a cached page from the very first user request, rather than having the cache populated by the first user to hit the page. Any ideas/suggestions are very welcome. Best regards, TT
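
    One common way to avoid that slow first hit is to warm the cache yourself: after a deployment (or on a schedule shorter than the cache duration) have a small client request each cacheable URL, so the output cache is already populated before the first real user arrives. A minimal sketch of such a warm-up client, written in Java here purely for illustration (the URL list is a placeholder; any HTTP client or scheduled task would do the same job):

    ```java
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    public class CacheWarmer {
        public static void main(String[] args) throws Exception {
            // Placeholder URLs: list every page whose output cache should be primed.
            List<String> urls = List.of(
                    "https://example.com/",
                    "https://example.com/products");

            HttpClient client = HttpClient.newHttpClient();
            for (String url : urls) {
                long start = System.nanoTime();
                HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
                HttpResponse<Void> response =
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                System.out.printf("%s -> HTTP %d in %d ms%n", url, response.statusCode(),
                        (System.nanoTime() - start) / 1_000_000);
            }
        }
    }
    ```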

    Read the article

  • Add Dynamic ListView Row

    - by soclose
    Hi, I can intercept ContentObserver changes at any time. At that moment, I'd like to add a new ListView row dynamically, whether or not my application is open, but I don't know how to implement it. Please share some hints. Thank you.
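
    A minimal sketch of the in-app part of this in Java (assuming an Android Activity whose ListView is backed by an ArrayAdapter; the observed Uri and the row text are placeholders): register a ContentObserver and, in onChange, add a row to the adapter and notify it. Updating the list while the application is not running would additionally need a background component (e.g. a Service) that records the change for the next launch; that part is not shown.

    ```java
    import android.database.ContentObserver;
    import android.net.Uri;
    import android.os.Handler;
    import android.os.Looper;
    import android.widget.ArrayAdapter;

    // Intended to be used from an Activity that owns the ListView's adapter.
    public class RowAddingObserver extends ContentObserver {
        private final ArrayAdapter<String> adapter;

        public RowAddingObserver(ArrayAdapter<String> adapter) {
            super(new Handler(Looper.getMainLooper())); // deliver onChange on the UI thread
            this.adapter = adapter;
        }

        @Override
        public void onChange(boolean selfChange) {
            adapter.add("Changed at " + System.currentTimeMillis()); // placeholder row text
            adapter.notifyDataSetChanged();
        }
    }

    // Registration, e.g. in the Activity's onResume():
    //
    //   Uri watched = Uri.parse("content://sms");   // placeholder Uri
    //   getContentResolver().registerContentObserver(watched, true,
    //           new RowAddingObserver(adapter));
    ```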

    Read the article

  • How to improve the performance of BKPF

    - by rachu patil
    Hi gurus, I want to get BELNR (Accounting Document Number) from the BKPF table by passing BKPF-XBLNR = VBRP-VGBEL (this is the requirement), but it is taking so long that it ends in a time-out error. How can I make this perform well? If there is even a BAPI for this, please let me know. Thanks in advance. Regards,

    Read the article

  • Check a file's timestamp using FTP to see if it is today's file

    - by needshelp
    Hi, I am using plain FTP to download files from a server. The name of the file is always the same, so the only way to tell whether it is today's file is to look at the timestamp manually. How can I write an FTP script that checks whether the timestamp has today's date? If you have come across this situation and solved it, please let me know. Thank you in advance for your time.
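
    A minimal sketch of the check in Java, assuming the Apache Commons Net library (host, credentials and file name are placeholders; a plain command-line FTP client could get the same information from a LIST or MDTM command): list the remote file, read its modification timestamp, and compare it to today's date.

    ```java
    import java.time.LocalDate;
    import java.time.ZoneId;
    import org.apache.commons.net.ftp.FTPClient;
    import org.apache.commons.net.ftp.FTPFile;

    public class TodayFileCheck {
        public static void main(String[] args) throws Exception {
            // Placeholders: replace with the real server, credentials and path.
            String host = "ftp.example.com", user = "user", pass = "secret";
            String remoteFile = "/outgoing/report.csv";

            FTPClient ftp = new FTPClient();
            ftp.connect(host);
            ftp.login(user, pass);
            ftp.enterLocalPassiveMode();

            FTPFile[] files = ftp.listFiles(remoteFile);
            if (files.length == 1) {
                LocalDate modified = files[0].getTimestamp().toInstant()
                        .atZone(ZoneId.systemDefault()).toLocalDate();
                if (modified.equals(LocalDate.now())) {
                    System.out.println(remoteFile + " is today's file (modified " + modified + ")");
                    // ... proceed with the download here ...
                } else {
                    System.out.println(remoteFile + " is stale (modified " + modified + "), skipping");
                }
            } else {
                System.out.println("File not found: " + remoteFile);
            }
            ftp.logout();
            ftp.disconnect();
        }
    }
    ```

    Note that LIST timestamps are reported in the server's time zone, so the comparison may need an explicit ZoneId if the server and client differ.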

    Read the article

  • Default Database Collations PenTesting Env

    - by dominicdinada
    I am using Ubuntu 9.10 with XAMPP (LAMPP: MySQL 5.1.45, phpMyAdmin 3.3.1, PHP 5.3.2). My problem is that I set up this testing environment to debug my scripts locally, and doing so created a problem: I used the Firefox add-on SQL Inject Me to test for weaknesses, and it caused MySQL to change the default local collations. The character sets dir is /opt/lampp/share/mysql/charsets/, collation_connection is latin1_general_ci (global value latin1_swedish_ci), collation_database is latin1_swedish_ci, and collation_server is latin1_swedish_ci. I have searched for quite some time for a solution, including looking for the db.opt file which stores this information, without success. Having found nothing, I removed LAMPP with "sudo rm -fR /opt" and reinstalled, and the problem still persists. I have tried to change the collations manually and the database still reports latin1_swedish_ci as the default. Why is this a problem? Because the application I am testing and debugging locally is built on CodeIgniter with the Smarty framework, and since that combination detects the locale from the database defaults, I keep getting errors saying there is no language file for Swedish. Of course I could add a Swedish language file to work around this, but I don't want that workaround to become permanent, because with time, as I move on to other projects, I will run into similar problems: (a) when importing database files, backups, etc., they will default to the Swedish locale, and (b) as time passes I might completely forget about this error and be back to square one. In my searches I found this (partial) code, which seems to alter the tables to a desired collation: $value) { mysql_query("ALTER TABLE $value COLLATE latin1_general_ci"); }} echo "The collation of your database has been successfully changed!"; ? That is handy for switching collations one schema at a time, but it is not a fix when the framework doesn't care that one particular database is in a given language; it tests the default of the entire server. I would greatly appreciate help from anyone who knows a purge or fix for this. One final note: when testing, I only backed up the application's database and not the entire install, and no matter whether I uninstall or reinstall, the database still seems to carry these problems.

    Read the article

  • How to get a group of toggle buttons to act like radio buttons in WPF?

    - by code-zoop
    I have a group of buttons that should act like toggle buttons, but also as radio buttons, where only one button can be selected (pressed down) at any given time. It also needs to support a state where none of the buttons is selected. The behavior is similar to the Photoshop toolbar, where zero or one of the tools is selected at any time. Any idea how this can be implemented in WPF?

    Read the article

  • Call plugin class in Java

    - by Josh Meredith
    How can I call a class in Java when the name of the class won't be known at compile time (such as when it is a plugin)? For example, from a GUI the user selects a plugin (a Java class), the application then creates a new instance of that class and calls one of its methods (the method name would be known at compile time, e.g. "moduleMain"). Thanks for any input.
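
    A minimal sketch of the reflection-based approach (the only assumptions are the hypothetical class name passed in at runtime and the no-argument "moduleMain" method from the question): load the class by name, instantiate it, and invoke the known method.

    ```java
    import java.lang.reflect.Method;

    public class PluginLauncher {

        /** Loads className, creates an instance, and calls its no-arg moduleMain() method. */
        public static void launch(String className) throws Exception {
            Class<?> pluginClass = Class.forName(className); // name chosen at runtime, e.g. from the GUI
            Object plugin = pluginClass.getDeclaredConstructor().newInstance();
            Method entryPoint = pluginClass.getMethod("moduleMain"); // name known at compile time
            entryPoint.invoke(plugin);
        }

        public static void main(String[] args) throws Exception {
            launch(args[0]); // e.g. java PluginLauncher com.example.MyPlugin
        }
    }
    ```

    If the plugin jars are not already on the classpath, a URLClassLoader pointed at the plugin directory can be used instead of Class.forName; and if all plugins implement a shared interface declaring moduleMain(), a cast to that interface replaces the reflective method lookup.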

    Read the article

  • Software Development Lifecycle

    - by j-t-s
    Hi all. Our investor wants an SDLC. I've never written one before, and I don't have enough time to go and buy a book or spend much time learning about them. From what I've been told, you basically need to list the requirements (what needs to be done) and list what has already been done. Is this correct? Thank you.

    Read the article

  • Why can't I route to some sites from my MacBook Pro that I can see from my iPad? [closed]

    - by Robert Atkins
    I am on M1 Cable (residential) broadband in Singapore. I have an intermittent problem routing to some sites from my MacBook Pro, often Google-related sites (arduino.googlecode.com and ajax.googleapis.com right now, but sometimes even gmail.com). This prevents StackExchange chat from working, for instance. Funny thing is, my iPad can route to those sites and they're on the same wireless network! I can ping the sites, but not traceroute to them, which I find odd. That I can get through via the iPad implies the problem is with the MBP. In any case, calling M1 support is... not helpful. I get the same behaviour when I bypass the Airport Express entirely and plug the MBP directly into the cable modem. Can anybody explain a) how this is even possible and b) how to fix it?

        mella:~ ratkins$ ping ajax.googleapis.com
        PING googleapis.l.google.com (209.85.132.95): 56 data bytes
        64 bytes from 209.85.132.95: icmp_seq=0 ttl=50 time=11.488 ms
        64 bytes from 209.85.132.95: icmp_seq=1 ttl=53 time=13.012 ms
        64 bytes from 209.85.132.95: icmp_seq=2 ttl=53 time=13.048 ms
        ^C
        --- googleapis.l.google.com ping statistics ---
        3 packets transmitted, 3 packets received, 0.0% packet loss
        round-trip min/avg/max/stddev = 11.488/12.516/13.048/0.727 ms
        mella:~ ratkins$ traceroute ajax.googleapis.com
        traceroute to googleapis.l.google.com (209.85.132.95), 64 hops max, 52 byte packets
        traceroute: sendto: No route to host
         1 traceroute: wrote googleapis.l.google.com 52 chars, ret=-1
        *traceroute: sendto: No route to host
        traceroute: wrote googleapis.l.google.com 52 chars, ret=-1
        ^C
        mella:~ ratkins$

    The traceroute from the iPad goes (and I'm copying this by hand): 10.0.1.1, 119.56.34.1, 172.20.8.222, 172.31.253.11, 202.65.245.1, 202.65.245.142, 209.85.243.156, 72.14.233.145, 209.85.132.82. From the MBP, I can't traceroute to any of the IPs from 172.20.8.222 onwards. [For extra flavour, not being able to access the above appears to stop me logging in to Server Fault via OpenID and formatting the above traceroutes correctly. Anyone with sufficient rep here to do so, I'd be much obliged.]

    Read the article

  • SQL 2008 Backups to UNC Share Failing 0xC002F210

    - by Matty Brown
    This problem is driving me nuts! We take backups of all of our production databases to a network share, which are then backed up to tape nightly: at 8pm Mon-Fri a full backup followed by a log backup, and from 7am-7pm Mon-Fri log backups at half-hour intervals. Our backups have worked this way since we migrated from SQL Server 2000 Standard to 2008, three years ago. Recently, the first log backup on Mondays has been failing. Not every time, but almost every time! The rest of the week we have no problems. I guess the issue may have something to do with the size of the log backup attempted after a weekend of no backups. Now onto the issue I need a fix for: all this week, every full backup of our two biggest databases has failed (both backups < 1 GB compressed). There is plenty of disk space on the source and destination servers. I'm guessing the issue has to do with the amount of time it takes to complete the backups of these databases, and/or the size of the backup files required. Changing the backup destination to local storage works fine (and is very, very fast in comparison). From the job history, I can find a few hints as to what the problem could be. The code is always 0xC002F210, with a mix of the following descriptions: "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'SetEndOfFile' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally." and "The operating system returned the error '64(failed to retrieve text for this error. Reason: 1815)' while attempting 'FlushFileBuffers' on '\\drserver\SQLBackups\Database.bak'. BACKUP DATABASE is terminating abnormally." Please help save my hair and sanity!

    Read the article

  • Replicated MongoDB server slower than simple shards

    - by displayName
    I tried to compare the performance of a sharded configuration against a sharded and replicated configuration. The sharded configuration consists of 8 shards, each running on three different machines, constituting a total of 24 shards. All 8 of these shards run in the same partition on each machine. The sharded and replicated version is again 8 shards, just like plain sharding, and all 8 mongods run on the same partition on each machine. But in addition, each of these three machines now runs another 16 mongods on a second partition, which serve as the secondaries for the 8 mongods running on the other machines. This is how I prepared a sharded and replicated configuration with data chunks having a replication factor of 3. An important point to note is that once the data has been loaded it is not modified, so after the primaries and secondaries have synchronized it doesn't matter which one I read from. To run the queries, I use an entirely different machine (let's call it config) which runs mongos, and this machine's only purpose is to receive queries and run them on the cluster. Contrary to my expectations, plain sharding with 8 shards on each machine (total = 3 * 8 = 24) performs better on queries than the sharded + replicated configuration. I have a script written to perform the query, so in order to time it I run time ./testScript and look at the result. I tried changing the read preference for the replicated cluster by logging into the mongo shell on config, running db.getMongo().setReadPref('secondary'), then exiting the shell and running the queries again with time ./testScript. The questions are: Where am I going wrong with the replication? Why is it slower than the plain sharding version? Does the db.getMongo().setReadPref('secondary') persist when I leave the shell and then perform the query? All four machines run Linux and I have already increased ulimit -n from its initial value of 1024 to 2048 to allow more connections. The collections are properly distributed and all the mongods have an equal number of chunks. It goes without saying that the indexes in both configurations are the same.

    Read the article

  • Server nearly unusable when doing disk writes

    - by Wikser
    My question closely relates to my last question here on Server Fault. I was copying about 5 GB from a 10-year-old desktop computer to the server. The copy was done in Windows Explorer. In this situation I would expect the server to be bored by the data flow, but as usual with this server, it really slowed down. At least I could still work in the remote session, even though there was some serious latency. The copy took its time (20 min?). During this time I went to a colleague, and he tried to log in to the same server via Remote Desktop (for some other reason). It took about a minute to get to the login screen, a minute to open Control Panel, a minute to open Performance Monitor; icons were loading maybe one per second. We saw the following (from memory): CPU 2%, average disk queue length 50, pages/sec 115 (?). There was no other considerable activity on the server. The server occasionally serves some ASP.NET pages, which also became very slow during this time. The relevant configuration is as follows: Windows 2003; Seagate ST3500631NS (7200 rpm, 500 GB); LSI MegaRAID-based RAID 5 with 4 disks and 1 hot spare; write-through; no read-ahead; direct cache mode; hard disk cache mode off. Is this normal behaviour for such a configuration? What measurements could give further clues? Is it reasonable to reduce the priority of such copy I/O and favour other processes like the Remote Desktop session? How would you do that? Many thanks!

    Read the article

  • NetBeans + Xdebug + PHP not working

    - by Yargon
    Breakpoints are not working in NetBeans with Xdebug, even though my configuration looks correct. When I first configured it, debugging stopped at breakpoints on the first run, but since then it has never worked. Has anyone had this problem? My NetBeans version is 6.8 and my PHP version is 5.2.5. My php.ini contains:

        zend_extension_ts = d:\wamp\bin\php\php5.2.5\ext\php_xdebug-2.0.2-5.2.5.dll
        xdebug.remote_enable=on
        xdebug.remote_handler=dbgp
        xdebug.remote_host=localhost
        xdebug.remote_port=9000
        xdebug.idekey=netbeans-xdebug
        xdebug.profiler_enable=1

    Read the article

  • What does it mean when git says a file "needs update"?

    - by endtime
    I can't for the life of me find any decent explanation of the "[file]: needs update" message that git spits out from time to time. Even the official git FAQ has explaining this marked as a TODO. If someone could explain A) what it means and B) how to fix it, I would be extremely grateful.

    Read the article

  • Quartz Cron Trigger with Spring - triggering new cron before last ended

    - by Trick
    Simple question, I think. I have an org.springframework.scheduling.quartz.CronTriggerBean triggering one job once a day. Because this job can run for a long time (over 24 hours), will a new execution be started the next day at the same time if the previous one has not finished yet? If so, is it possible to prevent new executions from starting until the previous one is finished? My job transcodes videos, and on some days there are a lot of videos, so it can run for a long time.
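
    One way to handle this in Quartz itself is to mark the job as non-concurrent, sketched below for Quartz 2.x with the @DisallowConcurrentExecution annotation (the job body is a placeholder). On the Quartz 1.x versions typically paired with CronTriggerBean the equivalent is implementing org.quartz.StatefulJob, and if the job is defined through Spring's MethodInvokingJobDetailFactoryBean its 'concurrent' property can simply be set to false.

    ```java
    import org.quartz.DisallowConcurrentExecution;
    import org.quartz.Job;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;

    // With this annotation Quartz will not start a new execution of this job while a
    // previous execution for the same JobDetail is still running; the skipped firing
    // is handled according to the trigger's misfire instruction.
    @DisallowConcurrentExecution
    public class TranscodeJob implements Job {
        @Override
        public void execute(JobExecutionContext context) throws JobExecutionException {
            // Placeholder for the long-running work (video transcoding).
            System.out.println("Transcoding batch started at " + new java.util.Date());
        }
    }
    ```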

    Read the article
