Search Results

Search found 25246 results on 1010 pages for 'cache control'.

Page 220/1010

  • Hide Drive / Avoid Low Diskspace Warning on ReadyBoost Cache?

    - by Simon Richter
    I've just added an SSD as a ReadyBoost cache drive, and have two minor cosmetic issues with it: the drive still shows up in the drive list, and I get a warning balloon every five minutes telling me that the drive is full and that I should empty the Recycle Bin. The former is ignorable (and I guess I can solve it with a group policy); the latter is getting on my nerves. Are there official buttons "hide ReadyBoost drives" and "do not warn on low diskspace for ReadyBoost drives" somewhere that I may have missed? If not, I guess I can use the group policy to hide the drive; I'd still need a way for the system to not warn about the drive being full. Also, am I right that I need to assign a drive letter and format the drive with NTFS to use it for ReadyBoost, or is there a way to just use the raw device?

    Read the article

  • Cannot get nscd to run. DNS cache stale as a result

    - by Phunt
    I'm trying to troubleshoot an issue on a MediaTemple server (running CentOS5) where the DNS cache has grown stale - I think because nscd has crashed. I've tried restarting nscd: # service nscd restart Stopping nscd: [FAILED] Starting nscd: [ OK ]. This makes sense since I believe nscd has crashed, so it shouldn't already be running, but when I view the status of nscd: # service nscd status nscd dead but subsys locked. And ps -A returns no processes related to nscd (I assume because it's dead). I've edited /etc/nscd.conf and uncommented the line that defines the location of the log file. It created the file but it never writes anything to it. I tried looking at the init script but found that it's no help, since the script thinks everything is running fine - the service returns that it started up correctly. How do I 'unlock' the subsys that nscd is complaining about?

    Read the article

  • How do I purge or empty Windows Explorer's network username and sharename cache?

    - by Abel
    While troubleshooting a Samba vs Windows Network issue, I noticed that Windows Explorer remembers the login credentials of remote shares, even if you ask it not to. For instance, after accessing a share using \\servername\sharename plus entering username/password and then closing Windows Explorer, adding the same share as a network drive gives the following message, regardless of whether the username is the same or not: The network folder specified is currently mapped using a different user name and password. To connect using a different user name and password, first disconnect any existing mappings to this network share. Using NET USE does not show the share. After restarting the computer, I have no problems accessing the share using different credentials. But restarting just to test other credentials is annoying, especially while troubleshooting. How can I purge this cache, using Windows Vista? Note: using nbtstat -R[R], ipconfig /renew, killing explorer.exe or disabling / re-enabling the network card didn't help.

    Read the article

  • Database backup regardless of backup made through a control panel?

    - by developer
    I know that all CMS or CMC platforms have some sort of walkthrough for their users to fully back up and restore the database or the whole website. While we all can perform such backups (and there are even plugins which automate the whole procedure) and restore them when necessary (such as when we migrate to a new server), one can also back up the whole website by means of a control panel such as DirectAdmin or cPanel. Now, I just want to know if it is necessary for us to do database or website backups the way they are described by a specific CMC or CMS developer, even after we perform a whole-website backup in a control panel such as DA? An example of such CMS platforms is Moodle. Moodle Docs describes how we can back up and restore Moodle here. So do we, out of necessity, have to make the backup the way described, or can we simply do it the control panel way? Thanks

    Read the article

  • Map keycode 133+54 to shift+control+c in Linux for Mac keyboard?

    - by Edward_178118
    I'm on Linux Mint 13 (MATE), using the terminal program Terminator with a Mac keyboard. I want the command key for COPY/PASTE to behave as it does on the Mac. I have been able to change it to treat the command key as a control key, and this works fine for most apps, except in the terminal program. Using xev, when I press command+c I get a keycode of 133 + 54. This sends a ^c to the terminal app, which acts like a ^c in a shell. The default for COPY, which can be changed in Terminator, is Shift+Control+c. Is there a way to map the keycode of 133 + 54 to Shift+Control+c, but only for the Terminator app? Thanks!

    Read the article

  • How can I use Android as a remote control for streaming?

    - by michael
    I would like to know if I can use Android to remotely control streaming on my laptop. I would like to use my laptop as my streaming server and use my HDTV to view the stream, and I need some way to remotely control my streaming server. I have read about http://maketecheasier.com/install-vlc-shares-in-Ubuntu-and-stream-videos-to-Android/2011/02/25 and http://code.google.com/p/android-vlc-remote/ but those stream to the Android phone itself. I just need something to remotely control streaming to my TV. Is that possible?

    Read the article

  • In Word 2010, how can I insert a control that updates a document property when the content is edited?

    - by michielvoo
    In Word 2010 you can insert document properties from the Insert ribbon. For example: Insert > Text > Quick Parts > Document Property > Subject. If you do this, a control will be added with the following placeholder text: [Subject]. Notice the square brackets around the word Subject. These square brackets are not present in the placeholder text for manually inserted controls (which can be inserted using the Developer ribbon). When a user opens the document and replaces the placeholder text with his own text, the document metadata is updated. This behavior is different from a field, which can only be updated by first updating the metadata. Unfortunately the range of document properties that can be added to the document is limited, and I would like to add other (custom) properties this way as well. How can I manually insert a control that will update the document metadata with the content entered in the control?

    Read the article

  • Caching for database questions.

    - by SeanD
    When we say caching, like using memcached or Redis, is this 1:1 caching between the user and the cache, or can we cache 1 item and use it for all users? Some items, like a friend list, will be 1:1 since they are unique per user. But if I want to cache the auto-complete list for city lookups, which can be used by any user, will it just store 1 list in the cache used by all users at the same time, or does it need to store 1 list per user? Is it possible to cache the entire database, all the lookups, all the users, all their photos, etc. using memcached or Redis? So from the above example: a friend list will be cleared from the cache when the user logs off. But something like city auto-complete will stay in the cache 24-7-365, am I correct?
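
    To make the distinction concrete, here is a minimal sketch of the keying idea (the ConcurrentDictionary standing in for memcached/Redis, and the key names, are my own invention, not from the question): the cache itself is shared by every user, and whether an entry is "per user" or "global" is decided purely by how the key is built.

      using System;
      using System.Collections.Concurrent;
      using System.Collections.Generic;

      class CacheKeyDemo
      {
          // In-process stand-in for a shared cache such as memcached or Redis.
          static readonly ConcurrentDictionary<string, object> Cache =
              new ConcurrentDictionary<string, object>();

          // One entry per user: the user id is part of the key.
          static List<string> GetFriendList(int userId) =>
              (List<string>)Cache.GetOrAdd($"user:{userId}:friends", _ => LoadFriendsFromDb(userId));

          // One entry shared by all users: no user id in the key.
          static List<string> GetCityAutocomplete() =>
              (List<string>)Cache.GetOrAdd("lookup:cities", _ => LoadCitiesFromDb());

          // Stand-ins for the real database queries.
          static List<string> LoadFriendsFromDb(int userId) => new List<string> { "alice", "bob" };
          static List<string> LoadCitiesFromDb() => new List<string> { "Berlin", "Boston", "Brisbane" };

          static void Main()
          {
              Console.WriteLine(string.Join(", ", GetFriendList(42)));     // per-user entry
              Console.WriteLine(string.Join(", ", GetCityAutocomplete())); // shared entry
          }
      }

    A real memcached/Redis deployment works the same way: per-user data gets the user id baked into the key, shared lookups use a fixed key, and eviction (for example clearing the friend list on logout) is just a delete of that one key.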

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft’s offices in London Victoria. As usual it was both a fun and informative evening and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now. Scene setting A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS lookup component (when using FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow’s execution which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there’s an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible. General tips Here is a simple tick list you can follow in order to tune your lookups: Use a SQL statement to charge your cache, don’t just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?) Only pick the columns that you need, ignore everything else Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length. Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR so if you can use DT_STR, consider doing so. Same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8. Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause! Thinking outside the box It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little and here are a few ideas to get your creative juices flowing: There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow. Know your data. 
If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset and the remaining data that doesn’t find a match could be passed to a second lookup that perhaps uses a NoCache lookup thus negating the need to pull all of that least-used lookup data into memory. Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you’ll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop. Taking the previous data partitioning idea further … a dataflow can contain more than one data path so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition. Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all once you have the key column(s) from your lookup set then you can use that key to get the rest of attributes further downstream, perhaps even in another dataflow. Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond. Above all, experiment and be creative with different combinations. You may be surprised at what works. Final thoughts If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that’s a lot of work that wouldn’t have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet

    Read the article

  • Abstract base class for Winforms-Control and VS2008 Designer support?

    - by BeowulfOF
    Hi, I ran into a problem with inherited controls in WinForms and need some advice on it. I use a base class for items in a list (a self-made GUI list built from a panel) and some inherited controls, one for each type of data that can be added to the list. There was no problem with it, but I have now found out that it would be right to make the base control an abstract class, since it has methods that need to be implemented in all inherited controls, are called from the code inside the base control, but must not and cannot be implemented in the base class. When I mark the base control as abstract, the VS2008 designer refuses to load the window. Is there any way to get the designer to work with the base control made abstract?

    Read the article

  • What is the equivalent of ASP:Timer control in ASP.NET MVC?

    - by Rupa
    Hi, recently I have been working on migrating our ASP.NET Web application to MVC. I am wondering if there is any equivalent of the ASP:Timer control in the ASP.NET MVC framework, so that this timer can automatically check for a particular value from the database once every couple of seconds (an interval that we specify). If there is no equivalent of the Timer control in MVC, are there any other ways to implement this? Appreciate your responses. Thanks

    Read the article

  • High Load mysql on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory, running apache2, memcached and nginx. Memory load is always at the maximum, with only 500 MB free. Most of the memory is used by MySQL. Apache is configured for only 70 clients; the other services use little memory. When MySQL uses up all the memory it stops, and nothing works until MySQL is restarted. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables, which is why I use InnoDB. Here is my mysql config. [mysqld] # # * Basic Settings # default-time-zone = "+04:00" user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking default-time-zone='Europe/Moscow' # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. # # * Fine Tuning # #low_priority_updates = 1 concurrent_insert = ALWAYS wait_timeout = 600 interactive_timeout = 600 #normal key_buffer_size = 2024M #key_buffer_size = 1512M #70% hot cache key_cache_division_limit= 70 #16-32 max_allowed_packet = 32M #1-16M thread_stack = 8M #40-50 thread_cache_size = 50 #orderby groupby sort sort_buffer_size = 64M #same myisam_sort_buffer_size = 400M #temp table creates when group_by tmp_table_size = 3000M #tables in memory max_heap_table_size = 3000M #on disk open_files_limit = 10000 table_cache = 10000 join_buffer_size = 5M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #myisam_use_mmap = 1 max_connections = 200 thread_concurrency = 8 # # * Query Cache Configuration # #more ignored query_cache_limit = 50M query_cache_size = 210M #on query cache query_cache_type = 1 # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /var/log/mysql/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration log_slow_queries = /var/log/mysql/mysql-slow.log long_query_time = 1 log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log server-id = 1 log-bin = /var/lib/mysql/mysql-bin #replicate-do-db = gate log-bin-index = /var/lib/mysql/mysql-bin.index log-error = /var/lib/mysql/mysql-bin.err relay-log = /var/lib/mysql/relay-bin relay-log-info-file = /var/lib/mysql/relay-bin.info relay-log-index = /var/lib/mysql/relay-bin.index binlog_do_db = 24avia expire_logs_days = 10 max_binlog_size = 100M read_buffer_size = 4024288 innodb_buffer_pool_size = 5000M innodb_flush_log_at_trx_commit = 2 innodb_thread_concurrency = 8 table_definition_cache = 2000 group_concat_max_len = 16M #binlog_do_db = gate #binlog_ignore_db = include_database_name # # * BerkeleyDB # # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12. #skip-bdb # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # You might want to disable InnoDB to shrink the mysqld process by circa 100MB. #skip-innodb # # * Security Features # # Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 500M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 32M key_buffer_size = 512M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ Please, help me make it stable. Memory used /etc/mysql # free total used free shared buffers cached Mem: 32930800 32766424 164376 0 139208 23829196 -/+ buffers/cache: 8798020 24132780 Swap: 33553328 44660 33508668 Maybe my problem not in memory, but MySQL stops every day. As you can see, cache memory free 24 gb. Thank to Michael Hampton? for correction. Load overage on server 3.5. Maybe hdd or another problem? Maybe my config not optimal for 30gb InnoDB ? I'm already try mysqltuner and tunung-primer.sh , but they marked all green. Mysqltuner output mysqltuner >> MySQLTuner 1.0.1 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.5.24-9-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 112G (Tables: 1528) [--] Data in InnoDB tables: 39G (Tables: 340) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [!!] Total fragmented tables: 344 -------- Performance Metrics ------------------------------------------------- [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B) [--] Reads / Writes: 84% / 16% [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads) [OK] Maximum possible memory usage: 26.3G (83% of installed RAM) [OK] Slow queries: 1% (259K/14M) [!!] Highest connection usage: 100% (201/200) [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads) [OK] Query cache efficiency: 74.3% (8M cached / 11M selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts) [!!] Joins performed without indexes: 106025 [!!] Temporary tables created on disk: 49% (351K on disk / 715K total) [OK] Thread cache hit rate: 99% (249 created / 259K connections) [!!] Table cache hit rate: 15% (2K open / 13K opened) [OK] Open file limit used: 15% (3K/20K) [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks) [!!] 
InnoDB data size / buffer pool: 39.4G/5.9G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce or eliminate persistent connections to reduce connection usage Adjust your join queries to always utilize indexes Temporary table size is already large - reduce result set size Reduce your SELECT DISTINCT queries without LIMIT clauses Increase table_cache gradually to avoid file descriptor limits Variables to adjust: max_connections (> 200) wait_timeout (< 600) interactive_timeout (< 600) join_buffer_size (> 5.0M, or always use indexes with joins) table_cache (> 10000) innodb_buffer_pool_size (>= 39G) Mysql primer output -- MYSQL PERFORMANCE TUNING PRIMER -- - By: Matthew Montgomery - MySQL Version 5.5.24-9-log x86_64 Uptime = 0 days 8 hrs 20 min 50 sec Avg. qps = 478 Total Questions = 14369568 Threads Connected = 16 Warning: Server has not been running for at least 48hrs. It may not be safe to use these recommendations To find out more information on how each of these runtime variables effects performance visit: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html Visit http://www.mysql.com/products/enterprise/advisors.html for info about MySQL's Enterprise Monitoring and Advisory Service SLOW QUERIES The slow query log is enabled. Current long_query_time = 1.000000 sec. You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete Your long_query_time seems to be fine BINARY UPDATE LOG The binary update log is enabled Binlog sync is not enabled, you could loose binlog records during a server crash WORKER THREADS Current thread_cache_size = 50 Current threads_cached = 45 Current threads_per_sec = 0 Historic threads_per_sec = 0 Your thread_cache_size is fine MAX CONNECTIONS Current max_connections = 200 Current threads_connected = 11 Historic max_used_connections = 201 The number of used connections is 100% of the configured maximum. 
You should raise max_connections INNODB STATUS Current InnoDB index space = 214 M Current InnoDB data space = 39.40 G Current InnoDB buffer pool free = 0 % Current innodb_buffer_pool_size = 5.85 G Depending on how much space your innodb indexes take up it may be safe to increase this value to up to 2 / 3 of total system memory MEMORY USAGE Max Memory Ever Allocated : 23.46 G Configured Max Per-thread Buffers : 15.84 G Configured Max Global Buffers : 7.54 G Configured Max Memory Limit : 23.39 G Physical Memory : 31.40 G Max memory limit seem to be within acceptable norms KEY BUFFER Current MyISAM index space = 5.61 G Current key_buffer_size = 1.47 G Key cache miss rate is 1 : 5578 Key buffer free ratio = 77 % Your key_buffer_size seems to be fine QUERY CACHE Query cache is enabled Current query_cache_size = 200 M Current query_cache_used = 101 M Current query_cache_limit = 50 M Current Query cache Memory fill ratio = 50.59 % Current query_cache_min_res_unit = 4 K MySQL won't cache query results that are larger than query_cache_limit in size SORT OPERATIONS Current sort_buffer_size = 64 M Current read_rnd_buffer_size = 256 K Sort buffer seems to be fine JOINS Current join_buffer_size = 5.00 M You have had 106606 queries where a join could not use an index properly You have had 8 joins without keys that check for key usage after each row join_buffer_size >= 4 M This is not advised You should enable "log-queries-not-using-indexes" Then look for non indexed joins in the slow query log. OPEN FILES LIMIT Current open_files_limit = 20210 files The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage. Your open_files_limit value seems to be fine TABLE CACHE Current table_open_cache = 10000 tables Current table_definition_cache = 2000 tables You have a total of 1910 tables You have 2151 open tables. The table_cache value seems to be fine TEMP TABLES Current max_heap_table_size = 2.92 G Current tmp_table_size = 2.92 G Of 366426 temp tables, 49% were created on disk Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables. TABLE SCANS Current read_buffer_size = 3 M Current table scan ratio = 2846 : 1 read_buffer_size seems to be fine TABLE LOCKING Current Lock Wait ratio = 1 : 185 You may benefit from selective use of InnoDB. If you have long running SELECT's against MyISAM tables and perform frequent updates consider setting 'low_priority_updates=1'

    Read the article

  • failed to load viewstate. the control tree into which viewstate

    - by Mohan
    Hi everybody, I have a GridView control. When I change the page of the GridView I get this error: Server Error in '/' Application. Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate during the previous request. For example, when adding controls dynamically, the controls added during a post-back must match the type and position of the controls added during the initial request. How do I tackle this problem? It's urgent. Any suggestion? Thanks in advance.
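
    The error message itself points at the usual cause: the control tree on post-back differs from the one that saved the viewstate. Below is a minimal sketch of the common pattern (class and control names are invented, and gridOrders / filterHolder are assumed to be declared in the .aspx markup): recreate any dynamically added controls in OnInit, in the same order, on every request, and only re-bind the GridView in the paging event.

      using System;
      using System.Web.UI;
      using System.Web.UI.WebControls;

      // Sketch only: everything added in code is added in OnInit, in the same order
      // on the initial request and on every post-back, so the control tree always
      // matches the one that saved the viewstate.
      public partial class OrdersPage : Page
      {
          protected override void OnInit(EventArgs e)
          {
              base.OnInit(e);

              // Dynamic control: recreated on *every* request, before viewstate loads.
              var statusFilter = new DropDownList { ID = "ddlStatusFilter" };
              filterHolder.Controls.Add(statusFilter);

              gridOrders.PageIndexChanging += (s, args) =>
              {
                  gridOrders.PageIndex = args.NewPageIndex;
                  BindGrid();                               // re-bind after changing page
              };
          }

          protected void Page_Load(object sender, EventArgs e)
          {
              if (!IsPostBack) BindGrid();                  // initial bind only
          }

          private void BindGrid()
          {
              // gridOrders.DataSource = ...; gridOrders.DataBind();
          }
      }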

    Read the article

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a ttl of 120 seconds) ;; ANSWER SECTION: orion.2x.to. 116 IN A 80.237.201.41 orion.2x.to. 116 IN A 87.230.54.12 orion.2x.to. 116 IN A 87.230.100.10 orion.2x.to. 116 IN A 87.230.51.65 I learned that not every ISP / device treats such a response the same way. For example some DNS servers rotate the addresses randomly or always cycle them through. Some just propagate the first entry, others try to determine which is best (regionally near) by looking at the ip address. However if the userbase is big enough (spreads over multiple ISPs etc) it balances pretty well. The discrepancies from highest to lowest loaded server hardly every exceeds 15%. However now I have the problem that I am introducing more servers into the systems, that not all have the same capacities. I currently only have 1gbps servers, but I want to work with 100mbit and also 10gbps servers too. So what I want is I want to introduce a server with 10 GBps with a weight of 100, a 1 gbps server with a weight of 10 and a 100 mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nice. the bandwidth doubled almost.) But adding a 10gbit server 100 times to DNS is a bit rediculous. So I thought about using the TTL. If I give server A 240 seconds ttl and server B only 120 seconds (which is about about the minimum to use for round robin, as a lot of dns servers set to 120 if a lower ttl is specified.. so i have heard) I think something like this should occour in an ideal scenario: first 120 seconds 50% of requests get server A -> keep it for 240 seconds. 50% of requests get server B -> keep it for 120 seconds second 120 seconds 50% of requests still have server A cached -> keep it for another 120 seconds. 25% of requests get server A -> keep it for 240 seconds 25% of requests get server B -> keep it for 120 seconds third 120 seconds 25% will get server A (from the 50% of Server A that now expired) -> cache 240 sec 25% will get server B (from the 50% of Server A that now expired) -> cache 120 sec 25% will have server A cached for another 120 seconds 12.5% will get server B (from the 25% of server B that now expired) -> cache 120sec 12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec fourth 120 seconds 25% will have server A cached -> cache for another 120 secs 12.5% will get server A (from the 25% of b that now expired) -> cache 240 secs 12.5% will get server B (from the 25% of b that now expired) -> cache 120 secs 12.5% will get server A (from the 25% of a that now expired) -> cache 240 secs 12.5% will get server B (from the 25% of a that now expired) -> cache 120 secs 6.25% will get server A (from the 12.5% of b that now expired) -> cache 240 secs 6.25% will get server B (from the 12.5% of b that now expired) -> cache 120 secs 12.5% will have server A cached -> cache another 120 secs ... i think i lost something at this point but i think you get the idea.... As you can see this gets pretty complicated to predict and it will for sure not work out like this in practice. But it should definitely have an effect on the distribution! I know that weighted round robin exists and is just controlled by the root server. It just cycles through dns records when responding and returns dns records with a set propability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise. 
If it doesn't weight perfectly that's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution - and it doesn't require a DNS server that controls this dynamically, which saves resources - which is in my opinion the whole point of DNS load balancing vs hardware load balancers. My question now is... are there any best practices / methods / rules of thumb for weighting round robin distribution using the TTL attribute of DNS records? Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with Ethernet can handle. So I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous and it also only increases the width of the bottleneck and does not eliminate it. The only thing I can think of is anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.
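
    The epoch-by-epoch reasoning above can be sanity-checked with a tiny simulation. This is only a sketch of the model described in the question (two servers, naive 50/50 round robin, per-record TTLs of 240 s and 120 s; the resolver count and step size are arbitrary assumptions), not a statement about how real resolvers behave:

      using System;

      class TtlWeightingSimulation
      {
          static void Main()
          {
              const int resolvers = 10_000;                      // simulated caching resolvers
              const int ttlA = 240, ttlB = 120, step = 120, steps = 1_000;
              var rng = new Random(42);
              var server = new char[resolvers];                  // which server each resolver has cached
              var expiresAt = new int[resolvers];                // when that cached answer expires
              long pointingAtA = 0, total = 0;

              for (int t = 0; t < steps * step; t += step)
              {
                  for (int i = 0; i < resolvers; i++)
                  {
                      if (t >= expiresAt[i])                     // cache expired: re-resolve
                      {
                          bool pickA = rng.Next(2) == 0;         // naive 50/50 round robin answer
                          server[i] = pickA ? 'A' : 'B';
                          expiresAt[i] = t + (pickA ? ttlA : ttlB);
                      }
                      if (server[i] == 'A') pointingAtA++;       // measure who each resolver points at
                      total++;
                  }
              }
              Console.WriteLine($"Share pointing at A (TTL {ttlA}s): {(double)pointingAtA / total:P1}");
          }
      }

    With those numbers the share pointing at the 240 s record settles around two thirds, i.e. roughly proportional to the TTL ratio, which matches the intuition in the question; it says nothing about resolvers that clamp, ignore, or rewrite TTLs.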

    Read the article

  • How to prepare KML file for Android Emulator Control?

    - by AndroiDBeginner
    I am trying to test my application with location information. As you know, the Emulator Control has the ability to load from a KML file (Eclipse - DDMS - Emulator Control - Location Controls - KML - Load KML...). I've prepared a KML file using the Google Earth application with its "Add path" feature, then saved it with a .kml extension and loaded it in Eclipse. Eclipse didn't load this KML file. How do I prepare a KML file for the Android Emulator Control?

    Read the article

  • ASP.NET Web User Control with Javascript used multiple times on a page - How to make javascript func

    - by Jason Summers
    I think I summed up the question in the title. Here is some further elaboration... I have a web user control that is used in multiple places, sometimes more than once on a given page. The web user control has a specific set of JavaScript functions (mostly jQuery code) that are contained within *.js files and automatically inserted into page headers. However, when I want to use the control more than once on a page, the *.js files are included 'n' number of times and, rightly so, the browser gets confused as to which control it's meant to be executing which function on. What do I need to do in order to resolve this problem? I've been staring at this all day and I'm at a loss. All comments greatly appreciated. Jason
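
    One common way to handle this (a sketch, not necessarily the only fix; the file path, control name, and JavaScript function name are made up) is to stop emitting the script tag from the .ascx markup and instead register the include from the control's code-behind with a fixed key. ClientScriptManager emits only one include per (type, key) pair, so the file appears once no matter how many instances of the control sit on the page; per-instance behaviour then keys off this.ClientID.

      using System;
      using System.Web.UI;

      public partial class RatingControl : UserControl
      {
          protected override void OnPreRender(EventArgs e)
          {
              base.OnPreRender(e);

              // Called by every instance, but emitted only once per page
              // because the (type, key) pair is identical each time.
              Page.ClientScript.RegisterClientScriptInclude(
                  typeof(RatingControl),
                  "RatingControlScript",
                  ResolveUrl("~/Scripts/RatingControl.js"));

              // Per-instance wiring: hand this control's ClientID to the shared script
              // (initRatingControl is an assumed function inside RatingControl.js).
              Page.ClientScript.RegisterStartupScript(
                  typeof(RatingControl),
                  "RatingControlInit_" + ClientID,
                  $"initRatingControl('{ClientID}');",
                  addScriptTags: true);
          }
      }

    The design point is that the shared script holds only functions, while anything instance-specific (element IDs, selectors) is passed in per ClientID, so multiple copies of the control no longer step on each other.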

    Read the article

  • Cross-thread operation not valid: Control accessed from a thread other than the thread it was create

    - by SilverHorse
    I have a scenario. (Windows Forms, C#, .NET) There is a main form which hosts some user control. The user control does some heavy data operation, such that if I directly call the Usercontrol_Load method the UI become nonresponsive for the duration for load method execution. To overcome this I load data on different thread (trying to change existing code as little as I can) I used a background worker thread which will be loading the data and when done will notify the application that it has done its work. Now came a real problem. All the UI (main form and its child usercontrols) was created on the primary main thread. In the LOAD method of the usercontrol I'm fetching data based on the values of some control (like textbox) on userControl. The pseudocode would look like this: //CODE 1 UserContrl1_LOadDataMethod() { if(textbox1.text=="MyName") <<======this gives exception { //Load data corresponding to "MyName". //Populate a globale variable List<string> which will be binded to grid at some later stage. } } The Exception it gave was Cross-thread operation not valid: Control accessed from a thread other than the thread it was created on. To know more about this I did some googling and a suggestion came up like using the following code //CODE 2 UserContrl1_LOadDataMethod() { if(InvokeRequired) // Line #1 { this.Invoke(new MethodInvoker(UserContrl1_LOadDataMethod)); return; } if(textbox1.text=="MyName") //<<======Now it wont give exception** { //Load data correspondin to "MyName" //Populate a globale variable List<string> which will be binded to grid at some later stage } } BUT BUT BUT... it seems I'm back to square one. The Application again become nonresponsive. It seems to be due to the execution of line #1 if condition. The loading task is again done by the parent thread and not the third that I spawned. I don't know whether I perceived this right or wrong. I'm new to threading. How do I resolve this and also what is the effect of execution of Line#1 if block? The situation is this: I want to load data into a global variable based on the value of a control. I don't want to change the value of a control from the child thread. I'm not going to do it ever from a child thread. So only accessing the value so that the corresponding data can be fetched from the database.
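
    One way to restructure this (a sketch only; the control and method names are illustrative, not from the original code) is to read textbox1.Text on the UI thread before the background work starts, pass it to the worker as an argument, and touch controls again only in RunWorkerCompleted, which the BackgroundWorker raises back on the UI thread:

      using System;
      using System.Collections.Generic;
      using System.ComponentModel;
      using System.Windows.Forms;

      public partial class UserControl1 : UserControl
      {
          private readonly BackgroundWorker _worker = new BackgroundWorker();

          public UserControl1()
          {
              InitializeComponent();
              Load += UserControl1_Load;

              _worker.DoWork += (s, e) =>
              {
                  // Background thread: no control access here, only the passed-in value.
                  var name = (string)e.Argument;
                  e.Result = LoadDataFor(name);          // the slow database call
              };
              _worker.RunWorkerCompleted += (s, e) =>
              {
                  // Back on the UI thread: safe to touch controls again.
                  dataGridView1.DataSource = e.Result;
              };
          }

          private void UserControl1_Load(object sender, EventArgs e)
          {
              // UI thread: capture the textbox value here, then hand it to the worker.
              _worker.RunWorkerAsync(textBox1.Text);
          }

          private static List<string> LoadDataFor(string name)
          {
              return new List<string> { name };          // stand-in for the real query
          }
      }

    This avoids the Invoke-back-into-the-load-method trick entirely: the UI thread never blocks on the query, and the only cross-thread traffic is the argument going in and the result coming back.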

    Read the article

  • SharePoint The My Site of <user name> is scheduled for deletion.

    - by Linda
    Hi there, in my email today I got the following: The My Site of <user name> is scheduled for deletion. As their manager you are now the temporary owner of their site. This temporary ownership gives you access to the site to copy any business-related information you might need. To access the site use this URL: http://mysites.freud.com/personal/ I click on the link and I can see that their site is there. I do not want their site to be deleted at all. What can I do? When I search for the user using the PeopleSearchBoxEx web part the user comes up, but when I click on their name I get the error: Server Error in '/' Application. -------------------------------------------------------------------------------- User not found. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: Microsoft.SharePoint.SPException: User not found. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [SPException: User not found.] Microsoft.SharePoint.Portal.WebControls.ProfilePropertyLoader.OnInit(EventArgs e) +4415 System.Web.UI.Control.InitRecursive(Control namingContainer) +333 System.Web.UI.Control.InitRecursive(Control namingContainer) +210 System.Web.UI.Control.InitRecursive(Control namingContainer) +210 System.Web.UI.Control.InitRecursive(Control namingContainer) +210 System.Web.UI.Control.InitRecursive(Control namingContainer) +210 System.Web.UI.Control.InitRecursive(Control namingContainer) +210 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +378 Any idea how I can stop this person's My Site from being deleted and get their profile to work again?

    Read the article

  • Why might a System.String object not cache its hash code?

    - by Dan Tao
    A glance at the source code for string.GetHashCode using Reflector reveals the following (for mscorlib.dll version 4.0): public override unsafe int GetHashCode() { fixed (char* str = ((char*) this)) { char* chPtr = str; int num = 0x15051505; int num2 = num; int* numPtr = (int*) chPtr; for (int i = this.Length; i > 0; i -= 4) { num = (((num << 5) + num) + (num >> 0x1b)) ^ numPtr[0]; if (i <= 2) { break; } num2 = (((num2 << 5) + num2) + (num2 >> 0x1b)) ^ numPtr[1]; numPtr += 2; } return (num + (num2 * 0x5d588b65)); } } Now, I realize that the implementation of GetHashCode is not specified and is implementation-dependent, so the question "is GetHashCode implemented in the form of X or Y?" is not really answerable. I'm just curious about a few things: If Reflector has disassembled the DLL correctly and this is the implementation of GetHashCode (in my environment), am I correct in interpreting this code to indicate that a string object, based on this particular implementation, would not cache its hash code? Assuming the answer is yes, why would this be? It seems to me that the memory cost would be minimal (one more 32-bit integer, a drop in the pond compared to the size of the string itself) whereas the savings would be significant, especially in cases where, e.g., strings are used as keys in a hashtable-based collection like a Dictionary<string, [...]>. And since the string class is immutable, it isn't like the value returned by GetHashCode will ever even change. What could I be missing? UPDATE: In response to Andras Zoltan's closing remark: There's also the point made in Tim's answer(+1 there). If he's right, and I think he is, then there's no guarantee that a string is actually immutable after construction, therefore to cache the result would be wrong. Whoa, whoa there! This is an interesting point to make (and yes it's very true), but I really doubt that this was taken into consideration in the implementation of GetHashCode. The statement "therefore to cache the result would be wrong" implies to me that the framework's attitude regarding strings is "Well, they're supposed to be immutable, but really if developers want to get sneaky they're mutable so we'll treat them as such." This is definitely not how the framework views strings. It fully relies on their immutability in so many ways (interning of string literals, assignment of all zero-length strings to string.Empty, etc.) that, basically, if you mutate a string, you're writing code whose behavior is entirely undefined and unpredictable. I guess my point is that for the author(s) of this implementation to worry, "What if this string instance is modified between calls, even though the class as it is publicly exposed is immutable?" would be like for someone planning a casual outdoor BBQ to think to him-/herself, "What if someone brings an atomic bomb to the party?" Look, if someone brings an atom bomb, party's over.
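
    For what it's worth, if a particular workload really does hash the same strings over and over, the caching the question describes is easy to bolt on from the outside. A rough sketch (type and member names invented) that makes the trade-off explicit: one extra int stored per key, one GetHashCode call ever, and correctness resting entirely on the string never being mutated:

      using System;

      public sealed class HashCachedString : IEquatable<HashCachedString>
      {
          private readonly string _value;
          private readonly int _hash;            // computed once, stored alongside the string

          public HashCachedString(string value)
          {
              _value = value ?? throw new ArgumentNullException(nameof(value));
              _hash = value.GetHashCode();       // relies on the string never being mutated
          }

          public override int GetHashCode() => _hash;
          public bool Equals(HashCachedString other) => other != null && _value == other._value;
          public override bool Equals(object obj) => Equals(obj as HashCachedString);
          public override string ToString() => _value;
      }

    Used as a dictionary key it behaves like the underlying string, just without recomputing the hash on every lookup.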

    Read the article

  • Team Foundation Server vs. SVN and other source control systems

    - by micha12
    We are currently looking for a version control system to use in our projects. Up to now we have been using VSS, but nowadays more powerful source control systems exist, like TFS, SVN, etc. We are planning to migrate our projects to Visual Studio 2010, so the first idea that comes to mind is to start using TFS 2010. I have never worked with SVN or other version control systems. My question is: how good is TFS compared to other source control systems? Is it a good idea to use it, or should we rather use SVN (or any other system)? Thank you.

    Read the article

  • How do I make a child control re-anchor to its parent Form when it has been cut off on a small resol

    - by Paul Fedory
    I have a Windows Form with a default size of 1100 x 400, and I have a DataGridView control on it anchored to Top, Left, Bottom, Right. Resizing the form on a screen with resolution higher than 1100 x 400 works fine, and the anchoring works well, resizing the DataGridView control as expected. When I launch the form on a screen with resolution 800x600, the form is cut off, and made to fit the 800 x 600. The DataGridView is cut off, and cannot be seen entirely - it bleeds off the form to the right, so it's not respecting the right anchor. Resizing the form in this situation doesn't respect the anchoring settings for some reason: the DataGridView control does not resize when the form is resized. Is there a way programmatically (on a resize event or something) to force the child DataGridView control to anchor to the sides of the form? I've already tried calling a PerformLayout and Refresh in the Form's resize event but it's rather redundant, isn't it?

    Read the article

  • Any picture Gallery Control for .NET 1.1 in WinForms?

    - by Romias
    I need a UserControl to display pictures as a gallery in WinForms. I have my pictures as an Image collection, but it's no problem to change that to fit the control's capabilities. It would be nice if it worked with .NET 1.1, but since I'm planning to migrate all our code to 2.0, a control for that framework could be useful in the future too. If it's free, much better :) Does any of you know a control like this? Thanks!

    Read the article
