Search Results

Search found 18677 results on 748 pages for 'current'.

  • Is there unresizable space in LaTeX? Pictures in a good-looking grid.

    - by drasto
    I've created a LaTeX macro to typeset guitar chord diagrams (using the picture environment). Now I want diagrams of different sizes to appear in a good-looking grid when typeset next to each other, as the picture shows. (On the picture: the layout labeled "First" is the bad one; the layout labeled "Second" is the correct one, with an equal number of diagrams in each line.)

    I'm using \hspace to put some space between the diagrams; otherwise they would be too close to each other. As you can see, in the second case, when LaTeX arranges the pictures so that there is the same number of them in each line, it works. However, if there are fewer pictures in the last line, they become "shifted" to the right. I don't want this. I guess it is because LaTeX stretches the spaces between the diagrams in the earlier lines a little so that each line exactly fits the page width. How do I tell LaTeX not to resize the spaces created by \hspace? Or is there any other way? I guess I cannot use tables, because I don't know how many diagrams will fit in one line.

    This is the current state of the code:

        \newcommand{\spaceForChord}{1.7cm}
        \newcommand{\chordChart}[1]{%
          % calculate dimensions xdim and ydim according to settings
          \begin{picture}(xdim, ydim)%
            % draw the diagram inside the defined area
          \end{picture}%
          \hspace*{\spaceForChord}%
          \hspace*{-\xdim}%
        }%
        % end preamble and begin document
        \begin{document}
        First:\\*
        \\*
        \chordChart{...some arguments to change the diagram's look...}
        \chordChart{...some arguments to change the diagram's look...}
        \chordChart{...some arguments to change the diagram's look...}
        \chordChart{...some arguments to change the diagram's look...}
        \chordChart{...some arguments to change the diagram's look...}
        % ...the above line is repeated 12 more times to produce the result shown in the picture
        \end{document}

    Thanks for any help.
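
    For what it's worth, a sketch of one possible fix (not from the original post): \hspace*{\spaceForChord} is fixed glue and cannot stretch; what stretches is the ordinary interword space produced by the line breaks between the \chordChart calls when TeX justifies each line. Setting the grid ragged-right removes that stretching, so the last line lines up with the others:

        % Sketch: \raggedright stops TeX from stretching spaces to
        % justify lines, so every gap keeps its natural width.
        {\raggedright
        \chordChart{...}%
        \chordChart{...}%
        % ... more charts ...
        \par}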

  • MySQL: Limit rows linked to each joined row

    - by SolidSnakeGTI
    Hello. Specifications: MySQL 4.1+. I have a situation that requires a certain result set from a MySQL query. Let's look at the current query first, and then I'll ask my question:

        SELECT thread.dateline AS tdateline, post.dateline AS pdateline, MIN(post.dateline)
        FROM thread AS thread
        LEFT JOIN post AS post ON (thread.threadid = post.threadid)
        LEFT JOIN forum AS forum ON (thread.forumid = forum.forumid)
        WHERE post.postid != thread.firstpostid
          AND thread.open = 1
          AND thread.visible = 1
          AND thread.replycount >= 1
          AND post.visible = 1
          AND (forum.options & 1)
          AND (forum.options & 2)
          AND (forum.options & 4)
          AND forum.forumid IN (1,2,3)
        GROUP BY post.threadid
        ORDER BY tdateline DESC, pdateline ASC

    As you can see, I mainly need to select the dateline of threads from the 'thread' table, plus the dateline of the second post of each thread, all under the conditions you see in the WHERE clause. Since each thread has many posts and I need only one result per thread, I've used a GROUP BY clause, so this query returns only one post dateline per thread. My questions are:

    1. How do I limit the returned threads per forum? Suppose I need a maximum of five threads returned for each forum declared in the WHERE clause forum.forumid IN (1,2,3); how can this be achieved?
    2. Are there any recommendations for optimizing this query (after solving the first point, of course)?

    Notes: I prefer not to use subqueries, but if that is the only solution available I'll accept it. Double queries are not recommended. I'm sure there's a smart solution for this situation. Advice appreciated in advance :)
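
    For what it's worth, the usual MySQL workaround does need one derived table (which the post hoped to avoid): number the threads within each forum with user variables and keep only the first five. A sketch, with the forum.options filters omitted for brevity; the inner query must be ordered by forumid so the counter resets exactly once per forum:

        -- Sketch: relies on MySQL evaluating the select expressions
        -- left to right, the usual caveat with this pattern.
        SET @rank := 0, @forum := NULL;
        SELECT *
        FROM (
            SELECT g.*,
                   @rank := IF(@forum = g.forumid, @rank + 1, 1) AS rnk,
                   @forum := g.forumid AS cur_forum
            FROM (
                SELECT thread.forumid,
                       thread.dateline AS tdateline,
                       MIN(post.dateline) AS pdateline
                FROM thread
                LEFT JOIN post ON (thread.threadid = post.threadid)
                WHERE post.postid != thread.firstpostid
                  AND thread.open = 1 AND thread.visible = 1
                  AND post.visible = 1 AND thread.forumid IN (1,2,3)
                GROUP BY thread.threadid
                ORDER BY thread.forumid, tdateline DESC
            ) AS g
        ) AS ranked
        WHERE rnk <= 5;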

  • Performance of inter-database query (between linked servers)

    - by Swoosh
    I have an import between two linked servers. I basically need to get the data from a multi-table join into a table on my side. The current query is something like this:

        select a.*
        from db1.dbo.tbl1 a
        inner join db1.dbo.tbl2 on ...
        inner join db1.dbo.tbl3 on ...
        inner join db1.dbo.tbl4 on ...
        inner join db2.dbo.myside on ...

    db1 = linked server; db2 = my own database. After this one, I use an INSERT INTO + SELECT to add this data to my table, which is located in db2 (usually a few hundred records, with the import running once a minute).

    My question is related to performance. The tables on the linked server (tbl1, tbl2, tbl3, tbl4) are huge, with millions of records, and they are slowing down the import process. I was told that if I do the join on the "other" side (db1, the linked server), for example in a stored procedure, then even if the query looks the same it would run faster. Is that right? This is kind of hard to test. Note that the join contains a table from my database too. Also, are there other "tricks" I could use to make this run faster? Thanks.
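
    One trick commonly suggested for linked servers (a sketch, assuming SQL Server; the bracketed names, join columns, and destination table are placeholders): wrap the remote portion of the join in OPENQUERY so it executes entirely on the linked server, and only the few hundred matching rows cross the link before being joined to the local table.

        -- [DB1SRV] = linked server name, remotedb = catalog on it.
        -- The string passed to OPENQUERY runs as-is on the remote server.
        INSERT INTO db2.dbo.import_target   -- hypothetical destination table
        SELECT r.*
        FROM OPENQUERY([DB1SRV],
               'SELECT a.*
                  FROM remotedb.dbo.tbl1 a
                 INNER JOIN remotedb.dbo.tbl2 b ON b.keycol = a.keycol
                 INNER JOIN remotedb.dbo.tbl3 c ON c.keycol = a.keycol
                 INNER JOIN remotedb.dbo.tbl4 d ON d.keycol = a.keycol') AS r
        INNER JOIN db2.dbo.myside m ON m.keycol = r.keycol;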

  • Use of extern in a C++ DLL

    - by dom_beau
    I declare and then instantiate a static variable in a DLL:

        // DLL.h
        class A {
            //...
        };
        static A* a;

        // DLL.cpp
        A* a = new A;

    So far, so good. I was advised to use extern rather than static:

        extern A* a; // in DLL.h

    No problem with that, but the extern variable must be defined somewhere; I got "Invalid storage class member". In other words, what I am used to doing is defining a variable in a source file like this:

        // In src.cpp
        A a;

    and then extern-declaring it in another source file in the same project:

        // In src2.cpp
        extern A a;

    so that it is the same object a at link time. Maybe that is not the right thing to do? So where do I define the variable that is now extern? Note that I used the static declaration in order to have the variable instantiated as soon as the DLL is loaded. The current use of static works most of the time, but I think I observe a delay or something like that in the variable's instantiation, while it should always be instantiated at load time. I've been investigating this problem for a week now and I can't find a solution.
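
    For reference, a minimal sketch of the usual split (standard C++, not tied to this particular DLL): the header carries only the extern declaration, and exactly one .cpp file carries the definition.

        // DLL.h -- a declaration; every file including this header sees the same object.
        class A { /*...*/ };
        extern A* a;

        // DLL.cpp -- the one definition; runs when the DLL is loaded.
        A* a = new A;

    Note that `static A* a;` in a header gives each translation unit its own private copy of the pointer, which would also explain the "delayed instantiation" symptom: different .cpp files were looking at different pointers.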

  • RegEx expression or jQuery selector to NOT match "external" links in href

    - by TrueBlueAussie
    I have a jQuery plugin that overrides link behavior to allow Ajax loading of page content. Simple enough with a delegated event like $(document).on('click', 'a', function(){});, but I only want it to apply to links that are not like these (Ajax loading is not applicable to them, so links like these need to behave normally):

        target="_blank"      // New browser window
        href="#..."          // Bookmark link (page is already loaded)
        href="afs://..."     // AFS file access
        href="cid://..."     // Content identifiers for MIME body parts
        href="file://..."    // The address of a file on a locally accessible drive
        href="ftp://..."     // Uses Internet File Transfer Protocol (FTP) to retrieve a file
        href="http://..."    // The most commonly used access method
        href="https://..."   // Provides some level of transmission security
        href="mailto://..."  // Opens an email program
        href="mid://..."     // The message identifier for email
        href="news://..."    // Usenet newsgroup
        href="x-exec://..."  // Executable program
        href="http://AnythingNotHere.com" // External links

    Sample code:

        $(document).on('click', 'a:not([target="_blank"])', function(){
            var $this = $(this);
            if ('some additional check of href'){
                // Do ajax load and stop default behaviour
                return false;
            }
            // allow link to work normally
        });

    Q: Is there a way to easily detect all "local links" that would only navigate within the current website, excluding all the variations mentioned above? Note: this is for an MVC 5 Razor website, so absolute site URLs are unlikely to occur.
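
    A sketch of one approach (not from the post): the browser resolves an anchor's href against the current page, so the element's standard URL properties can be compared with window.location to keep only same-site HTTP(S) navigation links.

        // Sketch: host, protocol, hash and pathname are standard anchor properties.
        $(document).on('click', 'a:not([target="_blank"])', function () {
            var sameSite =
                /^https?:$/.test(this.protocol) &&            // drops mailto:, ftp:, file:, ...
                this.host === window.location.host &&         // drops external domains
                !(this.hash &&                                // drops in-page bookmark links
                  this.pathname === window.location.pathname);
            if (sameSite) {
                // Ajax-load this.pathname + this.search here
                return false;
            }
            // anything else behaves normally
        });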

  • Avoid using InetAddress - Getting a raw IP address in network byte order

    - by Mylo
    Hey, I am trying to use the MaxMind GeoLite Country database on Google App Engine. However, I am having difficulty getting the Java API to work, as it relies on the InetAddress class, which is not available on App Engine. There may be a simple workaround, though, because it appears the API only uses InetAddress to determine the IP of a given hostname, and in my case the hostname is always an IP anyway.

    What I need is a way to convert an IP address represented as a String into a byte array in network byte order (which the addr.getAddress() method of InetAddress provides). This is the code the current API uses; I need to find a way of removing all references to InetAddress while ensuring it still works! Thanks for your time.

        /**
         * Returns the country the IP address is in.
         *
         * @param ipAddress String version of an IP address, i.e. "127.0.0.1"
         * @return the country the IP address is from.
         */
        public Country getCountry(String ipAddress) {
            InetAddress addr;
            try {
                addr = InetAddress.getByName(ipAddress);
            } catch (UnknownHostException e) {
                return UNKNOWN_COUNTRY;
            }
            return getCountry(bytesToLong(addr.getAddress()));
        }
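
    A sketch of a drop-in replacement (the helper name ipToBytes is hypothetical): since the input is always a dotted-quad IPv4 string, the four octets can be parsed directly, and the string's left-to-right order is already network byte order.

        /** Converts "127.0.0.1" to {127, 0, 0, 1}, i.e. network byte order. */
        private static byte[] ipToBytes(String ipAddress) {
            String[] parts = ipAddress.split("\\.");
            if (parts.length != 4) {
                throw new IllegalArgumentException("Not an IPv4 address: " + ipAddress);
            }
            byte[] addr = new byte[4];
            for (int i = 0; i < 4; i++) {
                int octet = Integer.parseInt(parts[i]);
                if (octet < 0 || octet > 255) {
                    throw new IllegalArgumentException("Bad octet in: " + ipAddress);
                }
                addr[i] = (byte) octet;
            }
            return addr;
        }

    getCountry could then return getCountry(bytesToLong(ipToBytes(ipAddress))), with the UnknownHostException catch replaced by catching IllegalArgumentException.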

  • How can I manage building library projects that produce both a static lib and a dll?

    - by Scott Langham
    I've got a large Visual Studio solution with ~50 projects. There are configurations for StaticDebug, StaticRelease, Debug and Release. Some libraries are needed in both DLL and static-lib form; to get them, we rebuild the solution with a different configuration. The Configuration Manager window is used to set up which projects need to build in which flavours: static lib, dynamic DLL, or both. This can be quite tricky to manage, and it's a bit annoying to have to build the solution multiple times and select the configurations in the right order, since static versions need building before non-static versions.

    I'm wondering whether, instead of this current scheme, it might be simpler to manage if, for the projects that need to produce both a static lib and a dynamic DLL, I created two projects, e.g.:

        CoreLib
        CoreDll

    I could either make both of these projects reference all the same files and build them twice, or, I'm wondering, would it be possible to build CoreLib and then get CoreDll to link against it to generate the DLL? I guess my question is: do you have any advice on how to structure your projects in this kind of situation? Thanks.

  • Does Monitor.Wait ensure that fields are re-read?

    - by Marc Gravell
    It is generally accepted (I believe!) that a lock will force any values from fields to be reloaded (essentially acting as a memory barrier or fence; my terminology in this area gets a bit loose, I'm afraid), with the consequence that fields that are only ever accessed inside a lock do not themselves need to be volatile. (If I'm wrong already, just say!)

    A good comment was raised here, questioning whether the same is true if code does a Wait(): i.e. once it has been Pulse()d, will it reload fields from memory, or could they be in a register (etc.)? Or more simply: does the field need to be volatile to ensure that the current value is obtained when resuming after a Wait()?

    Looking at Reflector, Wait calls down into ObjWait, which is a managed internalcall (the same as Enter). The scenario in question is:

        bool closing;
        public bool TryDequeue(out T value) {
            lock (queue) { // arbitrary lock-object (a private readonly ref-type)
                while (queue.Count == 0) {
                    if (closing) {           // <==== (2) access field here
                        value = default(T);
                        return false;
                    }
                    Monitor.Wait(queue);     // <==== (1) waits here
                }
                // ...blah, do something with the head of the queue
            }
        }

    Obviously I could just make it volatile, or I could move this out so that I exit and re-enter the Monitor every time it gets pulsed, but I'm intrigued to know if either is necessary.

  • Loading GWT Messages from a Database

    - by Lars Tackmann
    In GWT one typically loads i18n strings using an interface like this:

        public interface StatusMessage extends Messages {
            String error(String username);
            // ...
        }

    which then loads the actual strings from a StatusMessage.properties file:

        error=User: {0} does not have access to resource

    This is a great solution; however, my client is unbendable in his demand for putting the i18n strings in a database so they can be changed at runtime (though it is not a requirement that they be changed in real time).

    One solution is to create an async service which takes a message ID and user locale and returns a string. I have implemented this and find it terribly ugly (it introduces a huge amount of extra communication with the server, plus it makes property placeholder replacement rather complicated).

    So my question is this: can I in some nice way implement a custom message provider that loads the messages from the backend in one big swoop (for the current user session)? If it can also hook into the default GWT message mechanism, then I would be completely happy (i.e. so I can create an interface like the one above and keep using the nice {0}, {1}... placeholder format). Other suggestions for clean database-driven messages in GWT are also welcome.
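
    A sketch of one common workaround (an assumption, not the poster's code): have the server render the database strings into the host page as a JavaScript object, then read them with GWT's Dictionary class; that is one bulk load per page view, with no RPC round-trips.

        // Assumed host-page output, rendered server-side from the database:
        //   <script>var statusMessages = {"error": "User: {0} does not have access to resource"};</script>
        import com.google.gwt.i18n.client.Dictionary;

        public class DbMessages {
            /** Returns the "error" message with {0} filled in; key names are assumptions. */
            public static String errorMessage(String username) {
                Dictionary messages = Dictionary.getDictionary("statusMessages");
                return messages.get("error").replace("{0}", username);
            }
        }

    The trade-off: Dictionary does not plug into the Messages interface, so the {0} placeholders have to be filled by hand (or with a small helper).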

  • Work with AJAX response with DOM methods

    - by Stomped
    I'm retrieving an entire HTML document via Ajax, and that works fine. But I need to extract certain parts of that document and do things with them. Using a framework (jQuery, MooTools, etc.) is not an option. The only solution I can think of is to grab the body of the HTML document with a regex (yes, I know, terrible), i.e. <body>(.*)</body>, put that into the current page's DOM in a hidden element, and work with it from there. Is there an easier/better way?

    Update: I've done some testing, and inserting an entire HTML document into a created element behaves a bit differently across the browsers I've tested. For example:

        FF3.5:       keeps the contents of the HEAD and BODY tags
        IE7/Safari4: only includes what's between the BODY tags
        Opera 10.10: keeps HEAD and everything inside it, and keeps the contents of BODY

    The behavior of IE7 and Safari is ideal, but different browsers do this differently. Since I'm loading a predetermined HTML document, I think I'm going to use the regex to grab what I want and insert it into a DOM element, unless someone has other suggestions.
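
    A sketch of that approach (the function name is mine): the one subtlety is that . does not match newlines in JavaScript regexes, so [\s\S] is used in its place to span a multi-line document.

        // Sketch: pull the BODY contents out of a full HTML string and
        // return a detached container that can be queried with DOM methods.
        function extractBody(html) {
            var match = /<body[^>]*>([\s\S]*?)<\/body>/i.exec(html);
            var container = document.createElement('div');
            container.innerHTML = match ? match[1] : html;
            return container; // e.g. container.getElementsByTagName('table')
        }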

  • Inline form editing on client side

    - by bykasif
    I see some web sites use dynamic forms (I am not sure what to call them) to edit a group of data. For example: there is a group of data such as name, last name, city, country, etc. When the user clicks an EDIT button, instead of doing a postback, a form consisting of two textboxes and two comboboxes dynamically opens for editing; then, when you click the Save button, the edit form disappears and all the data updates.

    Now, I know that what happens here is Ajax for the server calls and some JavaScript for the DOM manipulation. I even found some jQuery plugins for textbox editing. However, I could not find anything for a full implementation of form fields, so I implemented it in ASP.NET with jQuery Ajax calls and manual DOM manipulation. Here is my process:

    1) When the Edit button is clicked: make an Ajax call to the server to retrieve the necessary formedit.aspx.
    2) It returns the editable form fields with values assigned.
    3) When the Save button is clicked: make an Ajax call to the server for the formupdateprocess.aspx page. It does the database updates and then returns the necessary DOM snippet (...) to insert into the current page.

    Well, it works, but MY PROBLEM is performance: the result seems slower than the samples I see on other sites :(( Is there anything that I don't know, a better way to implement this?
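
    One way to shave off a round-trip (a sketch built on assumed markup, not the poster's actual pages): the values to edit are already displayed on the page, so the edit fields can be built client-side from them, and only the Save step needs the server.

        // Sketch: swap each display element for an input holding the same
        // value; no formedit.aspx request is needed before editing starts.
        $('#edit').click(function () {
            $('.editable').each(function () {
                $(this).replaceWith(
                    '<input class="editfield" name="' + this.id + '" value="' +
                    $(this).text().replace(/"/g, '&quot;') + '">');
            });
        });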

  • Break up PHP Pagination links

    - by Rabbott
    I have the following method that creates and returns the markup for my pagination links in PHP:

        public function getPaginationLinks($options) {
            if ($options['total_pages'] > 1) {
                $markup = '<div class="pagination">';
                if ($options['page'] > 1) {
                    $markup .= '<a href="?page=' . ($options['page'] - 1)
                             . ((isset($options['order_by'])) ? "&sort=" . $options['order_by'] : "")
                             . '">< prev</a>';
                }
                for ($i = 1; $i <= $options['total_pages']; $i++) {
                    if ($options['page'] != $i) {
                        $markup .= '<a href="?page=' . $i
                                 . ((isset($options['order_by'])) ? "&sort=" . $options['order_by'] : "")
                                 . '">' . $i . '</a>';
                    } else {
                        $markup .= '<span class="current">' . $i . '</span>';
                    }
                }
                if ($options['page'] < $options['total_pages']) {
                    $markup .= '<a href="?page=' . ($options['page'] + 1)
                             . ((isset($options['order_by'])) ? "&sort=" . $options['order_by'] : "")
                             . '">next ></a>';
                }
                $markup .= '</div>';
                return $markup;
            } else {
                return false;
            }
        }

    I just recently discovered (to my surprise) that I had reached 70+ pages, which means that there are now 70+ links showing up at the bottom. I'm wondering if someone can help me break this up. I'm not sure how most pagination works as far as showing the numbers; if I'm on, say, page 30, ideas?
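
    A sketch of one common fix (the window size is arbitrary): render only a window of pages around the current one, with first/last links and ellipses at the edges. This would replace the for loop above:

        $window = 3; // pages shown on each side of the current page
        $start  = max(1, $options['page'] - $window);
        $end    = min($options['total_pages'], $options['page'] + $window);

        if ($start > 1) {
            $markup .= '<a href="?page=1">1</a>';
            if ($start > 2) { $markup .= '<span class="gap">&hellip;</span>'; }
        }
        for ($i = $start; $i <= $end; $i++) {
            // same per-page link / current-page span code as in the method above
        }
        if ($end < $options['total_pages']) {
            if ($end < $options['total_pages'] - 1) { $markup .= '<span class="gap">&hellip;</span>'; }
            $markup .= '<a href="?page=' . $options['total_pages'] . '">'
                     . $options['total_pages'] . '</a>';
        }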

  • Adding a way to preserve a comma in a CSV-to-DataTable function

    - by Nick LaMarca
    I have a function that converts a .csv file to a DataTable. One of the columns I am converting is a field of names that have a comma in them, i.e. "Doe, John"; when converting, the function treats this as two separate fields because of the comma. I need the DataTable to hold this as one field, Doe, John.

        Function CSV2DataTable(ByVal filename As String, ByVal sepChar As String) As DataTable
            Dim reader As System.IO.StreamReader
            Dim table As New DataTable
            Dim colAdded As Boolean = False
            Try
                ' open a reader for the input file, and read line by line
                reader = New System.IO.StreamReader(filename)
                Do While reader.Peek() >= 0
                    ' read a line and split it into tokens, divided by the specified separators
                    Dim tokens As String() = System.Text.RegularExpressions.Regex.Split _
                        (reader.ReadLine(), sepChar)
                    ' add the columns if this is the first line
                    If Not colAdded Then
                        For Each token As String In tokens
                            table.Columns.Add(token)
                        Next
                        colAdded = True
                    Else
                        ' create a new empty row
                        Dim row As DataRow = table.NewRow()
                        ' fill the new row with the tokens extracted from the current line
                        For i As Integer = 0 To table.Columns.Count - 1
                            row(i) = tokens(i)
                        Next
                        ' add the row to the DataTable
                        table.Rows.Add(row)
                    End If
                Loop
                Return table
            Finally
                If Not reader Is Nothing Then
                    reader.Close()
                End If
            End Try
        End Function
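
    One alternative often suggested for this exact problem (a sketch; it would replace the StreamReader/Regex.Split reading loop): the framework's own TextFieldParser understands quoted fields, so "Doe, John" comes back as a single token.

        ' Sketch: TextFieldParser lives in Microsoft.VisualBasic.FileIO.
        Using parser As New Microsoft.VisualBasic.FileIO.TextFieldParser(filename)
            parser.TextFieldType = Microsoft.VisualBasic.FileIO.FieldType.Delimited
            parser.SetDelimiters(sepChar)
            parser.HasFieldsEnclosedInQuotes = True ' keeps "Doe, John" whole
            While Not parser.EndOfData
                Dim tokens As String() = parser.ReadFields()
                ' build columns / rows exactly as in the original function
            End While
        End Using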

  • Suitable data structures for saving files in localStorage (HTML5)?

    - by WmasterJ
    It is nice when there isn't a DB to maintain or users to authenticate. My professor has asked me to convert a recent research project of his that uses Bespin and calculates errors made by users in a code editor as part of his research. The goal is to convert it from MySQL to using HTML5 localStorage completely. It doesn't seem so hard to do, even though digging into his code might take some time.

    Question: I need to store files and state (last placement of the cursor and the active file). I have already done so by implementing the recommendations in another Stack Overflow thread, but I would like your input on how to structure the content.

    My current solution is a hashmap-like structure of JavaScript objects:

        files = {};
        // later, saving
        files[fileName] = data;

    and then storing it in localStorage, following some recommendations:

        localStorage.setObject("files", files);
        // Note that setObject(key, data) does not exist but is added
        // using Storage.prototype.setObject = function() {...

    Currently I'm also considering using some kind of numeric ID, so that file names can be changed without the hassle of renaming the key in the hashmap. What is your opinion on the way it is solved, and would you do it any differently?
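
    For reference, a sketch of the setObject/getObject helpers the comment above alludes to (the names follow the convention mentioned in the post): the key idea is a JSON round-trip, since localStorage stores only strings.

        // Sketch: serialize through JSON so plain objects survive storage.
        Storage.prototype.setObject = function (key, value) {
            this.setItem(key, JSON.stringify(value));
        };
        Storage.prototype.getObject = function (key) {
            var value = this.getItem(key);
            return value && JSON.parse(value);
        };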

  • Process a set of files from a source directory to a destination directory in Python

    - by Spoike
    Being completely new to Python, I'm trying to run a command over a set of files. The command requires both a source and a destination file (I'm actually using ImageMagick's convert, as in the example below). I can supply both source and destination directories; however, I can't figure out how to easily retain the directory structure from the source in the destination directory.

    E.g., say srcdir contains the following:

        srcdir/
            file1
            file3
            dir1/
                file1
                file2

    Then I want the program to create the following destination files: destdir/file1, destdir/file3, destdir/dir1/file1 and destdir/dir1/file2.

    So far this is what I came up with:

        import os
        from subprocess import call

        srcdir = os.curdir  # just use the current directory
        destdir = 'path/to/destination'
        for root, dirs, files in os.walk(srcdir):
            for filename in files:
                sourceFile = os.path.join(root, filename)
                destFile = '???'
                cmd = "convert %s -resize 50%% %s" % (sourceFile, destFile)
                call(cmd, shell=True)

    The walk method doesn't directly tell me what directory the file is under, relative to srcdir, other than by concatenating the root directory string with the file name. Is there some easy way to get the destination file, or do I have to do some string manipulation to achieve this?
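
    A sketch of the missing piece (assuming Python 2.6+, which added os.path.relpath): take each root relative to srcdir and mirror it under destdir, creating directories as needed. This slots in where destFile = '???' appears above.

        relative = os.path.relpath(root, srcdir)   # e.g. '.' or 'dir1'
        targetDir = os.path.join(destdir, relative)
        if not os.path.isdir(targetDir):
            os.makedirs(targetDir)                 # create destdir/dir1 etc.
        destFile = os.path.join(targetDir, filename)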

  • Android: Find coordinates of a certain point X meters from my location moving towards the point I am

    - by Aidan
    Hi guys, I'm constructing a geolocation-based application, and I'm trying to figure out a way to make my application realise when a user is facing the direction of a given location (a particular long/lat coordinate). I've done some Googling and checked the SDK, but I can't really find anything for such a thing. Does anyone know of a way?

    Example: Point A = the phone's current location. Point B = A's orientation relative to true north, plus 45 degrees, at the maximum distance out in the direction you're facing. Point C = A's orientation relative to true north, minus 45 degrees, at the maximum distance out in the direction you're facing. So now you have constructed a triangle. Pretty schweet, huh? Yeah, I think so.

    So now that I have my fancy triangle, I use something called barycentric coordinates (http://en.wikipedia.org/wiki/Barycentric_coordinates_(mathematics)). This allows me to test another point and see if it is inside the triangle. If it is, it means we're facing it AND it's within the right distance, so it should be displayed on screen.

    If I'm facing 90 degrees from true north, the projected distance should go in that direction, 90 degrees from true north, not 100 degrees or something from true north! The problem is that I haven't yet figured out how to make the device project a point out in the direction it is facing.
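
    For the "project a point X meters out along a heading" part, the standard destination-point formula on a sphere can be used. A sketch in Java (the method name and Earth-radius constant are mine, not from the post):

        // Sketch: returns {lat, lon} of the point `distanceMeters` away from
        // (lat, lon) along `bearingDeg`, measured clockwise from true north.
        static double[] destinationPoint(double lat, double lon,
                                         double bearingDeg, double distanceMeters) {
            final double R = 6371000.0;            // mean Earth radius in meters
            double delta = distanceMeters / R;     // angular distance
            double theta = Math.toRadians(bearingDeg);
            double phi1 = Math.toRadians(lat), lambda1 = Math.toRadians(lon);
            double phi2 = Math.asin(Math.sin(phi1) * Math.cos(delta)
                        + Math.cos(phi1) * Math.sin(delta) * Math.cos(theta));
            double lambda2 = lambda1 + Math.atan2(
                        Math.sin(theta) * Math.sin(delta) * Math.cos(phi1),
                        Math.cos(delta) - Math.sin(phi1) * Math.sin(phi2));
            return new double[] { Math.toDegrees(phi2), Math.toDegrees(lambda2) };
        }

    Calling this with bearings of (heading + 45) and (heading - 45) gives points B and C for the triangle.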

  • Detecting well behaved / well known bots

    - by Simon_Weaver
    I found this question very interesting: Programmatic Bot Detection. I have a very similar question, but I'm not bothered about 'badly behaved bots'. I am tracking (in addition to Google Analytics) the following per visit:

        Entry URL
        Referer
        UserAgent
        Adwords (by means of the query string)
        Whether or not the user made a purchase
        etc.

    The problem is that to calculate any kind of conversion rate, I'm ending up with lots of 'bot' visits that greatly skew my results. I'd like to ignore as many bot visits as possible, but I want a solution that I don't need to monitor too closely, that won't itself be a performance hog, and that will preferably still work if someone has JavaScript disabled.

    Are there good published lists of the top 100 or so bots? I did find a list at http://www.user-agents.org/, but that appears to contain hundreds if not thousands of bots. I don't want to check every referer against thousands of links.

    Here is the current Googlebot UserAgent. How often does it change?

        Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
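
    A sketch of the usual fallback when a full list is too much to maintain (a heuristic; the token list below is illustrative, not complete): well-behaved crawlers identify themselves, so a handful of substrings catches most of them.

        using System;
        using System.Linq;

        static class BotSniffer
        {
            // Tokens of a few well-known, self-identifying crawlers.
            static readonly string[] BotTokens =
                { "googlebot", "bingbot", "msnbot", "slurp", "baiduspider", "yandex" };

            public static bool LooksLikeKnownBot(string userAgent)
            {
                return !string.IsNullOrEmpty(userAgent) &&
                       BotTokens.Any(t => userAgent.IndexOf(t,
                           StringComparison.OrdinalIgnoreCase) >= 0);
            }
        }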

  • Ruby on Rails: implement search with autocomplete

    - by user429400
    I've implemented a search box that searches the "Illnesses" table and the "symptoms" table in my DB. Now I want to add autocomplete to the search box, so I've created a new controller called "auto_complete" which returns the autocomplete data. I'm just not sure how to combine the search functionality and the autocomplete functionality: I want the "index" action in my search controller to return the search results, and the "index" action in my auto_complete controller to return the autocomplete data. Please guide me on how to fix my HTML and what to write in the js.coffee file. I'm using Rails 3.x with jQuery UI for the autocomplete, and I prefer a server-side solution. This is my current code:

    main_page/index.html.erb:

        <p>
          <b>Symptoms / Illnesses</b>
          <%= form_tag search_path, :method => 'get' do %>
            <p>
              <%= text_field_tag :search, params[:search] %> <br/>
              <%= submit_tag "Search", :name => nil %>
            </p>
          <% end %>
        </p>

    auto_complete_controller.rb:

        class AutoCompleteController < ApplicationController
          def index
            @results = Illness.order(:name).where("name like ?", "%#{params[:term]}%") +
                       Symptom.order(:name).where("name like ?", "%#{params[:term]}%")
            render json: @results.map(&:name)
          end
        end

    search_controller.rb:

        class SearchController < ApplicationController
          def index
            @results = Illness.search(params[:search]) + Symptom.search(params[:search])
            respond_to do |format|
              format.html # index.html.erb
              format.json { render json: @results }
            end
          end
        end

    Thanks, Li
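
    A minimal sketch of the missing wiring (the asset path and route are assumptions): jQuery UI's remote source sends the term parameter the controller already reads, and the existing form keeps submitting to the search action as before.

        # app/assets/javascripts/main_page.js.coffee (assumed path)
        jQuery ->
          $('#search').autocomplete
            source: '/auto_complete'   # assumes get 'auto_complete' => 'auto_complete#index' in routes.rb
            minLength: 2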

  • Measure text size in JavaScript

    - by kayahr
    I want to measure the size of a text in JavaScript. So far this isn't so difficult, because I can simply put a temporary invisible div into the DOM tree and check the offsetWidth and offsetHeight. The problem is, I want to do this BEFORE the DOM is ready. Here is a stub:

        <html>
          <head>
            <script type="text/javascript">
              var text = "Hello world";
              var fontFamily = "Arial";
              var fontSize = 12;
              var size = measureText(text, fontSize, fontFamily);

              function measureText(text, fontSize, fontFamily) {
                // TODO Implement me!
              }
            </script>
          </head>
          <body>
          </body>
        </html>

    Again: I KNOW how to do it asynchronously when the DOM (or body) signals that it is ready, but I want to do it synchronously, right in the header, as shown in the stub above. Any ideas how I can accomplish this? My current opinion is that it is impossible, but maybe someone has a crazy idea.
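
    One idea (a sketch, assuming a browser with canvas support): a 2D canvas context never has to touch the DOM tree, so it works before the body exists. It measures width exactly; canvas reports no text height, so that is estimated from the font size.

        function measureText(text, fontSize, fontFamily) {
            var ctx = document.createElement('canvas').getContext('2d');
            ctx.font = fontSize + 'px ' + fontFamily; // same shorthand as CSS
            return {
                width: ctx.measureText(text).width,
                height: fontSize * 1.2 // rough line-height estimate
            };
        }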

  • Committing to a different branch with commit -r

    - by Amarghosh
    Does CVS allow committing a file to a different branch than the one it was checked out from? The man page and some sites suggest that we can do

        cvs ci -r branch-1 file.c

    but it gives the following error:

        cvs commit: Up-to-date check failed for `file.c'
        cvs [commit aborted]: correct above errors first!

    I did a cvs diff -r branch-1 file.c to make sure that the contents of file.c in my BASE and in branch-1 are indeed the same. I know that we can manually check out using cvs co -r branch-1, merge the main branch into it (and fix any merge issues), and then check in. The problem is that there are a number of branches, and I would like to automate things using a script. This thread seems to suggest that -r has been removed; can someone confirm that?

    If ci -r is not supported, I am thinking of doing something like this:

    1. Make sure the branch versions and the base version are the same with a cvs diff.
    2. Check in to the current branch.
    3. Keep a copy of the file in a temp file.
    4. For each branch: check out with -r, replace the file with the temp file, and check in (it'll go to the branch, as -r is sticky).
    5. Delete the temp file.

    The replacing part sounds like cheating to me. Can you think of any potential issues that might occur? Anything I should be careful about? Is there any other way to automate this process?
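
    A sketch of steps 3-5 as a shell loop (the branch names and commit message are placeholders):

        #!/bin/sh
        # Sketch: push the saved copy of file.c onto each branch in turn.
        cp file.c /tmp/file.c.new                 # step 3: keep a copy
        for branch in branch-1 branch-2 branch-3; do
            cvs update -r "$branch" file.c        # sticky checkout of the branch
            cp /tmp/file.c.new file.c             # replace with the new content
            cvs commit -m "sync file.c across branches" file.c
        done
        cvs update -A file.c                      # drop the sticky tag, back to trunk
        rm /tmp/file.c.new                        # step 5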

  • Migrating the machineKey from IIS6 on an old server to IIS7 on a new server

    - by MaseBase
    I am migrating our hosting environment to a totally new data center, with new boxes, hardware and software: the whole deal. Our website cookies are encrypted using the machineKey, so when I make a request to my domain and point it at the new web server (by overriding the local hosts file), I get an error because the cookie cannot be decrypted, since the machine key is different. I'd like to avoid any problems a frequent user might have when they arrive at the new server for the first time.

    To the best of my knowledge, at this point I need to set the same machineKey from our current servers on our new servers. That way, when past visitors with a cookie arrive at our website served by the new server, the cookie will be decrypted properly with the machineKey it was encrypted with, and they will be logged in properly.

    My question is: where do I find my machineKey value (in IIS 6 on a Windows 2003 server) so I can use that value to set it statically on my new servers? I've pulled up my machine.config file, but it doesn't specify the key; it only specifies a configSection where the key can be defined. It's not in my web.config for the app or elsewhere. I did find a great article on some machineKey and web-garden woes (which could explain some other bugs I've been experiencing with regard to the machineKey).

    Update: I am back on this issue and am still faced with a similar problem. The machineKey is auto-generated on the IIS6 server, but I need to get that exact key so I can set it explicitly and not have it auto-generated anymore. Any help is appreciated.
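
    For reference, a sketch of what the explicit setting looks like once the key values are in hand (the hex strings below are placeholders, not usable keys); the same element has to appear on every server that should share cookies:

        <!-- web.config (or machine.config), inside <system.web> -->
        <machineKey
            validationKey="0123456789ABCDEF...64 hex chars total...0123456789ABCDEF"
            decryptionKey="0123456789ABCDEF...48 hex chars total...89AB"
            validation="SHA1" />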

  • Cross-browser method for getting width and height of a DIV?

    - by thinkthank
    This is my first post, so please go easy on me; I'm sure I'm doing everything wrong. I use jQuery, and I'm trying to find a way to get the current width and height of a DIV element, even if they're set to "auto". I've found many ways to do this, but no method returns the same width in IE. It is important that this method be cross-browser, as it will break the layout of the page if different numbers are returned in different browsers.

    .width() and .height() do not work, because in IE the padding is subtracted (e.g. width() returns 25 where the width is 30 and the padding is 5). .outerWidth() and .outerHeight() are not consistent either: while they work in IE (believe it or not), in FF the padding is added again to the full width (e.g. outerWidth() returns 110 in FF where the width is 100px and the padding is 10px).

    Is there any way out of this mess without writing complex browser checks? Thanks!
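
    One escape hatch often suggested (a sketch; it assumes the border-box size, content plus padding plus border, is acceptable): read the raw DOM properties, which the major browsers report the same way, instead of jQuery's computed values.

        // offsetWidth/offsetHeight include padding and border in every browser.
        var el = $('#myDiv')[0];
        var size = { width: el.offsetWidth, height: el.offsetHeight };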

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G of RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server.

    We just upgraded from a machine with only 4G of RAM but similar hard drives, RAID, etc. It also ran Apache, so it was our DB server and our application server. It was getting a little slow, so we moved the DB onto this new machine and kept the application server on the first one. We also distributed the application load amongst a few of our other slave servers, which also run the application.

    The problem is that the new DB server has mysqld.exe consuming 95-100% of CPU almost all the time, which is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about config files for small machines, so can anyone help me get the my.ini file right for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down!

    FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know the server is reading the ini file correctly, because if I change some settings (like the server-id) it will kill the server at startup.

    Here is the my.ini file:

        # MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to ~/.my.cnf
        # to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file, use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # The query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often, or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        # *** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the key buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER TABLE statements, as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread, so be careful with
        # large settings.
        sort_buffer_size=2M

        # *** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this, the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine's physical memory size. Do not set it
        # too large, though, because competition for physical memory may
        # cause paging in the operating system. Note that on 32-bit systems you
        # might be limited to 2-3.5G of user-level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.

  • How can I change the language of dynamic text in a SWF using flashvars?

    - by Cormac
    I have a SWF with text embedded from an external .txt file. Is there a way I can have a different file used as the text source through the embed code (swfobject), depending on the language? Here is my current ActionScript:

        myData = new LoadVars();
        myData.onLoad = function() {
            text_clips.project_title.text = this.projecttitle1;
        };
        myData.load("translatetext.txt");

        var loader:MovieClipLoader = new MovieClipLoader();
        loader.loadClip(_level0.projectimage1, pic1.image_holder);

    This is the content of translatetext.txt:

        projecttitle1=This is my translatable text

    This is the embed code I'm using:

        <div>
          <object width="960" height="275" id="rvFlashcontent">
            <param name="movie" value="lang_test_3.swf" />
            <param name="wmode" value="transparent" />
            <param name="flashvars" value="projectimage1=flashimages/testimage.jpg" />
            <!--[if !IE]>-->
            <object type="application/x-shockwave-flash" data="lang_test_3.swf" width="960" height="275">
              <param name="wmode" value="transparent" />
              <param name="flashvars" value="projectimage1=flashimages/testimage.jpg" />
              <!--<![endif]-->
              <h1>Alt Content</h1>
              <!--[if !IE]>-->
            </object>
            <!--<![endif]-->
          </object>
        </div>

    What I want to do is add a flashvars parameter to name the file to load, so I can change the language:

        <param name="flashvars" value="projectimage1=flashimages/image.jpg&projecttext=textfrench.txt" />

    There are four languages needed so far, but this will grow, so it needs to be flexible enough to let the developers add languages without getting a new SWF each time. Thanks in advance, all!
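
    On the ActionScript side, a sketch of the matching change (ActionScript 2, to match the post; projecttext is the FlashVars name proposed above): FlashVars arrive as properties of _level0, so the SWF can pick the file at runtime.

        // Sketch: fall back to the original file when no FlashVars are supplied
        // (e.g. when testing in the authoring tool).
        var textFile:String = (_level0.projecttext != undefined)
            ? String(_level0.projecttext)
            : "translatetext.txt";
        myData.load(textFile);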

  • Deleting list items via ProcessBatchData()

    - by q-tuyen
    You can build a batch string to delete all of the items from a SharePoint list like this:

        // create new StringBuilder
        StringBuilder batchString = new StringBuilder();

        // add the main text to the stringbuilder
        batchString.Append("<?xml version=\"1.0\" encoding=\"UTF-8\"?><Batch>");

        // add each item to the batch string and give it the command Delete
        foreach (SPListItem item in itemCollection)
        {
            // create a new method section
            batchString.Append("<Method>");
            // insert the listid to know where to delete from
            batchString.Append("<SetList Scope=\"Request\">" + Convert.ToString(item.ParentList.ID) + "</SetList>");
            // the item to delete
            batchString.Append("<SetVar Name=\"ID\">" + Convert.ToString(item.ID) + "</SetVar>");
            // set the action you would like to perform
            batchString.Append("<SetVar Name=\"Cmd\">Delete</SetVar>");
            // close the method section
            batchString.Append("</Method>");
        }

        // close the batch section
        batchString.Append("</Batch>");

        // perform the batch
        SPContext.Current.Web.ProcessBatchData(batchString.ToString());

    The only disadvantage I can think of right now is that all the items you delete will be put in the Recycle Bin. How can I prevent that? I also found a solution like the one below:

        // web is the SPWeb object your lists belong to
        // Before starting your deletion
        web.Site.WebApplication.RecycleBinEnabled = false;
        // When you are finished, re-enable it
        web.Site.WebApplication.RecycleBinEnabled = true;

    (Ref: http://www.entwicklungsgedanken.de/2008/04/02/how-to-speed-up-the-deletion-of-large-amounts-of-list-items-within-sharepoint/)

    But the disadvantage of that solution is that disabling the Recycle Bin not only keeps future deletions out of it, it also deletes everything already in it, which users do not want. Any idea how to prevent the existing items from being deleted?

    Many thanks in advance, TQT
