Search Results

Search found 16143 results on 646 pages for 'ms word 2003'.


  • Javascript memory leak / performance issue?

    - by Tom
    I just cannot for the life of me figure out this memory leak in Internet Explorer. insertTags simply takes a string str and wraps each word in start and end tags for HTML (usually anchor tags). transliterate is for Arabic numbers: it replaces the normal digits 0-9 with the &#..n; XML entities for their Arabic counterparts.

        fragment = document.createDocumentFragment();
        for (i = 0, e = response.verses.length; i < e; i++) {
            fragment.appendChild((function () {
                p = document.createElement('p');
                p.setAttribute('lang', (response.unicode) ? 'ar' : 'en');
                p.innerHTML = ((response.unicode)
                        ? (response.surah + ':' + (i + 1)).transliterate()
                        : response.surah + ':' + (i + 1))
                    + ' ' + insertTags(response.verses[i],
                        '<a href="#" onclick="window.popup(this);return false;" class="match">',
                        '</a>');
                try { return p } finally { p = null; }
            })());
        }
        params[0].appendChild( fragment );
        fragment = null;

    I would love some links other than MSDN and about.com, because neither of them has sufficiently explained to me why my script leaks memory. I am sure this is the problem, because without it everything runs fast (but nothing displays). I've read that doing a lot of DOM manipulation can be dangerous, but the for loop runs at most 286 times (the number of verses in surah 2, the longest surah in the Qur'an).

  • Check a list of packages to install with apt-get

    - by Joel
    I am writing a post-install script for Ubuntu in Perl (the same script as seen here). One of the steps is to install a list of packages. The problem is that if apt-get install fails in any of many different ways for any one of the packages, the script dies badly. I would like to prevent that from happening.

    This happens because of the different ways that apt-get install fails for packages it doesn't like. For example, when I try to install a nonsense word (i.e., a mistyped package name):

        $ sudo apt-get install oblihbyvl
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package oblihbyvl

    But if instead the package name has been obsoleted (installing handbrake from a ppa):

        $ sudo apt-get install handbrake
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package handbrake is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package 'handbrake' has no installation candidate

        $ apt-cache search handbrake
        handbrake-cli - versatile DVD ripper and video transcoder - command line
        handbrake-gtk - versatile DVD ripper and video transcoder - GTK GUI

    I have tried parsing the results of apt-cache and apt-get -s install to try to catch all possibilities before doing the install, but I seem to keep finding new ways for failures to slip through to the actual install system command.

    My question is: is there some facility, either in Perl (e.g., a module, though I would like to avoid installing modules if possible, as this is supposed to be the first thing run after a new install of Ubuntu) or in apt-* or dpkg, that would let me be sure the packages are all available to be installed before installing, and if not, fail gracefully in some way that lets the user decide what to do?

  • Using git (or some other VCS) at your company

    - by supercheetah
    Some friends of mine and I were talking recently about version control, and how they were using VSS at their jobs and would probably be moving off of it soon. One of them said that his company will likely be going with Team Foundation Server. Eventually the conversation got around to some of the open source VCSes out there, including git and SVN. None of us really knew of any companies that use either of these internally, although we imagined a number of them do so for SVN; we weren't too sure about git. I brought up Google and Android using it, but my friend figured that's only for the public-facing source code, and that they may use something different for internal projects.

    Apparently it's more than just SCM that makes TFS so intriguing:

      - Microsoft sales people and support (although my friend did point out some things to his managers that he thought might be misleading on MS' part)
      - Integration of things beyond SCM, including project management (I'm just finding out that there are tools geared towards the same things for git)
      - Again, it's Microsoft, and the transition from VSS to TFS seems logical (or does it?)

    I'm not much of a fan of SVN, so I didn't really bring it up much, but I am curious about whether or not git is used at your company for internal projects. Have you thought about it, and decided against it? Any reason why?

  • NHibernate + Cannot insert the value NULL into...

    - by mybrokengnome
    I've got an MS-SQL database with a table created with this code:

        CREATE TABLE [dbo].[portfoliomanager](
            [idPortfolioManager] [int] NOT NULL PRIMARY KEY IDENTITY,
            [name] [varchar](45) NULL
        )

    So idPortfolioManager is my primary key and also auto-incrementing. Now, in my Windows WPF application, I'm using NHibernate to help with adding/updating/removing/etc. data from the database. Here is the class that should map to the portfoliomanager table:

        namespace PortfolioManager
        {
            [Class(Table = "portfoliomanager", NameType = typeof(PortfolioManagerClass))]
            public class PortfolioManagerClass
            {
                [Id(Name = "idPortfolioManager")]
                [Generator(1, Class = "identity")]
                public virtual int idPortfolioManager { get; set; }

                [NHibernate.Mapping.Attributes.Property(Name = "name")]
                public virtual string name { get; set; }

                public PortfolioManagerClass() { }
            }
        }

    And some short code to try to insert something:

        PortfolioManagerClass portfolio = new PortfolioManagerClass();
        portfolio.name = "Brad's Portfolios";

    The problem is, when I try running this, I get this error:

        System.Data.SqlClient.SqlException: Cannot insert the value NULL into column
        'idPortfolioManager', table 'PortfolioManagementSystem.dbo.portfoliomanager';
        column does not allow nulls. INSERT fails. The statement has been terminated...

    with an outer exception of:

        could not insert: [PortfolioManager.PortfolioManagerClass]
        [SQL: INSERT INTO portfoliomanager (name) VALUES (?); select SCOPE_IDENTITY()]

    I'm hoping this is the last error I'll have to solve with NHibernate just to get it to do something; it's been a long process. As a note, I've also tried setting Class="native" and unsaved-value="0", with the same error.

    Edit: Removing the "1," from Generator actually allows the program to run (I'm not sure why that was even in the samples I was looking at), but the row never actually gets added to the database. I logged in to the server and ran the SQL Server Profiler tool, and I never see the connection coming through or the SQL it's trying to run, but NHibernate isn't throwing an error anymore. I'm starting to think it would be easier to just write the SQL statements myself :(
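
    For reference, the symptom in the edit (no error, but nothing reaches the database) is consistent with the entity never being saved through a session. A minimal sketch of the usual save pattern, assuming a configured ISessionFactory named sessionFactory:

        // Hedged sketch: sessionFactory is assumed to be built from the
        // mapping above. Without Save() plus a Commit() (or Flush()),
        // NHibernate sends nothing to the server, which matches the
        // profiler showing no incoming SQL.
        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            var portfolio = new PortfolioManagerClass { name = "Brad's Portfolios" };
            session.Save(portfolio);   // schedules the INSERT; identity comes back here
            tx.Commit();               // flushes the session and commits the transaction
        }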

  • .NET Application with SQL Server CE Database

    - by blu
    I just started using SQL Server CE 3.5 in my WinForms application (C# in VS 2008 SP1). I've noticed a couple of interesting things I'd like some input on:

    1. Copying of the sdf file to bin

    My sdf file is located inside an Infrastructure project that houses my repository implementations. When the application is first debugged, the sdf is copied to bin\Debug, and that copy is where all future reads/writes operate. At some point when this is deployed, the file will go into a data folder using ClickOnce, but during development where should I be putting this sdf? Is having it in the bin typical, or are there any other recommendations?

    2. Updating the sdf

    It appears that writing to the sdf file does not immediately update the database. I am using LINQ to SQL and am calling SubmitChanges, but on read the values are not returned. However, if I close the application and re-open it, the added value is there. Is there an additional flush step I need to take? What is causing this: file locking, buffering, something else?

    Update

    3. Unit tests

    I have an MSTest project, and the sdf file is not being copied to the correct output directory. I have the settings:

        Build Action: Content
        Copy to Output Directory: Copy Always

    The message is:

        System.Data.SqlServerCe.SqlCeException: The database file cannot be found. Check the path to the database.

    I appreciate any guidance on these questions, thanks. If there is a tutorial beyond what is on MSDN that you know about, that would be great too. Working with CE is proving to be a difficult task and I welcome any help I can find.
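
    On point 2, one frequent explanation is that builds keep replacing the bin copy of the sdf, so freshly written rows appear to vanish between runs. A hedged sketch of keeping reads and writes on one copy (the MyDataContext and Widgets names are illustrative, not from the original post):

        // Hedged sketch: point the DataContext at a single sdf via the
        // |DataDirectory| substitution. SubmitChanges issues the INSERT
        // immediately; LINQ to SQL has no extra flush step. If the project
        // copies the sdf with "Copy Always", every build replaces the bin
        // copy - "Copy if newer" avoids losing rows during development.
        using (var db = new MyDataContext(@"Data Source=|DataDirectory|\MyData.sdf"))
        {
            db.Widgets.InsertOnSubmit(new Widget { Name = "test" });
            db.SubmitChanges();
        }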

  • Javascript: click and dblclick deliver different parentNodes on the same HTML. Why?

    - by user1658206
    I'm currently working on a very small WYSIWYG editor based on jQuery. I don't care about IE or Chrome, just Firefox. My problem is finding out whether the selection is inside a link, so that I can get the value of the href attribute if one is set. With a single click the node of the link is found; with a double click I always get the body. The iframe is in designMode.

    My event handler for click and dblclick (the vars current_selection, current_node, iframe and container are global):

        selection_handler: function () {
            current_selection = iframe.getSelection();
            current_node = current_selection.anchorNode;
            if (current_node.nodeName == "#text") {
                current_node = current_node.parentNode;
            }
            $('#log').text(current_node.nodeName);
        },

    The log shows me, for example, 'body' when I click in unformatted text. When I add a link with execCommand('createLink', ...), the log shows 'A'. That works. When I select the linked word with two clicks, from start to end, the log also shows 'A'. But with a double click I always get 'body', so I can't get the href attribute.

    The handler is registered in init:

        init: function (options) {
            ...
            iframe = $('#wysiwyg-' + container.attr('id'))[0].contentWindow;
            iframe.addEventListener('dblclick', methods.selection_handler, false);
            iframe.addEventListener('click', methods.selection_handler, false);
            ...
        }

    Does anybody have an idea what is wrong?

  • MySQL running on an EC2 m1.small instance has high load but low memory usage, possible resolutions?

    - by Tosh
    I have a MySQL server (5.0.75, Ubuntu) on an m1.small instance running on Amazon's EC2 as part of an application. During peak usage the server load rises very high while memory usage stays low, and the application server becomes unresponsive because it's waiting for query results. The application server has only 5-8 Apache processes running (mod_perl processes).

    The data directory uses only 140MB of data, so the MyISAM tables aren't very big. The queries are pretty complicated, with some big joins being performed, and the application makes a lot of queries. mysqltuner reports everything OK except "Maximum possible memory usage: 1.7G (99% of installed RAM)", but I'm nowhere close to using that.

    My question is: where should I be looking to fix this? Is this something that can be tuned away, or do I just need a larger instance/server? Googling suggests either of those, and also upgrading the MySQL server. Any pointers in the right direction would be greatly appreciated, thanks!

    EDIT: I just discovered this in my slow queries log:

        # Time: 101116 11:17:00
        # User@Host: user[pass] @ [host]
        # Query_time: 4063  Lock_time: 1035  Rows_sent: 0  Rows_examined: 19960174
        SELECT * FROM contacts
        WHERE contacts.contact_id IN
            (SELECT external_id FROM contact_relations
             WHERE external_table = 'contacts'
               AND contact_id IN
                 (SELECT contact_id FROM contacts
                  WHERE (company_name like '%%butan%%%' OR country like '%%butan%%%'
                         OR city like '%%butan%%%' OR email1 like '%%butan%%%')
                    AND (company_name is not null and company_name != '')));

    Which actually brings up a different but related question. If I have a contact table containing:

        John Smith,The Fun Factory,555-1212,[email protected]

    what's the best way to search for that record using "factory" as a search key? Fulltext rarely seems to find items in the middle of a word; for example, "actor" should bring up "Factory".

  • Copy SQL From Access To Delphi Script

    - by Libra
    I've run into a difficulty with SQL in Delphi. I use an ADOConnection and an ADOQuery. Here is the query:

        with ADOQuery do
        begin
          SQL.Text := 'SELECT QUnionSAPiutang.kd_Customer, T_Customer.nama_customer, '
            + 'CDbl(IIf(IsNull(DSum("SA","QSumSAPiutang","kd_Customer='" & [QUnionSAPiutang].[kd_Customer] & "' AND '
            + 'Tgl<#1/1/2010# ")),0,DSum("SA","QSumSAPiutang","kd_Customer='" & [QUnionSAPiutang].[kd_Customer] & "' '
            + 'AND Tgl<#1/1/2010# "))) AS SA1, Sum(QUnionSAPiutang.D) AS Debit, Sum(QUnionSAPiutang.K) AS Kredit, '
            + '[SA1]+[Debit]-[Kredit] AS SAkh '
            + 'FROM QUnionSAPiutang INNER JOIN T_Customer ON '
            + 'QUnionSAPiutang.kd_Customer = T_Customer.kd_customer '
            + 'WHERE (((QUnionSAPiutang.Tgl) Between #1/1/2010# And #1/31/2010#)) '
            + 'GROUP BY QUnionSAPiutang.kd_Customer, T_Customer.nama_customer';
        end;

    That query has an error. I've tried to fix it, but it still fails; I hope you can help with my problem and fix the query. I use MS Access XP for the database, and if I run the query in Access directly there is no error. I use three objects: T_Customer, QUnionSAPiutang, and QSumSAPiutang. The DSum expressions are the part that belongs to QSumSAPiutang, because QSumSAPiutang is not joined directly with the others; it is called with DSum. Please help me, and thank you for your time. I hope to hear from you soon.

  • Extremely Difficult Problem with ASP.Net 4.0 WebForms app using Routing

    - by dudeNumber4
    I have a completed app running in a QA environment. Everything works fine under most circumstances. If you hit a plain URL (no identifying information in the URL), you see an intro page with a button (generated by an asp:LinkButton control) that posts back and directs you to another page. The markup looks the same when it fails and when it doesn't.

    When such a URL is followed from, e.g., Word, and the default browser is IE, the intro page loads fine, but clicking the button causes an error. When not debugging, this behavior occurs every time. While debugging, the error occurs only about 1 in 10 times (closing the browser instance and starting over every time). When the error occurs, the intro page Page_Load fires and IsPostBack is false: somehow, instead of a POST, a GET is being issued.

    When I run Fiddler to try to analyze the actual calls (I can't use Firebug because the problem never happens in Firefox), everything works every time. I don't know whether this issue has anything to do with routing, and I've no idea even what to look at next. The strange thing is, when I debug, the intro page doesn't fully load every time; only about 1 in 3 times does it fully load, even if I've just cleared the browser cache. When I run it through Fiddler, it fully loads and works fine every time.

  • First Time Architecting?

    - by cam
    I was recently given the task of rebuilding an existing RIA. The new RIA that I've designed is based on Silverlight, with a WCF service connecting to MS SQL Server. This is my first time doing something like this, so I'm not sure how to design the entire thing.

    Basically, the client can look through graphs of "stocks" (allowing the client to choose different time periods, settings, etc.). I've essentially written the whole application, but I'm not sure how to put it together. The graphs are supposed to be based directly on the database, and to create the data points on the graph some calculations need to be done (not very expensive ones).

    The problem I'm having is deciding where to put the calculations (client-side or server-side? Or half and half?). What factors should I look at to help me decide where the calculations should be done? And how can I go about optimizing this (caching, etc.)? Obviously this is a very broad subject, so I'm not expecting an immediate answer, but any help, pointing in the right direction, or resources would be appreciated.

  • Assigning values to labels from the database depending on listbox values

    - by SurajVitekar
    I want to assign values to five labels from the database, depending on the value selected in a listbox. The DB query returns a single column with multiple records. Please help. I'm working with C# 2010 and MS SQL. My current code is:

        private void listBox1_SelectedIndexChanged(object sender, EventArgs e)
        {
            try
            {
                String c1, c2;
                c1 = "NULL";
                MessageBox.Show("LB index :" + listBox1.SelectedIndex.ToString());
                //p = listBox1.SelectedItem.ToString();
                SqlConnection con = new SqlConnection();
                con.ConnectionString = "Data Source=localhost;Initial Catalog=eVoting;Integrated Security=True;Pooling=False";
                con.Open();
                MessageBox.Show("List bOx sect :" + listBox1.SelectedValue.ToString());
                SqlCommand cmd = new SqlCommand("select Firstname from candidates where position ='" + listBox1.SelectedValue.ToString() + "'", con);
                int index = 0;
                SqlDataReader reader = cmd.ExecuteReader();
                while (reader.Read())
                {
                    if (index == 0) { c1 = reader[index].ToString(); radioButton1.Text = c1; }
                    if (index == 1) { c1 = reader[index].ToString(); radioButton2.Text = c1; }
                    if (index == 2) { c1 = reader[index].ToString(); radioButton3.Text = c1; }
                    if (index == 3) { c1 = reader[index].ToString(); radioButton4.Text = c1; }
                    if (index == 4) { c1 = reader[index].ToString(); radioButton4.Text = c1; }
                    if (index == 5) { c1 = reader[index].ToString(); radioButton5.Text = c1; }
                    MessageBox.Show("c1 :" + c1);
                    index++;
                }
            }
            catch (Exception E)
            {
            }
        }
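
    For reference, a hedged sketch of the read loop with the likely bugs fixed: the query returns one column per row, so reader[index] should be reader[0]; radioButton4 is assigned twice while radioButton5 would need index == 4. Collecting the buttons in an array avoids the if-chain entirely (the array name is illustrative):

        // Hedged sketch: fill one radio button per row from the single
        // Firstname column. Note also that the original empty catch block
        // swallows all exceptions, which hides errors such as a bad
        // connection string.
        var buttons = new[] { radioButton1, radioButton2, radioButton3,
                              radioButton4, radioButton5 };
        int row = 0;
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read() && row < buttons.Length)
            {
                buttons[row].Text = reader[0].ToString();  // first and only column
                row++;
            }
        }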

  • Repository Pattern: standardization of methods

    - by Nix
    All I am trying to find out is the correct definition of the repository pattern. My original understanding was this (extremely dumbed down):

      - Separate your business objects from your data objects
      - Standardize access methods in the data access layer

    I have really seen two different implementations.

    Implementation 1:

        public interface IRepository<T>
        {
            List<T> GetAll();
            void Create(T p);
            void Update(T p);
        }

        public interface IProductRepository : IRepository<Product>
        {
            // Extension methods if needed
            List<Product> GetProductsByCustomerID();
        }

    Implementation 2:

        public interface IProductRepository
        {
            List<Product> GetAllProducts();
            void CreateProduct(Product p);
            void UpdateProduct(Product p);
            List<Product> GetProductsByCustomerID();
        }

    Notice the first has generic Get/Update/GetAll, etc., while the second is more of what I would call "DAO-like". Both share an abstraction over your data entities, which I like, but I can do the same with a simple DAO. However, I do see value in the second piece, standardized access operations: if you implement this enterprise-wide, people would easily know the set of access methods for your repository. Am I wrong to assume that the standardization of access to data is an integral piece of this pattern? Rhino has a good article on implementation 1, and of course MS has a vague definition, and an example of implementation 2 is here.
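
    One way to see what the standardization in implementation 1 buys you is at the call site: code can operate on any repository through the generic contract. A hedged sketch (SeedAll is a hypothetical helper, not from the original post):

        // Hedged sketch: any IRepository<T> can be consumed uniformly,
        // which is the payoff of standardized access methods.
        public static void SeedAll<T>(IRepository<T> repository, IEnumerable<T> items)
        {
            foreach (var item in items)
                repository.Create(item);   // same call shape for every entity type
        }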

  • How do I redirect standard output to a file in Perl? [closed]

    - by rockyurock
    I want to send standard output to the file "my_output.txt", but it fails. Here's the output:

        inside value loop
        ------------------------------------------------------------
        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size: 108 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.16.2 port 5001 connected with 192.168.16.1 port 3189
        [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
        [  3]  0.0- 5.0 sec  2.14 MBytes  3.61 Mbits/sec  0.369 ms    0/ 1528 (0%)
        inside value loop3
        clue1 clue2
        inside value loop4
        one iperf completed
        ***************************************

    When I enable the local *STDOUT; line in the code below, I can see the above output at the command prompt (of course, when the server is sending some data):

        my $file = 'my_output.txt';
        use Win32::Process;

        print "inside value loop\n";

        # redirect stdout to a file
        #local *STDOUT;
        open STDOUT, '>', $file or die "can't redirect STDOUT to <$file> $!";

        Win32::Process::Create(my $ProcessObj,
            "D:\\IOT_AUTOMATION_UTILITY\\_SATURDAY_09-04-10\\adb_cmd.bat",
            "adb shell /data/app/iperf -u -s -p 5001",
            0, NORMAL_PRIORITY_CLASS, ".") || die ErrorReport();

        #$alarm_time = $IPERF_RUN_TIME+10; #20sec
        #$ProcessObj->Wait(40);
        #print "inside value loop2\n";
        #sleep $alarm_time;
        sleep 40;
        $ProcessObj->Kill(0);

        sub ErrorReport {
            print Win32::FormatMessage( Win32::GetLastError() );
        }

  • Computer Science taxonomy

    - by Bakhtiyor
    I am developing a web application where users have a collection of tags. I need to create a suggestion list for users based on the similarity of their tags. For example, when a user logs in to the system, the system gets his tags, searches for these tags in the DB of users, and shows users who have similar tags.

    For instance, if User 1 has the tags [Linux, Apache, MySQL, PHP] and User 2 has [Windows, IIS, PHP, MySQL], it says that User 2 matches User 1 with a weight of 50%, because he has 2 similar tags (PHP and MySQL). But imagine the situation where User 1 has [ASP, IIS, MS Access] and User 2 has [PHP, Apache, MySQL]. In this situation my system doesn't suggest User 2 as a "friend" to User 1 or vice versa. But we know that these two users have a similarity in their field of work: both work on web technology (or web programming, etc.).

    That is why I need some kind of taxonomy of computer science (right now, though I would probably also need taxonomies of other fields, like medicine, physics, mathematics, etc.) where these concepts are categorized, so that when I search for the similarity of ASP and PHP, for example, it can say that they are similar and belong to one group (or category). I hope I described my problem clearly, but if I explained something badly I would be happy for your corrections. Thanks
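
    As a point of reference, the flat 50% weight described above can be computed as the overlap between two tag sets; a taxonomy would then be a second lookup mapping each tag to its category before comparing. A hedged sketch (the method name is illustrative):

        // Hedged sketch: shared tags divided by the smaller set's size, so
        // [Linux, Apache, MySQL, PHP] vs [Windows, IIS, PHP, MySQL] = 0.5.
        using System;
        using System.Collections.Generic;

        static double TagSimilarity(ICollection<string> a, ICollection<string> b)
        {
            var shared = new HashSet<string>(a, StringComparer.OrdinalIgnoreCase);
            shared.IntersectWith(b);
            return (double)shared.Count / Math.Min(a.Count, b.Count);
        }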

  • Fast lookup of an object by a string property

    - by Andrew Kalashnikov
    Hello, colleagues. I've got a task: quickly find an object by its string property. The object:

        class DicDomain
        {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
        }

    For storing my objects I currently use a List<T>, where T is DicDomain. I've got 5-10 such lists, each containing about 500-20000 items. The task is to find objects by Name. I use this code now:

        List<T> entities = dictionary.FindAll(s => s.Name.Equals(word, StringComparison.OrdinalIgnoreCase));

    I've got some questions:

      1. Search speed: is mine optimal? I think not.
      2. Data structure: is a List good for this task? What about a hashtable, or something sorted?
      3. The Find method: maybe I should use string interning?

    I haven't much experience with these tasks. Can you give me good advice to increase performance? Thanks
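
    For comparison, a hedged sketch of replacing the linear FindAll scan with a case-insensitive hash lookup: one O(n) pass to build the index, then O(1) per search:

        // Hedged sketch: group the list by Name once, then answer each
        // lookup from the dictionary instead of scanning 20000 items.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        Dictionary<string, List<DicDomain>> byName = dictionary
            .GroupBy(d => d.Name, StringComparer.OrdinalIgnoreCase)
            .ToDictionary(g => g.Key, g => g.ToList(),
                          StringComparer.OrdinalIgnoreCase);

        List<DicDomain> entities;
        if (!byName.TryGetValue(word, out entities))
            entities = new List<DicDomain>();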

  • When is a bool not a bool (compiler warning C4800)

    - by omatai
    Consider this being compiled in MS Visual Studio 2005 (and probably others):

        CPoint point1( 1, 2 );
        CPoint point2( 3, 4 );
        const bool point1And2Identical( point1 == point2 );           // C4800 warning
        const bool point1And2TheSame( ( point1 == point2 ) == TRUE ); // no warning

    What the...? Is the MSVC compiler brain-dead? As far as I can tell, TRUE is #defined as 1, without any type information. So by what magic is there any difference between these two lines? Surely the type of the expression inside the brackets is the same in both cases? [This part of the question is now satisfactorily answered in the comments just below.]

    Personally, I think that avoiding the warning by using the == TRUE option is ugly (though less ugly than the != 0 alternative, despite being more strictly correct), and it is better to use #pragma warning( disable:4800 ) to imply "my code is good, the compiler is an ass". Agree?

    Note: I have seen all manner of discussion on C4800 talking about assigning ints to bools, or casting a burger combo with large fries (hold the onions) to a bool, and wondering why there are strange results. I can't find a clear answer on what seems like a much simpler question... one that might just shine light on C4800 in general.

  • Documentation style: how do you differentiate variable names from the rest of the text within a comment?

    - by Alix
    Hi,

    This is a quite superfluous and uninteresting question, I'm afraid, but I always wonder about this. When you're commenting code with inline comments (as opposed to comments that will appear in the generated documentation) and the name of a variable appears in the comment, how do you differentiate it from normal text? E.g.:

        // Try to parse type.
        parsedType = tryParse(type);

    In the comment, "type" is the name of the variable. Do you mark it in any way to signify that it's a symbol and not just part of the comment's text? I've seen things like this:

        // Try to parse "type".
        // Try to parse 'type'.
        // Try to parse *type*.
        // Try to parse <type>.
        // Try to parse [type].

    And also:

        // Try to parse variable type.

    (I don't think the last one is very helpful; it's a bit confusing: you could think "variable" is an adjective there.)

    Do you have any preference? I find that I need to use some kind of marker; otherwise the comments are sometimes ambiguous, or at least force you to reread them when you realise a particular word in the comment was actually the name of a variable. (In comments that will appear in the documentation I use the appropriate tags for the generator, of course: @code, <code></code>, etc.)

    Thanks!

  • Why does creating a CLSID_CaptureGraphBuilder2 instance always fail on one machine?

    - by Yigang Wu
    This is a really strange issue; the machine information below is from DxDiag. No error is reported, but creating a CLSID_CaptureGraphBuilder2 instance always fails on this machine. Creating CLSID_FilterGraph works fine. Before creating CLSID_CaptureGraphBuilder2, I have called CoInitialize and created CLSID_FilterGraph. Only this machine has the error. What DLL is related to this interface, and is there any function I need to call beforehand to make it work? Thanks in advance.

        System Information
        Time of this report: 4/24/2010, 09:46:58
        Machine name: TURION
        Operating System: Windows XP Home Edition (5.1, Build 2600) Service Pack 3 (2600.xpsp_sp3_qfe.100216-1510)
        Language: Japanese (Regional Setting: Japanese)
        System Manufacturer: To Be Filled By O.E.M.
        System Model: MS-7145
        BIOS: Default System BIOS
        Processor: AMD Turion(tm) 64 Mobile Technology MT-30, MMX, 3DNow, ~1.6GHz
        Memory: 768MB RAM
        Page File: 376MB used, 1401MB available
        Windows Dir: C:\WINDOWS
        DirectX Version: DirectX 9.0c (4.09.0000.0904)
        DX Setup Parameters: Not found
        DxDiag Version: 5.03.2600.5512 32bit Unicode

        DxDiag Notes
        DirectX Files Tab: No problems found.
        Display Tab 1: No problems found.
        Sound Tab 1: No problems found.
        Sound Tab 2: No problems found.
        Music Tab: No problems found.
        Input Tab: No problems found.
        Network Tab: No problems found.

  • Where can I view roundtrip information in my ASP.NET application?

    - by ajax81
    Hi All,

    I'm playing around with storing application settings in my database, but I think I may have created a situation where superfluous roundtrips are being made. Is there an easy way to view the roundtrips made to an MS Access (I know, I know) backend?

    I guess while I'm here, I should ask for advice on the best way to handle this project. I'm building an app that generates links based on file names (files are numbered ints, 0-5000). The files are stored on network shares, arranged by name, and the paths change frequently as files are bulk-transferred to create space, etc. Example:

        Files 1000 - 2000 go to /path/1000s
        Files 2001 - 3000 go to /path/2000s
        Files 3001 - 4000 go to /path/3000s
        etc.

    I'm sure by now you can see where I'm going with this. Ultimately, I'm trying to avoid making a roundtrip to get the path for every single file as it is displayed in a gridview. I'm open to the notion that I've gone about this all wrong and that my idea might be rubbish. I've toyed with the notion of just creating a flat file, but if I do that, do I still run into the problem of having that file opened and closed for every file displayed in a gridview?
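
    Incidentally, if the bucket convention really is as regular as the example, the folder can be computed from the file number alone, with no lookup at all. A hedged sketch (the /path/ prefix and the handling of the irregular first bucket are assumptions based on the ranges above):

        // Hedged sketch: integer division maps 2001-3000 to "2000s",
        // 3001-4000 to "3000s", etc. Files 0-2000 are forced into the
        // "1000s" bucket to match the first example range.
        static string PathForFile(int fileNumber)
        {
            int bucket = Math.Max(1000, ((fileNumber - 1) / 1000) * 1000);
            return string.Format("/path/{0}s/{1}", bucket, fileNumber);
        }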

  • Cannot create a new VS data connection in Server Explorer

    - by Seventh Element
    I have a local instance of SQL Server 2008 Express Edition running on my development PC. I'm trying to create a new data connection through the Visual Studio Server Explorer. The steps are the following:

      1. Right-click the "Data Connections" node -> Choose Data Source.
      2. I select "Microsoft SQL Server" as the data source. The "Add Connection" dialog window appears.
      3. I select my local server instance -> "Test connection" works fine.
      4. I select "AdventureWorks" as the database name -> "Test connection" works fine.
      5. I hit the "Ok" button -> Error message: "This server version is not supported. Only servers up to MS SQL Server 2005 are supported."

    I'm using Visual Studio 2008 Professional Edition. The target framework of the application is .NET Framework 3.5. I have a reference to System.Data (framework v2.0) and cannot find another version of the assembly on my system. Am I referencing the wrong assembly? How can I fix this problem?

  • I need to sort the facets that come back from SOLR by relevancy

    - by Pinguthepenguin
    Within my SOLR index I have song objects which belong to a higher-level album object. An example is shown below:

        <song>
          <album title>Blood Sugar Sex Magic</album title>
          <song title>Under the Bridge</song title>
          <description>A sad song about junkies</description>
        </song>

    What I can do at the moment is create a facet on the album title, so that a search on songs also shows me which albums contain hits for that keyword. The default behaviour of SOLR is that facets are shown in order from most hits to least. However, what I want to achieve is the facet list sorted according to the relevancy of the top hit within each album.

    For example, a search on the word "sad" may show a facet with one hit for "Blood Sugar Sex Magic", and there may also be an album called "Sad Clown Songs" where there are 10 hits. "Sad Clown Songs" will show as the first facet even though "Under the Bridge" may come up as the most relevant song.

    My question is: how can I get all the facets back, but have them ordered by the relevancy of the songs within them? If I would need to change or extend some underlying SOLR code, what would that be? Thanks in advance.

  • Order words by number of letters, then place words neatly

    - by bmaster
    I have a list of words in JavaScript similar to this:

        var words = ["mine", "minute", "mist", "mixed", "money", "monkey", "month",
                     "moon", "morning", "mother", "motion", "mountain", "mouth",
                     "move", "much", "muscle", "music", "nail", "name", "narrow",
                     "nation", "natural", "near", "necessary", "neck", "need",
                     "needle", "nerve", "net", "new", "news", "night"];

    The words can be 1-25? letters long. I have a div id="words" with a set width of 700px (but I might change that). Using CSS/JavaScript/jQuery, how can I:

      1. Order the words by number of letters
      2. Place the words inside the div tag, left to right, so that there are no gaps at the right edge of the words div, and there is even spacing between words on a line

    Each word should have a border around it and a background. Like this:

        |reallylongwordssdf shorterwordfdf dfsdfsdfsdf sdfsdfsdf|
        |sdfsdfsdf sdffsdop sdfjpogs sdfsds dfsdsd dfsdsd dfsdsd|

    I really have no idea where to begin with this. Perhaps I could manage to write code to order the words by number of letters, but after that I'd be stuck.

    Edit: I forgot to add, the words must be links.

  • Is there a free, small-scale, not web-based issue/bug tracking system?

    - by Doc Brown
    I know there have been posts here on SO before concerning issue and bug tracking systems, like this one, but the answers given point either to commercial systems or to web-based systems, both of which seem oversized for our needs. What I am looking for is a non-commercial tool for a team of 3 to 4 developers which can be used on an existing fileserver, without the need to install additional server software like a C/S database or a web server.

    Some things I expect from such a system:

      - allows us to record bugs (with a priority) and issues/ideas for new features (mostly without a priority)
      - a description of the issue, perhaps some additional remarks
      - short info on who entered the bug/issue entry
      - one or more tags allowing us to group or filter the list

    Any suggestions?

    EDIT: I should have said that we are using MS Windows clients, Visual Studio development, and TortoiseSVN (the latter works fine without a Subversion server). And yes, I am strict on "no server software", since all server-based solutions I have seen so far seem much too oversized, heavyweight, or too-much-effort-to-be-worth-it. In fact, if no one has a better idea, we are going to use a spreadsheet, but I can't believe there are no ready-made, lightweight solutions.

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all.

    The master server has 16G RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4G RAM but similar hard drives, RAID, etc. It also ran Apache, so it was our DB server and our application server. It was getting a little slow, so we split the DB server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application.

    The problem is that the new DB server has mysqld.exe consuming 95-100% of CPU almost all the time, and this is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting config files for small machines, so can anyone help me get the my.ini file right for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down!

    FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup.

    Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often, or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***

        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition for the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.

  • VB app to web service

    - by brandon
    I know very little about web services, but I assumed one would be the solution I was looking for. Basically, I made an application in VB that I want to be ubiquitous, for lack of a better word. I need it to receive requests from multiple users and respond to all of them at once. I was told, "technically, if you write a webservice you can provide as many results back to users as are connected." Maybe there is another solution that will give me the results I want.

    Here is an example of what I'm trying to do. Let's say I make an application in VB that does math. I now make a website. My website allows a person to input 1 + 1; they click submit, and my website connects to my VB application running on my server, which is listening for requests. It accepts the request from my website, solves the math problem, and returns the answer back to the website: "1 + 1 = 2".

    That is only an example of the type of thing I need. My problem is that I can't have multiple people visiting my website all connecting to that same application running on my server, so somehow I need the application to be accessible by multiple users. I was told a web service would be the answer, but if there is another solution I'd like to know it. If the only solution is a web service, how can I convert the VB app into one? Do I have to convert the app to ASP.NET or some other language? Is there an easier option?
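
    For what it's worth, the per-request model described in the quote is exactly how an ASP.NET web service behaves: each incoming request gets its own instance, so many users can call the same logic concurrently. A minimal hedged sketch of an ASMX service (shown in C#; the VB.NET equivalent is analogous, and the class name and namespace URL are illustrative):

        // Hedged sketch: MathService.asmx code-behind. IIS services each
        // HTTP request independently, which is what makes the logic usable
        // by many website visitors at once.
        using System.Web.Services;

        [WebService(Namespace = "http://example.com/mathservice/")]
        public class MathService : WebService
        {
            [WebMethod]
            public int Add(int a, int b)
            {
                return a + b;   // the website posts 1 and 1, gets 2 back
            }
        }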
