Search Results

Search found 6510 results on 261 pages for 'umesha ms'.

  • receiving data from serialport stops when main form is minimized or moved

    - by Rahul
    Sir, I apologize if this is already covered somewhere; I did a search and found something I have already implemented. I am developing an application which receives data from a device connected via serial port, using the SerialPort DataReceived event to capture the data and display it in text boxes on the main form. Data is received frequently: a Timer sends a command to the device every 100 ms, and the device sends some data back in response. I use the Invoke function to update GUI elements such as textboxes and labels, and everything updates beautifully. But if, while data is being received, I make any change to the main form, such as moving, minimizing, maximizing, or clicking somewhere in the form, then data reception stops. I could not find the reason why this happens. I also changed the timer interval to 200, 300, 400 and 500 ms, but the same problem is there. Please tell me why this happens, and a possible solution. Thanks in advance. :)
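
    A likely explanation, offered as a guess rather than a confirmed diagnosis: Control.Invoke blocks the serial worker thread until the UI thread services the call, and while a form is being moved or resized the UI thread is stuck inside the modal move/size message loop, so a blocked DataReceived handler stops the flow of data. A minimal sketch of the usual workaround, replacing the blocking Invoke with BeginInvoke (the port and textbox names here are hypothetical):

        // Runs on a worker thread whenever the device sends data.
        private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            string data = port.ReadExisting(); // drain the driver buffer right away

            // BeginInvoke posts the UI update and returns immediately, so this
            // thread never waits while the UI thread is busy in a move/size loop.
            txtReading.BeginInvoke((MethodInvoker)delegate
            {
                txtReading.Text = data;
            });
        }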

  • wordpress generating slow mysql queries - is it an index problem?

    - by tash
    Hello Stack Overflow. I've got very slow MySQL queries coming from my WordPress site. They are making everything slow and I think they are eating up CPU. I've pasted the EXPLAIN results for the two most frequently problematic queries below. This is a typical result, although very occasionally the queries do seem to run at a more normal speed. I have the usual WordPress indexes on the database tables. You will see that one of the queries is generated by WordPress core code, not by anything specific to my site such as the theme. I have a vague feeling that the database is not always using the indexes, or is not using them properly. Is this right? Does anyone know how to fix it? Or is it a different problem entirely? Many thanks in advance for any help anyone can offer; it is hugely appreciated.

    Query: [wp-blog-header.php(14): wp()]

        SELECT SQL_CALC_FOUND_ROWS wp_posts.*
        FROM wp_posts
        WHERE 1=1
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

        id | select_type | table    | type | possible_keys    | key              | key_len | ref   | rows | Extra
        1  | SIMPLE      | wp_posts | ref  | type_status_date | type_status_date | 63      | const | 427  | Using where; Using filesort

    Query time: 34.2829 ms

    Query: [wp-content/themes/LMHR/index.php(40): query_posts()]

        SELECT SQL_CALC_FOUND_ROWS wp_posts.*
        FROM wp_posts
        WHERE 1=1
          AND wp_posts.ID NOT IN (
              SELECT tr.object_id
              FROM wp_term_relationships AS tr
              INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id
              WHERE tt.taxonomy = 'category'
                AND tt.term_id IN ('217', '218', '223', '224')
          )
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

        id | select_type        | table    | type   | possible_keys                     | key              | key_len | ref                                      | rows | Extra
        1  | PRIMARY            | wp_posts | ref    | type_status_date                  | type_status_date | 63      | const                                    | 427  | Using where; Using filesort
        2  | DEPENDENT SUBQUERY | tr       | ref    | PRIMARY,term_taxonomy_id          | PRIMARY          | 8       | func                                     | 1    | Using index
        2  | DEPENDENT SUBQUERY | tt       | eq_ref | PRIMARY,term_id_taxonomy,taxonomy | PRIMARY          | 8       | antin1_lovemusic2010.tr.term_taxonomy_id | 1    | Using where

    Query time: 70.3900 ms

  • What language is this???

    - by Misha Koshelev
    Dear All: I thought this was JavaScript, but the parser is giving me trouble. Any ideas? Thank you! Misha

        for (;;); {"error":0,"errorSummary":"","errorDescription":"","errorIsWarning":false,"silentError":0,"payload":{"collections":[
        {"name":"bcm","type":"flp","filter":"flp_662923563701","value":"662923563701","editable":true,"deletable":true,"members":["1319651388","539562714","710814793","569071038","1553739575","2413243"]},
        {"name":"mstp","type":"flp","filter":"flp_715806870131","value":"715806870131","editable":true,"deletable":true,"members":["1263807225","1159429816","508447486","508005223","1234906348","642723993","552875889","23401888","10701320","8302901","7931988","3007490","1286522890","1128447272","1076062553","775679867","737520202","640799498","224400055","224400048","14211567","7909445","3005965","2404364","218216","660037273","224400089","73306230","9603033","1111694","1034418884","775680513","704526828","518753881","512477182","224400016","24904610","19000876","5403952","3005641","2100348","100000421128298","1445411167","691445174","1020020100","795471177","683724539","682441089","532450522","224400129","224400005","3006522","2246813","1302265","7197","1900494","100000474978266","2533582","1205125","1384091677","1260996959","710814793","514951289","224400164","224400156","173601800","13304723","7938844","3004783","3001379","302817","716739950","706849","1418109424","562676898","82501644","3007569","13173"]},
        {"name":"mystery","type":"flp","filter":"flp_687949656211","value":"687949656211","editable":true,"deletable":true,"members":["100001286464748","508123007","100001161894460","1148567030","1048974191","769992391","831734347","15347","1297180076","756692945","3005266","733396195","34410910","100000940154241","748426280","569417581","1318922027","100000164920046","1475269609","1436536592","100000162108385","754095305","100000421128298","537833189","100000692471928","7920231","673753496","3006217","1221878","8365333","1128447272","224400133","218216","505457123","1421958541","1838295926","2349408","1622810085","1201640391","510959992","23429895","542118016","1017385668","586225579","625100539","100000474886633","26404148","1384091677","224400156","806908635","843187401","400435","768261176","7901808","748496482","1541469473","2511982","25401573","503715506","1226000844","559195162","41400094","1099436201","409816","1584400985","1577092523","100000349351880","199301581"]},
        {"name":"SMS Subscriptions","type":"ms","filter":"ms","value":"3","editable":true,"deletable":false,"members":[]}],"current_view":false}}
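
    For what it's worth, everything after the for (;;); prefix is JSON (the prefix is a common guard against cross-site script inclusion and must be stripped before parsing). A minimal C# sketch, assuming .NET 3.5's JavaScriptSerializer:

        using System.Web.Script.Serialization;  // System.Web.Extensions.dll, .NET 3.5

        class FacebookPayloadDemo
        {
            static object Parse(string raw)
            {
                const string guard = "for (;;);";       // anti-JSON-hijacking prefix
                if (raw.StartsWith(guard))
                    raw = raw.Substring(guard.Length);  // drop the guard, keep the JSON

                // Yields nested Dictionary<string, object> / object[] structures.
                return new JavaScriptSerializer().DeserializeObject(raw.Trim());
            }
        }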

  • Matlab set defaultTextInterpreter to LaTeX

    - by Maurits
    I am running Matlab R2010a on OS X 10.7.5. I have a simple Matlab plot and would like to use LaTeX commands in the axis labels and legend. However, setting

        set(0, 'defaultTextInterpreter', 'latex');

    has zero effect, and results in a TeX warning that my TeX commands cannot be parsed. If I open the plot tools for this plot, the default interpreter is set to 'TeX'. Manually setting this to 'LaTeX' obviously fixes it, but I can't do that for hundreds of plots. Now, if I retrieve the default interpreter via the Matlab prompt, i.e.

        get(0, 'DefaultTextInterpreter')

    it says 'LaTeX', but again, when I look in the properties of the figure via the plot tools menu, the interpreter remains set to 'TeX'. Complete plotting code:

        figure
        f = 'somefile.eps'
        set(0, 'defaultTextInterpreter', 'latex');
        ms = 8;
        fontSize = 18;
        loglog(p_m_sip, p_fa_sip, 'ko-.', 'LineWidth', 2, 'MarkerSize', ms);
        hold on;
        xlabel('$P_{fa}$', 'fontsize', fontSize);
        ylabel('$P_{m}$', 'fontsize', fontSize);
        legend('$\textbf{K}_{zz}$', 'Location', 'Best');
        set(gca, 'XMinorTick', 'on', 'YMinorTick', 'on', 'YGrid', 'on', 'XGrid', 'on');
        print('-depsc2', f);

  • Performance of Multiple Joins

    - by geeko
    Greetings Overflowers, I need to query against objects with many complex spatial conditions. In relational databases that translates to many joins (possibly 10+). I'm new to this business and am wondering whether to go with MS SQL Server 2008 R2, Oracle 11g, a document-based solution such as RavenDB, or simply some spatial database (GIS). Any thoughts? Regards.

    UPDATE: Thank you all for your answers. Would anybody opt for document/spatial databases? My database would consist of tens of millions to a few billion records, mostly read-only. There are almost no updates except to correct mistakes in the input; inserts happen overnight and are not that frequent. The joined tables are predicted beforehand, but the number of self joins (tables joining themselves multiple times) is not. Small pages of results from such queries are going to be viewed on a highly interactive website, so response time is critical. Any predictions on how this can perform on MS SQL Server 2008 R2 or Oracle 11g? I'm also concerned about boosting performance by adding more servers; which one scales better? How about PostgreSQL?

  • C# DynamicPDF Merging causing "Index out of bounds" error

    - by Dining Philanderer
    Greetings. We use DynamicPDF to merge multiple PDF documents stored in an MSSQL database. The vast majority of the time it works wonderfully, but occasionally one of these documents fails to merge, generating the exception message "Index was outside the bounds of the array." I think I have isolated the problem to PDF files that are larger than 8.5 x 11.0. Does anyone know if this is a known issue with DynamicPDF? The merging code is posted here. What would be ideal is a way to resize the PDF files to the correct size so this is not a concern at all.

        for (int docs = 0; docs < dsPDFInfo.Tables[0].Rows.Count; docs++)
        {
            // Each row holds one stored PDF and its size in bytes
            byte[] bytePDFArray = (byte[])dsPDFInfo.Tables[0].Rows[docs]["Content"];
            int iContentSize = Convert.ToInt32(dsPDFInfo.Tables[0].Rows[docs]["ContentSize"]);

            MemoryStream ms = new MemoryStream(bytePDFArray, 0, iContentSize);
            ceTe.DynamicPDF.Merger.PdfDocument pdfdoc = new ceTe.DynamicPDF.Merger.PdfDocument(ms);
            ceTe.DynamicPDF.Merger.MergeDocument mergedoc = new ceTe.DynamicPDF.Merger.MergeDocument(pdfdoc);
            docCombinedPDF.Append(mergedoc);
        }

    Thanks.
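
    Not a fix for the underlying size issue, but one way to narrow it down, reusing only the calls already shown above (a sketch, not a confirmed solution): wrap each merge in a try/catch so the offending row is logged and skipped instead of aborting the whole batch.

        for (int docs = 0; docs < dsPDFInfo.Tables[0].Rows.Count; docs++)
        {
            byte[] bytePDFArray = (byte[])dsPDFInfo.Tables[0].Rows[docs]["Content"];
            int iContentSize = Convert.ToInt32(dsPDFInfo.Tables[0].Rows[docs]["ContentSize"]);
            try
            {
                MemoryStream ms = new MemoryStream(bytePDFArray, 0, iContentSize);
                var pdfdoc = new ceTe.DynamicPDF.Merger.PdfDocument(ms);
                docCombinedPDF.Append(new ceTe.DynamicPDF.Merger.MergeDocument(pdfdoc));
            }
            catch (IndexOutOfRangeException ex)
            {
                // Log enough to identify the stored document; skipping it keeps the batch alive.
                Console.Error.WriteLine("Row {0} failed to merge: {1}", docs, ex.Message);
            }
        }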

  • Using EhCache for session.createCriteria(...).list()

    - by James Smith
    I'm benchmarking the performance gains from using a second-level cache in Hibernate (enabling EhCache), but it doesn't seem to improve performance. In fact, the time to perform the query slightly increases. The query is:

        session.createCriteria(MyEntity.class).list();

    The entity is:

        @Entity
        @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
        public class MyEntity {
            @Id
            @GeneratedValue
            private long id;

            @Column(length = 5000)
            private String data;

            // ---SNIP getters and setters---
        }

    My hibernate.cfg.xml is:

        <!-- all the normal stuff to get it to connect & map the entities, plus: -->
        <property name="hibernate.cache.region.factory_class">
            net.sf.ehcache.hibernate.EhCacheRegionFactory
        </property>

    The MyEntity table contains about 2000 rows. The problem is that before adding the cache, the query above to list all entities took an average of 65 ms. After the cache, it takes an average of 74 ms. Is there something I'm missing? Is there something extra that needs to be done to improve performance?

  • shell script filter du and find by a string inside a file in a subfolder

    - by Jason
    I have the following command that I run on Cygwin:

        find /cygdrive/d/tmp/* -maxdepth 0 -mtime -150 -type d | xargs du --max-depth=0 > foldersizesreport.csv

    I intended this command to do the following: for each folder under /d/tmp/ that was modified in the last 150 days, check its total size including the files within it, and report it to the file foldersizesreport.csv. However, that is no longer good enough for me, because it turns out there is a file inside each subfolder:

        /d/tmp/subfolder1/somefile.properties
        /d/tmp/subfolder2/somefile.properties
        /d/tmp/subfolder3/somefile.properties
        /d/tmp/subfolder4/somefile.properties

    So, as you see, inside each subfolderX there is a file named somefile.properties, and inside it there is a property SOMEPROPKEY=3808612800100 (among other properties); the value is a time in milliseconds. I need to change the command so that, instead of -mtime -150, the calculation includes only each subfolderX whose somefile.properties contains a SOMEPROPKEY value that lies in the future. If the value (say SOMEPROPKEY=23948948) is in the past, then don't include the folder in foldersizesreport.csv at all, because it is not relevant to me. So the resulting report should look like:

        /d/tmp/,subfolder1,<its size in KB>
        /d/tmp/,subfolder2,<its size in KB>

    and if subfolder3 had SOMEPROPKEY=34243234 (a time in the past), it would not be in that CSV file. So basically I'm looking for:

        find /cygdrive/d/tmp/* -maxdepth 0 -type d | <only subfolders whose somefile.properties has a SOMEPROPKEY in the future, not the past> | xargs du --max-depth=0 > foldersizesreport.csv

  • unhandled exception Handler in .Net 3.5 SP1

    - by Ruony
    Hi. I'm porting my own WPF web browser from Windows XP to Windows 7. When I test on Windows XP, it has no errors or exceptions. But when I run it on Windows 7 with the multi-touch library, my browser hits an unhandled exception:

        Source: PresentationCore
        Message: An unspecified error occurred on the render thread.
        StackTrace:
           at System.Windows.Media.MediaContext.NotifyPartitionIsZombie(Int32 failureCode)
           at System.Windows.Media.MediaContext.NotifyChannelMessage()
           at System.Windows.Interop.HwndTarget.HandleMessage(Int32 msg, IntPtr wparam, IntPtr lparam)
           at System.Windows.Interop.HwndSource.HwndTargetFilterMessage(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
           at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
           at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o)
           at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
           at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
        InnerException: null

    I want to know where the bug occurs; that trace is garbage information to me. I have already googled the message but never found anything useful. How do I find exactly which function the bug occurred in? Please tell me something.

  • SQL indexes for "not equal" searches

    - by bortzmeyer
    An SQL index lets me quickly find strings that match my query. Now I have to search a big table for the strings which do not match. Of course, the normal index does not help, and I get a slow sequential scan:

        essais=> \d phone_idx
        Index "public.phone_idx"
         Column | Type
        --------+------
         phone  | text
        btree, for table "public.phonespersons"

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone = '+33 1234567';
                                          QUERY PLAN
        -------------------------------------------------------------------------------
         Index Scan using phone_idx on phonespersons  (cost=0.00..8.41 rows=1 width=4)
           Index Cond: (phone = '+33 1234567'::text)
        (2 rows)

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone != '+33 1234567';
                                     QUERY PLAN
        ----------------------------------------------------------------------
         Seq Scan on phonespersons  (cost=0.00..18621.00 rows=999999 width=4)
           Filter: (phone <> '+33 1234567'::text)
        (2 rows)

    I understand (see Mark Byers' very good explanations) that PostgreSQL can decide not to use an index when it sees that a sequential scan would be faster (for instance, if almost all the tuples match). But here, "not equal" searches are really slower. Is there any way to make these "is not equal to" searches faster?

    Here is another example, to address Mark Byers' excellent remarks. The index is used for the '=' query (which returns the vast majority of tuples) but not for the '!=' query:

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) = 'fr';
                                                                  QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------
         Index Scan using tld_idx on emailspersons  (cost=0.25..4010.79 rows=97033 width=4) (actual time=0.137..261.123 rows=97110 loops=1)
           Index Cond: (tld(email) = 'fr'::text)
         Total runtime: 444.800 ms
        (3 rows)

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) != 'fr';
                                                               QUERY PLAN
        --------------------------------------------------------------------------------------------------------------------
         Seq Scan on emailspersons  (cost=0.00..27129.00 rows=2967 width=4) (actual time=1.004..1031.224 rows=2890 loops=1)
           Filter: (tld(email) <> 'fr'::text)
         Total runtime: 1037.278 ms
        (3 rows)

    The DBMS is PostgreSQL 8.3 (but I can upgrade to 8.4).

  • Can't send xml file from Web Application on Internet Explorer

    - by nCdy
    The error is something like: "Can't load Items.aspx from 192.168.0.172", with the text "Can't open this web-site. It can't be found. Try later." The code:

        HttpContext.Current.Response.Clear();
        HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        HttpContext.Current.Response.Charset = System.Text.Encoding.Unicode.EncodingName;
        HttpContext.Current.Response.ContentEncoding = System.Text.Encoding.Unicode;
        HttpContext.Current.Response.BinaryWrite(System.Text.Encoding.Unicode.GetPreamble());
        HttpContext.Current.Response.ContentType = "application/vnd.ms-excel";
        HttpContext.Current.Response.AddHeader(
            "content-disposition",
            string.Format("attachment; filename={0}", fileName));
        ....
        table.RenderControl(htw);
        HttpContext.Current.Response.Write(sw.ToString());
        HttpContext.Current.Response.End();

    The trouble with this file occurs only in Internet Explorer (it works in Opera, Firefox, ...), and the download works as plain HTML if I remove the line setting ContentType to "application/vnd.ms-excel".
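
    A guess based on a well-documented IE quirk (the post doesn't confirm it): Internet Explorer refuses to hand a download to another application when the response forbids caching, especially over HTTPS, and can then show exactly this kind of "cannot be found" page. A sketch of the same response with a cache-friendly header and a quoted filename, using only standard ASP.NET calls:

        var response = HttpContext.Current.Response;
        response.Clear();
        // Private (rather than NoCache) lets IE write the download to its cache,
        // which it needs to do before handing the file to Excel.
        response.Cache.SetCacheability(HttpCacheability.Private);
        response.ContentEncoding = System.Text.Encoding.Unicode;
        response.BinaryWrite(System.Text.Encoding.Unicode.GetPreamble());
        response.ContentType = "application/vnd.ms-excel";
        // Quote the filename so names containing spaces survive IE's parsing.
        response.AddHeader("Content-Disposition",
            string.Format("attachment; filename=\"{0}\"", fileName));
        table.RenderControl(htw);          // same rendering step as above
        response.Write(sw.ToString());
        response.End();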

  • VB.net XML Parser loop

    - by StealthRT
    Hey all, I am new to XML parsing in VB.NET. This is the code I am using to parse an XML string:

        Dim output As StringBuilder = New StringBuilder()
        Dim xmlString As String = _
            "<ip_list>" & _
            "<ip>" & _
            "<ip>192.168.1.1</ip>" & _
            "<ping>9 ms</ping>" & _
            "<hostname>N/A</hostname>" & _
            "</ip>" & _
            "<ip>" & _
            "<ip>192.168.1.6</ip>" & _
            "<ping>0 ms</ping>" & _
            "<hostname>N/A</hostname>" & _
            "</ip>" & _
            "</ip_list>"

        Using reader As XmlReader = XmlReader.Create(New StringReader(xmlString))
            Do Until reader.EOF
                reader.ReadStartElement("ip_list")
                reader.ReadStartElement("ip")
                reader.ReadStartElement("ip")
                reader.MoveToFirstAttribute()
                Dim theIP As String = reader.Value.ToString
                reader.ReadToFollowing("ping")
                Dim thePing As String = reader.ReadElementContentAsString().ToString
                reader.ReadToFollowing("hostname")
                Dim theHN As String = reader.ReadElementContentAsString().ToString
                MsgBox(theIP & " " & thePing & " " & theHN)
            Loop
        End Using

    I put in the "Do Until reader.EOF" myself, but it does not seem to work. It keeps giving an error after the first time around. I must be missing something? David
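
    For comparison, here is a sketch in C# of a loop that ends cleanly instead of erroring out (translating it back to VB.NET is mechanical). ReadToFollowing returns false once no further <ip> element exists, and each ReadElementContentAsString leaves the reader positioned on the next element, so the three fields can be read back to back; this relies on there being no whitespace between elements, as in the string built above:

        using (XmlReader reader = XmlReader.Create(new StringReader(xmlString)))
        {
            // Each entry is an outer <ip> wrapping an inner <ip>, <ping>, <hostname>.
            while (reader.ReadToFollowing("ip"))           // outer wrapper, or false at EOF
            {
                if (!reader.ReadToFollowing("ip")) break;  // inner <ip> with the address
                string theIP = reader.ReadElementContentAsString();   // leaves reader on <ping>
                string thePing = reader.ReadElementContentAsString(); // leaves reader on <hostname>
                string theHN = reader.ReadElementContentAsString();   // leaves reader on outer </ip>
                Console.WriteLine("{0} {1} {2}", theIP, thePing, theHN);
            }
        }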

  • static variable loses its value

    - by user542719
    I have a helper class with a static variable that is used for passing data between two classes:

        public class Helper {
            public static String paramDriveMod; // this is the static variable in the first class
        }

    This variable is set in the following method of the second class:

        public void USB_HandleMessage(char[] USB_RXBuffer) {
            int type = USB_RXBuffer[2];
            MESSAGES ms = MESSAGES.values()[type];
            switch (ms) {
                case READ_PARAMETER_VALUE: // read parameter values
                    switch (prm) {
                        case PARAMETER_DRIVE_MODE: // parameter drive mode
                            Helper.paramDriveMod = (Integer.toString(((USB_RXBuffer[4] << 8) & 0xff00)));
                            System.out.println(Helper.paramDriveMod + " drive mode is selected"); // here it shows the value that I need
                    ..........
        }} // let's say this ends the switch and the method

    and the following method of a third class uses the above:

        public void buttonSwitch(int value) throws InterruptedException {
            boolean bool = true;
            int c = 0;
            int delay = (int) Math.random();
            while (bool) {
                int param = 3;
                PARAMETERS prm = PARAMETERS.values()[param];
                switch (value) {
                    case 0:
                        value = 1;
                        while (c < 5) {
                            Thread.sleep(delay);
                            protocol.onSending(3, prm.PARAMETER_DRIVE_MODE.ordinal(), dataToRead, dataToRead.length); // read drive mode
                            System.out.println(Helper.paramDriveMod + " drive mode is ..........in while loop"); // here it shows a null value
                        }
        }} // let's say this ends the switch and the method

    What is the reason that this variable loses its value?

  • How do I create a simple Windows form to access a SQL Server database?

    - by NoCatharsis
    I believe this is a very novice question, and if I'm using the wrong forum to ask, please advise. I have a basic understanding of databasing with MS SQL Server, and programming with C++ and C#. I'm trying to teach myself more by setting up my own database with MS SQL Server Express 2008 R2 and accessing it via Windows forms created in C# Express 2010. At this point, I just want to keep it to free or Express dev tools (not necessarily Microsoft though).

    Anyway, I created a database using the instructions provided here and I set the data types appropriately for each column (no errors in setup at least). Now I'm designing the GUI in C# Express but I've kind of hit a wall as far as the database connection. Is there a simple way to access the database I created locally using C# Express? Can anyone suggest a guide that has all this spelled out already? I am a self-learner so I look forward to teaching myself how to use these applications, but any pointers to start me off in the right direction would be greatly appreciated.
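
    A minimal sketch of the usual ADO.NET pattern for this, assuming C# with WinForms (the connection string, table name and column names are made up for illustration): open a SqlConnection to the local Express instance, fill a DataTable through a SqlDataAdapter, and bind it to a grid on the form.

        using System.Data;
        using System.Data.SqlClient;
        using System.Windows.Forms;

        public class CustomersForm : Form
        {
            // ".\SQLEXPRESS" is the default instance name for SQL Server Express (assumed).
            const string ConnStr =
                @"Data Source=.\SQLEXPRESS;Initial Catalog=MyDatabase;Integrated Security=True";

            public CustomersForm()
            {
                var grid = new DataGridView { Dock = DockStyle.Fill };
                Controls.Add(grid);

                var table = new DataTable();
                using (var conn = new SqlConnection(ConnStr))
                using (var adapter = new SqlDataAdapter("SELECT * FROM Customers", conn))
                {
                    adapter.Fill(table);  // Fill opens and closes the connection itself
                }
                grid.DataSource = table; // show the rows on the form
            }
        }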

  • PHP Outputting File Attachments with Headers

    - by OneNerd
    After reading a few posts here I formulated this function, which is sort of a mishmash of a bunch of others:

        function outputFile($filePath, $fileName, $mimeType = '') {
            // Map file extensions to MIME types
            $mimeTypes = array(
                'pdf'  => 'application/pdf',
                'txt'  => 'text/plain',
                'html' => 'text/html',
                'exe'  => 'application/octet-stream',
                'zip'  => 'application/zip',
                'doc'  => 'application/msword',
                'xls'  => 'application/vnd.ms-excel',
                'ppt'  => 'application/vnd.ms-powerpoint',
                'gif'  => 'image/gif',
                'png'  => 'image/png',
                'jpeg' => 'image/jpg',
                'jpg'  => 'image/jpg',
                'php'  => 'text/plain'
            );

            // Send headers
            // -- next line fixed as per suggestion --
            header('Content-Type: ' . $mimeTypes[$mimeType]);
            header('Content-Disposition: attachment; filename="' . $fileName . '"');
            header('Content-Transfer-Encoding: binary');
            header('Accept-Ranges: bytes');
            header('Cache-Control: private');
            header('Pragma: private');
            header('Expires: Mon, 26 Jul 1997 05:00:00 GMT');
            readfile($filePath);
        }

    I have a PHP page (file.php) which does something like this (lots of other code stripped out):

        // I run this thru a safe function not shown here
        $safe_filename = $_GET['filename'];
        outputFile("/the/file/path/{$safe_filename}", $safe_filename, substr($safe_filename, -3));

    Seems like it should work, and it almost does, but I am having the following issues:

    1. When it's a text file, I get a strange symbol as the first letter in the text document.
    2. When it's a Word doc, it is corrupt (presumably that same first bit or byte throwing things off). I presume all other file types will be corrupt; I have not even tried them.

    Any ideas on what I am doing wrong? Thanks.

    UPDATE: changed the line of code as suggested; still the same issue.

  • The Current State Of Serving a PHP 5.x App on the Apache, LightTPD & Nginx Web Servers?

    - by Gregory Kornblum
    Being stuck in an MS-stack architecture/development position for the last year and a half has kept me from staying on top of the recent evolution of open-source web servers as much as I would have liked. However, I am now building an application/system architecture on an open-source stack, and sadly I do not have the time to give each of the above-mentioned web servers a thorough test of my own before deciding. So I figured I'd get input from the best development community site and, more specifically, the people who make it so.

    The site is a resource for information regarding a specific domain and target audience, with features to help users not only find the information but also interact with one another in various ways and for various reasons. I chose the open-source stack for the wealth of resources it has, along with much better offerings than the MS stack (e.g. WordPress vs BlogEngine.NET). I feel Java sits somewhere between these stacks in this regard, although I am not ruling out the possibility of using it in certain areas unrelated to the actual web app itself, such as background processes.

    I have already settled on PHP (using the CodeIgniter framework and APC), MySQL (InnoDB) and Memcached on CentOS, and I am definitely serving static content on Nginx. However, there is no consensus on which of the three servers mentioned is best for serving dynamic content. It seems LightTPD still has the leak issue, which rules it out if that is the case; Nginx seems not yet mature enough for this role; and of course Apache tries to be everything for everybody. I am going to compile whichever one is chosen with as many performance tweaks as possible, such as static linking. I believe I can get Apache to match the other two at serving dynamic content this way, and have it serve nothing static. Still, during my research it seemed the others are worth considering. So, with all things considered, I would love to hear what everyone here has to say on the matter. Thanks!

  • Enterprise integration of disparate systems

    - by Chris Latta
    We're about to embark on a fairly large integration effort to kill off a bunch of Access and SQL Server databases and get everything into one coherent enterprise system. There are also a number of other systems (accounting, CRM, payroll, MS Exchange) that hold critical data that we need to integrate (use for data validation in other systems), report on and otherwise expose. It is likely that some of these systems will change in the next few years, so we need to isolate our systems to be ready for change. Ideally we would be able to expose our forms in a consistent manner across as many of our systems as possible without having to re-develop them for each system.

    We are currently targeting SharePoint (2007 and soon 2010), Office (2007 and soon 2010 - Word, Excel, PowerPoint and Outlook), Reporting Services, .Net console applications, .Net Windows applications, shell extensions, with the possibility of exposing some functionality on mobile devices (BlackBerries currently, maybe iPhones later) and via our website. We're moving development to Visual Studio 2010 (from 2005) ahead of migrating to SharePoint 2010 and Office 2010. Given that most of our development presently targets the .Net framework (mostly in C#), it seems logical to stick with this unless there is some compelling reason to switch frameworks/platforms for some aspects.

    We're thinking of your standard Database - Data Integration layer - Business Objects layer - Web Services (or REST) layer - Client Application, plus doing our own client application with WPF (or something else?) forms that can also be exposed in the MS systems (SharePoint, Office, Windows). So, we don't want much, just everything :) Basically we need to isolate ourselves from database and system changes, create an API that can be used throughout our systems, and then make this functionality available in our client applications.

    I'm very keen to get pointers from anyone who has tips on how to pull this off. Should we look at the Enterprise Library as a place to start? Is REST with ASP.Net MVC2 a better solution than Web Services for a system like this? Will WPF deliver forms re-use or is there something better?

  • Deleting While Iterating in Ruby?

    - by Jesse J
    I'm iterating over a very large set of strings, which iterates over a smaller set of strings. Due to the size, this method takes a while to do, so to speed it up, I'm trying to delete one of the strings from the smaller set that no longer needs to be used. Below is my current code:

        Ms::Fasta.foreach(@database) do |entry|
          all.each do |set|
            if entry.header[1..40].include? set[1] + "|"
              startVal = entry.sequence.scan_i(set[0])[0]
              if startVal != nil
                @locations << [set[0], set[1], startVal, startVal + set[1].length]
                all.delete(set)
              end
            end
          end
        end

    The problem I face is that the easy way, array.delete(string), effectively adds a break statement to the inner loop, which messes up the results. The only way I know how to fix this is to do this:

        Ms::Fasta.foreach(@database) do |entry|
          i = 0
          while i < all.length
            set = all[i]
            if entry.header[1..40].include? set[1] + "|"
              startVal = entry.sequence.scan_i(set[0])[0]
              if startVal != nil
                @locations << [set[0], set[1], startVal, startVal + set[1].length]
                all.delete_at(i)
                i -= 1
              end
            end
            i += 1
          end
        end

    This feels kind of sloppy to me. Is there a better way to do this? Thanks.

  • Deployment of SQL Server: installing a second instance?

    - by Workshop Alex
    Simple problem. I'm working on a Delphi 2007/Win32 application which currently uses MS Access as a simple data store. I have to modify it to support SQL Server Express, which is easy. These modifications are working, so the application can be deployed using either SQL Server or MS Access, whichever the user prefers. I did consider deploying the whole application together with SQL Compact, but this is not practical. Using SQL Server Express 2008 instead of 2005 is an option, but it also has a few nasty side-effects which we don't want to resolve for now.

    The problem is deploying the whole project. The installation with SQL Server needs to be a quiet installation so the user won't notice it. SQL Server is mentioned in the documentation, so they know it's there; we just don't want to bother them with technical issues. In most cases, such an installation will go just fine. But what if the user already has a SQL Server (2005) installation which is used for something else? Personally, I would prefer to just install a second instance of SQL Server on their system so it won't conflict with the other installation. (Thus, if they uninstall the other app, the SQL instance will just stay installed.) While SQL Server 2005 and 2008 can be installed on the same system simply by using two different names for the instances, I wonder if it's also possible to install SQL Server 2005 twice on a single system to get two instances. And if possible, how?
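
    For the "what if the user already has SQL Server" check, one option is to read the documented InstalledInstances registry value before running the quiet install. A sketch in C# for illustration (the same registry read can be done from Delphi or from an installer script; on 64-bit Windows the Wow6432Node view may also need checking):

        using Microsoft.Win32;

        static class SqlInstanceProbe
        {
            // Returns the named SQL Server instances already on this machine,
            // e.g. { "MSSQLSERVER", "SQLEXPRESS" }; empty if none are installed.
            static string[] GetInstalledInstances()
            {
                using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
                    @"SOFTWARE\Microsoft\Microsoft SQL Server"))
                {
                    object names = (key != null) ? key.GetValue("InstalledInstances") : null;
                    return (names as string[]) ?? new string[0];
                }
            }
        }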

  • Generated images fail to load in browser

    - by notJim
    I've got a page on a web app that has about 13 images generated by my application, which is written in the Kohana PHP framework. The images are actually graphs. They are cached, so they are only generated once, but the first time the user visits the page, when all the images have to be generated, about half of them fail to load in the browser. Once the page has been requested once and the images are cached, they all load successfully.

    Doing some ad-hoc testing, loading an individual image in the browser takes 450-700 ms with an empty cache (I checked this using Google Chrome's resource tracking feature). For reference, it takes around 90-150 ms to load a cached image. Even when the image cache is empty, the data and some of the application's startup tasks are cached, so none of that needs to be fetched after the first request.

    My questions are:

      • Why are the images failing to load? It seems like the browser just decides not to download the image after a certain point, rather than waiting for them all to finish loading.
      • What can I do to get them to load the first time, with an empty cache? Obviously one option is to decrease the load times, and I could figure out how to do that by profiling the app, but are there other options?

    As I mentioned, the app is in the Kohana PHP framework, and it's running on Apache. As an aside, I've solved this problem for now by fetching the page as soon as the data is available (it comes from a batch process), so that the images are always cached by the time the user sees them. That feels like a kludgey solution to me, though, and I'm curious about what's actually going on.

  • SPSS - sums of squares change radically with slight model changes in ANOVA??

    - by Pat
    I have noticed that the sums of squares in my models can change fairly radically with even the slightest adjustment to a model. Is this normal? I'm using SPSS 16, and both models presented below used the same data and variables, with only one small change: categorizing one of the variables as either a 2-level or a 3-level variable.

    Details: using a 2 x 2 x 6 mixed-model ANOVA, with the 6 being the repeated measure, I get the following in the between-group analysis:

        Source    | Type III SS | df | MS      | F      | Sig
        ----------+-------------+----+---------+--------+------
        intercept | 4086.46     | 1  | 4086.46 | 104.93 | .000
        X         | 224.61      | 1  | 224.61  | 5.77   | .019
        Y         | 2.60        | 1  | 2.60    | .07    | .80
        X by Y    | 19.25       | 1  | 19.25   | .49    | .49
        Error     | 2570.40     | 66 | 38.95   |        |

    Then, when I use the exact same data but a slightly different model in which variable Y has 3 levels instead of 2, I get the following:

        Source    | Type III SS | df | MS      | F     | Sig
        ----------+-------------+----+---------+-------+------
        intercept | 3603.88     | 1  | 3603.88 | 90.89 | .000
        X         | 171.89      | 1  | 171.89  | 4.34  | .041
        Y         | 19.23       | 2  | 9.62    | .24   | .79
        X by Y    | 17.90       | 2  | 17.90   | .80   | .80
        Error     | 2537.76     | 64 | 39.65   |       |

    I don't understand why variable X would have a different sum of squares simply because variable Y gets divided up into 3 levels instead of 2. This is also the case in the within-groups analysis. Please help me understand :D Thank you in advance. Pat

  • Slightly different execution times between python2 and python3

    - by user557634
    Hi. Recently I wrote a simple generator of permutations in Python (an implementation of the "plain changes" algorithm described by Knuth in "The Art of Computer Programming", Volume 4). I was curious about the differences in its execution time between python2 and python3. Here is my function:

        def perms(s):
            s = tuple(s)
            N = len(s)
            if N <= 1:
                yield s[:]
                raise StopIteration()
            for x in perms(s[1:]):
                for i in range(0, N):
                    yield x[:i] + (s[0],) + x[i:]

    I tested both using the timeit module. My tests:

        $ echo "python2.6:" && ./testing.py && echo "python3:" && ./testing3.py
        python2.6:
        args    time[ms]
        1       0.003811
        2       0.008268
        3       0.015907
        4       0.042646
        5       0.166755
        6       0.908796
        7       6.117996
        8       48.346996
        9       433.928967
        10      4379.904032
        python3:
        args    time[ms]
        1       0.00246778964996
        2       0.00656183719635
        3       0.01419159912
        4       0.0406293644678
        5       0.165960511097
        6       0.923101452814
        7       6.24257639835
        8       53.0099868774
        9       454.540967941
        10      4585.83498001

    As you can see, for numbers of arguments less than 6, python3 is faster, but then the roles are reversed and python2.6 does better. As I am a novice in Python programming, I wonder why that is so. Or maybe my script is more optimized for python2? Thank you in advance for your kind answer :)

  • How to build a windows/web application that stores data locally and syncs with a remote database?

    - by Jason
    Hello, I need to build an application that stores data locally and then synchronizes with a remote MS SQL database. I am not sure how to go about doing this. Users enter data offline on a form and store it locally; the data is synchronized with the remote MS SQL database when online. There will be many users who enter data offline, and the local database on each PC needs to update when online and grab the 30 most recent records for use offline.

    Example: each day, users will enter data on their form while offline. The users will return to the office and need to sync with the online database. The next morning, the users will need to sync with the online database again before going offline. They need offline access to the 30 most recent records (which they will use for charting/graphing while offline).

    I am very new to building apps. I have VS 2010. I am wondering where to start. What language should I use? Is there a framework for doing this type of app? Any info or suggestions would be greatly appreciated. Thanks!
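
    As a low-tech starting point, here is a sketch in C#/ADO.NET of the "grab the 30 most recent records" step (the table, column and file names are made up for illustration; Microsoft's Sync Framework or SQL Server merge replication would be the heavier-duty alternatives): pull the rows while online and persist them to a local XML file the form can read offline.

        using System.Data;
        using System.Data.SqlClient;

        static class OfflineCache
        {
            const string LocalFile = "recent-records.xml";   // hypothetical local store

            // While online: fetch the 30 newest rows and save them locally.
            public static void Refresh(string connStr)
            {
                var table = new DataTable("Records");
                using (var conn = new SqlConnection(connStr))
                using (var adapter = new SqlDataAdapter(
                    "SELECT TOP 30 * FROM Records ORDER BY CreatedAt DESC", conn))
                {
                    adapter.Fill(table);
                }
                table.WriteXml(LocalFile, XmlWriteMode.WriteSchema); // keep schema for offline reads
            }

            // While offline: load the cached rows for charting/graphing.
            public static DataTable Load()
            {
                var table = new DataTable("Records");
                table.ReadXml(LocalFile);
                return table;
            }
        }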

  • What are CDN Best Practices?

    - by Wild Thing
    Hi, I have recently started using the Rackspace Cloud Files CDN (Limelight), about which I have some questions.

    I am using jQuery, jQuery UI and jQuery Tools in addition to custom JS code. Also, my site is written in ASP.NET, which means there is some ASP.NET-generated JS code. Right now what I have done is combine all of the JS (including the jQuery code), except the ASP.NET-generated JS, into one file, which I host on the Rackspace CDN.

    1. Would it make more sense to just get the jQuery and jQuery UI files from the Google-hosted CDN (which I suspect would serve these files very well, since they will already be in many users' caches)? This would mean one extra HTTP request, so I'm not sure it would help.
    2. Right now I have multiple containers for my assets. For example, in Rackspace I have 3 containers: JS, CSS and Images. The URL subdomain for all 3 is different. Will that lead to a performance penalty? Should I just use one container (and thus one domain for the CDN)?
    3. Is there a way of having the ASP.NET-generated JS loaded from the MS CDN? Would this have a performance hit as per the above question?

    Thanks in advance, WT

  • Image creation performance / image caching

    - by Kilnr
    Hello, I'm writing an application that has a scrollable image (used to display a map). The map background consists of several tiles (premade from a big JPG file) that I draw onto a Graphics object. I also use a cache (a Hashtable) to avoid having to create every image when I need it; I don't keep everything in memory, because that would be too much.

    The problem is that when I'm scrolling through the map and I need an image that wasn't cached, it takes about 60-80 ms to create it. Depending on screen resolution, tile size and scroll direction, this can occur multiple times in one scroll operation (for different tiles). In my case, it often happens that this needs to be done 4 times, which introduces a delay of more than 300 ms, which is extremely noticeable.

    The easiest thing for me would be some way to speed up the creation of the images, but I guess that's just wishful thinking. Besides that, I suppose the most obvious thing to do is to load the tiles predictively (e.g. when scrolling to the right, precache the tiles to the right), but then I'm faced with the rather difficult task of thinking up a halfway decent algorithm for this. My actual question, then, is: how can I best do this predictive loading? Maybe I could offload the creation of images to a separate thread? Other things to consider? Thanks in advance.
