Search Results


  • How to improve performance of non-scalar aggregations on denormalized tables

    - by The Lazy DBA
    Suppose we have a denormalized table with about 80 columns that grows at a rate of ~10 million rows (about 5GB) per month. We currently have 3 1/2 years of data (~400M rows, ~200GB). We create a clustered index to best suit retrieving data from the table on the following columns that serve as our primary key... [FileDate] ASC, [Region] ASC, [KeyValue1] ASC, [KeyValue2] ASC ... because when we query the table, we always have the entire primary key. These queries therefore always result in clustered index seeks and are very fast, and fragmentation is kept to a minimum. However, we do have a situation where we want to get the most recent FileDate for every Region, typically for reports, i.e. SELECT [Region] , MAX([FileDate]) AS [FileDate] FROM HugeTable GROUP BY [Region] The "best" solution I can come up with for this is to create a non-clustered index on Region. Although it means an additional insert on the table during loads, the hit is minimal (we load 4 times per day, so fewer than 100,000 additional index inserts per load). Since the table is also partitioned by FileDate, results for our query come back quickly enough (200ms or so), and that result set is cached until the next load. However, I'm guessing that someone with more data warehousing experience might have a solution that's more optimal, as this, for some reason, doesn't "feel right".
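    A common data warehousing alternative, since the data only changes at the four daily loads: maintain a tiny summary table as the last step of each load, so reports never touch HugeTable at query time at all. A minimal sketch in C# follows; the connection string and the dbo.LatestFileDateByRegion table are assumptions made for illustration, not anything from the setup above.

      using System;
      using System.Data.SqlClient;

      // Hypothetical post-load step: refresh a small summary table so reports can
      // read the latest FileDate per Region without scanning HugeTable at query time.
      class RefreshLatestFileDates
      {
          static void Main()
          {
              // Assumed connection string; adjust for the real environment.
              const string connectionString = "Server=.;Database=Warehouse;Integrated Security=true";

              const string refreshSql = @"
                  TRUNCATE TABLE dbo.LatestFileDateByRegion;   -- hypothetical summary table
                  INSERT INTO dbo.LatestFileDateByRegion (Region, FileDate)
                  SELECT [Region], MAX([FileDate])
                  FROM dbo.HugeTable
                  GROUP BY [Region];";

              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(refreshSql, connection))
              {
                  command.CommandTimeout = 300;   // the aggregate can take a while on ~400M rows
                  connection.Open();
                  command.ExecuteNonQuery();
              }

              Console.WriteLine("Summary table refreshed.");
          }
      }

    Reports then read the summary table, which only ever holds one row per Region, and the non-clustered index on Region becomes optional.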

    Read the article

  • Recommendations, Asp.Net ObjectDataSource bind result of webservice call

    - by DerDres
    I need recommendations on how to solve / structure a solution to the following problem. In an ASP.NET web application I'll have to visualise results returned from a web service call. I'm planning to use a Repeater server control and bind it with an ObjectDataSource. The service returns a result of the form: public class SearchResult : IExtensibleDataObject { [DataMember] public Guid SearchId { get; set; } [DataMember] public DateTime Timestamp { get; set; } [DataMember] public int Offset { get; set; } [DataMember] public int TotalResults { get; set; } [DataMember] public IList<ResultDocumentData> Documents { get; set; } } It is the collection of Documents that I need to visualise with a Repeater, which will be associated with an ObjectDataSource. The data source should work on the type ResultDocumentData, the type of the collection property in the class SearchResult. I think I need to wrap the call to the web service in a data access layer class that will have a GetDocuments method returning an IList, which the ObjectDataSource can use as its SelectMethod. I think I could make this work, but I would like to know how to do it in an elegant way. Can you give me general recommendations and/or recommendations for the following: web project folder structure + naming; naming conventions for the name of the service wrapper class; creating the service reference.
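    A minimal sketch of the wrapper idea, assuming a generated service client named SearchServiceClient with a Search(query, offset) operation (both names are invented for illustration; substitute whatever the service reference generates). The [DataObject]/[DataObjectMethod] attributes let the ObjectDataSource designer discover the select method; in the markup, TypeName would point at this class, SelectMethod at GetDocuments, and the Repeater binds to the ObjectDataSource.

      using System.Collections.Generic;
      using System.ComponentModel;

      // Hypothetical data access wrapper that the ObjectDataSource uses as its TypeName.
      [DataObject]
      public class SearchResultDataSource
      {
          [DataObjectMethod(DataObjectMethodType.Select, true)]
          public IList<ResultDocumentData> GetDocuments(string query, int offset)
          {
              // SearchServiceClient and Search(...) are assumed names for the generated proxy.
              using (var client = new SearchServiceClient())
              {
                  SearchResult result = client.Search(query, offset);
                  return result.Documents;
              }
          }
      }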

    Read the article

  • Understanding Hibernate saveOrUpdate

    - by Stephano
    The books that I've read regarding Hibernate are, at best, reference tomes. They very seldom have good code examples, so I tend to use online resources for those needs. However, I've always had a problem understanding the basic idea of Hibernate persistence. I've read the books and understand the concepts, but in practice, I often see results that I don't understand. Perhaps you all can help, as you have in the past. Let's look at a simple example of a dog and a cat that are friends. This isn't a rare occurrence. It also has the benefit of being much more interesting than my business case. We want a function called "saveFriends" that takes a dog name and a cat name. We'll save the Dog and then the Cat. For this example to work, the cat is going to have a reference back to the dog. I understand this isn't an ideal example, but it's cute and works for our purposes. FriendService.java public int saveFriends(String dogName, String catName) { Dog fido = new Dog(); Cat felix = new Cat(); fido.name = dogName; fido = animalDao.saveDog(fido); felix.name = catName; [ex.A]felix.friend = fido; [ex.B]felix.friend = animalDao.getDogByName(dogName); animalDao.saveCat(felix); } AnimalDao.java (extends HibernateDaoSupport) public Dog saveDog(Dog dog) { getHibernateTemplate().saveOrUpdate(dog); return dog; } public Cat saveCat(Cat cat) { getHibernateTemplate().saveOrUpdate(cat); return cat; } public Dog getDogByName(String name) { return (Dog) getHibernateTemplate().find("from Dog where name=?", name).get(0); } Now, assume for a minute that I would like to use either example A or example B to save my friend. Is one better than the other to use? I'll understand if neither of those examples works, but please explain why.

    Read the article

  • Why is execution-time method resolution faster than compile-time resolution?

    - by Felix
    At school, we learned about virtual functions in C++, and how they are resolved (or found, or matched, I don't know what the terminology is -- we're not studying in English) at execution time instead of compile time. The teacher also told us that compile-time resolution is much faster than execution-time (and it would make sense for it to be so). However, a quick experiment would suggest otherwise. I've built this small program: #include <iostream> #include <limits.h> using namespace std; class A { public: void f() { // do nothing } }; class B: public A { public: void f() { // do nothing } }; int main() { unsigned int i; A *a = new B; for (i=0; i < UINT_MAX; i++) a->f(); return 0; } Where I made A::f() once normal, once virtual. Here are my results: [felix@the-machine C]$ time ./normal real 0m25.834s user 0m25.742s sys 0m0.000s [felix@the-machine C]$ time ./virtual real 0m24.630s user 0m24.472s sys 0m0.003s [felix@the-machine C]$ time ./normal real 0m25.860s user 0m25.735s sys 0m0.007s [felix@the-machine C]$ time ./virtual real 0m24.514s user 0m24.475s sys 0m0.000s [felix@the-machine C]$ time ./normal real 0m26.022s user 0m25.795s sys 0m0.013s [felix@the-machine C]$ time ./virtual real 0m24.503s user 0m24.468s sys 0m0.000s There seems to be a steady ~1 second difference in favor of the virtual version. Why is this? Relevant or not: dual-core Pentium @ 2.80GHz, no extra applications running between the two tests. Arch Linux with gcc 4.5.0. Compiling normally, like: $ g++ test.cpp -o normal Also, -Wall doesn't spit out any warnings, either.

    Read the article

  • HOWTO - Compare a date string to datetime in SQL Server?

    - by Guy
    In SQL Server I have a DATETIME column which includes a time element. Example: '14 AUG 2008 14:23:019' What is the best method to only select the records for a particular day, ignoring the time part? Example: (Not safe, as it does not match the time part and returns no rows) DECLARE @p_date DATETIME SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 ) SELECT * FROM table1 WHERE column_datetime = @p_date Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question, as DATETIME stuff in MSSQL is probably the topic I look up most in SQL BOL. Update: Clarified the example to be more specific. Edit: Sorry, but I've had to down-mod WRONG answers (answers that return wrong results). @Jorrit: WHERE (date>'20080813' AND date<'20080815') will return the 13th and the 14th. @wearejimbo: Close, but no cigar! Badge awarded to you. You missed out records written at 14/08/2008 23:59:001 to 23:59:999 (i.e. less than 1 second before midnight).
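    For reference, the usual safe pattern is a half-open date range, so every time value on the target day matches, including the last few milliseconds before midnight. Below is a hedged ADO.NET sketch against the table1/column_datetime names from the question; the connection string is a placeholder.

      using System;
      using System.Data;
      using System.Data.SqlClient;

      class DateRangeQuery
      {
          static void Main()
          {
              // Assumed connection string.
              const string connectionString = "Server=.;Database=MyDb;Integrated Security=true";

              // Half-open range: >= midnight on the day, < midnight on the next day.
              // Unlike an equality test, this also matches rows stamped 23:59:59.997.
              const string sql = @"
                  SELECT *
                  FROM table1
                  WHERE column_datetime >= @day
                    AND column_datetime <  DATEADD(day, 1, @day);";

              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(sql, connection))
              {
                  command.Parameters.Add("@day", SqlDbType.DateTime).Value = new DateTime(2008, 8, 14);
                  connection.Open();
                  using (var reader = command.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          // process the row
                      }
                  }
              }
          }
      }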

    Read the article

  • Measuring execution time of selected loops

    - by user95281
    I want to measure the running times of selected loops in a C program so as to see what percentage of the total time for executing the program (on Linux) is spent in these loops. I should be able to specify the loops for which the performance should be measured. I have tried out several tools (VTune, HPCToolkit, OProfile) in the last few days and none of them seem to do this. They all find the performance bottlenecks and just show the time for those. That's because these tools only store the time taken that is above a threshold (~1ms). So if a loop takes less time than that, its execution time won't be reported. The basic block counting feature of gprof depends on a feature in older compilers that's not supported now. I could manually write a simple timer using gettimeofday or something like that, but in some cases it won't give accurate results. For example: for (i = 0; i < 1000; ++i) { for (j = 0; j < N; ++j) { //do some work here } } Now here I want to measure the total time spent in the inner loop, so I will have to put a call to gettimeofday inside the first loop. So gettimeofday itself will get called 1000 times, which introduces its own overhead, and the result will be inaccurate.

    Read the article

  • Database access through collections

    - by Mike
    Hi All, I have a 3-tiered application where I need to get database results and populate the UI. I have a MessagesCollection class that deals with messages. I load my user from the database. On the instantiation of a user (i.e. new User()), a MessageCollection Messages = new MessageCollection(this) is performed. MessageCollection accepts a user as a parameter. User user = user.LoadUser("bob"); I want to get the messages for Bob. user.Messages.GetUnreadMessages(); GetUnreadMessages calls my business data provider, which in turn calls the data access layer. The business data provider returns List. My question is - I am not sure what the best practice is here - if I have a collection of messages in an array inside the MessagesCollection class, I could implement ICollection to provide GetEnumerator() and the ability to traverse the messages. But what happens if the messages change and the user has old messages loaded? What about big message collections? What if my user had 10,000 unread messages? I don't think accessing the database and returning 10,000 Message objects would be efficient.
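    One common compromise is to let the collection pull pages on demand instead of materializing 10,000 Message objects up front. A rough sketch of that shape follows; IMessageDataProvider and its method names are invented for illustration, Message is the question's own message type, and the actual paging would be pushed down into the data access layer's SQL.

      using System.Collections.Generic;

      // Hypothetical provider interface sitting in front of the business/data layers.
      public interface IMessageDataProvider
      {
          int CountUnreadMessages(string userName);
          IList<Message> GetUnreadMessages(string userName, int offset, int pageSize);
      }

      // Sketch of a collection that fetches one page at a time instead of
      // loading every unread message when the user is instantiated.
      public class MessageCollection
      {
          private readonly string _userName;
          private readonly IMessageDataProvider _provider;

          public MessageCollection(string userName, IMessageDataProvider provider)
          {
              _userName = userName;
              _provider = provider;
          }

          public int UnreadCount
          {
              get { return _provider.CountUnreadMessages(_userName); }
          }

          public IEnumerable<Message> GetUnreadMessages(int pageSize)
          {
              int offset = 0;
              while (true)
              {
                  IList<Message> page = _provider.GetUnreadMessages(_userName, offset, pageSize);
                  if (page.Count == 0)
                      yield break;                 // no more rows in the database
                  foreach (Message message in page)
                      yield return message;        // stream results to the caller
                  offset += page.Count;
              }
          }
      }

    The staleness question then becomes a policy decision: each call to GetUnreadMessages hits the database again, so the caller always sees the current rows at the cost of a round trip per page.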

    Read the article

  • Is NSPasteboard thread-safe?

    - by Joe
    Is it safe to write data to an NSPasteboard object from a background thread? I can't seem to find a definitive answer anywhere. I think the assumption is that the data will be written to the pasteboard before the drag begins. Background: I have an application that is fetching data from Evernote. When the application first loads, it gets the metadata for each note, but not the note content. The note stubs are then listed in an outline view. When the user starts to drag a note, the notes are passed to the background thread that handles getting the note content from Evernote. Having the main thread block until the data has been fetched results in a significant delay and a poor user experience, so I have the [outlineView:writeItems:toPasteboard:] function return YES while the background thread processes the data and invokes the main thread to write the data to the pasteboard object. If the note content gets transferred before the user drops the note somewhere, everything works perfectly. If the user drops the note somewhere before the data has been processed... well, everything blocks forever. Is it safe to just have the background thread write the data to the pasteboard?

    Read the article

  • Lucene and Special Characters

    - by Brandon
    I am using Lucene.Net 2.0 to index some fields from a database table. One of the fields is a 'Name' field which allows special characters. When I perform a search, it does not find my document that contains a term with special characters. I index my field as such: Directory DALDirectory = FSDirectory.GetDirectory(@"C:\Indexes\Name", false); Analyzer analyzer = new StandardAnalyzer(); IndexWriter indexWriter = new IndexWriter(DALDirectory, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED); Document doc = new Document(); doc.Add(new Field("Name", "Test (Test)", Field.Store.YES, Field.Index.TOKENIZED)); indexWriter.AddDocument(doc); indexWriter.Optimize(); indexWriter.Close(); And I search doing the following: value = value.Trim().ToLower(); value = QueryParser.Escape(value); Query searchQuery = new TermQuery(new Term(field, value)); Searcher searcher = new IndexSearcher(DALDirectory); TopDocCollector collector = new TopDocCollector(searcher.MaxDoc()); searcher.Search(searchQuery, collector); ScoreDoc[] hits = collector.TopDocs().scoreDocs; If I perform a search for field as 'Name' and value as 'Test', it finds the document. If I perform the same search with 'Name' and value as 'Test (Test)', then it does not find the document. Even more strange, if I remove the QueryParser.Escape line and do a search for a GUID (which, of course, contains hyphens), it finds documents where the GUID value matches, but performing the same search with the value as 'Test (Test)' still yields no results. I am unsure what I am doing wrong. I am using the QueryParser.Escape method to escape the special characters and am storing the field and searching per Lucene.Net's examples. Any thoughts?
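    For what it's worth, the likely explanation is that StandardAnalyzer tokenizes "Test (Test)" into two lower-cased "test" terms at index time, so no single term "test (test)" ever exists for a TermQuery to match; QueryParser.Escape only has an effect when the text actually goes through a QueryParser. Two hedged fragments against the Lucene.Net 2.0 API, meant to slot into the indexing and searching code above (the NameExact field name is invented):

      // Option 1: add an untokenized copy of the field, lower-cased to match the
      // lower-cased search value, so the exact string survives as a single term.
      doc.Add(new Field("NameExact", "Test (Test)".ToLower(),
                        Field.Store.YES, Field.Index.UN_TOKENIZED));
      Query exactQuery = new TermQuery(new Term("NameExact", "test (test)"));

      // Option 2: keep the tokenized Name field, but build the query with a
      // QueryParser that uses the same analyzer the index used; escaping is only
      // meaningful on this path, where the text is actually parsed.
      QueryParser parser = new QueryParser("Name", new StandardAnalyzer());
      Query parsedQuery = parser.Parse(QueryParser.Escape("Test (Test)"));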

    Read the article

  • Remove undesired indexed keywords from Sql Server FTS Index

    - by Scott
    Could anyone tell me if SQL Server 2008 has a way to prevent keywords from being indexed that aren't really relevant to the types of searches that will be performed? For example, we have the IFilters for PDF and Word hooked in and our documents are being indexed properly as far as I can tell. These documents, however, have lots of numeric values in them that people won't really be searching for and that don't bring back meaningful results. These are still being indexed and create lots of entries in the full-text catalog. Basically we are trying to optimize our search engine in any way we can and assumed all these unnecessary entries couldn't be helping performance. I want my catalog to consist of alphabetic keywords only. The current IFilters work better than anything I could write in the time I have, but they just index more than I need. This is an example of some of the terms from sys.dm_fts_index_keywords_by_document that I want out: $1,000, $100, $250, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 129, 13.1, 14, 14.12, 145, 15, 16.2, 16.4, 18, 18.1, 18.2, 18.3, 18.4, 18.5 These are some examples from the same management view that I think are desirable for keeping and searching on: above, accordingly, accounts, add, addition, additional, additive Any help would be greatly appreciated!
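    If the instance is SQL Server 2008, one option is a custom full-text stoplist attached to the index, populated with the numeric terms the DMV reports. Below is a hedged one-off sketch that only prints ALTER FULLTEXT STOPLIST statements for review; the database, table, and DocumentStoplist names are placeholders, the stoplist has to exist and be assigned to the full-text index already, and the catalog needs repopulating before existing numeric entries drop out.

      using System;
      using System.Data.SqlClient;
      using System.Text.RegularExpressions;

      // Hypothetical utility: find indexed terms that look purely numeric/currency-like
      // and emit stoplist statements for a DBA to review and run.
      class BuildNumericStopwords
      {
          static void Main()
          {
              const string connectionString = "Server=.;Database=Docs;Integrated Security=true";
              const string keywordSql = @"
                  SELECT DISTINCT display_term
                  FROM sys.dm_fts_index_keywords_by_document(DB_ID('Docs'), OBJECT_ID('dbo.Documents'));";

              var numericLike = new Regex(@"^[\$\d.,]+$");   // e.g. $1,000, 14.12, 145

              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(keywordSql, connection))
              {
                  connection.Open();
                  using (var reader = command.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          if (reader.IsDBNull(0)) continue;
                          string term = reader.GetString(0);
                          if (numericLike.IsMatch(term))
                              Console.WriteLine(
                                  "ALTER FULLTEXT STOPLIST DocumentStoplist ADD '{0}' LANGUAGE 1033;",
                                  term.Replace("'", "''"));
                      }
                  }
              }
          }
      }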

    Read the article

  • Sendkeys problem from .NET program

    - by user203123
    The code below I copied from MSDN with a bit of modification: [DllImport("user32.dll", CharSet = CharSet.Unicode)] public static extern IntPtr FindWindow(string lpClassName,string lpWindowName); [DllImport("User32")] public static extern bool SetForegroundWindow(IntPtr hWnd); int cnt = 0; private void button1_Click(object sender, EventArgs e) { IntPtr calculatorHandle = FindWindow("Notepad", "Untitled - Notepad"); if (calculatorHandle == IntPtr.Zero) { MessageBox.Show("Calculator is not running."); return; } SetForegroundWindow(calculatorHandle); SendKeys.SendWait(cnt.ToString()); SendKeys.SendWait("{ENTER}"); cnt++; SendKeys.Flush(); System.Threading.Thread.Sleep(1000); } The problem is that the number sequence in Notepad is not continuous. The first click always results in 0 (as expected), but from the second click the result is unpredictable (though the sequence is still in order, e.g. 3, 4, 5, 10, 14, 15, ...). If I click the button fast enough, I am able to get the results in continuous order (0,1,2,3,4,...), but sometimes it produces the same number more than twice (e.g. 0,1,2,3,3,3,4,5,6,6,6,7,8,9,...)
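    One likely culprit is that SetForegroundWindow has not finished switching focus by the time SendWait fires, so keystrokes land in whatever window still has focus. Below is a hedged variation of the click handler that polls GetForegroundWindow (a real user32 export) before typing, reusing the declarations already shown; a more robust route is to avoid synthesized keystrokes entirely and talk to the target window directly.

      [DllImport("user32.dll")]
      public static extern IntPtr GetForegroundWindow();

      private void button1_Click(object sender, EventArgs e)
      {
          IntPtr notepadHandle = FindWindow("Notepad", "Untitled - Notepad");
          if (notepadHandle == IntPtr.Zero)
          {
              MessageBox.Show("Notepad is not running.");
              return;
          }

          SetForegroundWindow(notepadHandle);

          // Wait up to ~1 second until Notepad really is the foreground window.
          for (int attempts = 0; attempts < 20 && GetForegroundWindow() != notepadHandle; attempts++)
              System.Threading.Thread.Sleep(50);

          if (GetForegroundWindow() != notepadHandle)
              return;   // never got focus; don't spray keystrokes at some other window

          SendKeys.SendWait(cnt.ToString());
          SendKeys.SendWait("{ENTER}");
          cnt++;
      }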

    Read the article

  • Table valued function only returns CLR error

    - by Anthony
    I have read-only access to a database that was set up for a third-party, closed-source app. One group of (hopefully) useful table functions only returns the error: Failed to initialize the Common Language Runtime (CLR) v2.0.50727 with HRESULT 0x80131522. You need to restart SQL server to use CLR integration features. (severity 16) But in theory, the third-party app should be able to use the function (either directly or indirectly), so I'm convinced I'm not setting things up right. I'm very new to SQL Server, so I could be missing something obvious. Or I could be missing something really slight, I have no idea. Here is an example of a query that returns the above error: SELECT * FROM dbo.UncompressDataDateRange(4,'Apr 24 2010 12:00AM','Apr 30 2010 12:00AM') Where the function takes three parameters: The Data Set (int) -- basically the data has 6 classifications, and the giant table this should be pulling from has a column to indicate which is which. startDate (smalldatetime) endDate (smalldatetime) There are other, similar functions that expand on the same idea, all returning the same error. Quick note: I'm not sure if this is relevant, but I was able to connect to the database via SQL Studio (but without the privs to script the functions as code), and I checked the dependencies for the above sample function. It turns out that it depends on a view that I have gotten to work, and that view depends on the larger, much hairier data table. This makes me think I should somehow be pointing the function at the results of the view, but I'm not seeing any documentation that shows how that is done.
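    That particular error usually means CLR integration is either switched off or failed to start, so the fix is administrative rather than anything in the query itself. A hedged diagnostic sketch, assuming you are at least allowed to read sys.configurations; the connection string is a placeholder, and the sp_configure/RECONFIGURE step (or the restart the message asks for) has to be done by someone with sysadmin rights.

      using System;
      using System.Data.SqlClient;

      // Checks whether the 'clr enabled' option is on for the instance.
      class CheckClrEnabled
      {
          static void Main()
          {
              const string connectionString = "Server=.;Database=master;Integrated Security=true";
              const string sql = @"
                  SELECT name, CAST(value_in_use AS int) AS value_in_use
                  FROM sys.configurations
                  WHERE name = 'clr enabled';";

              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(sql, connection))
              {
                  connection.Open();
                  using (var reader = command.ExecuteReader())
                  {
                      if (reader.Read() && reader.GetInt32(1) == 1)
                          Console.WriteLine("'clr enabled' is on; HRESULT 0x80131522 then points at a failed CLR start, so the instance likely needs the restart the message asks for.");
                      else
                          Console.WriteLine("'clr enabled' is off; an admin would need to run: EXEC sp_configure 'clr enabled', 1; RECONFIGURE;");
                  }
              }
          }
      }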

    Read the article

  • What are some programming techniques for converting SD images to HD images

    - by Dr Dork
    I'm taking a programming class and the instructor loves to work with images, so most of our assignments involve manipulating raw RGB image data. One of our assignments is to implement a standard image converter that converts SD images to HD images and vice versa. I always take advantage of these types of assignments to go a little beyond what we were asked to do, so I added a basic anti-aliasing process that uses the average pixel color of the 3x3 surrounding pixels to improve the converted image. While it helps a bit, the resulting image still doesn't look good, which is OK because it's not expected to for the assignment. I've learned that converting SD to HD images has proven to be much harder than downsampling from HD to SD, just because SD to HD effectively involves adding resolution that is not there. Obviously, it is hard to create pixels from nothing, but I'd like to enhance my anti-aliasing to something that provides better results when upscaling an image. Most of the techniques I find and read on the internet are far beyond my level of image processing and programming. Can anybody suggest any better methods or processes to create good HD content from SD content that may be within my programming skill level? I know that's a difficult thing to gauge since you don't know me, but perhaps knowing that I can write C++ code to read in raw RGB data and upscale/downscale it with simple average anti-aliasing will give you an idea. Thanks in advance for all your help!
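    The usual first step up from block averaging when upscaling is interpolation: map each output pixel back to fractional source coordinates and blend the four surrounding source pixels. Below is a rough bilinear sketch in C# over a flat 24-bit RGB array; the idea carries straight over to C++, and bicubic or Lanczos filtering is the next rung up in quality.

      using System;

      static class Upscaler
      {
          // Bilinear resize of a raw RGB image stored as a flat byte array
          // (row-major, 3 bytes per pixel). Sketch only; assumes both dimensions >= 2.
          public static byte[] ResizeBilinear(byte[] src, int srcWidth, int srcHeight,
                                              int dstWidth, int dstHeight)
          {
              var dst = new byte[dstWidth * dstHeight * 3];

              for (int y = 0; y < dstHeight; y++)
              {
                  // Map the output row back to a fractional source row.
                  double srcY = (double)y * (srcHeight - 1) / (dstHeight - 1);
                  int y0 = (int)srcY;
                  int y1 = Math.Min(y0 + 1, srcHeight - 1);
                  double fy = srcY - y0;

                  for (int x = 0; x < dstWidth; x++)
                  {
                      double srcX = (double)x * (srcWidth - 1) / (dstWidth - 1);
                      int x0 = (int)srcX;
                      int x1 = Math.Min(x0 + 1, srcWidth - 1);
                      double fx = srcX - x0;

                      for (int c = 0; c < 3; c++)   // R, G, B
                      {
                          // The four neighbouring source samples.
                          double p00 = src[(y0 * srcWidth + x0) * 3 + c];
                          double p10 = src[(y0 * srcWidth + x1) * 3 + c];
                          double p01 = src[(y1 * srcWidth + x0) * 3 + c];
                          double p11 = src[(y1 * srcWidth + x1) * 3 + c];

                          // Blend horizontally, then vertically.
                          double top    = p00 + (p10 - p00) * fx;
                          double bottom = p01 + (p11 - p01) * fx;
                          double value  = top + (bottom - top) * fy;

                          dst[(y * dstWidth + x) * 3 + c] = (byte)(value + 0.5);
                      }
                  }
              }
              return dst;
          }
      }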

    Read the article

  • Delphi Shell IExtractIcon usage and result

    - by Roy M Klever
    What I do: I try to extract a thumbnail using IExtractImage; if that fails, I try to extract icons using IExtractIcon to get the maximum icon size, but IExtractIcon gives strange results. The problem is I tried to use a method that extracts icons from an imagelist, but if there is no large icon (256x256) it will render the smaller icon at the top-left position of the icon, and that does not look good. That is why I am trying to use IExtractIcon instead. But icons that show up as 256x256 icons in my imagelist extraction method report icon sizes of 33 large and 16 small. So how do I check if a large (256x256) icon exists? If you need more info I can provide some sample code. if PThumb.Image = nil then begin OleCheck(ShellFolder.ParseDisplayName(0, nil, StringToOleStr(PThumb.Name), Eaten, PIDL, Atribute)); ShellFolder.GetUIObjectOf(0, 1, PIDL, IExtractIcon, nil, XtractIcon); CoTaskMemFree(PIDL); bool:= False; if Assigned(XtractIcon) then begin GetLocationRes := XtractIcon.GetIconLocation(GIL_FORSHELL, @Buf, sizeof(Buf), IIdx, IFlags); if (GetLocationRes = NOERROR) or (GetLocationRes = E_PENDING) then begin Bmp := TBitmap.Create; try OleCheck(XtractIcon.Extract(@Buf, IIdx, LIcon, SIcon, 32 + (16 shl 16))); Done:= False; Roy M Klever

    Read the article

  • How can I reject a Windows "Service Stop" request in ATL 7?

    - by Matt Dillard
    I have a Windows service built upon ATL 7's CAtlServiceModuleT class. This service serves up COM objects that are used by various applications on the system, and these other applications naturally start getting errors if the service is stopped while they are still running. I know that ATL DLLs solve this problem by returning S_OK in DllCanUnloadNow() if CComModule's GetLockCount() returns 0. That is, it checks to make sure no one is currently using any COM objects served up by the DLL. I want equivalent functionality in the service. Here is what I've done in my override of CAtlServiceModuleT::OnStop(): void CMyServiceModule::OnStop() { if( GetLockCount() != 0 ) { return; } BaseClass::OnStop(); } Now, when the user attempts to Stop the service from the Services panel, they are presented with an error message: Windows could not stop the XYZ service on Local Computer. The service did not return an error. This could be an internal Windows error or an internal service error. If the problem persists, contact your system administrator. The Stop request is indeed refused, but it appears to put the service in a bad state. A second Stop request results in this error message: Windows could not stop the XYZ service on Local Computer. Error 1061: The service cannot accept control messages at this time. Interestingly, the service does actually stop this time (although I'd rather it not, since there are still outstanding COM references). I have two questions: Is it considered bad practice for a service to refuse to stop when asked? Is there a polite way to signify that the Stop request is being refused; one that doesn't put the Service into a bad state?

    Read the article

  • JQuery printfunction: no content, no view

    - by Joris
    I have a jQuery print function: function doPrintPage() { myWindow = window.open('', '', 'titlebar=yes,menubar=yes,status=yes,scrollbars=yes,width=800,height=600'); myWindow.document.open(); myWindow.document.write(document.getElementById("studentnummer").value); myWindow.document.write(document.getElementById("Voornaam").value + " " + document.getElementById("Tussenvoegsel").value + " " + document.getElementById("Achternaam").value); myWindow.document.write("</br>"); myWindow.document.write("Result"); myWindow.document.write(document.getElementById('tab1').innerHTML); myWindow.document.write("</br>"); myWindow.document.write("Tree"); myWindow.document.write(document.getElementById('tab2').innerHTML); myWindow.document.write("</br>"); myWindow.document.write("Open"); myWindow.document.write(document.getElementById('tab3').innerHTML); myWindow.document.write("</br>"); myWindow.document.write("Free"); myWindow.document.write(document.getElementById('tab4').innerHTML); myWindow.document.close(); myWindow.focus(); } There are 3 gridviews (elements in "tab#"), and on the page that is generated by this script, each gridview gets a header (Results, Tree, Open, Free). It is possible that there is no gridview for "Tree". It would be nice if the "Tree" header weren't shown either. EDIT: I tried this: if ($('tab4')[0]) {myWindow.document.write("Free"); myWindow.document.write(document.getElementById('tab4').innerHTML); but it doesn't seem to work either... Does anyone know the solution?

    Read the article

  • Go: using a pointer to array

    - by Sean
    I'm having a little play with Google's Go language, and I've run into something which is fairly basic in C but doesn't seem to be covered in the documentation I've seen so far. When I pass a pointer to an array to a function, I presumed we'd have some way to access it as follows: func conv(x []int, xlen int, h []int, hlen int, y *[]int) { for i := 0; i<xlen; i++ { for j := 0; j<hlen; j++ { *y[i+j] += x[i]*h[j] } } } But the Go compiler doesn't like this: sean@spray:~/dev$ 8g broke.go broke.go:8: invalid operation: y[i + j] (index of type *[]int) Fair enough - it was just a guess. I have got a fairly straightforward workaround: func conv(x []int, xlen int, h []int, hlen int, y_ *[]int) { y := *y_ for i := 0; i<xlen; i++ { for j := 0; j<hlen; j++ { y[i+j] += x[i]*h[j] } } } But surely there's a better way. The annoying thing is that googling for info on Go isn't very useful, as all sorts of C\C++\unrelated results appear for most search terms.

    Read the article

  • Problems with GData Request Token

    - by Dan Delgado
    We have successfully used GData libraries to access a user's Google Docs. But we encountered problems when many users log in to our site and authorize our web app at the same time or successively. Here's what happens: First user successfully logs in, authorizes our web app via OAuth and is able to add a rubric (or Google spreadsheet). Second user, immediately after the first user adds a rubric, successfully logs in, then the web app fails on authorize (Token not given. I tried to log it.) Third user fails on login. Fourth user was able to log in, authorize via OAuth, and create rubrics successfully. Fifth user was able to log in but, like the second user, gets an invalid token on authorize (Token not given.) And the list goes on. Results were unpredictable. Below is an excerpt of the stack trace we get when the failure scenario happens: Nested in org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.NullPointerException: java.lang.NullPointerException at com.google.gdata.client.authn.oauth.OAuthUtil.normalizeParameters(OAuthUtil.java:158) at com.google.gdata.client.authn.oauth.OAuthUtil.getSignatureBaseString(OAuthUtil.java:81) at com.google.gdata.client.authn.oauth.OAuthHelper.addCommonRequestParameters(OAuthHelper.java:649) at com.google.gdata.client.authn.oauth.OAuthHelper.getOAuthUrl(OAuthHelper.java:592) at com.google.gdata.client.authn.oauth.OAuthHelper.getUnauthorizedRequestToken(OAuthHelper.java:276) at com.projectrix.controller.OAuthController.authorize(OAuthController.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Method.java:40) Help!

    Read the article

  • Calling QAxWidget method outside of the GUI thread

    - by user304361
    I'm beginning to wonder if this is impossible, but I thought I'd ask in case there's a clever way to get around the problems I'm having. I have a Qt application that uses an ActiveX control. The control is held by a QAxWidget, and the QAxWidget itself is contained within another QWidget (I needed to add additional signals/slots to the widget, and I couldn't just subclass QAxWidget because the class doesn't permit that). When I need to interact with the ActiveX control, I call a method of the QWidget, which in turn calls the dynamicCall method of the QAxWidget in order to invoke the appropriate method of the ActiveX control. All of that is working fine. However, one method of the ActiveX control takes several seconds to return. When I call this method, my entire GUI locks up for a few seconds until the method returns. This is undesirable. I'd like the ActiveX control to go off and do its processing by itself and come back to me when it's done without locking up the Qt GUI. I've tried a few things without success: Creating a new QThread and calling QAxWidget::dynamicCall from the new thread Connecting a signal to the appropriate slot method of the QAxWidget and calling the method using signals/slots instead of using dynamicCall Calling QAxWidget::dynamicCall using QtConcurrent::run Nothing seems to affect the behavior. No matter how or where I use dynamicCall (or trigger the appropriate slot of the QAxWidget), the GUI locks until the ActiveX control completes its operation. Is there any way to detach this ActiveX processing from the Qt GUI thread so that the GUI doesn't lock up while the ActiveX control is running a method? Is there something clever I can do with QAxBase or QAxObject to get my desired results?

    Read the article

  • What is the difference between using MD5.Create and MD5CryptoServiceProvider?

    - by byte
    In the .NET Framework there are a couple of ways to calculate an MD5 hash, it seems; however, there is something I don't understand. What is the distinction between the following? What sets them apart from each other? They seem to produce identical results: public static string GetMD5Hash(string str) { MD5CryptoServiceProvider md5 = new MD5CryptoServiceProvider(); byte[] bytes = ASCIIEncoding.Default.GetBytes(str); byte[] encoded = md5.ComputeHash(bytes); StringBuilder sb = new StringBuilder(); for (int i = 0; i < encoded.Length; i++) sb.Append(encoded[i].ToString("x2")); return sb.ToString(); } public static string GetMD5Hash2(string str) { System.Security.Cryptography.MD5 md5 = System.Security.Cryptography.MD5.Create(); byte[] bytes = Encoding.Default.GetBytes(str); byte[] encoded = md5.ComputeHash(bytes); StringBuilder sb = new StringBuilder(); for (int i = 0; i < encoded.Length; i++) sb.Append(encoded[i].ToString("x2")); return sb.ToString(); }
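    As far as I know, MD5 is the abstract base class and MD5.Create() is a factory that, on a default .NET installation, hands back an MD5CryptoServiceProvider anyway (the mapping is configurable machine-wide, so treat the printed type name as an observation rather than a guarantee). A small sketch that makes the relationship visible:

      using System;
      using System.Security.Cryptography;
      using System.Text;

      class Md5FactoryCheck
      {
          static void Main()
          {
              using (MD5 fromFactory = MD5.Create())
              using (MD5 explicitCsp = new MD5CryptoServiceProvider())
              {
                  // On a default installation this prints System.Security.Cryptography.MD5CryptoServiceProvider.
                  Console.WriteLine(fromFactory.GetType().FullName);

                  byte[] input = Encoding.Default.GetBytes("hello world");
                  string a = BitConverter.ToString(fromFactory.ComputeHash(input));
                  string b = BitConverter.ToString(explicitCsp.ComputeHash(input));
                  Console.WriteLine(a == b ? "Hashes match" : "Hashes differ");
              }
          }
      }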

    Read the article

  • python eval weirdness

    - by amadain
    Hi Folks I have the following code in one of my classes along with checks when the code does not eval: filterParam="self.recipientMSISDN==tmpBPSS.split('_')[3].split('#')[0] and self.recipientIMSI==tmpBPSS.split('_')[3].split('#')[1]" if eval(filterParam): print "Evalled" else: print "Not Evalled\nfilterParam\n'%s'\ntmpBPSS\n'%s'\nself.recipientMSISDN\n'%s'\nself.recipientIMSI\n'%s'" % (filterParam, tmpBPSS, self.recipientMSISDN, self.recipientIMSI) I am not getting anything to 'eval'. Here are the results: Not Evalled filterParam 'self.recipientMSISDN==tmpBPSS.split('_')[3].split('#')[0] and self.recipientIMSI==tmpBPSS.split('_')[3].split('#')[1]' tmpBPSS 'bprm_DAILY_MO_919844000039#892000000' self.recipientMSISDN '919844000039' self.recipientIMSI '892000000' So I used the outputs from the above to check the code in a python shell and as you can see the code evalled correctly: >>> filterParam="recipientMSISDN==tmpBPSS.split('_')[3].split('#')[0] and recipientIMSI==tmpBPSS.split('_')[3].split('#')[1]" >>> tmpBPSS='bprm_DAILY_MO_919844000039#892000000' >>> recipientMSISDN='919844000039' >>> recipientIMSI='892000000' >>> if eval(filterParam): ... print "Evalled" ... else: ... print "Not Evalled" ... Evalled Am I off my rocker or what am I missing? A

    Read the article

  • Which is better Java programming practice for looping up to an int value: a converted for-each loop or a traditional for loop?

    - by Arvanem
    Hi folks, Given the need to loop up to an arbitrary int value, is it better programming practice to convert the value into an array and for-each the array, or just use a traditional for loop? FYI, I am calculating the number of 5 and 6 results ("hits") in multiple throws of 6-sided dice. My arbitrary int value is the dicePool which represents the number of multiple throws. As I understand it, there are two options: Convert the dicePool into an array and for-each the array: public int calcHits(int dicePool) { int[] dp = new int[dicePool]; for (Integer a : dp) { // call throwDice method } } Use a traditional for loop. public int calcHits(int dicePool) { for (int i = 0; i < dicePool; i++) { // call throwDice method } } I apologise for the poor presentation of the code above (for some reason the code button on the Ask Question page is not doing what it should). My view is that option 1 is clumsy code and involves unnecessary creation of an array, even though the for-each loop is more efficient than the traditional for loop in Option 2. Thanks in advance for any suggestions you might have.

    Read the article

  • How does PHP PDO work internally?

    - by Rachel
    I want to use PDO in my application, but before that I want to understand how PDOStatement->fetch and PDOStatement->fetchAll work internally. For my application, I want to do something like "SELECT * FROM myTable" and write the results to a CSV file; the table has around 90000 rows of data. My question is, if I use PDOStatement->fetch as I am using it here: // First, prepare the statement, using placeholders $query = "SELECT * FROM tableName"; $stmt = $this->connection->prepare($query); // Execute the statement $stmt->execute(); var_dump($stmt->fetch(PDO::FETCH_ASSOC)); while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) { echo "Hi"; // Export every row to a file fputcsv($data, $row); } will the result of every fetch from the database be stored in memory? Meaning that when I do a second fetch, memory would hold the data of the first fetch as well as the data for the second fetch. And so if I have 90000 rows of data and I fetch one row at a time, is memory updated to take each new fetch result without removing the results from previous fetches, so that for the last fetch memory would already have 89999 rows of data? Is this how PDOStatement::fetch works? Performance-wise, how does this stack up against PDOStatement::fetchAll?

    Read the article

  • Accuracy of OpenGL ES Instrument

    - by Rob Jones
    I'm developing a game for the iPhone. I've decided that 30FPS is plenty so I've written some code that only allows the app to present the render buffer every 1/30 of a second. When I tried to verify this with Instruments I got varying information. On an iPod Touch (2009 edition, 32G) it reports 30 FPS for Core Animation Frames Per Second. On an iPhone 3G I get wildly varying results. And not just less than 30 FPS. I see 30 FPS on a regular basis. It actually seems to hang closer to 36-39. To investigate this anomaly I added my own FPS counter to the app and update it once per second. It stays right at 29 FPS on both devices. So, does anyone have any suggestions as to what might be going on? I expect Instruments to be accurate, so it really concerns me that it appears inaccurate. It makes me think I have a bug somewhere, but I sure can't find it.

    Read the article

  • As a newbie, where should I go if I want to create a small GUI program?

    - by jimbmk
    Hello, I'm a newbie with a little experience writing in BASIC, Python and, of all things, a smidgeon of assembler (as part of a videogame ROM hack). I wanted to create a small tool, with a GUI, for modifying the hex values at particular points in a particular file. What I'm looking for is the ability to create a small GUI program that I can distribute as an EXE (or at least a standalone directory). I'm not keen on the idea of the .NET languages, because I don't want to force people to download a massive .NET framework package. I currently have Python with IDLE and Boa Constructor set up, and the application runs there. I've tried looking up information on compiling a Python app that relies on wxWidgets, but the search results and the information I've found have been confusing, or just completely incomprehensible. My questions are: Is Python a good language to use for this sort of project? If I use Py2Exe, will wxWidgets already be included? Or will my users have to somehow install wxWidgets on their machines? Am I right in thinking that Py2Exe just produces a standalone directory, 'dist', that has the necessary files for the user to just double click and run the application? If the program just relies upon Tkinter for GUI stuff, will that be included in the EXE Py2Exe produces? If so, are there any 'visual' GUI builders / IDEs for Python with only Tkinter? Thank you for your time, JBMK

    Read the article
