Search Results

Search found 19913 results on 797 pages for 'bit packing'.

Page 455/797 | < Previous Page | 451 452 453 454 455 456 457 458 459 460 461 462  | Next Page >

  • Trouble parsing NMEA data from Serial Port.

    - by rross
    I'm retrieving NMEA sentences from a serial GPS. The strings are coming across as I would expect. The problem is that when parsing a sentence like this: $GPRMC,040302.663,A,3939.7,N,10506.6,W,0.27,358.86,200804,,*1A I use a simple bit of code to make sure I have the right sentence: string[] Words = sBuffer.Split(','); foreach (string item in Words) { if (item == "$GPRMC") { return "Correct Sentence"; } else { return "Incorrect Sentence"; } } I added the return in that location for the example. I have printed the split results to a text box and have seen that $GPRMC is indeed coming across in the item variable at some point. If the string is coming across, why won't the if statement catch it? Is it the $? How can I troubleshoot this?
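    A minimal C# sketch of one way to narrow this down (not the poster's code; names are illustrative). The else branch returns on the very first comma-separated field, and serial data often arrives in partial chunks, so the buffer may not start at the beginning of a sentence. Checking only the first field of a complete, trimmed line avoids both problems:

        // Assumes sBuffer holds one complete NMEA line; Trim() drops stray
        // CR/LF characters left over from reading the serial port.
        static string CheckSentence(string sBuffer)
        {
            string[] words = sBuffer.Trim().Split(',');
            if (words.Length > 0 && words[0] == "$GPRMC")
            {
                return "Correct Sentence";
            }
            return "Incorrect Sentence";
        }

    Printing the first field wrapped in delimiters (e.g. "[" + words[0] + "]") is also a quick way to spot invisible characters such as a leading line feed.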

    Read the article

  • django forms doubt

    - by webvulture
    I am a bit confused about forms in Django. The information for the form (a poll, i.e. the poll question and options) comes from some db table - table1, or say class1 in models. Now the vote from this poll is to be captured, which is another model, say class2. So I am just getting confused with the whole flow of forms here, I think. How will the data be captured into the class2 table? I was trying something like this: def blah1(): get_data_from_db_table_1() x = blah2Form() render_to_response('blah.html', {...})

    Read the article

  • Embedding PDF documents into websites

    - by papermate
    I need to embed some PDF documents into a website. The last time I did this, I used a jQuery lightbox to pop up an iframe with the PDF document as the URL. The client's PDF viewer would then take care of the rest. Apparently though, that was a bit buggy in some other people's browsers. I guess it was due to the large PDF file sizes and the effort it took for their computers to fire up Adobe. So I'm after ideas on how to go about this. How do you guys embed your PDFs into websites? Or do you just stick to adding a download link?

    Read the article

  • Diff between $.ajaxSetup and $.ajax in jquery

    - by Deeptechtons
    The title is a bit misleading, but I would like to know what happens internally (during the ajax request) when I execute Code 1 and Code 2 in turn. Code 1: $.ajax({url:"1.aspx/HelloWorld",type:"GET",dataType:"json",contentType:"application/json"}); Code 2: $.ajaxSetup({ contentType: "application/json", dataType: "json" }); $.get("1.aspx/HelloWorld","",$.noop,"json"); I ask this because the method HelloWorld in page 1.aspx is executed correctly when I run Code 1, but the second one refuses to invoke the page method. I have set the contentType and data as expected, but the second request in Code 2 still refuses to invoke the method. Does anyone have a reason for this?

    Read the article

  • How Does DotNetNuke Stack Up For SEO? E-Commerce?

    - by user326502
    I've heard that DotNetNuke takes a bit of a hit for Search Engine Optimization. I'm not criticizing the platform, by the way; I love DNN. This is just what I've heard. As I understand it, the impact is from repetitive content, table-based layouts, and lots of extra markup. I've got a friend who would like to start an e-commerce site using DNN and some modules from Snowcovered. I was just wondering whether DNN would be a good platform to choose. The idea is attractive because of the ease with which a DNN commerce system can be deployed. Search-engine friendly URLs aren't the problem - the modules do that, it's whether DNN as a whole would be a good platform for this. Thanks very much for any help or advice.

    Read the article

  • Best Practice to populate Fact and Dimension Tables from Transactional Flat DB

    - by alex25
    Hi! I want to populate a star schema / cube in SSIS / SSAS. I have prepared all my dimension tables and my fact table, primary keys etc. The source is a 'flat' (item-level) table, and my problem is now how to split it up and get the data from that one table into the respective dimension and fact tables. I did a fair bit of googling but couldn't find a satisfying solution to the problem. One would imagine that this is a rather common problem/situation in BI development?! Thanks, alexl
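    For what it's worth, one common pattern - sketched below with made-up table and column names, and assuming plain SQL can be run against the staged flat table rather than doing everything in SSIS data flows - is to load each dimension with SELECT DISTINCT first, then load the fact table by joining the flat table back to the dimensions to pick up their surrogate keys:

        // Hypothetical names: FlatSales (staging), DimCustomer, DimDate, FactSales.
        const string LoadCustomerDim = @"
            INSERT INTO DimCustomer (CustomerName)
            SELECT DISTINCT f.CustomerName
            FROM FlatSales f
            WHERE NOT EXISTS (SELECT 1 FROM DimCustomer d
                              WHERE d.CustomerName = f.CustomerName);";

        const string LoadFact = @"
            INSERT INTO FactSales (CustomerKey, DateKey, Quantity, Amount)
            SELECT d.CustomerKey, dd.DateKey, f.Quantity, f.Amount
            FROM FlatSales f
            JOIN DimCustomer d ON d.CustomerName = f.CustomerName
            JOIN DimDate dd    ON dd.FullDate    = f.SaleDate;";

    Inside SSIS itself, the rough equivalent is an Execute SQL task (or data flow) per dimension, followed by a fact-load data flow that uses Lookup components against the dimensions to resolve the surrogate keys.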

    Read the article

  • Write a MAT file without using MATLAB headers and libraries.

    - by YuppieNetworking
    Hello all, I have some data that I would like to save to a MAT file (version 4 or 5, or any version, for that matter). The catch: I wanted to do this without using MATLAB libraries, since this code will not necessarily run on a machine with MATLAB. My program uses Java and C++, so any existing library in those languages that achieves this could help me out... I did some research but did not find anything in Java/C++. However, I found that SciPy on Python achieves this with mio4.py or mio5.py. I thought about implementing this in Java or C++, but it seems a bit outside my time schedule. So the question is: are there any libraries in Java or C/C++ that permit saving MAT files without using MATLAB libraries? Thanks a lot
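    For reference, the Level 4 (v4) MAT-file layout is simple enough to write by hand, so a third-party library isn't strictly required. Below is a hedged C# sketch of that idea (the same byte layout applies from Java or C++): a 20-byte header of five little-endian int32s (type, mrows, ncols, imagf, namlen), the null-terminated variable name, then the values column by column as 8-byte doubles. Worth verifying against the MathWorks MAT-file format documentation before relying on it.

        using System.IO;

        static class MatWriter
        {
            // Writes a single real double matrix as a v4 MAT-file.
            public static void WriteMatrix(string path, string name, double[,] values)
            {
                int rows = values.GetLength(0);
                int cols = values.GetLength(1);

                using (var w = new BinaryWriter(File.Create(path)))
                {
                    w.Write(0);               // type: little-endian, double, numeric matrix
                    w.Write(rows);            // mrows
                    w.Write(cols);            // ncols
                    w.Write(0);               // imagf: no imaginary part
                    w.Write(name.Length + 1); // namlen, including the trailing null

                    foreach (char c in name)
                        w.Write((byte)c);
                    w.Write((byte)0);         // null terminator for the name

                    // v4 stores the data in column-major order
                    for (int c = 0; c < cols; c++)
                        for (int r = 0; r < rows; r++)
                            w.Write(values[r, c]);
                }
            }
        }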

    Read the article

  • TelerikProfileProvider with custom Membership Provider

    - by Larsenal
    I've setup two membership providers: my custom provider and the Sitefinity provider. My custom membership provider is set as the default. I want to use Sitefinity's Profile provider for both sets of users. However, the profile provider only seems to work for the users that I pull out of the Sitefinity membership provider. After poking around with Reflector a bit, it seems that the Telerik Profile Provider assumes that the username exists in its own DB. User userByName = this.Application.GetUserByName(userName); if (userByName != null) { // magic happens here... } All the magic only happens if it was able to retrieve the user locally. Seems to violate the principles of the providers. Shouldn't I be able to arbitrarily add properties to any user regardless of the membership provider? (I've also posted this on the Sitefinity forum, but haven't got a response yet. SO has spoiled me. I've come to expect an answer in minutes, not days.)

    Read the article

  • science.js’s loess() output is identical to input

    - by user3710111
    Rendered project available here. The line is supposed to be a trend line (as rendered with LOESS), but it merely follows each data point instead. I am no stats wonk, so maybe it makes sense that a LOESS function’s output would match the input as seen in the above example, but it strikes me as being wrong. Here is the relevant bit of code: var loess = science.stats.loess().bandwidth(.2); var xVal = data.map(function(d) { return d.date; }); var yVal = data.map(function(d) { return d.A; }); var loessData = loess([xVal], [yVal])[0]; console.log(yVal); console.log(loessData);

    Read the article

  • Converting Oracle date arithmetic to work in HSQLDB

    - by JBristow
    I'm trying to spot-test an Oracle-backed database with HSQLDB and DbUnit, but I've run into a snag. The problem is with the following EJB-QL (simplified a bit): SELECT o FROM Offer o WHERE :nowTime BETWEEN o.startDate AND o.startDate + 7 This seems to only work in Oracle's version of SQL. What's the easiest way for me to convert this to work in both HSQLDB and Oracle? Assume that changing the two BETWEEN arguments to named parameters is a very difficult refactor, so I'm going to favor answers that provide a more standardized analog to o.startdate + 7 EDIT: After doing some more research, it looks like Oracle converts the above snippet to o.startdate + INTERVAL '7' DAY which is apparently more standard, but doesn't work in HSQLDB.

    Read the article

  • variable in variable in batch and delayed expansion

    - by rezna
    Hi, I'm trying to use a variable within a variable in conjunction with delayed expansion, but still no luck. SETLOCAL EnableDelayedExpansion SET ERROR_COMMAND=exit /B ^!ERRORLEVEL^! This is my last try. I want to set up an ERROR_COMMAND to be called when one of the steps in the batch file crashes. The command is supposed to be: IF ERRORLEVEL 1 !ERROR_COMMAND! or IF ERRORLEVEL 1 %ERROR_COMMAND% The thing is, I'm not able to find out how to SET the ERROR_COMMAND variable properly, so that ERRORLEVEL is not evaluated at the time of assignment, but at the time the variable is evaluated. Of course I can copy&paste the code all over the batch file, but using the variable just seems a bit prettier... Anyone? Thanks, Milan

    Read the article

  • Having trouble with Regular Expression and Ampersand

    - by ajax81
    Hi All, I'm having a bit of trouble with regexes (C#, ASP.NET), and I'm pretty sure I'm doing something fundamentally wrong. My task is to bind a dynamically created gridview to a datasource, and then iterate through a column in the grid, looking for the string "A&I". An example of what the data in the cell (in a template column) looks like is: Name: John Doe Phone: 555-123-1234 Email: [email protected] Dept: DHS-A&I-MRB Here's the code I'm using to find the string value: foreach(GridViewRow gvrow in gv.Rows) { Match m = Regex.Match(gvrow.Cells[6].Text,"A&I"); if(m.Success) { gvrow.ForeColor = System.Drawing.Color.Red; } } I'm not having any luck with any of these variations: "A&I" "[A][&][I]" But when I strictly use "&", the row does turn red. Any suggestions? Thanks, Dan
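    One thing worth checking (a guess, not a confirmed diagnosis): GridView cell text is frequently HTML-encoded, so the cell may actually contain "A&amp;I", in which case a literal "A&I" match never succeeds while a bare "&" still matches the encoded form. A hedged sketch that decodes first, and uses a plain substring test since no regex features are needed here:

        // Requires: using System.Web;  (for HttpUtility)
        foreach (GridViewRow gvrow in gv.Rows)
        {
            // Decode "&amp;" back to "&" before comparing
            string cellText = HttpUtility.HtmlDecode(gvrow.Cells[6].Text);
            if (cellText.Contains("A&I"))
            {
                gvrow.ForeColor = System.Drawing.Color.Red;
            }
        }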

    Read the article

  • Oracle .NET Provider DLL hell

    - by Pablo Santa Cruz
    I am currently developing on a Win7-32bits computer. Everything works fine. It's a ASP.NET application. I was able to use Microsoft's Oracle deprecated .NET provider to connect to Oracle (using 32 bit instant client) and also ODP.NET. No problems at all. Application runs fine. The problem comes when I deploy it to IIS7 on Windows 2008 Server 64bit computer. I can't get Microsoft's deprecated .NET provider or ODP.NET to work easily. Is there a straightforward way to use a 32bit based ODP.NET or Microsoft's Oracle deprecated .NET provider in Windows 2008 Server 64bits? DLL hell here! Thanks.

    Read the article

  • What is the best way to set up a PHP development environment on a Mac?

    - by Edward Tanguay
    When I do PHP, I use XAMPP to set up a development environment on Windows, then upload to Linux servers; that works very well. I'm now passing on a PHP project to a person who has a Mac, so he needs a local PHP dev environment. I noticed XAMPP has a version for Mac, which I will recommend. But knowing that the Mac is always a bit different, has anyone used any other easy PHP environment setup tool for the Mac? I could even imagine that the Mac solves this issue more elegantly, e.g. by having a web server ready to go upon first boot. What is the best way to set up a PHP development environment on a Mac?

    Read the article

  • What does msvc 6 throw when an integer divide by zero occurs?

    - by EvilTeach
    I have been doing a bit of experimenting, and have discovered that an exception is being thrown, when an integer divide by zero occurs. #include <iostream> #include <stdexcept> using namespace std; int main ( void ) { try { int x = 3; int y = 0; int z = x / y; cout << "Didn't throw or signal" << endl; } catch (std::exception &e) { cout << "Caught exception " << e.what() << endl; } return 0; } Clearly it is not throwing a std::exception. What else might it be throwing?

    Read the article

  • IE7 float/clear bug

    - by alex
    I am having the weirdest behavior in IE7 on this page. I have a clear: left on #moored that makes sure that #content-container will expand to the height of the float: left'd #content. Of course, Firefox and IE8 do it right. The weird thing though, is in IE7, it appears that #content-container is automatically expanding to contain the #secondary-content, which it shouldn't. Even weirder, when I view in IE8 with developer tools, the boundary boxes seem to show that the #content-container is expanding to the height of the #secondary-container, even though the background: #fff tells a different story. The background goes to where it should, and the void is now black. I've Googled a bit and came across heaps of similar float bugs, but I don't think I've found one which offers a fix for this circumstance yet. Can anyone please help me?

    Read the article

  • What is the easiest way to get an embedded upload progress bar using Ruby/Sinatra/Haml/Passenger/nginx?

    - by mmr
    I have a website where people can upload 30+mb of data in a single block, and I want to be able to show them the progress of their upload without causing the web page to become unresponsive, similar to how flash uploads work in gmail. There's this question here, but I don't know if that progress bar is embedded in the page or if it's using the browser's progress bar. I'm also a bit of a web newb, so I'm not sure if it's the 'easiest'. I asked the swfupload guys how to do this here, and the answer I got is 'this tool requires some knowledge to use it' without giving me much help in figuring out where to get started. I also asked this question on ServerFault, and got no response, so maybe that was the wrong place to ask. I'm all for learning new things and so forth, but there are a lot of potential pathways to take here. Where should I start, and what do I need to know to make everything work with sinatra, haml, ruby, passenger, and nginx? Thanks!

    Read the article

  • RSS parsing last build Date. Fastest way to do so please.

    - by Paul
    Dim myRequest As System.Net.WebRequest = System.Net.WebRequest.Create(url) Dim myResponse As System.Net.WebResponse = myRequest.GetResponse() Dim rssStream As System.IO.Stream = myResponse.GetResponseStream() Dim rssDoc As New System.Xml.XmlDocument() Try rssDoc.Load(rssStream) Catch nosupport As NotSupportedException Throw nosupport End Try Dim rssItems As System.Xml.XmlNodeList = rssDoc.SelectNodes("rss/channel") 'For i As Integer = 0 To rssItems.Count - 1 Dim rssDetail As System.Xml.XmlNode rssDetail = rssItems.Item(0).SelectSingleNode("lastBuildDate") Folks this is what I'm using to parse an RSS feed for the last updated time. Is there a quicker way? Speed seems to be a bit slow on it as it pulls down the entire feed before parsing.
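    If the goal is only the last-updated time, one possible shortcut - sketched in C#, though it translates directly to VB.NET, and it assumes the feed's server actually sets a Last-Modified header, which not every feed does - is to request the headers only instead of downloading and parsing the whole document:

        // Requires: using System; using System.Net;
        // HEAD request: no response body is transferred, just the headers.
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "HEAD";
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // Only meaningful if the server sent a Last-Modified header.
            DateTime lastModified = response.LastModified;
            Console.WriteLine(lastModified);
        }

    If the server doesn't expose Last-Modified, stopping the parse at rss/channel/lastBuildDate (as in the code above) is about as cheap as it gets, since the feed has to be downloaded either way.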

    Read the article

  • Parse values from a text file in C

    - by Mohit Deshpande
    Say I have written to a text file in this format: key1/value1 key2/value2 akey/withavalue anotherkey/withanothervalue I have a linked list like: struct Node { char *key; char *value; struct Node *next; }; to hold the values. How would I read key1 and value1? I was thinking of reading it line by line into a buffer and using strtok(buffer, "/"). Would that work? What other ways could work, maybe a bit faster or less prone to error? Please include a code sample if you can!

    Read the article

  • What trivial real-life example do you use to explain programming to total non-programmers?

    - by anon
    Programmers seem to live in a world of their own (as this site indicates), with their own vibrant culture - and their own premises and vocabulary. Once we've been in the field for a bit, we take a lot of things for granted. But I'm often faced with the question "What do you do?" Or "What is programming?" I generally try to answer this with a small, often trivial, real-world example of how programming is prevalent in our everyday lives and keeps things running. The example I use most often is an elevator - someone has to program the logic of that... And I've seen elevators that are "smart" and ones that are quite backwards and foolish. (And you can easily understand if/decision and looping from that... incorporates a lot of important programming concepts in a very small example.) I've sometimes heard people use traffic lights as an example. What example do you / would you use to explain the concept of programming to someone completely clueless?

    Read the article

  • Inside the Concurrent Collections: ConcurrentDictionary

    - by Simon Cooper
    Using locks to implement a thread-safe collection is rather like using a sledgehammer - unsubtle, easy to understand, and tends to make any other tool redundant. Unlike the previous two collections I looked at, ConcurrentStack and ConcurrentQueue, ConcurrentDictionary uses locks quite heavily. However, it is careful to wield locks only where necessary to ensure that concurrency is maximised. This will, by necessity, be a higher-level look than my other posts in this series, as there is quite a lot of code and logic in ConcurrentDictionary. Therefore, I do recommend that you have ConcurrentDictionary open in a decompiler to have a look at all the details that I skip over.

    The problem with locks

    There are several things to bear in mind when using locks, as encapsulated by the lock keyword in C# and the System.Threading.Monitor class in .NET (if you're unsure as to what lock does in C#, I briefly covered it in my first post in the series):

    Locks block threads
    The most obvious problem is that threads waiting on a lock can't do any work at all. No preparatory work, no 'optimistic' work like in ConcurrentQueue and ConcurrentStack, nothing. It sits there, waiting to be unblocked. This is bad if you're trying to maximise concurrency.

    Locks are slow
    Whereas most of the methods on the Interlocked class can be compiled down to a single CPU instruction, ensuring atomicity at the hardware level, taking out a lock requires some heavy lifting by the CLR and the operating system. There's quite a bit of work required to take out a lock, block other threads, and wake them up again. If locks are used heavily, this impacts performance.

    Deadlocks
    When using locks there's always the possibility of a deadlock - two threads, each holding a lock, each trying to acquire the other's lock. Fortunately, this can be avoided with careful programming and structured lock-taking, as we'll see.

    So, it's important to minimise where locks are used to maximise the concurrency and performance of the collection.

    Implementation

    As you might expect, ConcurrentDictionary is similar in basic implementation to the non-concurrent Dictionary, which I studied in a previous post. I'll be using some concepts introduced there, so I recommend you have a quick read of it. So, if you were implementing a thread-safe dictionary, what would you do? The naive implementation is to simply have a single lock around all methods accessing the dictionary. This would work, but doesn't allow much concurrency. Fortunately, the bucketing used by Dictionary allows a simple but effective improvement to this - one lock per bucket. This allows different threads modifying different buckets to do so in parallel. Any thread making changes to the contents of a bucket takes the lock for that bucket, ensuring those changes are thread-safe. The method that maps each bucket to a lock is the GetBucketAndLockNo method:

        private void GetBucketAndLockNo(
            int hashcode, out int bucketNo, out int lockNo, int bucketCount)
        {
            // the bucket number is the hashcode (without the initial sign bit)
            // modulo the number of buckets
            bucketNo = (hashcode & 0x7fffffff) % bucketCount;
            // and the lock number is the bucket number modulo the number of locks
            lockNo = bucketNo % m_locks.Length;
        }

    However, this does require some changes to how the buckets are implemented. The 'implicit' linked list within a single backing array used by the non-concurrent Dictionary adds a dependency between separate buckets, as every bucket uses the same backing array.
    Instead, ConcurrentDictionary uses a strict linked list on each bucket. This ensures that each bucket is entirely separate from all other buckets; adding or removing an item from a bucket is independent of any changes to other buckets.

    Modifying the dictionary

    All the operations on the dictionary follow the same basic pattern:

        void AlterBucket(TKey key, ...)
        {
            int bucketNo, lockNo;
         1: GetBucketAndLockNo(
                key.GetHashCode(), out bucketNo, out lockNo, m_buckets.Length);
         2: lock (m_locks[lockNo])
            {
         3:     Node headNode = m_buckets[bucketNo];
         4:     Mutate the node linked list as appropriate
            }
        }

    For example, when adding another entry to the dictionary, you would iterate through the linked list to check whether the key exists already, and add the new entry as the head node. When removing items, you would find the entry to remove (if it exists), and remove the node from the linked list. Adding, updating, and removing items all follow this pattern.

    Performance issues

    There is a problem we have to address at this point. If the number of buckets in the dictionary is fixed in the constructor, then the performance will degrade from O(1) to O(n) when a large number of items are added to the dictionary. As more and more items get added to the linked lists in each bucket, the lookup operations will spend most of their time traversing a linear linked list. To fix this, the buckets array has to be resized once the number of items in each bucket has gone over a certain limit. (In ConcurrentDictionary this limit is when the size of the largest bucket is greater than the number of buckets for each lock. This check is done at the end of the TryAddInternal method.) Resizing the bucket array and re-hashing everything affects every bucket in the collection. Therefore, this operation needs to take out every lock in the collection. Taking out multiple locks at once inevitably summons the spectre of the deadlock; two threads each hold a lock, each trying to acquire the other's lock. How can we eliminate this? Simple - ensure that threads never try to 'swap' locks in this fashion. When taking out multiple locks, always take them out in the same order, and always take out all the locks you need before starting to release them. In ConcurrentDictionary, this is controlled by the AcquireLocks, AcquireAllLocks and ReleaseLocks methods. Locks are always taken out and released in the order they are in the m_locks array, and locks are all released right at the end of the method in a finally block. At this point, it's worth pointing out that the locks array is never re-assigned, even when the buckets array is increased in size. The number of locks is fixed in the constructor by the concurrencyLevel parameter. This simplifies programming the locks; you don't have to check if the locks array has changed or been re-assigned before taking out a lock object. And you can be sure that when a thread takes out a lock, another thread isn't going to re-assign the lock array. This would create a new series of lock objects, thus allowing another thread to ignore the existing locks (and any threads controlling them), breaking thread-safety.

    Consequences of growing the array

    Just because we're using locks doesn't mean that race conditions aren't a problem. We can see this by looking at the GrowTable method.
    The operation of this method can be boiled down to:

        private void GrowTable(Node[] buckets)
        {
            try
            {
             1: Acquire first lock in the locks array
                // this causes any other thread trying to take out
                // all the locks to block because the first lock in the array
                // is always the one taken out first

                // check if another thread has already resized the buckets array
                // while we were waiting to acquire the first lock
             2: if (buckets != m_buckets) return;

             3: Calculate the new size of the backing array
             4: Node[] array = new array[size];
             5: Acquire all the remaining locks
             6: Re-hash the contents of the existing buckets into array
             7: m_buckets = array;
            }
            finally
            {
             8: Release all locks
            }
        }

    As you can see, there's already a check for a race condition at step 2, for the case when the GrowTable method is called twice in quick succession on two separate threads. One will successfully resize the buckets array (blocking the second in the meantime), when the second thread is unblocked it'll see that the array has already been resized & exit without doing anything. There is another case we need to consider; looking back at the AlterBucket method above, consider the following situation:

    Thread 1 calls AlterBucket; step 1 is executed to get the bucket and lock numbers.
    Thread 2 calls GrowTable and executes steps 1-5; thread 1 is blocked when it tries to take out the lock in step 2.
    Thread 2 re-hashes everything, re-assigns the buckets array, and releases all the locks (steps 6-8).
    Thread 1 is unblocked and continues executing, but the calculated bucket and lock numbers are no longer valid.

    Between calculating the correct bucket and lock number and taking out the lock, another thread has changed where everything is. Not exactly thread-safe. Well, a similar problem was solved in ConcurrentStack and ConcurrentQueue by storing a local copy of the state, doing the necessary calculations, then checking if that state is still valid. We can use a similar idea here:

        void AlterBucket(TKey key, ...)
        {
            while (true)
            {
                Node[] buckets = m_buckets;
                int bucketNo, lockNo;
                GetBucketAndLockNo(
                    key.GetHashCode(), out bucketNo, out lockNo, buckets.Length);
                lock (m_locks[lockNo])
                {
                    // if the state has changed, go back to the start
                    if (buckets != m_buckets) continue;

                    Node headNode = m_buckets[bucketNo];
                    Mutate the node linked list as appropriate
                }
                break;
            }
        }

    TryGetValue and GetEnumerator

    And so, finally, we get onto TryGetValue and GetEnumerator. I've left these to the end because, well, they don't actually use any locks. How can this be? Whenever you change a bucket, you need to take out the corresponding lock, yes? Indeed you do. However, it is important to note that TryGetValue and GetEnumerator don't actually change anything. Just as immutable objects are, by definition, thread-safe, read-only operations don't need to take out a lock because they don't change anything. All lockless methods can happily iterate through the buckets and linked lists without worrying about locking anything. However, this does put restrictions on how the other methods operate. Because there could be another thread in the middle of reading the dictionary at any time (even if a lock is taken out), the dictionary has to be in a valid state at all times. Every change to state has to be made visible to other threads in a single atomic operation (all relevant variables are marked volatile to help with this).
    This restriction ensures that whatever the reading threads are doing, they never read the dictionary in an invalid state (eg items that should be in the collection temporarily removed from the linked list, or reading a node that has had its key & value removed before the node itself has been removed from the linked list). Fortunately, all the operations needed to change the dictionary can be done in that way. Bucket resizes are made visible when the new array is assigned back to the m_buckets variable. Any additions or modifications to a node are done by creating a new node, then splicing it into the existing list using a single variable assignment. Node removals are simply done by re-assigning the node's m_next pointer. Because the dictionary can be changed by another thread during execution of the lockless methods, the GetEnumerator method is liable to return dirty reads - changes made to the dictionary after GetEnumerator was called, but before the enumeration got to that point in the dictionary. It's worth listing at this point which methods are lockless, and which take out all the locks in the dictionary to ensure they get a consistent view of the dictionary:

    Lockless: TryGetValue, GetEnumerator, the indexer getter, ContainsKey.
    Takes out every lock (lockfull?): Count, IsEmpty, Keys, Values, CopyTo, ToArray.

    Concurrent principles

    That covers the overall implementation of ConcurrentDictionary. I haven't even begun to scratch the surface of this sophisticated collection. That I leave to you. However, we've looked at enough to be able to extract some useful principles for concurrent programming:

    Partitioning: when using locks, the work is partitioned into independent chunks, each with its own lock. Each partition can then be modified concurrently to other partitions.
    Ordered lock-taking: when a method does need to control the entire collection, locks are taken and released in a fixed order to prevent deadlocks.
    Lockless reads: read operations that don't care about dirty reads don't take out any lock; the rest of the collection is implemented so that any reading thread always has a consistent view of the collection.

    That leads us to the final collection in this little series - ConcurrentBag. Lacking a non-concurrent analogy, it is quite different to any other collection in the class libraries. Prepare your thinking hats!
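    As a rough illustration of the ordered lock-taking principle described above (a simplified sketch, not the actual BCL source), acquiring the locks strictly in ascending array order and releasing them from a finally block means two threads can never end up each holding a lock the other one needs:

        // using System.Threading;
        // Members of the dictionary class; m_locks is fixed at construction
        // and never re-assigned, as noted above.
        private readonly object[] m_locks;

        private void AcquireAllLocks(ref int locksAcquired)
        {
            // Always walk the array from index 0 upwards - no thread can hold
            // lock 5 while waiting on lock 3, so lock-ordering deadlocks can't occur.
            for (int i = 0; i < m_locks.Length; i++)
            {
                Monitor.Enter(m_locks[i]);
                locksAcquired++;
            }
        }

        private void ReleaseLocks(int locksAcquired)
        {
            // Called from a finally block; only releases the locks actually taken.
            for (int i = 0; i < locksAcquired; i++)
            {
                Monitor.Exit(m_locks[i]);
            }
        }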

    Read the article

  • How to configure SQLite db in Visual Studio

    - by ChrisC
    I've messed with Access a little bit in the past, had one class on OO theory, and one class on console c++ apps. Now, as a hobby project, I'm undertaking to write an actual app, which will be a database app using System.Data.SQLite and C#. I have the db's table structure planned. I have System.Data.SQLite installed and connected to VS Pro. I entered my tables and columns in VS, but that's where I'm stuck. I really don't know how to finish the db set up so I can start creating queries and testing the db structure. Can someone give me guidance to online resources that will help me learn how to get the db properly set up so I can proceed with testing it? I'm hoping for online resources specific to beginners using C# and System.Data.SQLite, but I'll use the closest I can get. Thanks.
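    To get past the "tables designed but nothing runs yet" stage, a minimal console-style sketch of the usual System.Data.SQLite workflow looks roughly like this (the file, table and column names here are placeholders, not from the question): open a connection to the database file, create a table if it doesn't exist, insert a row, and read it back.

        using System.Data.SQLite;

        class Demo
        {
            static void Main()
            {
                // The file is created automatically if it doesn't exist yet.
                using (var conn = new SQLiteConnection("Data Source=hobby.db"))
                {
                    conn.Open();

                    using (var cmd = new SQLiteCommand(
                        "CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)", conn))
                    {
                        cmd.ExecuteNonQuery();
                    }

                    using (var cmd = new SQLiteCommand(
                        "INSERT INTO items (name) VALUES (@name)", conn))
                    {
                        cmd.Parameters.AddWithValue("@name", "test row");
                        cmd.ExecuteNonQuery();
                    }

                    using (var cmd = new SQLiteCommand("SELECT id, name FROM items", conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            System.Console.WriteLine(reader.GetInt64(0) + ": " + reader.GetString(1));
                    }
                }
            }
        }

    Once a connection string works from code like this, queries can be tested directly against the same database file, which may be most of the setup that's actually missing.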

    Read the article

  • Specify which row to return on SQLite Group By

    - by lozzar
    I'm faced with a bit of a difficult problem. I store all the versions of all documents in a single table. Each document has a unique id, and the version is stored as an integer which is incremented everytime there is a new version. I need a query that will only select the latest version of each document from the database. While using GROUP BY works, it appears that it will break if the versions are not inserted in the database in the order of version (ie. it takes the maximum ROWID which will not always be the latest version). Note, that the latest version of each document will most likely be a different number (ie. document A is at version 3, and document B is at version 6). I'm at my wits end, does anybody know how to do this (select all the documents, but only return a single record for each document_id, and that the record returned should have the highest version number)?
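    One way to make the "which row per group" explicit, rather than relying on whichever row GROUP BY happens to return, is to join each document to its own maximum version. A sketch, assuming a table shaped like documents(document_id, version, ...):

        const string LatestVersions = @"
            SELECT d.*
            FROM documents d
            JOIN (SELECT document_id, MAX(version) AS max_version
                  FROM documents
                  GROUP BY document_id) latest
              ON  d.document_id = latest.document_id
              AND d.version     = latest.max_version;";

    This returns exactly one row per document_id - the one with the highest version number - regardless of ROWID or insertion order.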

    Read the article

  • Django Caught an exception while rendering: No module named registration

    - by Arno Smit
    I seem to have run into a bit of an issue. I am busy creating an app, and over the last few weeks I set up my server to use Git and mod_wsgi to host this app. Since deploying it, everything seems to be running smoothly; however, I had to go through all my files and insert the absolute path of the project to make sure it works fine. On my local machine: from registration.models import UserRegistration On the server: from myapp.registration.models import UserRegistration Am I doing something wrong? This has also caused an issue for me where I cannot access my Django admin interface. All I get is this: Caught an exception while rendering: No module named registration Exception Value: Caught an exception while rendering: No module named registration As far as I am concerned my app has all the relevant urls, but it does not seem to work. Thank you in advance

    Read the article

  • Student's t distribution in JavaScript

    - by Sai Emrys
    Google Spreadsheets currently does not support the standard function TDIST - i.e. the Student's t-distribution. This function is critical for calculating p-values. It seems that this is related to the fact that no integral-using functions (AFAICT) are implemented either. However, Google Docs allows people to add and publish their own scripts, in JavaScript. So ideally we should have something like: function tdist(t_value, degrees_of_freedom, two_tailed [defaults true]) {...} Anyone know of either an extant implementation of this (my google-fu has not turned up one, but may be weaker than yours) or a good idea for how to do it? I'd like to publish this together with some other useful functions that are currently calculable but a bit of a pain (like Student's t-test itself). Thanks!

    Read the article
